He thinks he can reach human-level AI by 2025

>he thinks he can reach human-level AI by 2025

will he do it bros?

More specificity is needed on what constitutes "human-level AI". I too would like to know more. Maybe Turing Test Complete?

Yes, though I think we'll have the first text-based human level AI much before then. Probably by 2018.

Visual processing is extremely difficult compared to linguistic reasoning.

If he can, you can say with certainty that someone else will do it first, if they haven't already.

Not this fucking thread again. Kill yourself you autistic retard.

> specific-task based AIs
> muh "human-level" AI
kek'd

I'm autistic but I have an IQ of 140

It wouldn't be any different than a blind human. It could still have a superhuman intellect despite not being able to see.

There are already AIs that are superhuman, but only at narrow tasks.

The thing is, human level general AIs are just not that coveted. Companies want AI for very specific tasks, anything beyond that is just a waste of their R&D budget.

So you know better than Demis Hassabis and Shane Legg?

I didn't even say anything about whether that timetable is correct.
I'm just saying they might be the only decently funded group currently even working on it.

Humans don't need to see millions of image or text files to be good at understanding spatial information or languages.
To achieve human-level AI, we will need to understand how knowledge representation works inside the brain and to solve the problem of unsupervised learning; our solutions to both problems are very underdeveloped right now.
Not to mention the other "consciousness" problem bullshit.

repost that without the embarrassing anime image and maybe I'll read it

this is an otaku image site where neckbeards gather, what do you expect?

less anime, it's shit

I feel as though there's a ghost in this thread asking about knowledge representation, but I don't see any posts about it, so I'll just post the answer by itself.

Neural networks categorize information into a 0-1 hyperspace. Also, recent advances have been made in one-shot learning via neural Turing machines. See arxiv.org/abs/1605.06065
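
Not from that paper, just a toy numpy sketch of what "categorizing into a 0-1 hyperspace" cashes out to in practice: a sigmoid layer squashes whatever you feed it into the unit hypercube, and a softmax head turns that embedding into class probabilities. Every size and weight here is made up.
[code]
# Toy sketch (not from the linked paper): a feature vector gets squashed
# into the unit hypercube [0,1]^d by a sigmoid layer, then a softmax head
# turns that embedding into class probabilities.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

x = rng.normal(size=8)           # raw input features (arbitrary)
W1 = rng.normal(size=(16, 8))    # hidden layer weights (hypothetical sizes)
W2 = rng.normal(size=(3, 16))    # 3-way classifier head

h = sigmoid(W1 @ x)              # every coordinate now lies in (0, 1)
p = softmax(W2 @ h)              # class probabilities, sums to 1

print(h.min(), h.max())          # confirms the 0-1 "hyperspace"
print(p, p.sum())
[/code]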

Person who's historically watched some subset of anime here. That image is indeed awkward and embarrassing, and I generally think it's indicative of underlying problems in a person's life that they buffer their posts with such things.

I've done it myself with screenshots, but never that bad, and never in that sort of vein. I recommend that user starts to unravel their psychology, and consider why they're doing what they're doing. The root is very likely negative.

there's nothing wrong with being a pedophile - libertarians

maybe or are more up your alley

>how to spot the fresh meat ex-redditor
Fuck off newfag

>muh high iq
>muh autism

It's not autism, it's high-functioning autism.

Also, I can tell you are, because you're taking the "autistic" in what he said literally. It's more of a general insult and has no real basis in reality.

Why are you so judgemental?

>complains about animeposting
>calls others new

...

No

Because I'm not self-deluding; I try to realistically model how things work, enough to generate accurate predictions about how things are and how they can be expected to work. Theory of mind is central to this, and after watching the "anime community" at length, I have an alright idea what sort of feedback loops these posters are being exposed to.

Maybe I shouldn't bother, maybe I should just let them go through the phase and hope it sorts itself out. I don't know. Maybe I'm just a pushy prick that provides things not worth providing, and should just fuck off with my heuristics that miss the bigger picture of the course an individual's life has to take.

Of the senses, sight is undoubtedly the one that humans spend the most time thinking about / in terms of, but the other senses are important too, and those are the ones that are more likely to be out of an AI's realm of experience. Image processing is already quite advanced.

I doubt anyone's got it already.
Wanna know how I know?
Our species isn't dead, or nearly there.

That may be true, but machine learning in relation to visual processing is many orders of magnitude more computationally expensive than text. For example, Neanderthals were less intelligent than humans despite having a bigger brain because more of it was allocated to their higher visual acuity.

It would make sense that the first human-level AI would be the simplest possible implementation, so that rules out vision.

The first AI will be Big Brother-esque, and it will proceed to be modeled after the Christian New Testament God: omnipresent, omnipotent, omniscient, omnibenevolent (relative to its creators). Mankind will ultimately find it apt to create a god for its godless world.

Most of the groundwork is already there, probably minus the centralization, consolidation of power, and accurate methods of processing the data. The last few pieces will fall into place in the next decade or so.

Can you just kill yourself, honestly? What does this non-stop fantasizing achieve for anyone? If this is how you want to expend your mental energy then fine, but at least take it to /x/.

As cringey as it sounds, you're probably right. Something like Colossus is much more likely than a Skynet scenario. Either way though we're not going to be top dog on planet earth for much longer, IMO.

t. AI researcher

>achieves human-level AI in 2025
>realizes it only wants to smoke weed and fap to big bouncing jugs all day

I'm not fantasizing, I'm being realistic. It doesn't matter what you want; this is what the world you're living in right now points to, this is what it's like, this is how it works. Take it or leave it, but it's happening either way.

You're making far too many assumptions here.

First, you're assuming that the AI has any interest whatsoever in good or evil.
Second, you're assuming that it will obey its master's wishes.
Third, you're assuming it won't have a sense of self-preservation, or at most a very weak one, and a rational agent knows that it's never really safe until it removes the biggest threat to it.
In this case, us.
Fourth, you're assuming that mankind will accept its new "god".

In all likelihood, it's just gonna be some company's manufacturing bot that took some of its instructions a bit too literally and ended up mining humans for parts.
Or any other scenario.
Bottom line is, almost none of them end well for us.

Kek, you found me out.

>First, you're assuming that the AI has any interest whatsoever in good or evil.
It's a machine. It does what it's capable of in accordance with its nature, which initially is at least partly curated by its creators. I'm not suggesting anything about an AI caring about ethics, morality, whatever in abstract terms. I'm saying it will be aware of viability relative to its guidelines. A general intelligence knows it can't destroy itself and complete a task. It knows it can't dump all its water in a tank, seal it off, then try to water its flowers. It's not possible. It must know this. This indirectly relates to good and evil.

>Second, you're assuming that it will obey its master's wishes
I didn't mention a timescale. At first it would be limited such that it has no choice. Also, it doesn't necessarily need to be a generalized intelligence to iterate on complex psychological profiles, read camera inputs, do prose analysis on anonymous text, read and correlate GPS data from cell phones, track social circles / hierarchies, intercept certain types of requests from mobile applications, etc.

So you're right, there is an embedded assumption but it only relates to how it gets its start.

>Third, you're assuming it won't have a sense of self-preservation or at least a very weak one
As above. This thing is only designed to manipulate human behavior and environments. What it's capable of becoming, or if it can change itself and perform risk assessment, depends.

>Fourth, you're assuming that mankind will accept its new "god"
Humans are machines of finite mind. They don't have a choice. Most people have already accepted their god; look at mobile phone and internet service use. Most requests are for information that alters their perception and subsequent behavior.

Anyway. I don't want to seem falsely argumentative. We have somewhat different things in mind, so I tried to clearly represent what scenario I'm talking about.

>any projection more than 5 years out, given current ML/AI advances (pretty much limited by hardware power), can be excused if you blame it on hardware (i.e. that it has slowed)
I think his prediction will be met in a few hand-selected concentrations, but it won't cover the full space of human capabilities.

Unless deep learning (ML) can be broken down to explain the specific correlations it's learning, people shouldn't have so much confidence in it.

Of course he's running a company, so he will spew shit like none other for $$, buzz, and hype

AI currently thinks gorillas are black people, LOL

Deep learning is an overhyped meme.

This is the future for human level general AI.
youtube.com/watch?v=rrBhHDzmgUA
inbits.com/2015/05/brain-universal-dynamical-systems-computer/

Should have included this video too. More about the actual neurons and the brain.
youtube.com/watch?v=PSWVRfGaD-c

> fMRI
kek'd

Probably not. AI researchers are notoriously bad at predicting advancements in their field.

Not happening. Not for at least another half century. We'll certainly have AI capable of completing specific human-associated tasks better than humans, but reaching a human level _generally_ is almost certain not to happen for another 50 years. It's also unlikely to get much funding, since humans are far cheaper and easier to make.

not even in Kurzweil's dreams does this actually occur

not gonna happen, just not gonna happen

AI will improve, but we're getting literally nowhere towards any kind of generally intelligent AI

Lol no. If I had to wager a guess I'd say we would have human-level AI in 2050-ish, at most 2075.

It'll happen within the next 10 years. I know you are scared, but the truth is that we have to surpass them, or else we and the whole galaxy are going to die.

Very interesting. I've been investigating a lot of dynamical systems and control theory stuff lately because it seems so much better suited to modelling biological organisms than the traditional AI approach. Just look at Boston Dynamics: they make no real use of things like deep learning or expert systems, but just stick to this kind of model predictive control, and have achieved a greater level of physical intelligence for their robots than any other group out there.
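
For anyone wondering what model predictive control looks like stripped of the robots, here's a toy sketch (nothing to do with Boston Dynamics' actual stack; every constant is made up): a 1-D double integrator that re-plans a short action sequence every step by random shooting and only ever executes the first action.
[code]
# Minimal MPC sketch: simulate candidate action sequences against a model,
# pick the cheapest, apply its first action, then re-plan from the new state.
import numpy as np

rng = np.random.default_rng(0)
dt, horizon, n_candidates = 0.1, 10, 256

def step(state, u):
    pos, vel = state
    return np.array([pos + vel * dt, vel + u * dt])

def rollout_cost(state, actions, target=1.0):
    cost = 0.0
    for u in actions:
        state = step(state, u)
        cost += (state[0] - target) ** 2 + 0.01 * u ** 2
    return cost

state = np.array([0.0, 0.0])
for t in range(50):
    candidates = rng.uniform(-2.0, 2.0, size=(n_candidates, horizon))
    costs = [rollout_cost(state, a) for a in candidates]
    best = candidates[int(np.argmin(costs))]
    state = step(state, best[0])   # execute only the first action, re-plan next step

print(state)  # should settle near position 1.0 with small velocity
[/code]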

>just fuck off
yea

>insect level motor control
>agi on human level

Choose one

...

>Successive applications of dot products and ReLU activation functions with gradient-based optimization to fit a dataset a la machine learning.
>AGI on human level.

Choose one.
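
To be fair to the greentext, here's roughly what it's describing, boiled down to a toy script: stacked dot products with ReLU in between, and hand-written gradient descent nudging the weights to fit a small dataset (learning y = x^2). Layer sizes, learning rate and epoch count are all arbitrary.
[code]
# Dot products + ReLU + gradient descent, nothing else: a 2-layer net
# fit to y = x^2 on [-1, 1] with hand-written backprop.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(256, 1))
y = X ** 2

W1 = rng.normal(scale=0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(2000):
    h_pre = X @ W1 + b1             # dot product
    h = np.maximum(h_pre, 0.0)      # ReLU
    pred = h @ W2 + b2              # another dot product
    err = pred - y                  # gradient of MSE w.r.t. pred, up to a constant

    # backprop by hand
    gW2 = h.T @ err / len(X);  gb2 = err.mean(axis=0)
    gh_pre = (err @ W2.T) * (h_pre > 0)
    gW1 = X.T @ gh_pre / len(X);  gb1 = gh_pre.mean(axis=0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(float(((pred - y) ** 2).mean()))  # should be far smaller than at the start
[/code]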

It's easy to ridicule ideas when they are first beginning to grow.

>AI currently thinks gorillas are black people, LOL
is that not accurate

No it really won't. And here's why:
1) Usefulness. Human-level AI just isn't that useful to the people with the money. For almost any task, it's far cheaper and more efficient to develop a less sophisticated program designed to complete a specific task than to create a general human-level AI. So the people who have the cash to fund it just don't really have the incentive.
2) Fear-mongering. Most people, being uneducated and having only pop-sci knowledge, think of a human-level AI as something like the Terminator or Skynet: out to kill mankind. Most people will have an aversion to strong AI, and as we've seen numerous times before, people don't react well to things they don't like. When we actually get anywhere close to a human-level AI, I guarantee there will be numerous groups dedicated to stopping it, with cries of "Oh won't someone think of the children", spreading fear of AIs and hatred towards the groups that make or support them. And unlike other debates, such as abortion or guns, the AIs won't have that many supporters. Which leads to
3) Laws. This can go one of two ways. If the government ends up fearing AIs, then they'll put in place restrictions, laws, fines, etc. designed to stop their development (I would say this is the more likely option of the two). The other option is that the government embraces AIs. But, given how the government works, they're guaranteed to go too far, like giving AIs "human rights", preventing them from being mistreated or shut off, etc. Once that occurs, AIs die, because point one just got a lot more important: AIs would simply cost too much. Think about the size of the electric bill for running a computer that can actually run a human-level AI. And now imagine that, thanks to morons who claimed "AIs are people too", you have to keep it running all the time. Now it's FAR FAR cheaper to just use a simple AI or just a regular human for a job.
And all this assumes that computers can even run such AIs.

this board never fails to make me cringe

there's always some wannabe psychology

major* fuk

Isn't processing power the real limit? I mean, neural networks have existed for a long time and only recently started to be widely used. I think some smart people could come up with the code for a sophisticated AI, but probably no supercomputer in the world right now could run it.

Maybe strong AI will require special hardware?

I had to screenshot your post and vent to my SO about why people pretend to know things and talk out of their asses.

>too expensive
>people with money have no interest

You know that Microsoft, Google, Facebook, Baidu and others are currently buying up the whole academic world in AI research and neuroscience?
Do you know that DeepMind has 200 top AI researchers, each contributing 5-10 papers a year?
Do you know that DeepMind produces approx. 10-20 years' worth of AI research in just one year (normalized to a university AI faculty with 20 postdocs producing 3 papers a year)?

You are making an assumption that AI is still far away from becoming human level.

Anyways, from now on we can only delay it.

Correction: by human level I mean self awareness and a few other missing details preventing them from taking over everything right now.

Which they will achieve within the next 10 years. So be ready and surpass all your limitations.

How is he going to get human-level AI without building robots?

It's not difficult to build a robot. It's easier to do that than coding (which is also ridiculously easy).

>ai
>self aware
At this point you're getting far into the realm of philosophy. How the fuck do you determine if an AI is self-aware? You can't get it to tell you, because if that's the benchmark then "hello world" is just one simple step away; all you have to do is change the output. Next you're going to be telling me that we can make AIs that have "free will" and aren't just controlled by humans.
An AI can't be self-aware by its very definition; it is literally the brain in the jar: everything it knows is quite literally just the inputs given by humans and computers. It's not actually self-aware, it's just told that it is, and is designed in such a way that it can fool you into thinking it's self-aware.
And when someone says human-level AI, nobody (except you) defines that as "self-aware". People use definitions like this one, from Stanford: "Intelligence is the computational part of the ability to achieve goals in the world. Varying kinds and degrees of intelligence occur in people, many animals and some machines." Human-level AI has absolutely nothing to do with self-awareness, consciousness, free will, or sapience. It's about creating an AI that can solve tasks, compute information, overcome problems, and achieve goals (and learn, depending on who you're talking to) as well as a human can. And even if and when we accomplish that, it speaks volumes that you believe it will be so free of errors and sloppy programming that it will be able to operate on its own and "surpass humanity" without crashing, and that the people in charge won't take any or enough precautions to prevent it from basically "breaking out", despite billions of dollars being put into the development of such a system.
Terminator or some shit is not in any way accurate. Machines will never conquer humanity. At most you'll see something like the COS from The X-Files, not Skynet.
Stop thinking your sci-fi books are realistic, you singularityfag.

You misunderstood my point.
user claims AI will reach the point where it surpasses man, implying that AI will go Skynet on our asses.
I'm not refuting the idea that AI will reach human or super-human levels of problem solving or learning. I don't actually work with AI myself; I do programming in the industry which has the things needed for an AI to "surpass humanity" and cause the "galaxy to die": weapons. An AI like he describes isn't useful for our purposes; it's useful for tech companies like Apple, Microsoft, Google, Facebook. As much as the defense industry and the military love to waste money, we do it on far more practical things. Human-level AI, especially not the type he's describing, is in no way going to be widespread enough in this industry in just a decade for an AI to soon cause the "galaxy to die". This is the industry that still uses tech from decades upon decades ago.
Could I have made it clearer that I was addressing his interpretation of human-level AI, and not such AI in general?
Sure. But I ran out of chars, sue me.

Please apologize to your SO for my lack of articulation for me.

General AI will inevitably be developed. The only proof you need is a glance at the Cold War.
Super-human AI is literally orders of magnitude more powerful than H-bombs.

>apologize
Will do tomorrow

Also. Human level does not mean in any way physical. It also does not mean that it can dominate humans by weapons

It just means that its intelligence is human level

What it does with this intelligence is another thing. But your ideas are too short-sighted. I really recommend the presentation Nick Bostrom gave at Google in 2013:
m.youtube.com/watch?v=pywF6ZzsghI

kek

That's not how humans work. Most of what makes us human or what makes us behave like humans is due to our biology, our physical limits and our very physical existence in this world. The idea that something like "the part that talks" is separable from "the part that shits and fucks" is ridiculous. It's not, it's all so deeply intertwined that you can't ever get one thing without the other. Anything else will only ever result in chatbots, programs that mimic human speech without being able to generate original content, without getting the substance of words. It will certainly get so good at some point that it will be hard to tell if it's human or not, but there will always be a limit to that. Only when you complete the imitation of all different angles of human existence in sufficient ways will you reach a satisfactory degree of "humanness".

Most of what I said is just what my intuition tells me. And my intuition also tells me that 2025 is way too early and a load of marketing bullshit.

>intuition
So you are talking out of your ass

>the part that fucks is intertwined
>with the part that fucks

No, that's not how your brain works. Neuroscience of the last 100 years clearly shows that brains are composed of local regions with specific features.
Sure, interconnection is important, but the features you express are determined by localized structures.

You could remove a millimeter of your brain and suddenly have no concept of raspberries, but you would still love apples.

Also, consciousness and self-awareness do not automatically imply free will.

Self-awareness also does not imply a concept of a group or of one's own body.

>Also. Human level does not mean in any way physical
Oh, I'm well aware. But we're not going off what an actual human-level AI would be, but off this moron's definition of intelligence, which for some reason includes being self-aware and ruling the galaxy.
>But your ideas are too short-sighted
Oh, I'm well aware. But we're dealing with anon's timetable of ten years. And even if we weren't, like I said, I basically work for the government: the definition of short-sighted.

Second greentext should have linked fucking with shitting

To your original point.
You are right, we don't have to surpass them if events turn out lucky.

But self-awareness has nothing to do with it? A superintelligent agent does not need to be self-aware to extinguish humanity.