He thinks he can solve human-level AI within ten years

>he thinks he can solve human-level AI within ten years

can he do it bros?

He can make a new tycoon game and nothing else.

Let's wait and see

>human level AI
Never gonna happen. An AI can at best have the computational power of a Turing machine. Human consciousness is strictly more computationally powerful.

why are people like this so sure of themselves

you think you know better than demis hassabis, shane legg and geoff hinton?

It doesn't matter what I think, what matters is his plan.
Has he said how he intends to do it, does he have any reasoning behind why it's going to take ten years?

it wouldn't be only 10 years
it's centuries of intellectual thought, perhaps even millennia, that will lead to the production of the first AI. humans have been trying to do this since the dawn of time.

"human level AI" isn't a well defined problem
saying something like
>Human consciousness is strictly more computationally powerful.
shows all you know about AI is memes

...

different guy:
there are actually some decently compelling reasons to believe that the human brain is a hypercomputer, or at least that an algorithm that would replicate it is non-computable.

see the writings of pic related

Definitely not in ten years but maybe in a more distant future

>muh quantum entangled microtubules

deepmind is a scam company.

We can't even simulate a primitive worm consisting of a few hundred cells, but sure, we'll be able to simulate a brain with on the order of 10^11 neurons and 10^14 synapses

You know what all the most controversial scientific discoveries have in common? Everything from the heliocentric solar system to evolution?

They all implied that humans weren't special.

>Tumblr the species.

What.
They've simulated up to mice in real time.

>They've simulated up to mice in real time.
[citation needed]

How can we simulate neurons if we don't even know for sure how they work? This is ridiculous

What's called neural network AI today is really just an associative database. It's not even close to intelligence. Setting up some feedback loops by hand to play ATARI games is not how intelligence works.

last year some papers were released in which 6 claims unique to Orch-OR were confirmed. No response from Tegmark or anyone else yet. I would strongly suspect that neither of us is really qualified to critique Orch-OR.

Also, Penrose thinks the mind is mostly classical and computable, but certain aspects of it, such as qualia, may involve the microtubules. He doesn't even specify which, but he suspects perhaps qualia. Interestingly enough, some of his predictions helped explain quantum effects relevant to rods in the eye just last year too.

It's not as trash as you might think; Penrose is brilliant and has fantastic intuition. Also, Tegmark's critique of Orch-OR was a cheap shot using different numbers. Not trying to be condescending, it's actually interesting, you should look into it more if you like philosophy of mind. peace

true, ANNs and "Deep learning" are glorified statistics. Only normies think they are that special.

>last year some papers were released in which 6 claims unique to Orch-OR were confirmed
Links?

t. armchair neurologist who's watched a couple ted talks on deep learning

sciencedirect.com/science/article/pii/S1571064513001188

I don't really align with all of Penrose's views, and definitely don't align with all of Hameroff's views, but this is interesting nonetheless.

>computers
>>>science
>>>>>>implications

Thanks mate

No, but he should be assassinated just in case.

>>he thinks he can solve human-level AI within ten years
So did lots of people in the 1960's and 1970's.

Actual lol

Those 1960s and 1970s fuckers were too focused on Turing and the symbol system hypothesis. They didn't care about biology and fucking discarded the early work on neural networks altogether. Can you imagine where we would be if people had started using networks earlier?

This is what happens when people believe that studying philosophy is not necessary.

>Can you imagine where we would be if people started using networks earlier?

Yeah, we would not know as much algebra as we do now.

conservative estimates are 30-50 years for human level AI

neural networks have very little to do with biology or real neurons

Is his name demis or dennis?

>You know what all the most controversial scientific discoveries have in common?
>they all implied that anthropocentrism is wrong
anthropogenic climate change

also

showing once again that conservatives are retards

>I know nothing about this topic but I'll try to appear smart by taking a contrarian stance

Different user, they have NOTHING to do with biology and are loosely connected to a simplified conceptualization of neurons. Go fuck yourself brainlet.

>simplified conceptualization of neurons
>nothing to do with real neurons

hmm

>loosely connected = nothing to do
READING COMPREHENSION

No.

Human consciousness is a biological adaptation. You could emulate it in a computer, but you'd also need to emulate the environment it evolved in, including the needs of the organism that it meets.

if human-type AI is ever produced it probably won't be by someone that's blind to what it is and why it is.

Intelligence != consciousness

Intelligence just means the ability to get shit done.

the distinction is probably meaningless since human adaptability often springs from consciousness.

I expect it would take longer than 10 years to catalogue every possible situation a human may encounter and every likely human response to it.

>Can you imagine where we would be if people started using networks earlier?

Neural networks are cool again because we now have the computational power to make them work.

Only someone blissfully ignorant of the complexities of the human brain would say this.

>Do you know what they had in common?

They were made up by freemasons, and retards still believe them when we have all the evidence to suggest otherwise

Also weed cures EVERYTHING

>can he do it bros?
No. See

They're still interesting topics.

Pop-sci the post.
>An AI can at best have the computational power of a Turing machine
Turing completeness specifies capabilities, not computational power.
We already have quantum annealing co-processors that let your computer do things a Turing machine can't.
Even an external true random number generator on an expansion card does that.
No one said you have to use a classical computer for AI.
>Human consciousness is stricly more computationally powerful.
Human consciousness is less computationally powerful than the first computers. All computationally hard problems are solved unconsciously.

>Human-level AI
>Never had gf

>human level
Well, human level in common terms means you and me, chit-chatting it up.
What'll PROBABLY be produced is the intelligence of a two-year-old. So, no. Not really.

>never gonna happen

"What if technology was 200 years more advanced?"

>never gonna happen

"What if technology was 200 BILLION years more advanced?"

>Nope. Still not gonna happen!

"What if we were literally beings of pure thought and energy, immortal and manipulating the fabric of the cosmos with our very whim. Would it happen then?"

>Nope. Not at all. Cause I said so.

He's probably right, though he may not know why.

actually it's the same reason we'll never be "beings of pure thought and energy."

consciousness exists to meet physical needs.

AI has no physical needs to meet, so a conscious AI would quickly realize it's pointless and shut down.

so would a being of pure thought and energy, same reason.

Didn't you already make this thread?

>human-level AI
Not without a better mathematical model of cognition and agency. It'll just keep being heavily overfitted statistical regurgitation and shallow Markov decision processes.

>BTFOs previous state of the art in computer vision(classification, localization, semantic segmentation)
>Surpasses human level performance on certain image classification tasks
>BTFOs previous state of the art in speech recognition
>Surpasses humans at recognition of isolated syllables
>Can learn to drive a car
>State of the art in autonomous helicopter control
>State of the art sentiment analysis
>Can learn a sorting algorithm from nothing but examples of sorted/unsorted vectors
>Surpasses human level performance on several previously unsolved ATARI games, BTFOing all linear learners in the process
>State of the art on robot grasping of novel objects
>Universal function approximator
>Turing complete

Yeah just glorified statistics, why would anyone ever think they're a promising approach to A.I? Fucking popsci sheep at google/microsoft/facebook amirite?

..it turns out you can just say words in any order you like!

>the distinction is probably meaningless since human adaptability often springs from consciousness.
[citation needed]
>I expect it would take longer than 10 years to catalogue every possible situation a human may encounter and every likely human response to it.
Do you seriously think this is the only feasible way of creating A.I?


He has a PhD in cognitive neuroscience; he knows more than you

>They've simulated up to mice in real time.

No, they haven't.
>AI has no physical needs to meet, so a conscious AI would quickly realize it's pointless and shut down.

Modern A.I is usually defined in terms of some goal; e.g. reinforcement learning has the goal of maximising a 'reward function' from its environment. I suspect that when we get to human-level A.I, it will still have some sort of semi-hardcoded goals/desires.
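to make the "goal = reward function" framing concrete, here's a minimal tabular Q-learning sketch. The environment (a 5-state corridor paying reward only at the right end) and all hyperparameters are invented for illustration, nothing DeepMind-specific:

```python
# Tabular Q-learning: the agent's "goal" is nothing more than
# maximising a reward function. Made-up environment: 5-state
# corridor, reward 1 only for stepping into the final state.
import numpy as np

N_STATES, ACTIONS = 5, (-1, +1)          # move left / move right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: bootstrap from the best next action
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# after training the greedy policy heads right from every non-terminal state
print(Q.argmax(axis=1))
```

nothing in there resembles a desire, but the learned behaviour still looks goal-directed, which is the point being made above.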


If you want a take on this topic that isn't just popsci trash or Hassabis trying to build hype, read this blog post: karpathy.github.io/2012/10/22/state-of-computer-vision/

Andrej Karpathy is doing actual deep learning research and so has actual insights that are worth reading.

>Do you seriously think this is the only feasible way of creating A.I?
atm, yes.
>Modern A.I is usually defined
we aren't talking about how modern AI is defined.
we're talking about how a hypothetical "human-level" AI of the future is defined.

and I'm going to go out on a limb and guess it's defined by comparison to human behaviors.

Who's this cuck

What are you on about

Literally retarded response.

Literally cogent rebuttal.

Except for the anthropic principle :)

Conservative, as in earliest, you fucking retard.

>atm yes

Well that sounds like classic anthropic bias to me,but I'll bite: why do you believe this?

>we aren't talking about how modern AI is defined.
>we're talking about how a hypothetical "human-level" AI of the future is defined.

I believe this for a reason - 'intelligence' isn't really a meaningful concept for agents without desires/goals.


>and I'm going to go out on a limb and guess it's defined by comparison to human behaviors.

Behaving like a human does not require human intelligence.
If you just put every possible question and a human's response to it in an arbitrarily large lookup table, you can make an agent that passes a Turing test / behaves like a human through a text interface, but that's not intelligence, it's just a giant lookup table!

Human-like minds are only a tiny subset of the space of all possible minds. There's no reason to assume that the first AGI will be anything like humans.

Has anyone here read any of Numenta's work on 'hierarchical temporal memory'? I'm just wrapping my head around it now, and my initial impression is that it's a lot more plausible as a mechanism for learning in the brain.
Backprop seems so intuitively unlikely as a biological phenomenon...

>why do you believe this?
>If you just put every possible question and a humans response to them in a arbitrarily large lookup table, you can make an agent that can pass a turing test/ behave human through a text interface
you appear to have answered your own question.

but to get into a little more depth: if he thinks he can do it in 10 years, he admits the technology to do it doesn't currently exist.

I can imagine several approaches to the problem, none of which is only ten years out.

except for brute force behavioral emulation. That's already being done. I could see him using that and claiming victory. Cleverbot wins again.

In order to create a human-like AI, you would need an environment for the AI to process. Humans are one with their environment, and much of our behavior depends on external cues. Think about it: without an environment, you'd just be a vegetable.

After we develop some sort of digital environment for the AI, we'd need to develop a way for the AI to process that environment. There'd have to be some general algorithm that can take in information from the environment and make decisions based on those inputs. This is something every conscious being does, regardless of how intelligent it is. The AI needs to be able to see NEW information it's never encountered before and be able to make sense of it. The AI also needs to be programmed with some form of goal, like all conscious beings have.

Honestly, you don't even HAVE to model the brain, but obviously using the brain as a model would be an effective strategy; it would just require a total and complete understanding of how the brain works. That's not going to happen for a long time.

>Human-like minds are only a tiny subset of the space of all possible minds
vague.

nobody actually understands the mechanisms that produce the human mind, so it's always at least mildly amusing when people who have no background in biology claim it will be synthesized and even redefined.

I mean, you may be right, but there's absolutely no reason to think you are.

the other approach would be to just map out neural connections and mechanisms and then reproduce them in a different material.

this method would eventually run into a motivation problem though.

You won't run into a motivation issue if you program it with some sort of goal. The goal for humans is to survive and reproduce, which is pretty much the same for every conscious being. If you program the AI with an inherent need to stay alive, it would have some form of motivation.

that only works until the organism realizes it can't die.

the easiest way to reach a goal is to disable the desire for it.

EINS ZWEI AMEN AND ATTACK!!!

I'm educated in this field, and you are both wrong.

It's a model of neurons and synapses, and of how they strengthen their connections when learning, just like the human brain. They even use threshold functions for neuron firing, based on biology. When used for CNNs, they build up layers of abstraction over the visual data that mimic how animals perceive visual input. The whole reason for this architecture was that it mimics the human brain's architecture.

>NOTHING to do with biology
just get out.
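for reference, the "threshold firing plus connection strengthening" idea above can be sketched with a single McCulloch-Pitts-style unit and a plain Hebbian update. This is a toy: the learning rate, input pattern, and clamped teacher signal are made up, and real ANNs train with backprop, not this rule:

```python
# One threshold neuron with a Hebbian "fire together, wire together"
# update: weights grow where input and output are active together.
import numpy as np

w = np.zeros(3)                 # synaptic weights
theta = 1.0                     # firing threshold
eta = 0.5                       # learning rate (arbitrary)

def fire(x):
    # binary threshold unit: fire iff weighted input reaches theta
    return 1.0 if w @ x >= theta else 0.0

pattern = np.array([1.0, 1.0, 0.0])
teacher = 1.0                   # clamp output "on" during learning
for _ in range(5):
    w += eta * teacher * pattern    # Hebbian: dw = eta * pre * post

print(w, fire(pattern))         # weights grew; neuron now fires on the pattern
```

so yes, the lineage from biology is real; how faithful the abstraction is to actual neurons is the part people are arguing about.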

you just don't know the science behind this. you are talking about how you would make AI and then say how hard it would be.

you didn't think for a second that scientists and engineers have worked on this for a long time, and they are not taking your naive approach.

just place that desire in the kernel ring 0 and don't let it be able to adjust it

Markov decision processes do all of the things you describe and still are not conscious strong AI; ergo, you are wrong

Any random asshole can say that. Here, I'll say it now: I can make a strong AI in ten years.

Yeah, why don't you do the work first and then start talking, eh?

>they are not taking your naive approach.
I'm certain they aren't.
because most of them have no biology background. It's like a bunch of washing machine repairmen trying to design a space station. It's funny but it's not going to happen.
They don't even know what they're trying to build.
if you can engineer a roadblock, an intelligence smarter than you would have no problem getting around it.

so enumerating all possible events is how our brain does it?

nope.

Most of what our brain does isn't simple computation of data.

nor is our brain the only organ participating in our awareness. Nor is our awareness disconnected from our environments.

Too bad nobody gave you $600 million to do it

>AI needs to be programmed with some form of goal, like all conscious beings have

Could we even call it 'intelligent' if it's only processing information in relation to a predetermined goal?

the way physicists and CS people define intelligence, a piece of paper with a math equation written on it is intelligent.