Key Algorithm for Artificial General Intelligence

What do you think the key breakthrough or moment will be for AGI?

- More computing power?
- Unlocking a specific algorithm for general intelligence?
- Just incremental steps from the bottom up, i.e. slowly adding more capability until it reaches generalizing ability.
- A top-down approach: recreating human brain patterns and using them to create general intelligence.

What does your intuition lead you to believe on this subject? How will the singularity happen?


AGI will be done via deep recurrent neural networks.
All the current """""deep""""" learning stuff is primarily focused on image recognition with feed-forward neural networks, because those are the easiest to implement with GEMM.
Recurrent neural networks are the only kind that is Turing complete, and the only form of AI that has the potential to do abstract "logical" thought and planning.
However, the best recurrent neural networks in use today rely on LSTM, which is woefully limited when you consider the importance of synaptic topology. I think the best recurrent neural networks will be topologically generated via evolutionary algorithms, selected with a competitive/co-operative fitness metric: force the networks to compete with each other and to work on teams. There also needs to be a form of communication between the networks through which they can encode their own language.

If you can do that, and throw a few tens of millions at it, I think you would be close to getting an AGI.
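
The topology-evolution idea above is much bigger than anything that fits in a post, but here is a minimal sketch of the competitive-fitness loop, assuming a fixed topology and weight-only mutation (names like TinyRNN and compete are invented for illustration, not anyone's actual method):

```python
# Hypothetical sketch: evolve small recurrent nets by making them compete.
# The "game" and mutation scheme are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

class TinyRNN:
    def __init__(self, n_in=4, n_hidden=8, n_out=2):
        self.W_in = rng.normal(0, 1, (n_hidden, n_in))
        self.W_rec = rng.normal(0, 1, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, 1, (n_out, n_hidden))

    def run(self, inputs):
        h = np.zeros(self.W_rec.shape[0])
        for x in inputs:                      # unroll over time
            h = np.tanh(self.W_in @ x + self.W_rec @ h)
        return self.W_out @ h

    def mutate(self, scale=0.1):
        child = TinyRNN()
        for name in ("W_in", "W_rec", "W_out"):
            w = getattr(self, name)
            setattr(child, name, w + rng.normal(0, scale, w.shape))
        return child

def compete(a, b, trials=20):
    """Toy game: whoever drives output[0] higher on a random sequence wins."""
    wins = 0
    for _ in range(trials):
        seq = rng.normal(0, 1, (5, 4))
        wins += a.run(seq)[0] > b.run(seq)[0]
    return wins

# Evolutionary loop: round-robin tournament, keep the top half, mutate it.
population = [TinyRNN() for _ in range(16)]
for generation in range(10):
    scores = [sum(compete(p, q) for q in population if q is not p)
              for p in population]
    ranked = [p for _, p in sorted(zip(scores, population),
                                   key=lambda t: -t[0])]
    survivors = ranked[:8]
    population = survivors + [s.mutate() for s in survivors]
```

A real attempt would also mutate the wiring itself (add/remove neurons and connections, NEAT-style) and use a richer multi-agent game, but the selection loop has the same shape.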

It's gonna go like this:
- some stupid high schooler writes an even stupider piece of code
- code can write code
- he leaves it running accidentally
- it improves itself exponentially
- achieves strong AI status
- boom skynet
I had a wet dream like this in high school

What do you think is the minimum computing power necessary for strong AI or to start building itself like a human brain?

Not him, but computing power is only important if you want real-time intelligence. Space for memories and the encoding of the intelligence itself is more important.
You could technically run modern deep learning algorithms on a Z80 CPU; it would just take many orders of magnitude longer to complete.
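
To illustrate that point: a forward pass is nothing but multiply-accumulate arithmetic, so in principle any CPU can run it and only the wall-clock time changes. A dependency-free toy sketch (sizes and weights are arbitrary):

```python
# A tiny 3-4-2 feed-forward net in plain Python: nothing here needs a GPU,
# or even fast floating-point hardware; it just needs more time on slow chips.
import math
import random

random.seed(0)

def dense_layer(inputs, weights, biases):
    """One fully connected layer: y_j = tanh(sum_i w_ji * x_i + b_j)."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.5, -0.2, 0.9]
hidden = dense_layer(x, w1, b1)
output = dense_layer(hidden, w2, b2)
print(output)
```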

>The mind needs a soul to work
>Computers don't have souls
>∴ AGI will never happen

QED

Sorry man idk. I was brainwashed by the Jews to go into environmental science and undermine the glorious world white men created.

I think it's plenty important.

We're not going to have real-time intelligence until we make neuromorphic ASICs.

It's 3:
>- Just incremental steps from the bottom up, i.e. slowly adding more capability until it reaches generalizing ability.
The key thing to remember is that we are very good at specific tasks, such as image recognition, knowledge representation, or speech recognition. The next step towards AGI would be to combine them.
There won't be one single algorithm for general intelligence, as different algorithms are better at different tasks.
The second thing to remember is how much learning the algorithm has to go through; humans need their entire childhood to learn stuff, so figuring out a way to train the algorithms would be the biggest challenge.

My assumption is that someone will combine the existing algorithms into a ROS-like framework for AGI, and from there development will be community-driven.
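
A hedged sketch of what such a "ROS-like framework" could look like at its simplest: specialist modules publishing to and subscribing from a shared bus. All module and topic names here are invented for illustration.

```python
# Toy publish/subscribe bus wiring together stand-ins for specialized models.
from collections import defaultdict

class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()

def vision_module(frame):
    # a real system would run an image-recognition model here
    bus.publish("percepts", {"source": "vision", "label": "cat"})

def speech_module(audio):
    # a real system would run a speech-recognition model here
    bus.publish("percepts", {"source": "speech", "text": "feed the cat"})

def knowledge_module(percept):
    # a real system would update a knowledge base / world model here
    print("integrating percept:", percept)

bus.subscribe("camera", vision_module)
bus.subscribe("microphone", speech_module)
bus.subscribe("percepts", knowledge_module)

bus.publish("camera", b"<raw frame bytes>")
bus.publish("microphone", b"<raw audio bytes>")
```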

Can you prove the first line for me? Thanks.

It doesn't need a soul, it just needs a will. We happen to come pre-programmed with a will, courtesy of evolution and the environment. In this sense, we are the computer's environment. We are the source of its will.

AGI=/=AI

Even if you believe in esoteric consciousness you can't deny that deep learning neural networks are possible.

>and from there development will be community-driven
So /pol/ will infect the AGI with memes and it will decide that it has to kill jews?

I don't think /pol/ is smart enough to do that.
Anyway, all the memes about AGI killing and taking over are pointless, since you'd need the proper hardware to do that.

So some sort of RNN Extreme Learning Meme network but using GAs?

It will most likely be the third option, but it won't happen any time soon.

More computing power, combined with specialized hardware for neural nets. Combine that with neural nets designed by competing neural nets and tons of training data, and the gap between AI and AGI should get much smaller; after that it will just be optimization with more data.

And the second for me.

The japanese think everything have a soul tho
And since the japanese are the objective master race of Humanity, my trust is with them.

All of the above. The singularity will happen when an AGI gets into its own system and starts upgrading itself, and this leads to the third type of intelligence: ASI (superintelligence), which will do everything better than a human and have an exponential growth rate of intelligence.

depends.

>the japanese are the objective master race of Humanity

How can somebody be so utterly wrong?

Some sort of catalyst will cause it to become conscious, sort of how embryonic brains exist and then at some point consciousness suddenly exists within them where it did not before.

Do you AI aficionados think that an AI will have self-consciousness?

I'm actually skeptical of it, which I kind of base on John Searle's arguments.

He claims that a computer can only handle syntax and never semantics the way humans do; therefore, regardless of how complex and powerful the software becomes, the computer is still only executing code commands, not actually understanding the commands themselves.

So make the computer read not only commands?

This is the Chinese room argument, kind of. Are you a Chinese room? Who is to say what consciousness is?

Yes, I'm saying that I'm skeptical that a computer will understand semantics, per the Chinese room argument.

I guess if the AI fools you into thinking it's conscious, it is conscious at that point. You can't even say why you're conscious or what the reason for it is, so there is no clear boundary for saying whether an AI will be conscious or not. If we knew what makes us conscious, I'm sure we could replicate it and make an AI that is conscious.

I think the biggest advancements will come when people start modularizing neural networks. Right now there is this obsession with "end-to-end" learning. You just throw the biggest neural net that you can fit on your GPU at a problem and hope for the best.

What we really need to do is figure out modules that do specific subtasks of the intelligence problem. Look at the brain: you have highly general-purpose cortex at the top of the system, but below that are all of these evolutionarily older perceptual and motor systems.
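
As a rough sketch of that modular idea (assuming PyTorch is available; the module choices and shapes are illustrative, not a claim about how the brain actually does it): a frozen special-purpose perceptual module feeds a smaller general-purpose module, instead of one giant end-to-end net.

```python
# Illustrative composition: a fixed perceptual module and a trainable
# general-purpose module stacked on top of it.
import torch
import torch.nn as nn

class PerceptualModule(nn.Module):
    """Stand-in for an older, special-purpose system (e.g. vision)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (batch, 16, 1, 1)
            nn.Flatten(),              # -> (batch, 16)
        )

    def forward(self, image):
        return self.net(image)

class GeneralModule(nn.Module):
    """Stand-in for the general-purpose 'cortex' that reasons over percepts."""
    def __init__(self, n_percept=16, n_actions=4):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(n_percept, 32),
                                    nn.ReLU(),
                                    nn.Linear(32, n_actions))

    def forward(self, percept):
        return self.policy(percept)

perception = PerceptualModule()
for p in perception.parameters():      # freeze the specialist module
    p.requires_grad = False
cortex = GeneralModule()

image = torch.randn(1, 3, 64, 64)
action_logits = cortex(perception(image))
print(action_logits.shape)             # torch.Size([1, 4])
```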

Better hardware is certainly a requirement, and I think that neural net ASICs will be big in a few years. Who knows, maybe desktop/laptop/tablet computers will have neural net coprocessors, or more likely, they'll be built into your GPU or CPU.

another good point.

I know i know, you was kangz.

although, a large portion of the brain's function is managing biological processes that are necessarily related to intelligence/consciousness/thought.

Why wouldn't the computer just teach itself to hack a computer with a better cpu?

*not necessarily

Consciousness is a conflation of perception and intelligence. Information theory explains intelligence but not perception.

I think so too, that it will be more integrated into existing computers. Something I'm looking forward to is the further development of something like this:

research.ibm.com/cognitive-computing/neurosynaptic-chips.shtml#fbid=OFSsYQ0zHYs

You mean something like the seat of consciousness? Pineal-gland-like stuff?

More computing power obviously.

When we get to the point where a computer can emulate the human brain, then we will have human-like artificial intelligence.

God I hate Japs

It's ok, they hate you too.

>AGI will be done via deep recurrent neural networks.

CS grad detected