Concerning General Artificial Intelligence

The idea of General Artificial Intelligence is fascinating to me. However, I am skeptical as to whether it will be achieved within our lifetimes. If you listen to futurists and philosophers such as Ray Kurzweil or Nick Bostrom lecture, or even read their work, you might be highly convinced of the ease with which we will soon be (are?) able to regularly emulate and exceed human-level intelligence in a computer. The reason for my skepticism is threefold:
> 1. The fundamental mechanisms of intelligence/sentience at the microbiological level are still not fully understood.
> 2. The proposed organization of the human mind into stratified, hierarchical "nodes", as postulated by both authors mentioned above as well as others, is also not clearly understood.
> 3. Most, if not all, estimates of the storage required to hold all human memories have been met. However, the speed of human thought, relative to the complex conclusions we draw from limited information, has yet to be matched in my opinion.

Now I see these as fundamental barriers to developing GAI for several reasons.
> 1. Emulating our own minds, arguably the most complex things we know to exist, is probably the path of least resistance to true GAI.
> 2. Neural networks function in a stratified, node-like system modeled on neurons, but this retains the same problems mentioned before.
> 3. The speed-of-computation problem is quite frequently dismissed with mentions of the singularity and "Moore's Law", both concepts I am extremely skeptical of, seeing as neither is probable, just mere speculation about the future.

That's my take. Idk if Veeky Forums talks about AI, but I've been thinking about the possibility that there could be a GAI somewhere out there. That being said, I don't think it's IMPOSSIBLE, just unlikely.

What are your thoughts?

...

> implying artificial intelligence is /x/
Way to add to the discussion, user.

What is general AI and what's Moore's law? Do we really need to understand the brain for AI?

>Do we really need to understand the brain for AI?
Having some understanding of what consciousness is might help us create one, yes.

What do you mean by consciousness? Something like us, or just something really smart? Something that can learn/recognise things?

I'm just thinking, can't neural networks do the job if they're big enough and complex enough? The brain is complex after all.

The ability to think like a human. The ability to be creative, to think in abstraction and metaphor, to identify incongruities as easily as humans do. We're going to use it to solve problems, so it can't be a slave to its programming. The ability to be adaptable, think outside the box, and logically source and piece together the data it needs to solve problems is essential.

> Moore's Law
The observation that computing power will double roughly every two years, based on the shrinking of transistors, which is already running into atomic-scale limits.
> General AI
Theoretical artificial intelligence with the ability to master and learn tasks similar to the way humans do. Whereas a self-driving car uses task-specific AI to drive, a general AI uses its mind to solve novel and abstract problems.
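The doubling claim is easy to sketch numerically. Here's a toy projection, assuming the usual textbook figures (Intel 4004 baseline of ~2,300 transistors in 1971, doubling every two years); the function name and numbers are mine, and the model obviously ignores the physical limits just mentioned:

```python
# Naive Moore's-law projection: transistor count doubles every ~2 years.
# Baseline (assumption): Intel 4004, 1971, ~2,300 transistors.
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Pure exponential extrapolation; ignores atomic-scale limits entirely."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```

The skepticism above is exactly that this curve can't keep going: the exponential lives in the model, the atoms live in the hardware.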

That's the implication of the work done by the people mentioned above, as well as others. However, neural networks as we know them today are very complex yet specialized. They are dedicated to specific tasks, i.e. driving, fixing photos, facial recognition, search-result optimization, and contextual linguistics. All these tasks are incredible in their own right but pale in comparison to a theoretical general AI.

So these networks are specialized for specific problems? If it's unsupervised learning, shouldn't it be able to learn any kind of underlying structure in the data?

I kind of think part of the reason humans are so good at this, and the speed thing you said above, is that we live in an incredibly rich world, and once we learn its structure we can do incredibly rich, diverse, and complex things matching that world. We can make assumptions on little data precisely because of the richness of the prior knowledge about the world we hold in our brains. Furthermore, think about how long it takes for us to develop these thinking skills and intelligence. So fucking long. I reckon it's only at about 12 years that you can have a decent conversation with a kid. That's aeons. The prefrontal cortex fully develops around 25. I think this is one big barrier to AI: we ourselves take long to develop, presumably because of the experience we need to navigate complexity.

I guess you can accelerate the experiences an AI is exposed to much more efficiently though, now I think about it. Like, way more.

They are initially trained with huge data sets. This is why the holders of huge data sets are capable of building the best networks.

This is precisely the barrier I'm talking about. We can accelerate machine learning (see Google's Go bot), but it seems that humans are 'pre-primed', if you will, to understand the data they will receive later on during maturity. Could overcoming this be done with multiple networks or something?

The greatest problem is how rigid neural network models are. Maybe if we allowed them to somehow create nodes, edges, and even layers by themselves we might get somewhere. Also, I'm not certain backpropagation as currently implemented helps either.
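To make the "create nodes, edges, and layers by themselves" idea concrete, here's a toy sketch in plain Python. The `GrowableNet` class and its methods are entirely hypothetical (loosely in the spirit of NEAT-style structure growth); there's no training here, only structural mutation:

```python
import math
import random

class GrowableNet:
    """Toy feed-forward net whose hidden structure can grow at runtime.
    A sketch of the 'let the model create its own nodes and layers' idea,
    not any real library's API."""

    def __init__(self, n_in, n_out, seed=0):
        self.rng = random.Random(seed)
        self.sizes = [n_in, n_out]                      # layer widths, input -> output
        self.weights = [self._rand_matrix(n_out, n_in)]  # one weight matrix per layer gap

    def _rand_matrix(self, rows, cols):
        return [[self.rng.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

    def add_layer(self, width):
        """Insert a new hidden layer just before the output layer."""
        n_prev, n_out = self.sizes[-2], self.sizes[-1]
        self.sizes.insert(-1, width)
        self.weights[-1:] = [self._rand_matrix(width, n_prev),
                             self._rand_matrix(n_out, width)]

    def add_node(self, layer):
        """Grow one hidden layer (not input/output) by a single node:
        a new incoming row, plus a new outgoing column."""
        self.sizes[layer] += 1
        w_in, w_out = self.weights[layer - 1], self.weights[layer]
        w_in.append([self.rng.uniform(-1, 1) for _ in range(self.sizes[layer - 1])])
        for row in w_out:
            row.append(self.rng.uniform(-1, 1))

    def forward(self, x):
        for w in self.weights:
            x = [math.tanh(sum(wij * xj for wij, xj in zip(row, x))) for row in w]
        return x

net = GrowableNet(n_in=3, n_out=2)
net.add_layer(4)   # the net gives itself a hidden layer...
net.add_node(1)    # ...then grows that layer by one node
print(net.sizes)   # [3, 5, 2]
```

The missing (and hard) part is of course deciding *when* to grow and how to keep training stable afterwards, which is exactly where the rigidity complaint bites.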

Also, we'd likely need to actually create structures of neural networks to model modules which handle certain types of data and communicate between themselves, much like our own brain.

We're miles and miles away.

If you're interested, one popular theory in neuroscience says the brain works like a Helmholtz machine from machine learning. Though its creator, Hinton, says it's too slow or something, I think? I guess it obviously doesn't take into account the actual structure of the brain, but it's been used to make powerful predictions about psychology and neuroscience from a standpoint of biological plausibility.
In some sense the brain has many areas that learn different things.
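For the curious: a Helmholtz machine pairs a bottom-up recognition network with a top-down generative network, trained with the wake-sleep algorithm. Here's a minimal single-hidden-layer sketch in plain Python; the layer sizes, learning rate, and toy data are all made up for illustration, and visible-unit biases are omitted for brevity:

```python
import math
import random

rng = random.Random(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))   # logistic unit
sample = lambda p: 1 if rng.random() < p else 0

N_V, N_H, EPS = 4, 3, 0.1                    # toy sizes and learning rate

R   = [[0.0] * N_V for _ in range(N_H)]      # recognition: visible -> hidden
G   = [[0.0] * N_H for _ in range(N_V)]      # generative: hidden -> visible
g_b = [0.0] * N_H                            # generative bias on hidden units

def wake(v):
    """Recognize the datum, then fit the generative model to that explanation."""
    h = [sample(sig(sum(R[i][j] * v[j] for j in range(N_V)))) for i in range(N_H)]
    for i in range(N_H):
        g_b[i] += EPS * (h[i] - sig(g_b[i]))
    for j in range(N_V):
        p = sig(sum(G[j][i] * h[i] for i in range(N_H)))
        for i in range(N_H):
            G[j][i] += EPS * (v[j] - p) * h[i]

def sleep():
    """Fantasize from the generative model, then fit recognition to the fantasy."""
    h = [sample(sig(g_b[i])) for i in range(N_H)]
    v = [sample(sig(sum(G[j][i] * h[i] for i in range(N_H)))) for j in range(N_V)]
    for i in range(N_H):
        p = sig(sum(R[i][j] * v[j] for j in range(N_V)))
        for j in range(N_V):
            R[i][j] += EPS * (h[i] - p) * v[j]

data = [[1, 1, 0, 0], [0, 0, 1, 1]]          # two toy visible patterns
for _ in range(200):
    wake(rng.choice(data))
    sleep()
```

Note how the two phases mirror each other: each network only ever learns from targets produced by the other, using purely local delta rules, which is part of why it gets cited as biologically plausible (and part of why Hinton called it slow).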

There's a good paper by Hinton, something called 'how we learn features', I don't know. But a good review.

...

...

Shit, I bought this book by mistake one semester because the course literature had been changed from previous semesters.

Is it good?

I'm not sure humans are pre-primed per se. But certainly the brain is pre-organised into structures which learn different types of things (based on signal properties? and asymmetrical connections?), i.e. carving nature at its joints, which probably makes learning simpler. I also think the human world we learn in is arguably always going to be far more complicated than the datasets we give to AI, which is a factor.

I'm definitely going to look into these, thanks.

I think these aspects of our own learning are what make 'video game'-type virtual environments hold the greatest promise for developing, at a minimum, stronger AI than we have today. An interesting take, though; from what I understand of neuroscience, the regions of the brain do correlate with specific signals from birth. Those connections are defined by DNA, leading to phenomena such as IQ differences and group aptitudes.

Yeah, I agree. It would be interesting for an AI to learn about a continuous virtual environment which it can explore.

Yeah, I defo see that. In vision we see at least 3 broad structures which vary in their sensitivity to spatial or temporal resolution, which is probably genetic in the way you say, at the cellular or connectional level. Overall, the brain seems to be divided into "what" and "where" processing streams which seem to be genetically based.

>general AI
>ever going to be a thing
It's not even useful.


But it's interesting, no? Surely it's potentially useful in industry?