How close are we to developing a general artificial intelligence?

It depends on your specific standards for counting something as that. You could argue we're already there by one set of standards. By another set of standards, like Searle's, you could argue we'll never be there because consciousness is magic and muh Chinese room.

Say, how long until we have AIs performing general tasks at a human level, without necessarily having something that could be called consciousness?

Where by general tasks I mean AIs not specialized to, for example, calculating infinite products or driving cars.

>how long until we have AIs performing general tasks at a human level

That's still an extremely vague standard. What exactly do you need to see an artificial intelligence do?

Be a self-sufficient being capable of surviving, performing mathematical and logical tasks, communicating with humans reasonably well (not necessarily passing the Turing test or anything like that), and maybe able to learn relatively efficiently. Is this still too vague? I'm looking for a very rough estimate. Are we close or not?

>Be a self-sufficient being capable of surviving
I think we're there already with that one.
>performing mathematical and logical tasks
Definitely have that covered.
>communicating with humans reasonably well (not necessarily passing the Turing test or anything like that)
Pretty much there.
>maybe able to learn relatively efficiently
Gradient descent is plenty efficient, so I'll say we're there for this too.
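
For anyone who hasn't seen it, gradient descent just means repeatedly stepping parameters downhill along the slope of a loss function. A minimal Python sketch, where the quadratic loss, starting point, and learning rate are all invented for illustration:

    # Toy gradient descent: minimize loss(w) = (w - 3)^2.
    def loss_gradient(w):
        return 2 * (w - 3)  # derivative of (w - 3)^2 with respect to w

    w = 0.0    # arbitrary starting point
    lr = 0.1   # arbitrary learning rate
    for step in range(50):
        w -= lr * loss_gradient(w)  # step downhill along the gradient

    print(w)  # converges toward 3.0, the minimum of the loss

Whether that counts as "plenty efficient" for learning in general is exactly what the rest of the thread argues about.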

It's pretty hard to say at what point we've created an artificial intelligence when we can't even define sentience or sapience in a solid way, or even say how our own brains work.

Also, DeepMind used a 'neural net' rather than brute force to beat the top Go player.

Gradient descent is how neural networks are trained.

I'm curious to see how we might start to define human intelligence as we create better and better algorithms. Things like IQ are based on statistics and generalizations. Perhaps we'll start identifying how some people are better at brute-force thinking versus gradient-style thinking, for example.

Not even close.

I don't think very highly of our intelligence compared to machine learning. As a programmer, the main thing I've noticed that differentiates human thinking from machine thinking is that human thinking is a lot more muddled and inconsistent. You'd have to make machines that are intentionally dumbed down to get them to start fulfilling a lot of the expectations people have for what "intelligence" is.

>Gradient descent is plenty efficient, so I'll say we're there for this too.

That's supervised learning. Unsupervised learning is what makes human intelligence unique.

Sure, but as a scientist I say you can always find the pattern. There are reasons humans are erratic, even if we don't understand them all.

My personal hope, though it's most people's fear, is that if we do create a machine several times smarter than us, it will be able to tell us how we work and how we can live our lives better.

>Unsupervised learning is what makes human intelligence unique.

No, it doesn't mean what you think it does. Neural networks using gradient descent are considered examples of unsupervised learning.

en.wikipedia.org/wiki/Unsupervised_learning

I love it when people assume a term means something completely different from its actual meaning. Good catch, user. Fuck you, other user.

On another note, is computer engineering a sensible option if I want to do AI research?

I don't know if you actually read the link you posted, but it didn't contradict anything I said. Gradient descent is used in backpropagation, which certainly is NOT unsupervised learning.
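
To make the disagreement concrete: backpropagation lives inside a supervised training loop, where the error being descended is measured against known labels. A toy Python sketch; the data, the one-weight model, and the learning rate are all invented for illustration:

    # Minimal supervised training loop: the gradient only exists because
    # every input x comes paired with a known label y.
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labeled pairs (x, y), here y = 2x

    w, lr = 0.0, 0.01
    for epoch in range(200):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of the squared error (pred - y)^2
            w -= lr * grad             # descent step driven by the label y

    print(w)  # approaches 2.0; take away the labels y and there is no error to descend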

Look, it's someone else who doesn't know how to read.

>it didn't contradict anything I said

You said:

>Unsupervised learning is what makes human intelligence unique.

All of the following are examples of unsupervised learning (see the k-means sketch after the list):

en.wikipedia.org/wiki/Unsupervised_learning

>clustering
>k-means
>mixture models
>hierarchical clustering
>anomaly detection
>neural networks
>Hebbian learning
>approaches for learning latent variable models, such as:
>expectation–maximization algorithm (EM)
>method of moments
>blind signal separation techniques, e.g.:
>principal component analysis
>independent component analysis
>non-negative matrix factorization
>singular value decomposition
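
For reference, k-means from that list is about the simplest unsupervised method there is. A bare-bones Python sketch with invented 1-D data, just to show that no labels appear anywhere in the loop:

    # Bare-bones k-means (k = 2) on made-up 1-D data. The algorithm invents
    # its own grouping, which is what "unsupervised" means here.
    points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]  # two obvious clusters
    centers = [points[0], points[1]]          # naive initialization

    for _ in range(10):
        # assignment step: attach each point to its nearest center
        clusters = [[], []]
        for p in points:
            nearest = min(range(2), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # update step: move each center to the mean of its cluster
        for i in (0, 1):
            if clusters[i]:
                centers[i] = sum(clusters[i]) / len(clusters[i])

    print(centers)  # roughly [1.0, 8.07]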

My post still applies; read it again. Whether you are right or wrong, it still applies.

Now, to stop wasting time trying to get OP to find the deep-rooted meaning of "AI" and what he wants in an "AI", I'm going to assume a little bit.

In terms of an AI that could do all of that effectively, with little human interaction needed (like constant debugging), we're far off. Just something that could communicate with a person effectively for a length of time and actually be useful is difficult.

We are teaching robots to learn, and that's the best part. Some robots can already watch a YouTube video and perform basic tasks. I see that as the future: rather than programming everything in, give the machine the human-like capabilities we want and fill in the blanks with learning material. Make it good enough to read a book or watch a YouTube tutorial and do something with it. That seems to be the route some people are taking.

And you said:

>Neural networks using gradient descent are considered examples of unsupervised learning

Which shows you have no fucking clue what you're talking about, because gradient descent is a perfect example of supervised learning.

Also, computer unsupervised learning is beyond primitive and nothing compared to humans or any mammals really.

I predict by 2030.

Most of these are jokes.

The problem with AI is that it has no centralized purpose to contextualize the information it receives. Animal intelligence exists to aid in survival and reproduction; everything we do is related to those tasks.

Robots don't have any drives like humans do, and thus can't make the relations between concepts necessary to appear even as intelligent as a dog.

Evolution is probably the only way to develop something so complex.

It can be either one, depending on how you implement it. "Neural Networks" and "Hebbian Learning" weren't included in that unsupervised learning article in error.

researchgate.net/publication/3652152_Unsupervised_learning_network_based_on_gradient_descent_procedure_of_fuzzy_objective_function
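
For anyone who can't get at the paper, the general idea is easy to sketch: gradient descent minimizing a reconstruction objective, where the only "target" is the input itself, so no external labels are involved. A toy one-weight-per-layer autoencoder in Python (the data, learning rate, and sequential update order are all simplifications I invented):

    # Gradient descent with no labels: compress x through a bottleneck and
    # try to reconstruct it. The target is the input itself, not a label.
    data = [0.5, -1.0, 2.0, 0.3]  # unlabeled inputs

    w_enc, w_dec, lr = 0.5, 0.5, 0.05
    for epoch in range(500):
        for x in data:
            h = w_enc * x          # encode
            x_hat = w_dec * h      # decode
            err = x_hat - x        # reconstruction error, not a label error
            # hand-derived gradients of err**2 (updating w_dec first is a simplification)
            w_dec -= lr * 2 * err * h
            w_enc -= lr * 2 * err * w_dec * x

    print(w_enc * w_dec)  # the product approaches 1.0, i.e. near-perfect reconstruction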

>Also, computer unsupervised learning

It's just unsupervised learning. When you talk about unsupervised learning, you're talking about machine learning. Your use of it in reference to people learning is weird and non-standard.

What do you think about the emulation of a human brain, creating a synthetic brain in a lab, or any of the other biologically driven or mixed approaches to superintelligence?

>Your use of it in reference to people learning is weird and non-standard.

The term may be standard for machine learning, but it applies to how people learn.

>general artificial intelligence
>an algorithm that can achieve at least human-level competency in any task you put it to
A couple of centuries.

Going biological would seem pointless. I'm not sure why you would even attempt that.

I think the closest thing we're going to get to general AI is going to come from something like NEAT, which is genetic algorithms mixed with neural networks.

Problem is, no fitness function can really optimize for general intelligence the way real evolution did. And evolution only worked because it had an absurd amount of space, material, and time to work with.
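
For a feel of what that family of methods does anyway, here's a toy neuroevolution loop in Python. Real NEAT also evolves network topology and uses speciation; this sketch only mutates the weights of a fixed 2-2-1 network against an invented XOR fitness function, and every number in it is arbitrary:

    import math
    import random

    random.seed(0)

    XOR = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

    def forward(w, x):
        # fixed 2-2-1 network; w is a flat list of 9 weights and biases
        h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
        h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
        return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

    def fitness(w):
        # higher is better: negative squared error over the XOR table
        return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
    for gen in range(200):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]                  # selection
        pop = [list(s) for s in survivors]
        while len(pop) < 50:                  # reproduction with mutation
            parent = random.choice(survivors)
            pop.append([g + random.gauss(0, 0.3) for g in parent])

    best = max(pop, key=fitness)
    print([round(forward(best, x), 2) for x, _ in XOR])  # should head toward [0, 1, 1, 0]

XOR is trivially easy, which is the point: the fitness function tells evolution exactly what "better" means, and nobody knows how to write that function for general intelligence.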

AI would obviously be based on the only reference for intelligence we know of, which is the human brain. All of this can be programmed, so it's not infeasible.

>All of this can be programmed, so it's not infeasible.

How do you program a human brain? Brains don't work off symbolic manipulation like computer programs do; they're neural networks that rely on biochemistry. While a computer might be able to simulate one, we don't know enough about the brain to come close to programming it, and likely never will. Humans don't come with documentation, and there's only so much we can do to observe a working brain with its 100 billion neurons.

>Going biological would seem pointless. I'm not sure why you would even attempt that.

I'm assuming there is some problem with transistor-based intelligence, or something magical about biological brain matter.

You could just create a brain the size of a house to achieve the goal of superintelligence. There are already small brains being created synthetically.

I'm not advocating this method, just saying it could potentially be possible, in one of those "long time away" predictions for superintelligence, if there is some magical roadblock in the other cases.

Again, though, there is also the potential of the control problem being solved with a human brain in control, giving the network its drive.

In a way, you can imagine this already with narrow intelligence such as driving. We are creating brain instances for specific tasks. The motivation of the brain comes from humans; its entire "existence" is just to do a singular task. AKA an extension of the human brain that was in a way lopped off and put to one task.

In this way it is still part of the larger motivation and general intelligence of the human-originated superstructure.

If we just continue to create and expand narrow intelligences, that still increases the total intelligence and cognitive capacity of humanity at an exponential rate, even if it is not a singularity of one brain in one location hitting exponential growth. It can just be the accumulation of billions of narrow intelligences slowly optimizing out humanity. Even in that case, human motivations and our neurological impulses will be the driving force of a vast network of brain matter (neural networks), etc.

>something magical about biological brain matter.
Just that it's idiosyncratically nonlinear to an absurd degree.

I wonder if you could wire it up to something like Omegle and have the fitness function be based on whether people discover you're a machine, or perhaps disconnect too soon? I guess Omegle wouldn't be best because of the a/s/l disconnect bullshit people do. Maybe a forum, or something like public IRC channels.

>how close are we
What do you mean by "we", Peasant?

Backpropagation is not compatible with AGI.

Recurrent architectures, trained without backpropagation, with internal reward mechanisms and subgoal creation.
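
What that looks like concretely is anyone's guess, but here's a hedged toy of the "recurrent, no backprop, reward-driven" idea in Python: a one-unit recurrent cell tuned by random hill climbing against an invented internal reward. Every name and number here is made up:

    import math
    import random

    random.seed(1)

    def run(w, seq):
        # one-unit recurrent cell: state feeds back via w[0], input enters via w[1]
        s = 0.0
        for x in seq:
            s = math.tanh(w[0] * s + w[1] * x + w[2])
        return s

    # invented internal reward: remember the FIRST input across the whole sequence
    TRIALS = [([1, 0, 0], 1.0), ([0, 0, 0], -1.0), ([1, 1, 0], 1.0), ([0, 1, 0], -1.0)]

    def reward(w):
        return -sum((run(w, seq) - target) ** 2 for seq, target in TRIALS)

    # hill climbing: no gradients, no backprop; keep a random tweak only if reward improves
    w = [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(3000):
        candidate = [g + random.gauss(0, 0.1) for g in w]
        if reward(candidate) > reward(w):
            w = candidate

    print(round(reward(w), 3))  # climbs toward 0, though a greedy climb can stall in a local optimum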

My guess: all you brainlets are gonna be blown out of the water with your "MUH GHUMANS CANT DO TECHNOLOGY ITS SO MAGICAL AND SPOOKY"; general AI will be around by 2018.

The human race. That sentence is perfectly understandable if you're capable of understanding context.

Moron.