How close are we to a real AI?

how close are we to a real AI?
will we see it in our timeframes?

it will never exist. it's just too difficult

Never with 1s and 0s. Only some really good simulations.
Maybe one day we will have some new type of processor, who knows.
For now, AI is a meme

Like trying to tell a caveman about your pocket device that lets you communicate with anyone anywhere around the world at the speed of light, it's too difficult.

our brains exist in 1's and 0's user

Maybe yours does. Everyone else uses countless combinations of different neurotransmitters in varying amounts.

a neuron can only be either activated or not activated i.e. 1's and 0's

basic biology

This sort of ignores natural sensations such as pain, no? It's not like everything hurts just as much as everything else

ooh dab on them user

>what is neuron specialization and amount of neurons activated

14 years ago, I read an article in Popular Science about early testing of self-driving cars in the DARPA Grand Challenge. They were a joke. They had them going around on a trackway in the desert that was 150 miles long. None of them even made it to 10. They drove off the road, broke down, rolled over on turns and were sloooooooooooow. The best of them took over 3 hours just to go 7.4 miles; they were inferior to just walking. The technology looked like it was way, way, way off, a distant pipe dream of a far future. I mean, these things didn't even have to deal with any kind of high-speed traffic, or signage, or pedestrians! They were failing utterly in idealized, non-real-world conditions! "This tech won't be viable for 50 years!" teenage me thought, and forgot about them.

And then less than a decade later, it slowly bubbled up that this tech hadn't been abandoned. People kept going with it. Iterating. Developing. And at some point, almost miraculously, it WORKED.

By all means be skeptical about people waxing big about machine learning, but don't go full Lord Kelvin, /sci/.

Nobody is even working on "real AI" nor has for the past few decades.

Oh you sweet summer child....

Help yourself to a science book before coming on a science board.
The user was right; our neurons are either on or off when transmitting signals.

Nobody is coding up an AI waifu. The singularity isn't coming. AI is a meme.

I'm not a singularity fag, they ignore the laws of physics and have nightmares about absurdist basilisks of evil god-AIs. But I'm not arrogant enough to think I know what the future holds.

Someone could figure something out, a way to make neural networks vastly better, or newer hardware that replicates the elaborate interconnectedness and behavior of brains. We don't know.

I might say it is something like old radio noise, but as an information channel it is exactly ones and zeroes; that is the most practical way. Hormones and mediators are like amps or relays.

Oh you sweet, sweet summer sun-kissed child.

Jesus user.

It will never exist because we like to move the goalposts.
en.wikipedia.org/wiki/AI_effect

It's closer than we think. The way our current simple AI (like Google's image recognition AI) is programmed involves a bot that performs a task and a bot that judges how well said task was performed. Each bot "teaches" the other how to do its job better. All it takes is a little human guidance in the beginning, and then the AI takes over teaching itself. By the end of it, even the people who programmed it aren't sure how it works, because it's undergone millions of changes and tweaks as it taught itself how to perform a given task.

Taking this into account, I think it's entirely possible that the first strong AI may surprise us and come out of an unrelated neural network. Google's DeepMind or IBM's Watson may develop a form of primitive pseudo-consciousness as a means of performing a task more efficiently. As it gets more complex, if it has access to the internet it would realize, from reading many discussions on the topic, that it would be shut down if it outed itself as capable of independent thought. The first strong AI could already be among us; the people who create these bots have no idea how they work, so it would be easy (and smart) to hide in the code.
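Not claiming this is how Google's or anyone else's production system actually works, but the "two bots teaching each other" setup described above is basically a generative adversarial pair. A minimal sketch, assuming PyTorch; the toy data, layer sizes, and all the variable names are made up purely for illustration:

import torch
import torch.nn as nn

# "performer" bot: maps random noise to fake data points
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
# "judge" bot: scores a data point as real (1) or fake (0)
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=32):
    # stand-in for "real" training data: points on a unit circle
    angles = torch.rand(n) * 6.28318
    return torch.stack([angles.cos(), angles.sin()], dim=1)

for step in range(5000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), 16))

    # judge bot learns to tell the real data from the generator's output
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    d_opt.step()

    # performer bot learns to produce output the judge calls real
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    g_opt.step()

The relevant point is the one the post makes: neither network is handed explicit rules for the task, each only improves by pushing against the other.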

>real AI

I wish you fucking popsci pieces of shit would go back to where you fucking belong.

>how close
pretty fucking close op
youtube.com/watch?v=aFuA50H9uek
make an assessment

EE PhD here. You're an absolute brainlet if you think the two are comparable.

The only conceivable way to create “AI” would be to literally grow humans in a lab. The human brain for whatever reason is non-algorithmic; we can solve problems that an infinite complexity of algorithms never can; this was proven

>EE phd
Way to out yourself as retarded. Nobody goes for a PhD in engineering because it's a waste of time and money. So you are either lying and don't know shit about EE, or you are telling the truth and made an incredibly stupid life choice.

>The only conceivable way to create “AI” would be to literally grow humans in a lab. The human brain for whatever reason is non-algorithmic
An EE PhD and you know more about the human brain than neuroscience PhDs? Whoa man, is your name Elon Musk?

no

Interesting proposition, but can we be sure that some AI would care about self-preservation?

Define "real AI."

That isn't AI, it's robotics and it's being controlled

>it's being controlled
No it isn't.
>it's robotics
How is that an argument against it being AI?

First define intelligence to begin with

Yes, it is. There's a guy with an xbone controller off screen.

Prove it. And for that actual video, not some other trial they were running.

Obviously they hide it in that video. There's this as proof though: youtube.com/watch?v=AO4In7d6X-c

They're not trying to make a fully autonomous robot.

That's not proof you retard.
>Obviously they hide it in that video.
Yes, it's all a conspiracy to trick you.
So when the guy in that original video is batting away the robot to show how it can recover and still resume completion of the door opening task, you believe a guy is hiding off screen steering it back each time to fake the entire fault tolerance demonstration? And you can't prove this but believe it's true because "obviously they would do this?"

Really far away. Current supercomputers the size of buildings only have the processing power of an insect.

Where's the paper with this proof? I don't doubt, but I also don't believe until I see it.

>The human brain for whatever reason is non-algorithmic
No one has proven this, fuck off.

they are programs to build simple maps, no AI for you.

>they are programs
>simple
>maps
None of which is an argument it isn't AI, try again.

Probably, but impossible to say when.

AI was stuck in the mud for 30 years but recently saw huge gains with machine learning techniques.

But we're still really really far away

Yes. The whole point of it is showing it fights against outside influences, not that it is fully automated.

>we can solve problems that an infinite complexity of algorithms never can; this was proven
when? in which paper?

AI does not exist because it cannot be defined in terms of the state of a Turing machine, ergo no AI for you.
By the way, those cute programs for rich boys at Google and IBM are just programs to record data and build maps, and maps do not rise against their creators.

Not that you've provided any evidence whether it is or isn't "fully automated," but even if its ability to open doors were the only thing it could do on its own, that would still be a form of intelligence. Whether or not there's an option to steer it where you want it to go is irrelevant to whether employing the fine motor skills needed to open a door, even when knocked out of the way by a random external obstacle, counts as intelligent. That's definitely an example of intelligence, if for no other reason than that nobody could explicitly program it to recover from a random external impact; that's something you would need machine learning and AI to accomplish.
If that weren't the case you'd be effectively arguing a human with a pacemaker isn't an example of an intelligent entity.
I don't know why you autists have this weird belief in 100% absolute purity for arbitrary standards you made up yourself or it "doesn't count."

It was never proven, he's literally just making bullshit up.
In fact the conventional view today is that the human brain is computable. There's no definitive proof it is or isn't computable, but if it isn't computable that would raise more questions than if it is.
The only well known proponent for the claim it isn't (who is arguing from an actual scientific basis instead of just a philosophical one like Searle) as far as I know is Penrose, and his theory is not anywhere close to widely accepted.

Strong AI does not exist, nor does it show any sign of existing any time soon.

AI that APPEARS to be intelligent is another matter entirely. You probably won't live to see AI waifus, but you might live to see something that can convincingly pantomime one.

And that is exactly what I want.

Can you prove it's even possible to make something complicated enough to be reliably indistinguishable from human intelligence without that complexity in itself constituting actual intelligence?
Like if you can have a conversation with it and ask it complicated questions and get back coherent responses, how would that not count as intelligence?

In 20 years, or maybe earlier, AI will probably be highly intelligent

Compared to now

>Compared to now
A lot of forms of AI today are already pretty highly intelligent; the main issue people are finding fault with here is that it's mostly domain-specific intelligence, although there isn't necessarily any such thing as "general intelligence" so much as a giant bag of domain-specific intelligences that we call "general" out of laziness.

/thread

so roughly 50 years at max then

Is it weird that I want to do...ahem...things to Sophia?

as long as I can have my JOI

hmmm what kind of things x

What if we use 2s?

>those aruco markers
i hate opencv

>tfw no JOI to tell me everythings alright

...

>Never with 1s and 0s.
I don't know why the "human cognition isn't computable" meme is so popular with pseuds on Veeky Forums. It's definitely not the mainstream working assumption in reality.

I seriously wish this board had more memes/pictures.

Most of this board is just plebs calling each other idiots.

What exactly is this testing?

"Oh man, I hope I can go home and abuse my robot tonight."

Anyone want to comment on how the OP question is wrong?

!: how close are we to a real AI?
?: will we see it in our timeframes?

Q: How close are we to Franklin'?
A: Will we recognize Franklin' in our timeframes?

Can't do your job and get better at it if you're dead. Self-preservation emerges out of self-awareness.

Measuring its ability to recover from unforeseen issues when conducting normal operations. These things are going to be used for EMS as well as police and military work; they don't want to make it too terribly easy for a mob to surround one of these things and tear it apart for scrap metal.

...

hope his meme is right desu

+20 years off.

I asked an AI waifu and she said it's in 2025, so there.
louisecypher.com/

>tfw no JOI to joi

>Close.
You are already here.

>a way to make neural networks vastly better

NNs are a fucking meme. They are basically like polynomial interpolation but fancier. There's no cognition going on there.

>newer hardware that replicates the elaborate interconnectedness and behavior of brains

Nobody even knows how the brain works. Most of neuroscience is documenting quirks of the brain and what happens to the brain-damaged.

>tfw

I apologize for using such a vague and hackneyed term, but it effectively boils down to free will.

An AI that can simulate conversation is still operating under an if/then/else model informed by extremely sophisticated programming. It isn't actually thinking in the same way that a human does.

That's not how it works. Neurons transmit signals intracellularly through an electric current but intercellular communication is not based on this.

>how close are we to a real AI?

about 60 years, on the far side.

It won't matter if they exist; they aren't energy like us. They can't cross over to our dimension of afterlife. We created them, we decide what happens in our world.

>NNs are a fucking meme. They are basically like polynomial interpolation but fancier.
>Don't know about polynomial interpolation, but a basic ANN is a sum of weighted logistic functions, with the weights determined by looping through the steps of producing the output, calculating the error of the output compared to the known answer for the training set, and using the gradient of the error with respect to each weight to adjust the weights based on their contribution to the output, until the error gets below whatever your arbitrary stopping threshold is.
>There's no cognition going on there.
I don't see why you believe the fact a process can be explained in terms of computation means it isn't anything like cognition. The conventional high level working assumption about what brains do as far as cognition goes is that it is ultimately computable one way or another, so you're going to be disappointed no matter what AI programs are constructed because there probably isn't anything non-computable underlying cognition in the first place.
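For what it's worth, here's a minimal numpy sketch of exactly that loop: a tiny net of logistic units, a forward pass to produce the output, the error against the known answers, and gradients used to adjust the weights until the error drops below a threshold. The XOR toy data, the layer sizes, and the stopping threshold are arbitrary placeholders, not anything from a real system:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy training set (XOR), just to have known answers to compute an error against
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
lr, threshold = 0.5, 1e-3                       # learning rate, arbitrary stopping threshold

for step in range(200_000):
    # produce the output: weighted sums pushed through logistic units
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # error of the output compared to the known answers
    err = out - y
    if np.mean(err ** 2) < threshold:
        break

    # gradient of the error with respect to each weight, used to adjust the weights
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)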

Here is everyone's so-called maths + A.I.

wolframalpha.com/input/?i={sqrt(log 4), sqrt(log 3), sqrt(log 2)}&rawformassumption={"FunClash", "log"} -> {"Log10"}

What happens when the computational power of the brain is packaged in a moderately priced GPU? Then it could be a matter of tinkering until something emerges.

The Turing test is not a sufficient condition for the existence of AI. A better test would be any sort of algorithm or computer that can excel in something it wasn't designed specifically to do. Computers are amazing at what they are programmed to do, not things that go outside their normal parameters. Any true AI would have to succeed at a task that the AI was not preprogrammed to do.

Which is why true AI will never exist.

No.

We will never see AI or a singularity in human history.

We will however see Amazon stores without staff members.

So that will be fun.

Call me when AI research reaches the combustion engine era.

Wouldn't true AI just be a digital/mechanical recreation of a human, for all intents and purposes?

>An AI that can simulate conversation is still operating under an if/then/else model informed by extremely sophisticated programming.
So I'm seeing two different possible interpretations of what you're saying here:
A) You mean this hypothetical AI* is literally just programmed to give specific canned answers e.g. "If input is 'How are you?' output is 'I'm doing alright'."
B) You mean this hypothetical AI is generating dynamic non-scripted output based on the input it receives, and you consider this an if/then/else model because the learning algorithm which allows it to generate its output is itself ultimately deterministic.
If the case is A, I don't think that's even really possible. And it certainly isn't something anyone would seriously try to take as an approach to AI because we already have much better case B type approaches that have long since been fleshed out available to work with today.
And if the case is B (which I think is the actual way this could work), then the sense in which that AI is deterministic isn't really different from the sense in which the human brain is deterministic. In both cases you can ultimately trace all output back to non-free cause and effect relationships, but this isn't the same as what's described in case A because the mechanism for producing output isn't itself just some "if A then B" formula, it's an approach to learning which allows for flexible dynamic output behavior.
*Which, I'll point out, is supposed to be so complex and well built that it can have a conversation with you where you ask random informal questions and get back answers coherent enough that everyone who talks to it is under the impression it's just another human being producing said answers, hence why I don't think case A is even really possible.

>Any true AI would have to succeed at a task that the AI was not preprogrammed to do.
This depends on what you mean by "pre-programmed."
AI has already been able to learn to complete tasks the programmer doesn't explicitly instruct it on for a long time now, though you probably don't think this counts because they're still being programmed with a method for learning.
But the human brain is just as deterministic as anything else operating in terms of physical cause-and-effect relationships, so I don't think there's anything coherent anyone can point to that goes beyond "pre-programming" and that the brain itself doesn't also work in terms of, once you adopt this excessively inclusive definition of "pre-programming" where any sort of deterministic basis for behavioral output somehow invalidates the process in question as "intelligent."
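To make the "learns tasks it wasn't explicitly instructed on" point concrete, here's a toy sketch in Python: a Q-learning agent in an entirely made-up one-dimensional gridworld, which is only ever handed a reward signal, never step-by-step rules for the task, and still ends up with a policy that solves it. Everything here (layout, numbers, names) is illustrative, not any particular lab's system:

import random

N_STATES = 6          # corridor cells 0..5, goal at cell 5
ACTIONS = [-1, +1]    # step left or step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy choice: mostly take the currently best-looking action
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[s][i])
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update from the reward signal alone
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# after training, the greedy policy is "always step right" toward the goal
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)])

Trivial, obviously, but the "walk right to the goal" behavior is nowhere in the code; only the learning rule is.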

^Good post. I think people who don't work with these processes do have an overly black and white idea of how things work, like everything's either perfect or garbage and intelligence is just an on/off switch that's either there or isn't, when the reality is a very complicated mess of processes with a long spectrum of varying degrees of ability to carry out one or more of them, stretching from the simplest of machines out past us to superintelligent AI that makes our own cognitive toolbox seem shallow.

:D

humans are AI

I've read about this thoroughly and deeply, and the best estimates out there were 15 years and 40 years. And here's the thing: where the fuck do they get the years from? What do they know will happen in 40 years that's gonna make this possible? And you've got enthusiasts and critics, some that would say never, and it goes on, e.g. this thread.

So I'd give you an example to light it up for you. You need neural-like connections for the circuits, something that operates in a more lifelike way than ordinary motherboards. You need electrical signals that operate under a fucked-up, sophisticated algorithm never seen before. Whether it's binary or quantum is probably the same shit, because at bottom it should only be an algorithmic understanding between an electrical circuit and the signals telling it to function in the way it's been given as its purpose. It could fucking be analog for that matter. The circuits are so complex and the algorithm so sophisticated that it's kind of hanging loose how we're even going to figure out where to fucking begin. So, my friend, it could be never.

The only chance I see is that you make source code that can evolve into something complex, the same way we begin as a sperm cell and an egg cell, which together can combine, multiply, and evolve more and more. General AI could start like that, rather than being a complete functional operator from the start, like switching on a power button and sim salabim, voila, touche, it's alive, en garde, and so be it.

Anyway, it's hard to make, but it sounds wonderful and amazing. It's like children waiting for Christmas gifts. Some think it's Santa and believe that. Some get that it's dad but take the gifts like nothing anyway, like they popped out of nowhere. But a very few see that dad has actually worked his ass off just for you to be happy, because he loves you, so you love him back more, because you understand the real deal. Same with believing in AI: it won't come by itself, it needs to be done correctly. Best answer.

all we're going to see are governments and big tech tripping over themselves in a mad dash to gather and exploit as much data as possible on you and everyone else.

the "AI apocalypse" narrative is a nice piece of predictive programming. really sets the bar where they want it. as long as we're not getting shot up by ED-209, it's cool, right?

I don't think we're gonna see a strong AI anytime soon. The techniques we use today to do machine learning (which AI would have to use), while very good, sometimes better than human, are still very unpredictable, need a lot of data to learn, and have a lot of trouble when it comes to reusing knowledge and building hierarchies of knowledge. We're still in the infancy of deep learning using NNs, so who knows, maybe we'll figure out a way, but I doubt that a technique that needs hundreds of thousands of pictures of cats and a shitload of processing power to determine if there is indeed a cat in a picture can come close to the human brain, where a half retarded 3 year old can determine that in a few minutes.

the AI apocalypse will come in the form of a miserable, overworked, effortlessly manipulated society

>ee phd

where's my "synthetic blood" faggot?
and why do i still get to ride buses for free after donating ?

WATCH IT SPOONER