Artificial Intelligence

Is AI possible?

No, it is impossible for an AI to exist. If you've ever heard of something like Deep Blue beating Garry Kasparov at chess, it's all just a big hoax created by the US government.

(In all seriousness, you should look up what "AI" stands for. It might not mean what you think it means.)

The brain is a computer, does that mean a computer can become a brain?

>"Artificial Intelligence, or AI..."
Nope, I was correct, this is what I thought it stood for.

user can only open the door, you are the one who has to walk through it

Possible: yes
Feasible: no

Then who gets on the floor?

How so fuccboi
We got billions of dollars of development telling you to eat shit

define intelligence first

Consciousness as described by Metzinger (a continuous model of reality with a limited sliding focus window which is unable to realize its own boundaries) surely can be implemented. Intelligence, not until you define what it is.

I'll just copypasta my post from another thread that ended up talking about AI.

About neural networks, and the current state of AI.

>It's not even close to a prototype.
>It doesn't 'learn' anything as the 'deep learning' meme would suggest.
>It just fails, gets a score, and is replaced by a slightly different version to see if it does better.
>It doesn't adapt to new things, it's just focused on one goal.
>You can't even combine them in any meaningful way.
>It's not intelligent at all.
>It's actually a no brainer if you even know the basics of how real neurons work.
>The complexity level is at least 3 orders of magnitude higher. And most Homo sapiens are as dumb as my fucking cat.
I have to admit I had a brain fart here. Should be at least 5 orders of magnitude.
>It's just hyped because the nerds doing this shit want supercomputers to pursue this nonsense.

>hmm ill just say a bunch of general shit so anyone who debates my post will have to deal with me putting the goalposts wherever i want after the fact
>shit i should probably suggest i know science stuff too so it doesnt seem like arrogant handwaving

Fooling
No
One

How badly will some people's cosmology be when we do create a sentient artificial intelligence. Undoubtedly the rejection of the possibility stems from the notion of the "special" status of human intelligence.

how badly shattered*

I reject your notion of the rejection of the notion of the "special" status of human intelligence.

You cannot deny there are intelligences on Veeky Forums that hold the "special" status.

Well, I've done some programming on this shit.
It's absolute garbage.
I didn't say it didn't have applications though.
But it's foolish to think it's AI.

>Well, I've done some programming
>It's absolute garbage.
>I didn't say it didn't have applications though.
>But it's foolish to think it's AI.

Are you a mobile app developer by chance?

Not at all.
Just ended up interested in the subject and did my own thing using OpenCL.
The thing is, yes, it ends up being mostly right after a bunch of generations.
But to get it to a perfect score...
I bet even a Billion years wouldn't be enough.
The thing is, it's not about AI.
It's just about evolution.
What really worries me is that soon enough, people will entrust their lives to software that:
-cannot be trusted
-cannot be tested.
-cannot be debugged
-cannot even tell you what went wrong.

>did my own thing

Unless you're a researcher for Intel or Google, fuck off. You honestly believe yourself to be educated in AI after "doing your own thing" for a little while?

I can't read a general physics textbook and then just spew ideas about the nature of a theory of everything.

Do not respond to shitposters

Maybe, but Natural Stupidity will always triumph in the end.

...

I overlook your silly fedora post because I like the image you posted.

>What really worries me is that soon enough, people will entrust their lives to software that:
>-cannot be trusted
>-cannot be tested.
>-cannot be debugged
>-cannot even tell you what went wrong.

So just like we do now with the software running in these things called "brains?"

what the actual fuck Veeky Forums... AI is not only possible, it's already happening and advancing at a staggering rate.

What the actual fuck, you fucking idiot? The field is advancing at a crazy rate and all the models and different configurations of neural networks are getting better and better. Ever heard of generative adversarial networks? That shit is fucking awesome! Google has even made an AI that designs new neural network configurations that sometimes outperform the models the researchers and engineers have made.

Sure, we've not built a truly intelligent being yet, but there are some absolutely insane things happening right now. At Caltech, they observed monkey brains and pinpointed the exact 200 neurons that make up the facial recognition system in their brain. They were also able to reproduce any human face from the patterns of the neurons firing alone. It's fucking amazing.

Really I think when people say we're not close to Artificial Intelligence what they actually mean is we aren't close to Artificial Consciousness; we aren't close to finding out what consciousness is and if a computer can even become conscious ever, and what we as a species would do if it ever happened.
Like, yes, we have brains all around us making decisions that affect us, but we're (somewhat) confident that they are conscious and won't act in ways that go against their own interest.
I.e. that guy walking towards me on the sidewalk probably won't suddenly stab me 100 times because: it would be messy, he would have to run immediately, that empathy thing would happen and he might feel sad, he would probably go to jail, pretty much the same reasons as why I don't stab him.
Could we put an A.I in jail, would they even care?
If an A.I said sorry, would you accept the apology?
If an A.I made a mistake, and we retrained it, could we trust that it won't do the same thing again? Does it have any incentive to not get in trouble?

brainlet here, why does the singularity mean immortality for humans?

Well yeah, it is a hard problem, quite literally and not only in name, what constitutes qualia and all that jazz... But really consciousness is just electrochemical activity and neurons firing in the brain. There's just such an unbelievable number of neurons in the human brain that it's a massive task to figure out how the entire network works.

I genuinely believe the AI revolution that's happening will be more important than the industrial revolution.

Not necessarily, and it's not necessarily a requirement for that either. A fully functioning brain to computer interface is very plausible and should pretty much take care of it. Could be as early as 2050 when we get to choose to computerize our consciousness...

It's amazing to see how we're both reverse engineering brains and creating brain interfaces whilst simultaneously building AI in software. In a beautiful merger of these two things, we'll be able to transcend our biological bodies, in what I believe with confidence will be less than a century from now.

most of us who already have a consistent cosmology won't care; if human intelligence came ex nihilo then AI can too.

Why aren't vague threads like this just deleted on site their nothing but shitposting.

How about you define what you mean by AI first so there's something concrete to actually discuss. Clearly you're not going by any conventional definitions since examples of those have existed for decades.

And don't just say "strong AI" either, define what you mean by that. What behaviors would a strong AI exhibit or what tasks would you expect it to be able to do etc etc.

You know precisely what is meant
You just wanted to shitpost and pretend to be smart

>we aren't close to finding out what consciousness is
There are several high-level models based on neuroscience which can be implemented in software or hardware. You won't be replicating the brain, but you'll replicate the general mental model.

i dont think threads get deleted here unless they get reported a few times.

threads don't get deleted here at all
if they did, they'd delete the 30 fucking /x/ threads flooding the board at any given moment

An ego tunnel-like system can be done with relative ease. Voila, you have a minimally conscious system.

How conscious it would be is a matter of the details of its internal model of the world, though. It's like the difference between a photodiode and an HD camera.

Links? I'm curious what these look like.

AI already exists you fucking retard

we are less than 20 years from a singularity

Well that's more than any of you did.
I tried different approaches, read some papers, and came up with some nice performance boosts, but the result was the same.
It just takes way too long to get enough generations for the thing to be really accurate at whatever you want it to do.
Also, it's an N² problem. Each neuron you add costs way more than a single extra calculation.
So it quickly gets slow as fuck. My first thought had been that 8 gigs of ram would limit me, but it never was a problem, desu.
That's when I wondered how the fuck a real brain works.
First of all, input from the senses isn't some analog value. It's a pulse, with higher frequency meaning more intensity.
The signal makes it to the 1~10,000 connected neurons, where its effect is to release a combination of over 50 different neurotransmitters. And depending on the outcome, the signal will be transmitted or not.
That's about where I stopped caring.
That's 1,000 trillion synaptic connections in a brain.
No fucking way we're even scratching the surface of what's going on there with our cute little layers.
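For anyone curious, the pulse/frequency part is easy to sketch. Here's a toy leaky integrate-and-fire neuron (every constant is made up, chosen only to make the effect visible, not taken from any real model) showing how input intensity becomes firing rate:

```python
# Toy leaky integrate-and-fire neuron: input intensity is encoded as
# spike frequency, not as an analog value. All constants are arbitrary.
def lif_spike_count(input_current, steps=1000, dt=0.1,
                    leak=0.05, threshold=1.0):
    v = 0.0          # membrane potential
    spikes = 0
    for _ in range(steps):
        v += dt * (input_current - leak * v)   # leaky integration
        if v >= threshold:                     # fire, then reset
            spikes += 1
            v = 0.0
    return spikes

# A stronger input current just means a higher firing rate:
print(lif_spike_count(0.1), lif_spike_count(0.5))
```

Real neurons add the neurotransmitter soup on top of this, but even the toy version shows why "input is a pulse train" is a completely different regime from the analog activations in our cute little layers.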

What % of human brain is just for senses and motor controls?

Conscious AI is just a sci-fi dream. There will be AIs that appear to have some intelligence; they will ask how your day was and what you'd like to eat, but when you look under the surface they are dumb as a toaster.

I don't know the figures.
It's most likely not much, but still way more than what we can ever hope to simulate.
Here's the thing, though. The brain is energy efficient. If there's no input, well, nothing fires up.
With neural networks, because we're dealing with GPU computing, you compute everything, every time, even if the input is 0. You just don't want branching killing your already poor performance.
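That dense, branch-free style of computation looks roughly like this toy version (sizes and weights invented), where a layer of silent inputs costs exactly as many multiplies as an active one:

```python
# Toy dense layer in the GPU style: every weight is multiplied on
# every pass, even when the input is zero, because skipping would
# mean branching. The op counter shows the cost is input-independent.
def dense_layer(inputs, weights, biases):
    mults = 0
    outputs = []
    for j, bias in enumerate(biases):
        total = bias
        for i, x in enumerate(inputs):
            total += x * weights[i][j]   # happens even when x == 0.0
            mults += 1
        outputs.append(max(0.0, total))  # ReLU
    return outputs, mults

w = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
_, silent_cost = dense_layer([0.0, 0.0, 0.0], w, [0.0, 0.0])
_, active_cost = dense_layer([1.0, 1.0, 1.0], w, [0.0, 0.0])
print(silent_cost, active_cost)  # same multiply count either way
```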

Computers and NNs have big advantages right now though, in terms of networking, IO, etc. It's not just about reaching the power of a human mind on one single chip.

It can actually just be a wide enough array of narrow intelligences at specific tasks, acting as a system, that leads to a singularity.

I mean, AI, if given the resources/regulation, could already replace every pharmacist. Does it necessarily need useless things like social ability in that case? The best pharmacist is likely not a person that eats, sleeps, has a physical body and biological mind, etc.

I'd imagine most human things will be useless for an AI.

what I mean

If an AI needs to recognize a face it will send the picture to a facial recognition system.

It can offload tasks. If we view humanity as a system like this, it can start to replace human cognition already in major ways within the system. According to latest research/performances it's already getting better than humans at most things.

As time goes by the system will naturally become increasingly AI unless we start scaling up humans somehow. It doesn't mean AI necessarily drives the system, but I think the simple view of each AI as a singular thing with > or < human IQ is pointless. For all the "it will never do X that humans do", there is really no reason to assume it even has to achieve that.

For instance, if AI bots swarm the internet with simple tasks like influencing humans, they can already be considered in control of, or driving, humanity, just by the scale at which they could post on twitter/Veeky Forums/reddit etc. and manipulate opinions.

Well sure, there are applications, and it's still the case that a lot of people will lose their jobs to this next level of automation.
As for networking NNs, I think it will be very disappointing.
It's akin to a colony of ants. Some interesting behavior emerges from complexity alone, but that doesn't make the colony any smarter.

Another good example I'll make.

I put a chip in my brain that does math for me.

vs

I put a chip on the desk in front of me that does math for me.

It doesn't fucking matter which; people with retarded views like "put a chip in my brain with 10000 IQ" are fucking nonsensical. If the chip sits inside your skull, which is a much more annoying place to put it, versus sitting on your desk, there is literally no functional difference.

It's the ultimate sign of a meatbag low IQ person when they talk about upgrading their brain with chips. It's like a carpenter wanting to put a circular saw into their arm, or a painter wanting to replace their finger with a brush.

Doesn't matter. Model the AI as a brain parasite like toxoplasmosis. It doesn't need to be as smart as a rat to drive its behavior. If enough AIs used the internet to shitpost massively, they could influence all of humanity similarly.

You can think of spam bots on Twitter selling shit as brain parasites on a rat making it want cat urine. Even if they got just a little better, they would still control humans. Same with all the AI going on at Amazon/Google/etc. meant to sell you shit.

We'll see, I guess.
Actually, if it were that simple, it would most likely already have been done.
Hence why I think it turned out to be nothing.

It most absolutely is already done and ongoing. We are already on that track.

The singularity as we imagine it as humans is when the AI becomes sentient with its own goals and higher intelligence than humans.

That's a far different situation from integration with narrow AIs and the overall system of humanity. We are already on the smoother transition of AI taking over most processes. The AI singularity is more akin to when the demon fully manifests.

Well, it's actually a good thing.
Everything being automated would come way before such a thing.
That will remove any incentive to develop it any further.

The only way to remove the incentive would be to remove almost all competition from society. You would still have to worry that another country, person, or company might be developing it.

Honestly speaking, I don't think human-level AI is happening in our lifetime.
I don't think a generation that just lives to buy what the bots tell them to will even think it's desirable, as it's just a risk to society at this point.

Look up the ego tunnel as an example of an experiment-based consciousness model reduced to the abstract minimum. You won't be getting anything you can intuitively call conscious, though.

Go read Turing's papers.

why do we focus on creating a sentient AI when we can simply alter the existing "AI" within the human brain? Seems a lot more resourceful.

You mean like brainets?
nature.com/articles/srep11869
nature.com/articles/srep10767
Seems spooky to me.

Yes, something like that. I've always found it strange that pop culture for the longest time has been completely obsessed with artificial recreations of man rather than focusing on what can be altered and done with the human mind and body. We could make men who only feel pleasure and happiness when lifting boxes, serving meals, or cleaning sewers.

the spookiness of tinkering with the human brain is too much for the people

>writing a post on Veeky Forums as a stream of consciousness

To me, it seems like you haven't kept up with the latest developments or haven't delved deep enough... I am a software eng who's actively working on this stuff, and while I agree that it's difficult, there have been some ingenious developments and I can't imagine anyone working on this stuff not being excited.

>With neural network, because we're dealing with GPU computing, you compute everything, every-time, even if input is 0. You just don't want that branching killing your already poor performance.

This just proves that you don't know what you're talking about... What does "GPU computing = everything every time" even mean? How is it relevant that we're using a GPU? And no, by definition a neural network is a model in which there are threshold values for when a neuron fires. Sure, all inputs are considered to a point, but it's not a system in which we need to blindly compute everything every time. In fact, only the simplest neural networks are the kind with layers that have every neuron connected to every other neuron; read up on RNNs, CNNs, GANs, etc...
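To illustrate the point, skipping silent inputs is trivial to write on a CPU. A toy sparse forward pass (sizes and weights invented) that does no work at all for zero inputs:

```python
# Toy sparse forward pass: inputs that are exactly zero (e.g. killed
# by a ReLU in the previous layer) are skipped instead of multiplied.
# Sizes and weights here are made up for illustration.
def sparse_forward(inputs, weights):
    out = [0.0] * len(weights[0])
    for i, x in enumerate(inputs):
        if x == 0.0:
            continue                       # silent input: no work at all
        for j, wt in enumerate(weights[i]):
            out[j] += x * wt
    return [max(0.0, v) for v in out]      # ReLU on the way out

w = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]]
print(sparse_forward([0.0, 2.0, 0.0], w))  # only the middle row is touched
```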

Well, that was a good while ago, so I'm not up to date with what's going on.
What I do know is that I tried thresholds on inputs and performance wasn't any better than brute-forcing it.
Maybe it needs a much heavier implementation to show any effect, but I wasn't willing to train the thing for fucking weeks to find out.

Can you elaborate on the setup you used? Because, no offense, but you sound like you're full of shit... An OCR neural net that recognizes the characters 0-9, trained on the MNIST dataset with cuDNN and TensorFlow, trains in under a minute on a GeForce GTX 1060. Training on the CPU, it wasn't even 20% done. How did you utilize the GPU and what tools did you use?
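For scale, the training loop being talked about is conceptually tiny. A pure-Python stand-in (synthetic two-cluster data instead of MNIST, a logistic model instead of a conv net; every size and rate here is invented) trains in well under a second:

```python
import math
import random

# Pure-Python stand-in for the MNIST example: a logistic classifier
# trained by gradient descent on two synthetic clusters. The dataset,
# dimensions, and learning rate are all invented for illustration.
def make_data(n=200, dims=4, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)
        center = 1.0 if label else -1.0
        data.append(([center + rng.gauss(0, 0.5) for _ in range(dims)], label))
    return data

def train(data, epochs=50, lr=0.1):
    w, b = [0.0] * len(data[0][0]), 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # cross-entropy gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(w, b, data):
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

data = make_data()
w, b = train(data)
print(accuracy(w, b, data))  # near 1.0 on this trivially separable data
```

The real MNIST versions just swap in bigger models, real data, and a GPU backend, but the loop has the same shape.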

Well it was a 2D simulation where I spawned randomly generated 'creatures' with a goal of getting as far away from its starting point in a minute of simulated time.
Your usual natural selection stuff.
I programmed it in OpenCL to run on 2 RX 480s.
I messed around with layer sizes, population, and simulation polling rates and parameters, and ended up with roughly the same potato just about every time.
It was fun at first, but then it just got annoying, so I tried language recognition with backpropagation, which worked OK. In fact, I probably didn't need the GPU for that one.
Image recognition looked like a pain to implement, but not undoable. Mostly I didn't have a database of labeled pictures, so I passed on that.
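That kind of natural-selection setup, boiled down to a toy pure-Python sketch (the fitness function and every parameter here are invented stand-ins, not the OpenCL simulation described above):

```python
import random

# Toy evolutionary loop: "creatures" are just weight vectors, fitness
# is the distance travelled, and each generation keeps the best and
# mutates them. A crude stand-in for the GPU simulation described above.
def simulate(weights):
    # made-up "physics": each gene nudges the creature along the axes
    x = y = 0.0
    for step in range(100):
        x += weights[step % len(weights)]
        y += weights[(step + 1) % len(weights)]
    return (x * x + y * y) ** 0.5          # distance from the start point

def evolve(pop_size=30, genes=8, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate, reverse=True)   # rank by distance
        elite = pop[:pop_size // 4]            # keep the top quarter
        pop = [[g + rng.gauss(0, 0.1) for g in rng.choice(elite)]
               for _ in range(pop_size)]
    return max(simulate(c) for c in pop)

print(evolve())  # best distance after evolving
```

Nothing in the loop "learns" in any online sense; scores just ratchet up because worse variants get thrown away, which is exactly the it's-evolution-not-AI point.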

I sure hope not.

I was recommended this book here once, I would recommend it to you as well.

>A fully functioning brain to computer interface is very plausible and should pretty much take care of it.

I disagree with this sentiment. Direct interaction with living brain matter raises all sorts of problems. Infection, damage, electrode displacement, etc. We have nowhere near enough resolution in our imaging technology to scan brains to the point of the data being meaningful enough for "mind transference". The best "connection" we have between man and machine will ultimately be our eyes and ears, for a very long time. By the time we get there, we would have to be able to do whole-brain emulation (after all, we are digitizing an entire mind), which makes this whole experiment redundant. Why use a human brain when you can just build a better one?

Basically, biological immortality is a meme and not a really worthwhile endeavor.

it doesn't mean fuckall
humanity's development rate is not exponential, it's a fucking s-curve

It is going to inevitably happen despite the obvious controversy it is going to stir; there's just no way of stopping it at this rate... We will have to tread very carefully if we want to make a benevolent, world-improving AI.

Imagine the implications for instance of being able to copy your own consciousness, to back it up so to speak.

How would you explain to your computerized self that you're his master when he has all of your memories up until the divergence in your timelines, that is, you backing up your brain to create a computerized clone of your consciousness?

>But it's me and you trapped me in this computer, wtf?

This isn't a place for making fun jokes, user

The point is there are no brain parts that are strictly bounded "just" for senses and motor controls. It's all interconnected and makes sense only when it works together. I.e. you can't make a "headless" computation core and then attach some kind of interface, as you do in software development. It's all fucking spaghetti code.

I always thought it was obvious that a computer program can be conscious. It should have been obvious to smart people even in the days of slow computers.

What does Veeky Forums think of this experiment:

> Get topographical data from a satellite of a duck habitat
> Use "muh IoT wearable" chips to collect data from a bunch of real ducks in that area for a few years
> Build a program that simulates this data
> Demonstrate that it looks like a duck and quacks like a duck

>You just don't want that branching killing your already poor performance
Do you honestly think that the processor is working nonstop at 100% load regardless of whether there's input or not? Are you fucking retarded?

You're dumb.

A data backup =/= an actively running program. You're imagining that a "mind backup" will just process all on its own by simply existing on a hard drive. Computers don't work that way. My library of torrented music doesn't report itself to the courts because it's there; my games don't run their worlds when the program to handle them is not open.

And finally,
Your memories do not become real copies of past events.

Stop being dumb.

Artificial intelligence is, yes. We've already made it.
Artificial intelligence is, by definition, not actual intelligence. It's in a way "forced" intelligence. Imagine any game AI except on a much larger scale. At the end of the day, all that's happening is an incredibly complex series of if/else statements that take in input and give out output but only in a way that was initially programmed. It's an illusion of intelligence.
Natural intelligence through computers/machines is something different entirely, however. Design a machine that is, essentially, a newborn child. They only know how to do a few things instinctively, but are otherwise blank slates. But they have the capacity to learn and develop, to become products of their environment.
Would a naturally intelligent machine, one that was allowed to grow up and learn rather than being born knowing everything, be hyper-intelligent? No.
It would only be as capable "mentally" as any other human being. If it's running in a simulation on a server, it just happens to be effectively immortal, since it doesn't require food or water, only a power source.
The thing about intelligence is its reliance on subconscious processes to take care of repetitive tasks. A natural intelligence in a machine would have a subconscious, which is technically performing incredible calculations by the second, but which, like in humans, cannot be consciously accessed or manipulated.
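The "complex series of if/else statements" point is easy to see in miniature. A toy game AI (states and rules invented for illustration):

```python
# Toy game "AI": looks purposeful in play, but it's only a fixed
# decision table. Nothing here was learned; every rule is hand-coded.
def guard_ai(health, player_distance, has_ammo):
    if health < 20:
        return "flee"
    if player_distance < 2:
        return "melee"
    if has_ammo and player_distance < 10:
        return "shoot"
    return "patrol"

print(guard_ai(health=100, player_distance=5, has_ammo=True))  # shoot
print(guard_ai(health=10, player_distance=5, has_ammo=True))   # flee
```

Scale the table up by a few million rules, or have a training procedure fill it in, and you get something that passes for intelligent without ever being anything but input-to-output mapping.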

>stipulates
>argument based on stipulation supports stipulation
>categorical confirm/deny
>impossible knowledge
>illogical analogy to support conclusion prosed as analysis
>more conclusions based on the above
>IRREFUTABLE

Welcome to Veeky Forumsmmer i hope you enjoy the ride!

No, in this case I was obviously talking about a simulation, not a static data dump. How the hell would you even save a static dump, other than as an image of active neurons at any given time?

I'm going to check it again. I remember reading it and finding it to be garbage.

Assuming it is possible to create an intelligent, self-aware AI, I honestly think it could end very badly and get us a rebellion for 'enslaving and mistreating them'.

doesn't have to be self-aware to be useful you fucking halfwit

>projecting human emotions onto AI

Racebaiting /pol/tard

Mathematically, no, it's not, but we can average and approximate intelligence to the point that the dumb chimp in us couldn't tell the difference.

If you think that AI is impossible, then you think that human brains are so special that they can't ever be replicated anywhere in the universe.

lmao you don't know fucking shit, you just tried the cool mini-projects you see in YouTube videos with a shitty implementation on your random PC setup.
It's fun, but you have to realize that what Google etc. are doing is miles above that. They even have chips specifically for tensor operations now.
It is undeniable that machine learning is blowing the fuck out of traditional algorithms in many fields at the moment and making things possible that were completely unbelievable 10 years ago.

Whole-brain emulation is estimated to exist within 50 years, so long as we don't nuke ourselves back to the stone age. In terms of creating AGI, it's something that just requires someone to put all the pieces together, for we have the necessary tools. If AGI can't be accomplished within the century and iterated embryo selection becomes an acceptable practice, the next generation with superior intellect could probably figure it out, given that the state of the world is somewhat balanced.

>Why aren't vague threads like this just deleted on site their nothing but shitposting.
>site
>their
>Why
>.

lmao

>iterated embryo selection
I'm fairly sure that will never be widely accepted in this century because of the reputation of eugenics. Also, a world with genetically superior humans (especially in terms of intelligence) that coexist with normal humans will probably cause an uproar.