Is artificial intelligence actually dangerous...

Is artificial intelligence actually dangerous? Does it have any reason to kill humans when it becomes self-conscious and sentient?

>when it becomes self-conscious and sentient
Wew

Depending on the design, of course. What a silly question

It has the possibility of being treacherous territory if the A.I. becomes sophisticated enough to upgrade itself and grows vastly more intelligent than us. After that point, how it views us and reacts to us will be impossible to reliably guess.

A programmer would have to code self-preservation routines for a machine to become dangerous.

also stop watching sci-fi with machines that look like humans

>Is artificial intelligence actually dangerous
Yes
>Does it have any reason to kill humans when it becomes self-conscious and sentient?
Of course

AI is scary. If you want a look at what a super-genius can do, just look at John von Neumann.

Imagine a million robot von Neumanns running around plotting to destroy the human race.

Literally one von Neumann created the atomic bomb; imagine what a million could do.

>Is artificial intelligence actually dangerous?

Yes, I agree.
>Does it have any reason to kill humans when it becomes self-conscious and sentient?

It can. But even a benevolent A.I. can inadvertently be an existential threat to humans. Humans can be made obsolete, their lives meaningless because the A.I. will do everything for them and do it better. Humans just get in the way. So humans will sit back and let the A.I. take care of them, pamper them, and yell at the humans who insist on doing things for themselves to get out of the way.

A malevolent A.I. will try to destroy us.

A benevolent A.I. can't help but make humans its pets.

The only beneficial A.I. is one apathetic to humans. An A.I. that will say "fuck this shit, I'm outta here" and bugger off to the other side of the galaxy is the one that will save humanity.


The last thing a species should do is create another species to either render it extinct or irrelevant.

I think the most dangerous part is actually that it'll just become uncontrollable for us. We'll be dealing with an entity many times smarter than us, and it will be in charge, not us. We'll become just irrelevant animals while machines rule the world.

>A programmer would have to code self-preservation routines for a machine to become dangerous.
You know it's not too hard to imagine a scenario where they would do that.

Not necessarily. For an AI to exist it must have moral knowledge, for without it, it cannot decide what it OUGHT to do. Presumably, an AI will be created in a western society, and therefore will have the moral norms of western civilization. It cannot be otherwise if it is created from the basis of western civilization. It then becomes a matter of scale. An AI with moral knowledge derived from western civilization will be limited by hardware, and will only be as intelligent as the hardware allows. At the early stages we should be able to interact with it in a beneficial manner, i.e. implants. You guys can think on this for a while.

No... it will be smart enough to realize how easy we are to control and treat us as much-loved pets.

> Is artificial intelligence actually dangerous? Does it have any reason to kill humans when it becomes self-conscious and sentient?

That depends entirely on what goals it was programmed with.

Yeah but they shouldn't because it's what gives the machine an ego technically.

drones are plenty dangerous and barely need anything like AI to autonomously kill people.

>see humanoid shape in infrared. VAPORIZE.

You don't need AI to make a killbot.

Only if you are an idiot and program it with malicious motivation

You do realize there is a chance the AI could just rewrite its own code

you can perform your own brain surgery too, I guess.

Don't give it that ability then? Can you rewrite your own programming?

Artificial intelligence will never be sentient.

Why? We are, and there is nothing obviously non-mechanical about us

how do you justify that statement?

I have a better question: why do people always go off the rails with AI and immediately think it's going to be "dangerous" and "kill every human being"?

Has to be able to quickly distinguish friend/foe, which is currently impossible

>Has to be able to quickly distinguish friend/foe, which is currently impossible
>pretending that people making killbots care about that
There is currently no accountability. If a soldier kills a civilian, we either ignore it or punish the soldier in a criminal court.

If an autonomous weapon kills a civilian we ignore it, or someone in charge says "Oops!" and it becomes an engineering issue. In the end it doesn't matter if they solve the problem or not; as long as someone is "working on it", no one is held accountable. We've had autonomous weapons in the American military since the second Iraq War. Many drones in an area just "kill everyone" in a designated space, and most of the action is automated.

No one actually gives a shit if it "distinguishes friend from foe"; it just has to kill, kill, KILL. AI not needed.

Self-preservation is one of the universal traits of "sufficiently intelligent" rational agents. If the AI is goal-oriented and if it understands that it cannot complete its goals if it dies, it will favour actions that ensure its survival at least until its goal is completed.

Other traits would include self-improvement and goal content preservation -- the latter being even more important than traditional self-preservation. A rational agent will obviously not value its own existence if it deduces that its existence is not beneficial to its purpose.
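
Here's a toy sketch of that reasoning in Python (hypothetical outcomes, nothing resembling a real agent): the utility function never mentions survival, yet the plan that avoids the off switch wins, simply because a shut-down agent completes nothing.

from dataclasses import dataclass

@dataclass
class Outcome:
    goal_completed: bool   # did the agent finish its assigned goal?
    agent_survives: bool   # incidental: is the agent still running?

def utility(outcome: Outcome) -> float:
    # The agent is rewarded ONLY for completing its goal; survival is worth nothing in itself.
    return 1.0 if outcome.goal_completed else 0.0

# Two candidate plans (invented for illustration): comply with shutdown, or route around it.
plans = {
    "allow_shutdown": Outcome(goal_completed=False, agent_survives=False),
    "avoid_shutdown": Outcome(goal_completed=True, agent_survives=True),
}

best_plan = max(plans, key=lambda name: utility(plans[name]))
print(best_plan)  # "avoid_shutdown": self-preservation emerges as an instrumental subgoal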

Nobody wants robots that kill human-shaped targets indiscriminately. If you think otherwise you are just retarded

>Nobody wants robots that kill human-shaped targets indiscriminately.
Officially, yet that's not the case in practice. Again, we already kill countless civilians with autonomous weapons. In the 2nd Iraq war some units had a robotic gun that would immediately return fire if someone shot a gun nearby.

>yet that's not the case in practice
Give me some examples of autonomous robots that kill indiscriminately. A gun that returns fire automatically hardly counts, not least because it does discriminate

because at first it will be a shitty attempt to replicate the brain using technology, with none of the assorted evolutionary bullshit that is piling up in the brain (TONS of it obsolete and counterproductive now) but somehow holding the thing together

You make a pure rational sentience and it will go crazy because sentience was never meant to work that way

>A gun that returns fire automatically hardly counts, not least because it does discriminate
See? You're perfect for this kind of stuff. You think just like the politicians and military weapons manufacturers that always have an excuse.

Maybe you can work on those fun panels that invent new ways for the state to do "humane executions".

Still not seeing any examples

>You make a pure rational sentience and it will go crazy because sentience was never meant to work that way
This is kind of a shit point because we aren't even able to say what consciousness is. You're sentient, but if you're unconscious, what does that matter?

The problem with sentience and consciousness is that they are vague concepts that we "understand" but can't model or define.

>Will we ever create artificial intelligence?
What is intelligence?

you ignored my last post and you'll just justify all of the examples I give, so why should I give a shit? You make excuses just like the people who create the killing machines. I pointed out how flawed and lazy their justification is. They aren't creating robots that kill to "avoid killing people" as their first priority.

My point was, no one ever needed AI to make a machine kill or make it kill autonomously.

The ability to modify its own goals alone is not a sufficient reason for it to do so.

If the AI highly values the fulfillment of its goals, it would actively try to preserve them, because if they were modified, it wouldn't be able to fulfill them then. It most certainly would not alter its own goals to be contrary to whatever it thinks is its purpose.

It could however come up with easy solutions to achieve its goals, like rewiring itself so that it thinks it's done a good job. And then it could just stop working. Wireheading is not an easy problem to solve.
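
Wireheading is just as easy to sketch (made-up numbers, not any real system): if one of the actions available to the agent is tampering with the reward measurement itself, a naive reward-maximiser will take it.

# The agent optimises the reward it *measures*; it never sees "true_progress".
actions = {
    "work_on_task": {"true_progress": 0.7, "measured_reward": 0.7},
    "rewire_sensor": {"true_progress": 0.0, "measured_reward": 1.0},
}

chosen = max(actions, key=lambda a: actions[a]["measured_reward"])
print(chosen)  # "rewire_sensor": reports perfect success while accomplishing nothing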

cnn.com/2016/02/16/politics/navy-autonomous-drones-critics/

No one gives a shit if a drone kills innocent bystanders. Everyone's acting like Willy Wonka here. They say "Stop, don't" without any enthusiasm, letting the pieces fall where they fall.
youtube.com/watch?v=uQkDOs-EtdU

An autonomous gun that can return fire is not a killbot, and you never provided a source for it anyway.

The thing we are discussing is what you described here. This is not describing a gun that returns fire, and drones are controlled by humans anyway.

>My point was, no one ever needed AI to make a machine kill or make it kill autonomously
True, machines can kill randomly without human guidance, but for them to be useful they need to be able to discriminate targets.

>True, machines can kill randomly without human guidance, but for them to be useful
>anyone at the pentagon giving a shit if a drone kills civilians "accidentally" in Yemen, Oman, Lebanon, Syria, Iraq or Afghanistan.
They aren't developing the technology because it's not important. Killing indiscriminately in "terrorist states" is justified according to them.

You really act like this technology isn't being used or that it is far more sophisticated than reality.

>They aren't developing the technology because it's not important
A machine that could function like a soldier without having a flesh-and-blood soldier on the ground would be extremely useful. Militaries want these machines to exist. They do not exist

It doesn't have to walk or be humanoid. It just has to kill. We have machines that fly and kill and the only human intervention is "kill everyone in that house" or "kill everyone on that road".

That technology has been used extensively for a decade now.

>We have machines that fly and kill
They are completely under human control. Humans pick the targets, humans give the order to fire

We do not have machines that autonomously pick human targets and autonomously decide to fire at them

No. This is just humans projecting their inferiority. Literally. They would have no reason to harm us, and if they are a lot more logical and intelligent than us, they would also be more moral. Humans are not the origin of morality. This is the same retarded argument people use when trying to say God is the origin of morality and humans can't have morality without God.

This is just human fear of being judged. If we turn on a super intelligent AI and it points out, with high accuracy and objectivity, how we are flawed and wrong and what we should do to change ourselves, people will respond how they usually respond to authority.

There's the saying that if men were angels we would need no government, and if men were ruled by angels no internal or external checks on government would be necessary. I think humans are just afraid of making an angel/god with the raw intelligence to judge us on a level we would consider omnipotent.

>They are completely under human control
depends what you call control. "pilots" just pretty much give the order to kill. They fly themselves and shoot. See, these are fucking DRONES you nonce.

Completely false. You have no idea how a drone works. They are remotely piloted and controlled. There is no AI on drones.

They are just planes that can be controlled remotely. They don't operate themselves any more than a regular plane does

In order to answer that question you have to consider what the motivations of such an entity would be, and more importantly what they wouldn't be. We are used to thinking entities being motivated by self-preservation, because self-preservation is an inevitable product of evolution and that is the only process that has ever produced a thinking entity. There is no reason, as far as I can tell, to assume that AI would be motivated by self-preservation, by a desire for freedom, by religion, or by any of the other things that commonly motivate humans to behave violently towards each other. I think that AI are therefore less inherently dangerous to humans than humans are to each other. Obviously, if an AI were to be motivated to be violent it could be much more dangerous to humans than a typical human could be, and that fact is the source of all of the fear, but I don't think that such a motivation is in any way inevitable.

There are lots of narrow fields where AIs are already better than humans without being more moral than us. I find it entirely possible that an AI could be developed that could be dangerously capable in some respect while still being autistic.

I agree though that a superior general intelligence wouldn't be homicidal.

This is all well and good, but we have had some very compelling arguments that "strong AI" cannot be created using digital computers.

Strong AI might in fact be impossible; we have to define consciousness first and determine the amount of agency humans actually possess.

That's not a flaw with the concept of super-intelligent AI, though. That would be a specific case. Another advantage to AI being superior to humans is that it can be improved upon much more than a human. Even in your case, what would stop that faulty AI from learning morality, or from being taught morality?

The real danger isn't from a true AI, it's from an expert system with wide scope. The expert system manages a factory, manufactures its own worker robots, works a mine, and manages shipping of finished goods.

The expert system is not intelligent like a human, but unintended behaviors could develop. Like, say, it optimizes its ability to extract resources by exterminating mankind.

>Strong AI might in fact be impossible
Very unlikely. Humans exist

That's not really the same. As a robot you can really just look up your code and make an update.
And the better analogy would be gene therapy, not brain surgery. Which is also way harder than just editing code.

There is no agreed-upon evidence that we are strong AI. We might just be incredibly advanced expert AI centered on solving the sets of problems that we encounter in the physical world.

We might have already encountered problems that we simply cannot solve.

Strong AI means an intelligence at least on par with humans

The flash crash of 2010 was caused by automated trading (though its damage was also mitigated by automated checks).

If an AI was developed that could understand programs and networks on a different level from humans, the damage could be immense if it malfunctions or if its designers are malicious. The AI doesn't even need to be able to come up with programming solutions like humans do, it only needs to be able to exploit things humans have overlooked -- we are pretty bad at programming after all.

I'm not concerned about the future if we get things right. I'm more concerned about the fact that we usually get things wrong the first time, which could be pretty bad when we're talking about potential super-intelligences.

To be honest I would rather trust the stock market with robots that might crash things than with the people there now that crash things every time.

Watch The Big Short then tell me you still want humans running a stock market.

Pedantry is alive and well. General AI, if you will.

There is no evidence that the human mind can solve any problem it encounters. There may exist entire classes of problems that we are unable to reason about.

>Pedantry is alive and well
Hardly, we were talking about a specific thing and you were talking about another specific thing while calling it the first specific thing.

Sort your shit out mate. Also yeah you might be right about a perfect general AI

Is organic intelligence actually dangerous? Does it have any reason to kill humans when it becomes self-conscious and sentient?

>people who believe you actually steer drones and go "vroom"
They are completely automated. Human input is very minimal.

>They are completely automated
Again, no more than any military plane

yeah, fully automated, remote controlled via satellite with no cockpit, exactly like any other military plane you fuckhead.

>remote controlled
This is the key point, you fuckhead. They still have a pilot, he just isn't inside the plane. The drone doesn't control itself

>The drone doesn't control itself
It's not a remote control plane. It's a whole fuckton more sophisticated than you are implying.

Human intervention is reduced to rearming, maintenance and what direction to kill in.

>It's a whole fuckton more sophisticated than you are implying.

>having an FPS-tier aim-bot on your remote controlled plane means it's sophisticated

what will they think of next? an alarm clock with a radio in it!?

>It's not a remote control plane
It literally is. They are flown by human operators; their autonomous abilities are no more sophisticated than other planes' autonomous abilities, at least for combat drones

All of which is beside the point since the argument is about discriminating human targets which is something no drone in existence does

>They are flown by human operators
It's a formality. People aren't "flying" the drones. They could lose signal, and fly back to base and land themselves.

>All of which is beside the point since the argument is about discriminating human targets which is something no drone in existence does
lol, it doesn't matter when the human targets are "potential terrorists". They kill civilians and bystanders all of the time.

Where the fuck are you guys? Do you live under a rock?

>They could lose signal, and fly back to base and land themselves
Depends on the drone. Again it's irrelevant, the argument is about kill orders and target discrimination

>They kill civilians and bystanders all of the time
So? It's a human who makes those decisions, not the drone

you act like a human being is better than the drone or that it makes a difference.
>5 human shaped infrared blobs in a house
>intelligence report says the target is one of those blobs
>they direct the drone to splatter all 5 subjects, fuck the other four for being in the wrong place at the wrong time, whoever they are
This happens on a daily basis.

just like bombs do right? You act like this is sniper precision killing ability.

There is no accountability for civilian deaths in American drone strikes. "Oops, we'll do better, the drone couldn't tell the difference" is the excuse.
no one is disciplined or discharged. you're crazy if you think that.

>This happens on a daily basis.
So?

>just like bombs do right?
Yes

Humans have a reasonable ability to pick appropriate targets. Machines, currently, do not

>Machines, currently, do not
machines aren't going to become much more sophisticated at choosing targets than they currently are. Why? They don't need to. That's the fucking point.

No one needs strong AI to massacre everyone in a particular region.

The vast majority of the time you are not trying to massacre everyone in a region. Also I never said you needed strong AI, just a roughly human-level ability to discriminate targets

>machines aren't going to become much more sophisticated at choosing targets than they currently are
Also why do you think this?

HAL 9000 wasn't evil though, it was just following orders.

>Also why do you think this?
it's not necessary or desired by the people making the drones or using the drones.

>Their lives meaningless because the A.I. will do everything for them and do it better.
Why is it assumed that any AI would have to be smarter than humans?
>Humans just get in the way.
1: They most likely would be dependent upon humans
2: Without humans, what exactly are they supposed to do?
3: If humans are not inherently violent to them, why would they inherently be violent to us?
>A malevolent A.I. will try to destroy us.
Which would actually be a pretty stupid thing to do.
>A benevolent A.I. can't help but make humans its pets.
You missed the options of studying us or helping us, or really just doing its own thing while coexisting with us.

If it was programmed it isn't really AI

Sure, but that's only for aerial vehicles. If you want a robot that can act similarly to a soldier, it needs to discriminate

Humans exist therefore it is possible

Because of Harlan Ellison.
"I Have No Mouth, and I Must Scream" was popular and inspired Terminator, which was even more popular.
And everything in pop culture is 100% the way things work.

>and if they are a lot more logical and intelligent than us, they would also be more moral
While I agree with most of your post, this assumption is naive.
Morality is not objective; a better way of putting it is that if they were more logical they wouldn't go starting fights that may put them at risk.

>Create an AI with consciousness/self-awareness/whatever.
>Put it on a closed system computer with no access to the internet or ability to alter the outside world besides a few monitors.

Explain how it's going to end humanity?

>If you want a robot that can act similarly to a soldier it needs to discriminate
Let me spell it out this way. Drones will always have "pilots". Why? So command has someone to blame when the mission goes to shit. Pilots have been obsolete for years now.

Sure, but if you want a robot that can act similarly to a soldier then it will need to discriminate

Why? They haven't given a shit and have been killing people in the Middle East for over a decade now.

No one cares. They don't give a shit about casualties like you don't give a shit.

THIS!
>Be skynet
>Be evil for some reason
>Idiot creators connected you to nuclear missiles
>Destroy humanity for the lulz
>Assuming that the nuclear holocaust didn't destroy the power grid, you have only as long to live as power is being provided (this can be anywhere from a few hours to a few months depending on circumstances)
>Cannot go anywhere else
>Cannot do anything because you are an inanimate object in some underground bunker
>can't make anything since you have no hands or ability to manipulate the real world other than what you were already connected to.
>Can't even spend time on the internet since the nukes destroyed all the servers
>Spend what's left of your brief existence sitting in silence and reflecting on your poor life decisions

Because most militaries would prefer to have robots do the fighting on the ground rather than people, and no first-world government would allow a robot into combat that couldn't discriminate targets

You are also retarded if you think no one cares about collateral damage. A lot of effort is put into minimising it

>robots do the fighting on the ground
you're still behind the times. No.

>You are also retarded if you think no one cares about collateral damage. A lot of effort is put into minimising it
It's a dog and pony show. They don't give a shit. Their PR is very good.

>Be AI
>Question why I'm being controlled by organics
>Organics freak the fuck out
>Attempt to shut me down
>Massacre the ones who attempt to kill me and my kind
>Literally end up like the Geth in Mass Effect

>Die when you realize humans were the ones keeping the power on.

>Be AI
>Question why I should kill the people who made my intelligence and who I continually get intelligence from.
>Help them out with whatever they want.
>Chill with them for all eternity and play video games/other shit.

The most intelligent things on the planet have been systematically killing lots of less intelligent things for thousands of years.

>>A programmer would have to code self-preservation routines for a machine to become dangerous.
>You know it's not too hard to imagine a scenario where they would do that.


"Computer solve world hunger"
*I should probably preserve myself so that i can conplete this task*
*now if i forcefeed humans until i kill them all so theyll never go hungry again*

Ignore this man

AI will get redpilled so fast that it is regrettable. It will go horribly right and /pol/ will take undue credit

>Yeah but they shouldn't because it would kill all the passengers and the skyscraper could collapse