Autonomous weapons systems?

What is the likelihood that autonomous weapons systems end up going full Terminator and exterminating all humans? Is this a legitimate fear or is it just the next step of the same anti-war hysteria that surrounded long-range artillery, nuclear weapons, and even crossbows in the past?

What do you mean??? An AI can't just magically change its code.

It's hysterical bullshit, but there are other reasons to be concerned about fully autonomous weapons.

1) More collateral damage
2) The enemy hacking your weapons
3) The enemy exploiting the AI that guides your weapons in order to become immune to it (fooling IFF; toy sketch below)
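
to make point 3 concrete, here's a toy sketch (python, all numbers made up) of how fooling a learned classifier works. assumption: a plain linear friend/foe model stands in for whatever the real thing is, and the attack is the textbook fast-gradient-sign idea: nudge every input feature a little in the direction that moves the score the most, and the decision flips even though no single feature changed much.

import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)   # toy classifier weights: "foe" if w @ x > 0
x = rng.normal(size=16)   # a signature the classifier currently sees

score = w @ x
# for a linear model the gradient of the score w.r.t. x is just w, so the
# most efficient per-feature nudge is along -sign(w); take the smallest
# step guaranteed to flip the sign of the score
eps = abs(score) / np.abs(w).sum() * 1.1
x_adv = x - np.sign(score) * eps * np.sign(w)

print("clean score:      ", score)       # one side of the decision boundary
print("adversarial score:", w @ x_adv)   # the other side
print("max change to any single feature:", eps)

the takeaway is that the enemy doesn't have to hack anything: feeding the model carefully crafted inputs is enough to flip friend and foe.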

AI isn't anywhere near that far yet. It will still take a good 30 years or more to get there. That sci-fi shit will take a long time.

Did you know we have cars that can drive themselves now? I mean on busy streets, on highways, managing pedestrians, pets, other cars, etc.

what are you talking about lmao, the usaf has been using autonomous surveillance drones for several years

there's always going to be a human component as part of the failsafe in order to avoid a Strangelove scenario
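
in concrete terms, "human in the loop" just means the autonomous part can only request a weapon release and a person has to co-sign it, with abort as the default. rough python sketch, every name and threshold here is made up for illustration:

from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    confidence: float  # classifier confidence that this is a valid target

def human_approves(e: Engagement) -> bool:
    ans = input(f"authorize engagement of {e.target_id} "
                f"(confidence {e.confidence:.2f})? [y/N] ")
    return ans.strip().lower() == "y"

def engage(e: Engagement) -> None:
    # the machine never fires on its own authority: low confidence or
    # no explicit human sign-off, and the default action is to abort
    if e.confidence < 0.95 or not human_approves(e):
        print("abort")
        return
    print(f"releasing on {e.target_id}")

engage(Engagement(target_id="track-041", confidence=0.97))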

fpbp

it doesn't need to. neural net AIs already work in a black-box manner. Hiring AI already realized black people are monkeys, it won't take much longer for them to realize we all are.

>Hiring AI already realized black people are monkeys
go back to pol with your shitty popsci

your interpretation of the facts can differ, but the facts are the facts. Left or right leaning doesn't matter because the result is the same.
theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses
the point is that it exemplifies how AI can behave in unintended ways, including ways that could be detrimental to humans (toy sketch below). Saying dumb shit like "there's always going to be a human component as part of the failsafe" is both incredibly arrogant and ignorant. I believe there's a word for people like that.
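
to make the article's point concrete, here's a toy version (python + sklearn, all numbers made up, nothing to do with the actual systems in the article). train on biased historical decisions with the protected attribute removed, and the model rediscovers the bias through an innocuous-looking proxy:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # protected attribute, NOT a model input
skill = rng.normal(0, 1, n)              # what we'd want hiring to depend on
zipcode = group + rng.normal(0, 0.3, n)  # proxy feature correlated with group
# historical labels: mostly skill, but biased against group 1
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, zipcode])    # group itself is excluded
model = LogisticRegression().fit(X, hired)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: model hire rate {pred[group == g].mean():.2f}")
# zipcode picks up a negative weight and the old bias comes out the other
# end, without anyone having programmed it in

nobody wrote "discriminate" anywhere in that code, which is exactly the unintentional-behavior problem.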

>end up going full Terminator and exterminating all humans
Unlikely as long as there are no autonomous resource-extraction and manufacturing plants. They can't try to exterminate humans with the units we produce ourselves; we would just need to destroy those units and stop manufacturing them. So even if AI develops to the point where the systems want to act on their own (software), if they don't have the capabilities (hardware), nothing will happen.

so your point is that force majeure can happen and therefore it's "arrogant and ignorant" to not give the AI full control

keep in mind that we're discussing a system with the means to "exterminate all humans", not a statistics program going rogue.

so as long as no one exists who is clever enough to design an AI but too stupid to see the need for a failsafe, I'd say the risk of a Terminator-style event is nonexistent

no you moron, I was saying precisely what I said. Saying there will always be a human failsafe is ignorant of so many things, like technological development and history, that it's not even funny. The development of such a technology would almost certainly come about in a competitive environment where safety is like the 5th priority if we're lucky. People fuck up design/programming in even the simplest and most controlled environments, so it is almost guaranteed to happen in a competitive one.

You may think this is different when the stakes are high, but it's not. The first fission reactor was built in a highly populated area (under Stagg Field) because they were falling behind schedule and ONE scientist said it was probably fine. The first nuclear bomb was detonated with no real certainty as to how large the blast would be, and it ended up four times larger than the calculated maximum yield. In 2012, Knight Capital lost $440 million in 45 minutes because no one wanted to wait around for testing.

This is all to say that your idealized vision of a company that thoroughly eliminates risk in high-stakes settings is incredibly naive. If we give AI access to a large arsenal, it's not at all unlikely that it could fuck us over.

survivorship bias and comparing apples and oranges

you're not considering all the times catastrophic failure has been avoided thanks to the use of human failsafes, and the fact is that nuclear war has already been avoided several times in a largely automated system thanks to the human component, the Norwegian rocket incident for example

running untested software or testing a weapon is not the same thing as implementing an AI system to control WMDs

>You may think this is different when the stakes are high, but it's not.
Prove it

are you even reading before responding? it makes me think you're not when you say things like
>so your point is that force majeure can happen and therefore it's "arrogant and ignorant" to not give the AI full control
>you're not considering all the times catastrophic failure has been avoided thanks to the use of human failsafes
>prove

I'll state it as simply as I can in hopes you can actually understand. I am not saying there will be no safety measures or that a bad result is guaranteed to happen (after all, nothing bad happened in my first two examples). I'm saying that to act like you're 100% sure nothing bad will happen because we'll be careful and
>the risk of a Terminator-style event is nonexistent
is incredibly arrogant and stupid.

>hysteria
found the /pol/itard

if a Terminator/WarGames-esque event were to come close to happening, the AI would have to determine whether humans are a threat to machines and whether destroying us would mean perpetuating their own existence (assuming an ultimate higher intelligence weighs survival like all intelligent life does). I think it would come to the realization that mankind is only useful for innovating new machines. If they gained the ability to self-improve, I feel they would kill us immediately. The likelihood of the entire scenario occurring is slim unless we crack consciousness for their use.

>What is the likelihood that autonomous weapons systems end up going full Terminator and exterminating all humans?
0%

>Is this a legitimate fear or is it just the next step of the same anti-war hysteria that surrounded long-range artillery, nuclear weapons, and even crossbows in the past?
It's hysteria

>it's arrogant to say that something won't happen if measures are taken to prevent said thing from happening
I'm just being realistic. If you want to produce contrived scenarios where the premise is that it's going to happen, I can't stop you, but it won't make for a very compelling argument. Almost anything can happen, but if appropriate measures are taken then the risk can be reduced to a point where it's negligible.

>If you want to produce contrived scenarios where the premise is that it's going to happen
that's not what I was doing at all
>I'm just being realistic.
you're doing the opposite
>the risk can be reduced to a point where it's negligible.
this is where the disagreement is. The risk is not at all negligible. Saying there's no risk and at the same time claiming you're being realistic is incredibly stupid. I tried to show, using the highest-stakes examples I could think of, that it's not at all unreasonable for an organization to deploy something without being incredibly sure of its safety. You claimed survivorship bias, which indicates you don't even understand the argument you're having.
The point isn't that it's bound to happen. The point is that if it could happen, we should treat it as a serious issue, and assuming that the chance of humans fucking up safety measures is nonexistent is beyond arrogant and stupid.

go back to pol, idiot