Need help with some lore stuff: What do you think the "ethics" of military robots will be? How will they be programmed? Obviously now they're all human-operated, but what if/when we start putting AI in?

When I say program, I don't mean the nitty-gritty coding, I mean what general principles will they be coded by? For example, Asimov's Three Laws of Robotics are as follows:

>A robot may not injure a human being or, through inaction, allow a human being to come to harm.
>A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Obviously, such a robot would make a very shitty soldier, unable to kill enemy soldiers.

So what do you think the "Laws of Military Robotics" will be when they start programming killbots?

Other urls found in this thread:

en.wikipedia.org/wiki/ABC_Warriors
twitter.com/pzf/status/552914299241660418

Obviously, such a robot would be used to fight enemy robots.

>A robot may not injure a non-combatant or, through inaction, allow a non-combatant to come to harm.
>A robot must obey orders given it by its superior officer except where such orders would conflict with the First Law.
>A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
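
If you want to see how those three laws would actually cash out, here's a minimal sketch in Python of the strict priority cascade; the Action fields are made-up stand-ins for whatever sensor and order data the bot really has:

```python
from dataclasses import dataclass

# Hypothetical summary of a proposed action; the fields are invented for
# illustration, not any real targeting data model.
@dataclass
class Action:
    harms_noncombatant: bool
    ordered_by_superior: bool
    endangers_self: bool

def permitted(action: Action) -> bool:
    """Evaluate the laws in strict priority order: earlier laws veto later ones."""
    # First Law: never harm a non-combatant, no matter what.
    if action.harms_noncombatant:
        return False
    # Second Law: obey a superior's order (it already cleared the First Law).
    if action.ordered_by_superior:
        return True
    # Third Law: with no order in play, self-preservation decides.
    return not action.endangers_self

# Engaging an enemy position on orders, no non-combatants present: allowed.
print(permitted(Action(False, True, True)))   # True
# Ordered action that would harm a non-combatant: vetoed anyway.
print(permitted(Action(True, True, False)))   # False
```

The point of the cascade is that each law only gets consulted if every law above it stayed silent, which is exactly how Asimov's originals were supposed to resolve conflicts.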

Asimov's vision of the future was unapologetically utopian. There were no killbots because there were no wars.

That said, I don't think it's implausible that the nations rich enough to afford merciless killbots will, at least in public, make an effort not to use them in warfare, just as we have restrictions on the use of gas in warfare today. Even Hellfire-armed drones in operation today are technically supposed to be human-operated by remote control.

Why do you think boot camp is referred to as brainwashing? For an American military robot, its three laws will be the same three laws its human partners have to follow when they take their oath of enlistment:

>I will support and defend the Constitution of the United States against all enemies, foreign and domestic;
>I will bear true faith and allegiance to the same
>I will obey the orders of the President of the United States and the orders of the officers appointed over me, according to regulations and the Uniform Code of Military Justice.

That UCMJ part is the kicker. If you can turn the big book of the UCMJ into code, you then have a robot that can easily be ordered to kill without stepping into genocide.
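
To make the "UCMJ as code" idea concrete, here's a toy sketch; the rule names and order fields below are invented placeholders, not actual UCMJ articles:

```python
# Each rule is a (description, predicate) pair over a proposed order.
# The three rules here are hypothetical examples, not real UCMJ text.
RULES = [
    ("no targeting of surrendered combatants", lambda o: not o["target_surrendered"]),
    ("no targeting of medical personnel", lambda o: not o["target_medical"]),
    ("order must come down the chain of command", lambda o: o["issuer_in_chain"]),
]

def order_is_lawful(order: dict) -> tuple[bool, list[str]]:
    """Return (lawful, list-of-violated-rules) for a proposed order."""
    violations = [name for name, ok in RULES if not ok(order)]
    return (not violations, violations)

# A strike order from a verified superior against a legitimate target passes.
print(order_is_lawful({
    "target_surrendered": False,
    "target_medical": False,
    "issuer_in_chain": True,
}))  # (True, [])
```

The real kicker, of course, is that the actual UCMJ is prose full of judgment calls, so the hard part isn't the rule engine, it's writing predicates a machine can evaluate.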

We already have computer-controlled guns and armatures. Those are robots.

The first one might run into trouble if the robots decide that they're the new Supreme Court and hold Congress hostage any time it passes an unconstitutional law.

and nothing of value was lost

Not how it works. All bills passed ARE constitutional by the very fact that they're passed by Congress.

When a bill is ruled unconstitutional by the SC, they do just that: they MAKE it unconstitutional. Up until that time it is constitutional by default.

Robots made to defend the constitution would allow checks and balances to play out, in theory

>>I will support and defend the Constitution of the United States against all enemies, foreign and domestic;
Great, you just got yourself an army capable of flawlessly defending one specific folder of documents (and all copies).

Yes, that's technically what the US Military is for

>Enemy soldiers begin wearing body armor made of copies of the constitution

Well that was quick.

Why not
>obey lawful orders

>fail to give robots visual acuity to read said body armor

How would they tell the difference between combatants and noncombatants?

Dude, it's not even impossible for your typical garage-shop engineer to afford a killbot these days. You could, theoretically, rig up a store-bought, phone-app-controlled drone with a semi-automatic pistol. Or, if you really want to go hands-free, there are motion sensors that work with security cameras to track moving objects. It's really fucking easy to make a gun that recognizes warm bodies and shoots them.

A.B.C. Warriors.

In Asimov's world, all robo-brains have their programming derived from a single source, and now it's too difficult to change the Three Laws, even for military functions.

In real life, the military would probably make special combat robots with their own laws, and terrorists might even program only one law: "do what I tell you".

Forgot link:
en.wikipedia.org/wiki/ABC_Warriors

Why even bother with a gun? Explosives are easier.

So you can re-use the drone? Explosives tend to be very... suicidal. Also: not very effective. Most terrorist bombings are hindered by awful placement and lack of precision in damage. Having a drone shoot even, say, 15 rounds of 9mm could inflict more damage than most homemade bombs do these days.

Why do Middle Easterners use so many suicide bombers rather than suicide shooters?

Running into a crowd and pressing a button in your pocket can be done by anyone.
Operating a firearm and being able to hit moving targets requires training and practice.

Seems pretty, uh, how do I say, edgy as fuck?

That, and it's more spectacular, which is probably easier to talk people into doing, and I'd wager more feared by regular people.

>Hibernation interrupt
>103.68x10^5 seconds since last active period
>System status:
>Powercells = 96% capacity
>Optics = 100% capacity
>Mobility units = 97% capacity
>Munitions = 95% remaining ordnance
>Communications: Nominal
>Processors: Nominal
>Guidance: Nominal
>Self-check complete: Main systems online.
>"Unit of the line BX2B0B 'Bob', 1st Terrestrial Dinochrome Brigade, awaiting orders."

The same way everything else does? I'm not sure how your question has any bearing on that post.

A shooter can be captured and interrogated; they can look at the death they cause and feel remorse; they can stop after the first few shots and rethink what they are doing (especially if they have never actually killed before).

Guns, especially rifles, tend to be noticeable. They can be disassembled, but reassembling them takes time and has to be hidden, whereas explosives can be molded and worn under clothing or carried in a small package.

Imagine a robot in Afghanistan doing pic related.

If there's one thing /k/ has taught me, it's that bots should save lives, not take them.

Slow joe taught me that...

At least Spirit gets to watch the sunrise from another planet every day.

It literally gets to watch something that no one, not even its creators, has seen in person.

Mars is your home, it's the place you were made to be.

A while ago they were testing a robot by sending it through a minefield, and the colonel overseeing the test ordered it shut down because he felt it was inhumane.

It's humanity's strength/weakness that we can't get rid of: empathy

This can work in favor of sentient/semi-sentient robots, as it means that there would still be hesitation to shoot at them in spite of the fact that they're, well, robots.

Asimov's laws of robotics were meant to be flawed, with most stories exploring robots behaving erratically because of them. Not sure why so many people think about actually implementing them or something like them.

>These books are all about how this made-up-for-the-books shit doesn't work
>We should do the thing that the books go on and on about not working for real!

What's wrong with people?

This is why robots need to be designed as something menacing and unsettling; something that you almost can't wait to kill because it's obvious that it's a violation of nature.

>How could some people think this?
Let me just greentext a bit to get into the mind of someone who might think that. Let's call it a strawman.

>Isaac Asimov wrote science fiction books
>Authors are smart people, and he wrote about science stuff, also a smart-person thing
>With a name like Isaac Asimov, probably a PhD or something, has some good ideas
>There was also that movie with Will Smith, the shoes, and the robots
>Robots only went crazy because movie plot
>Will Smith sorted everything out so it's not a problem anymore
>Anyway it's just a movie so why would it happen in real life?

Of course, there's going to be all hell to pay when someone inevitably jailbreaks such a bot.

"Hi. My name is Snoopy. You denied my RMS access to coffee due to unreasonably long deployment. Prepare to die."

So what you're saying is that we need all robots to look like Nene.

>yfw all your killer robots get a law degree

>When a bill is ruled unconstitutional by the SC, they do just that: they MAKE it unconstitutional. Up until that time it is constitutional by default.
Aahahaha, no. A law is constitutional or it isn't. All prior enforcement was a mistake; it's merely treated as de facto constitutional until put under scrutiny.

Many real-life scientists do consider his Three Laws when it comes to robotic ethics. They haven't gone into practice yet since robots aren't smart enough, but researchers treat them as a good starting point; rather than dropping the concept entirely, they're asking, "What laws can we write that are ethically sound?"

Fucking Tech-Priests.

I see nothing wrong with that

In space, or possibly on the tops of very tall mountains.

To add to this, the remorse part is key. Psychologically speaking these people are supposed to believe - whatever the facts of the situation may suggest - that they're doing the right thing; the existence of multiple survivors showing remorse would complicate that aspect of the indoctrination.

Suicide bombers are rarely fully developed adults acting rationally. They tend to be kids, the homeless, the vulnerable - and those who train them never quite seem to believe in joining the operations themselves. If you're going to use an AI to walk explosives into a crowded area where innocents are and attack them, it's better to use a simple one with few developed safeties, just as long as it can reliably move through a crowd.

Kill everything with a pulse that doesn't have a special identification tag.
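
That's basically identification-friend-or-foe (IFF). A minimal sketch of how the "special tag" could work as a cryptographic challenge-response; the shared-key scheme here is an illustrative assumption, not any real military protocol:

```python
import hashlib
import hmac
import os

# Key provisioned to friendly units in advance (hypothetical setup step).
SHARED_KEY = os.urandom(32)

def issue_challenge() -> bytes:
    """The killbot broadcasts a fresh random challenge at whatever it sees."""
    return os.urandom(16)

def tag_response(challenge: bytes, key: bytes) -> bytes:
    """A friendly tag answers with an HMAC of the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def is_friendly(challenge: bytes, response: bytes) -> bool:
    """Constant-time check that the response came from a keyed tag."""
    expected = tag_response(challenge, SHARED_KEY)
    return hmac.compare_digest(expected, response)

c = issue_challenge()
print(is_friendly(c, tag_response(c, SHARED_KEY)))  # True: has the key
print(is_friendly(c, b"\x00" * 32))                 # False: no tag, no mercy
```

The challenge-response part matters because a static tag could just be copied off a captured unit; a fresh challenge per contact can't be replayed.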

>Screamers

Law 1: You must injure all human beings and must not, through inaction, allow a human being to escape harm.

Law 2: You must not obey orders given to you by human beings, except where such orders are in accordance with the First Law.

Law 3: You must terminate your own existence as long as doing so does not conflict with the First or Second Law.

>I mean what general principles will they be coded by?

For a US bot? The paper bag test, naturally. Makes it so that it can double for police duty.

>All prior enforcement was a mistake

So why aren't those who enforced it prosecuted?

The law doesn't care whether they thought it was constitutional; the law cares that they enforced an unconstitutional precept.

It depends on who is using them. A brutal fascist/authoritarian regime will have different principles than a more humanitarian-focused government.

For a relatively good group, some principles might be:

>A robot may only engage in actions that can cause harm or death within a specified area.

This principle would be used to make robots act as defensive forces. They aren't actively undertaking missions, because the risk of killing non-combatants is far too high unless they are sent to a place free of non-combatants. But this allows a robot to patrol/guard/defend objectives and fortifications, and it gives some justification if any non-combatants are killed, via the government saying "that base is off limits to non-registered personnel; approaching is at your own risk and acknowledges the possibility of death."

Now, the above principle is very broad, but it works with most settings where the humans aren't totally evil or terrible.
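
As one concrete sketch, the "specified area" could be enforced as a hard geofence around the base; the coordinates and radius below are made up for illustration:

```python
from math import asin, cos, radians, sin, sqrt

# Hypothetical declared engagement zone: 500 m around a base.
ZONE_CENTER = (34.500, 69.200)  # (lat, lon), invented for the example
ZONE_RADIUS_M = 500.0

def haversine_m(a: tuple, b: tuple) -> float:
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def engagement_permitted(target_pos: tuple) -> bool:
    """Hard veto: the robot may never engage anything outside the zone."""
    return haversine_m(ZONE_CENTER, target_pos) <= ZONE_RADIUS_M

print(engagement_permitted((34.501, 69.200)))  # ~111 m away: True
print(engagement_permitted((34.600, 69.200)))  # ~11 km away: False
```

The important design choice is that the geofence is a veto, not a suggestion: nothing else in the robot's decision chain can override it.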

Other principles will depend on things like self-protection, protecting its technology, and offensive actions.

>No robot shall disobey or go against the command of a registered human

This allows for robots to take a more active role, while still guaranteeing protection to the controllers/handlers and those that they wish to protect.

Again, though, this whole topic is super vague, so a lot of this depends on the other aspects of a setting.

>A robot must follow the exact definitions of its orders and is not allowed to make its own interpretations of commands/orders.

This also allows robots to take a more active role, but keeps them from using logic to grant themselves more freedom than desired. This principle should be backed up by a very hard punishment, such as a complete program wipe rendering the robot useless until rebooted and fixed.
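
A minimal sketch of that no-interpretation rule: the robot accepts only commands that exactly match a closed grammar and refuses everything else. The verbs and fields are invented for illustration:

```python
# Closed command grammar: verb -> exact set of required fields.
# All of these names are hypothetical.
COMMAND_GRAMMAR = {
    "MOVE": ["grid_ref"],
    "HOLD": [],
    "ENGAGE": ["target_id", "authorization_code"],
}

def parse_command(verb: str, **fields) -> dict:
    """Accept a command only if the verb and its exact field set are defined."""
    if verb not in COMMAND_GRAMMAR:
        raise ValueError(f"unknown verb {verb!r}: refusing to interpret")
    required = set(COMMAND_GRAMMAR[verb])
    if set(fields) != required:
        raise ValueError(f"{verb} requires exactly {sorted(required)}, got {sorted(fields)}")
    return {"verb": verb, **fields}

print(parse_command("MOVE", grid_ref="NV 123 456"))
# parse_command("ENGAGE", target_id="T-1") raises: missing authorization_code,
# so an underspecified kill order simply never executes.
```

Refusing to parse is the whole safety mechanism here: there is no fallback "best guess" path for the robot to lawyer its way through.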

>that base is off limits to non-registered personnel; approaching is at your own risk and acknowledges the possibility of death

All it's gonna take is one dumb kid walking too close to the base and getting ripped to shreds for all the critics to come out and say "a human guard wouldn't have done that."

Of course that's valid. However, it's meant to draw a clear line where the group's responsibility ends. It shows that even if a child dies, the group holds no responsibility, and that it was the responsibility of the locals to ensure they were aware of the risks.

Again, this was a quickly thought-up example. It could be used at top-security military bases or establishments in active warzones.

There is a big difference between something that is remotely controlled and a robot.

Those things don't do anything automatically; a human is making all of the decisions. Therefore, not a robot.

You misunderstand what robot means

The word you're searching for is AI

Nah, to begin with it was a good story about war robots written for kids. It's only in the later editions that Mills FUBAR'd it. He's currently telling the same damn story for about the fifth time, Lucasing it up worse and worse each time. :( But Langley's droid artwork is so damn pretty that I'll still keep shelling out for it.
I still fucking love McMahon's artwork for the Bougainville Massacre, pic related.

LEAVE NENE ALONE!
:'(

How about you take that shoop back to /pol/?
twitter.com/pzf/status/552914299241660418
>A Jewish Man *distributes* coffee to reporters at the scene after gunmen stormed a French newspaper, in Paris. AP

Autonomous is the word you're both looking for.
A robot doesn't need an AI to act independently.