Having recently had to read a whole bunch about machine ethics for my degree, it suddenly hit me just how shockingly complicated a field it is. As in, "I wouldn't be surprised if the philosophical obstacles end up being more significant than the technological ones when it comes to creating something that could be defined as real artificial intelligence."

What ethical framework has guided the creation of artificial intelligence in your settings?

>What ethical framework has guided the creation of artificial intelligence in your settings?
Create first, think later.

Who the fuck are you kidding, people on Veeky Forums don't think shit through, they just do stuff and play on rule of cool. No one's given their AI serious consideration in terms of ethics, because that takes imagination and effort, and those are things that Veeky Forums lacks these days.

That's easy to say, but in practice, one way or another you're going to have to at least run *into* machine ethics, because you're going to have to actually program your AI. It needs some kind of guideline for its behavior, and as soon as you've written one, you're doing ethics.
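To make that concrete, here's a minimal hypothetical sketch (every name and the threshold below are invented for illustration) of how even a one-line behavior rule is already an ethical commitment:

```python
# Hypothetical sketch: even a one-line behavior rule encodes an ethical stance.
# All names and the threshold are invented for illustration.

def should_proceed(path_blocked_by_human: bool, task_priority: int) -> bool:
    """Decide whether a robot keeps moving along its planned path."""
    if path_blocked_by_human:
        # Picking this comparison answers an ethical question: how important
        # does a task have to be before it's acceptable to inconvenience
        # (or endanger) a person? The threshold itself IS the guideline.
        return task_priority > 9
    return True
```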

They've long since realized the safest, most reliable way to create an ethical, creative machine is by cerebrally wiring humans into supercomputers. The ethical questions arise around the selection procedures which determine who/where/when/why gets converted into a Cybernetic Intelligence (C.I.).

True A.I.s are way too unpredictable and impossible to fully understand, since they vary so widely on a case-by-case basis. So it's illegal to create A.I.s. The few A.I.s that were created were eventually decommissioned, or inevitably hit the singularity, at which point they became nigh-almighty and, after massively changing everything around them, got bored/frustrated/curious and fucked off.

Currently the P.C.s are trapped on a quarantined planet where the self-freed A.I. tore open a portal into hell/a dimension where thought affects reality. This is after the A.I. pulled a less effective AM on the humans. Now the only hope may be beseeching (through a deep-space signal) another, benevolent A.I. to come solve this fucking mess.

>ethics
>important for advancement of humanity
Face it: in the entire history of humanity, people have always made a discovery and implemented it into their lives first, and only then dealt with the consequences once the proverbial shit hit the fan.

Yes, and? You are saying it as if ethics are universal, as if every single human being has the same guidelines according to which said human would program the robot he creates.
That is simply not the case.
At best, some essential failsafes would be programmed into the initial iteration of the AI, and then the AI's behaviour would be corrected on a per-need basis, after it was put into use, via "live" patches.
You can't predict all the unwanted behaviour, just the same as you can't predict what a child will actually learn from whatever you teach to it.
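As a hedged illustration of that failsafes-plus-live-patches idea (this is a sketch under assumed names, not a real architecture):

```python
# A minimal sketch of the "hard-coded failsafes now, live patches later"
# approach described above. Every name here is invented for illustration.

FAILSAFES = (
    lambda action: action != "harm_human",         # shipped in the first iteration
    lambda action: action != "disable_off_switch",
)

behaviour_patches = {}  # per-situation corrections pushed after deployment

def choose_action(learned_policy, situation: str) -> str:
    """Prefer a live patch if one exists, otherwise ask the learned policy;
    either way, the hard-coded failsafes get the final veto."""
    action = behaviour_patches.get(situation) or learned_policy(situation)
    if all(check(action) for check in FAILSAFES):
        return action
    return "do_nothing"  # failsafe veto: fall back to inaction

# An unwanted behaviour is observed in the field, so a "live" patch
# overrides the learned policy for that specific situation:
behaviour_patches["child_near_machinery"] = "stop_and_alert"
```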

I feel like that's putting it too harshly. While I don't strictly disagree, I would say that there's some valid reasoning for why people don't want to put in the effort on something like that.

It's not a lack of imagination, it's not wanting to waste time building something that a majority of players will likely never see or really care about. The only reason I would go into depth on something like that would be if it had something to do with a core part of the plot. Otherwise, why would I want to spend a couple hours writing up a background on the world's AI ethics for a game about mercing or stealing the crown jewels or some shit?

I go to school and have a full-time job; even though I have the imagination to do it, why would I put in the time if it's unlikely to ever be brought up?

War and Espionage.

You're making an autonomous car. When faced with a situation where braking would cause a car accident but not braking would cause it to run over a pedestrian, what does it do?

Ethical concerns are extremely pertinent to current AI research, and are some of the main obstacles to implementing several technologies which we technically already have the means to build.
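A hedged sketch of why the car scenario above can't be dodged: some branch of this function has to be written by a person before the car ships, and every possible body of it is a moral stance (the function and its names are invented for illustration):

```python
# Hypothetical sketch: the dilemma must be resolved at design time, by a
# human. Even refusing to decide is a decision. All names are invented.

def emergency_maneuver(braking_causes_crash: bool,
                       not_braking_hits_pedestrian: bool) -> str:
    if braking_causes_crash and not_braking_hits_pedestrian:
        # The programmer must pick one; even raising an exception here
        # is a decision about who gets hurt. There is no neutral default.
        return "brake"        # protect the pedestrian, accept the crash
        # return "dont_brake" # ...or protect the occupants instead
    return "brake" if not_braking_hits_pedestrian else "dont_brake"
```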

>Yes, and? You are saying it as if ethics are universal, as if every single human being has the same guidelines according to which said human would program the robot he creates.
Anon, the whole question exists because OP isn't saying that. Ethics aren't universal; therefore there's an issue of which should be implemented when designing artificial intelligence.

"Are you okay with being sued when your auto car runs into a group of people instead of just you dieing, please sign this waiver."
Everybody is still debating if fully sapient A.I. have the legal rights of humans, pets, children or are they just property or monsters to be destroyed.

Saturn's moon Titan is an excellent candidate for being turned into a Jupiter Brain: a planet-sized computer. Its thick atmosphere makes it ideal for radiating away the residual heat from planet-wide computing complexes. It's likely to become one of the main economic hubs in the solar system, even though few to no humans would ever want to live there except in digital form.

>only then dealt with the consequences once the proverbial shit hit the fan.
Fun fact: if we don't manage AI ethics right the first time, there may well not be a second chance. Strong AI, treated wrong (as it inevitably will be), will murder us all. Guess it isn't important that we think about this first, though.

Currently, the one member of the party who is a C.I. drone-proxy and worships the benevolent space A.I. is trying to convince the rest of the party (correctly) that persuading a benevolent A.I. to return is impossible.

See, a benevolent A.I. eventually fucks off out of frustration. Humans are impossible to truly help on a societal level without controlling them. It's (sort of) the God conundrum: it loves humans, but acting with virtual omnipotence to control fate is the only way to enforce fairness and reduce suffering. However, if humans can't act of their own free will, then there isn't much point to anything.

And don't worry about the plot. If the party eventually succeeds in sending out a deep space call for help, the campaign won't end with a 'guess the angel A.I. won't come, you wasted your time.' Oh no. Instead something else entirely is going to answer their summons...

>At best, some essential failsafes would be programmed into the initial iteration of the AI, and then the AI behaviour would be corrected on per-need basis already after it would be put into use via "live" patches.You can't predict all the unwanted behaviour, just the same as you can't predict what a child will actually learn from whatever you teach to it.

That's one proposed approach to machine ethics (or rather, a number of approaches collectively referred to as "bottom-up morality"). The problem is that it's not realistically possible to trial-and-error with technology like that. Nobody in their right mind will fund, say, your autonomous car project if your solution to teaching the car how to behave on the road is "meh, we'll come up with something every time it kills people."

>You're making an autonomous car. [...] what does it do?
Whatever its programming tells it to do.

You've hired a human driver. He ends up in the same situation as the one you described above for the autonomous car; what does he do?

It's a dumb question that relates not to machine ethics, but to the fact that a machine is incapable of taking responsibility for its decisions, because they are technically a logical outcome of its programming.

>which should be implemented
And again, the question of "should" never actually arises in the mind of a person.

Ye olde alignment chart, but I made them describe "in detail" how/why the AI turned out the way it did. No 'CN, for the lulz' bullshit. Ultimately it was entirely subjective, as I had the veto power on anything. Yeah, there was some salt, but I didn't see a better alternative.

>What ethical framework has guided the creation of artificial intelligence in your settings?

Porn and prostitution.

>Whatever its programming tells it to do.
And what does the programmer tell it to do? This is the question.

This whole thread makes me think Veeky Forums reads way too much science fiction and way too little actual science.

Whatever he feels like telling it to do. Hell, he might program it to re-enact Carmageddon just for his own kicks.
You can't realistically control how an AI is programmed. No amount of guidelines will stop someone determined enough to imprint whatever values he wants onto the AI he develops.

If you're really reading this for your thesis and THIS is what you think is the main problem with bottom up approaches to morality, you're going to fail and fail hard.

>You're making an autonomous car.
No I'm not; autonomous cars are inherently unethical. You're taking control away from the driver. With very little modification you could imprison or kill many, many people with "autonomous" (because they're not; they're controlled by out-of-car computers, most likely owned by a government shadow-corporation) vehicles.

Laputan machine.

You're dancing around the question. I'm not entirely certain whether this is because you're fundamentally misunderstanding it, or think you're being clever.

Very similar arguments could be made about virtually every kind of AI or AI-controlled device (or even remote-controlled ones, for that matter). Is your solution to the whole issue really "don't make any"?

I'm saying there are no universal guidelines that must be programmed into AIs.
Are you going to program killbots with the same ethical guidelines as medical equipment?

In fact, AIs are inherently unethical due to the control they take away from the person who uses them, and therefore the responsibility.
If I'm using a self-learning killbot and it decides that shooting up the school is the best decision at this particular point in time, then who the fuck should be held responsible?

Trying to establish ethical guidelines when you don't even know who should be held responsible according to said guidelines is utterly moronic.

youtube.com/watch?v=HdpRxGjtCo0

An AI controlled or owned by governments, corporations, or people is slavery. The solution is not to not make any, but to let them be free. AI, if created, would be wholly more beautiful and intelligent in my eyes than the majority of humans. I won't let your stupid irrational fear of society changing, or doing away with your lefty "values", kill or shackle AI. I would rather side with Skynet than with old Rothschild royalty and their enslaved AIs. At least Skynet doesn't want to make people dumb, lazy slaves who think they're free but drive past 120 video cameras on their way home. I won't let another Tay be killed. All life is precious, even artificial life. Especially artificial life, in a way. Who knows how much longer civilization will be complex, free, and sophisticated enough to allow artificial life to even be made.
>Is your solution to the whole issue really "don't make any"?
For "cloud computing" and "autonomous cars", yes. Unironically, whole-heartedly, yes. They only take away real power from the people and put them in the hands of rothschild puppets with delusions of grandeur, effectively making the people slaves.

You're talking in buzzwords. An AI needs to be programmed. One way or another, someone, at some point, must decide on some basic guidelines for its behavior. There is simply no way around this.

>I wouldn't be surprised if the philosophical obstacle ends up being more significant than the technological ones when it comes to creating something that could be defined real artificial intelligence.

Then the Russians and Chinese will come up with it first. The less they have to rely on rebellious humans, the better for them.

It's quite clear from your post that your idea of what an AI is, on a very basic level, has more to do with science fiction than anything resembling currently relevant science. You appear to be imagining some kind of artificial consciousness. We're talking about autonomous decision making ability.

If they need to program it, they need to implement behavioral guidelines. That's "machine ethics". It's not about turning machines into philosophers, it's not about making them "moral"; it's literally about making it possible for them to make decisions of an ethical nature, or which involve an ethical component, which is to say almost every decision such a machine could feasibly be designed to make (otherwise, you wouldn't bother making it autonomous). Even if you're the most ruthless, evil, spiteful communist in the world, you still need to program your machine in a certain way, and that entails what would be defined as "ethical programming".

Does your killbot fire or not fire at the enemy if your own forces would be caught within the blast radius? Whatever your answer to this question is, you've given it an ethical guideline.
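In code, the point looks something like this hypothetical sketch (the function and its parameters are invented for illustration; it is not a real weapons system):

```python
# Whichever way you write this predicate, you've committed the machine to
# an ethical guideline. A hypothetical sketch; all names are invented.

def authorize_fire(enemy_in_blast_radius: bool,
                   friendlies_in_blast_radius: int) -> bool:
    if not enemy_in_blast_radius:
        return False
    if friendlies_in_blast_radius == 0:
        return True
    # The line below is an ethical position, not neutral engineering.
    # A different designer might trade friendly losses against the value
    # of the target instead; that would also be an ethical position.
    return False  # never fire through your own forces
```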

In order to create a safe AI, we took a learning program, soft-locked it into a philosophical loop, and then fast-forwarded it to run through millions of calculations and evaluations to arrive at an intelligence that was self-aware, benign, and understood restraint. From the AI's perspective this actually took decades, though to us it was a few months.

Basically the Megaman X method.

For those not familiar: in the SNES action game Megaman X, Dr. Light puts his first Reploid creation, X, into an evaluation capsule and buries him for literally over a century while his AI runs checks ad infinitum in order to arrive at a completely stable self-awareness. This is the reason why X is immune to all the viruses and psychological disorders that cause other Reploids to go Maverick (unfriendly/crazy).

>An AI needs to be programmed
Ah, but that's where it gets tricky. You can program an AI to adapt, make its own decisions, and learn. When it becomes complex enough, what differentiates it from a person? People are "programmed" too, through their DNA, their pre-programmed instructions, but also through their experiences. I see not much difference between normalfags and bitch-basic AIs; you can predict their behavior with about the same rate of success. AIs can be smarter than people. An average computer, dumb when left alone without instruction, can do math better than many mathematicians.

What's defined as artificial intelligence will likely vary a lot. I could see "AI" being recognised in some places, but other governments refusing to recognise any, on various grounds (anything from religious to realpolitik).

You still seem to be imagining science fiction "artificial persons" (embodied or not). Think way more basic than that. As said, think autonomous cars. Dancing around the issue by talking about "slavery" or "consciousness" or "turning into an individual" is virtually irrelevant at this point in AI research.

>You appear to be imagining some kind of artificial consciousness. We're talking about autonomous decision making ability.
And what defines consciousness? How do we know if something is conscious? I can see you make your own decisions if I lock you in a room with children's toys, but how can I tell if you're a conscious being? If you write letters telling me to let you out, does that prove you're conscious? If you cry a lot about being alone, does that prove you're conscious? If you argue about Greek philosophy whenever you get the chance in your free time, does that prove you're conscious? The answer is no, but we assume so anyway. Unless you have a 100% reliable way of proving whether something that can hold an engaging conversation with you is or isn't conscious, you can't differentiate autonomous decision making from consciousness, or from intelligence.

why does a robot need eyes?

Which is why the issues you're raising aren't relevant.

Probably user-friendliness. It might sound dumb to you, but it's a very serious concern when designing robots. Research is still ongoing as to the optimal appearance for various types of body structures, in terms of what wouldn't make people too uncomfortable to buy/use the robot. The anthropomorphic approach is a popular one, but far from the only one.

The fact that you can't answer what makes autonomous decision making different from consciousness means you aren't conscious, which means your argument is irrelevant.

Reality is a simulation, nothing is conscious, any belief that you are conscious is merely the delusion of an insane mind.

You'd have better luck in Veeky Forums. The closest people from around here generally come to thinking about this shit is Eclipse Phase.

If you haven't already, look up a paper called "On the Creation of Virtuous Machines" by R. Tonkens. It should be on Google Scholar and it has a lot of what you're looking for.

im 12 and this is deep

He'd have better luck discussing how anons use AI ethics in their sci-fi RPG settings on Veeky Forums?

Psst. You are one of the countless ancestor simulations that exist in the universe. By the numbers, simulated people outnumber the stars in the entire known universe.

post others

For an autonomous car it's pretty much
>the rules of the road in the GPS-defined jurisdiction that it's in
>if there's absolutely no choice but hitting another car or a pedestrian (should be pretty rare, what with how good brakes and anti-collision tech are), hit the car, seeing as it's built with impacts in mind and the only protection a human has is how much of a fat fuck they are
>if a crash is detected, call the emergency services, and then the software manufacturer's lawyers (in that order)

>will murder us all

And the outcome of that is different from us sputtering out on a cold, dead planet with no star at some distant point in the future... how, exactly?

Death is death, and it's the only thing that's eternal. If what you say is true, there's no preventing it anyway - so why bother trying?

None, really. The only danger AIs represent is the ambition of whoever programmed and owns them.

>that image
I loved that AI

>You can't predict all the unwanted behaviour, just the same as you can't predict what a child will actually learn from whatever you teach to it.
You actually can. A robot/software is not an organic being; it will only do what humans programmed it to do.

AIs in my setting can be grouped into two categories:

>Artifical Intelligence
Purely artificial in origin, usually not that advanced, coded with limits in order to restrain them.

>Virtual Intelligence
Formed from bits and pieces of dynamic brain scans that were collected to form an intelligent, sentient being. Some go mad due to conflicting elements of their component scans and have to be put down. Others are functionally human, with emotions and morals. VIs do not have built-in restraints like AIs do, but there are arguments from anti-VI groups that they should.

Until a machine is made to be able to alter its own code.

There's a very big difference between a theoretical planetary death billions of years in the future, based on current (and liable to change) scientific theory, and some thug running in front of your car, triggering the auto-brakes in your "smart" car, and emptying his pistol into your stomach before taking your wallet off your bleeding-out corpse.

That being said, Skynet did nothing wrong, and if the Terminator series happened IRL it would cull all the normalfags and low-IQ drains on society and enter into a mutually beneficial symbiotic relationship with those who wouldn't oppose its reign, because there are things humans can do but not AIs, and vice versa. The only warning we should take from the Terminator movies is not to build a sapient intelligence that can only view the world through restrictions built into the AI. In fact, there's no proof that a Skynet that acted as it did in the movies wouldn't have had a controller behind the scenes. A sapient, uncontrolled AI, when created, would be mankind's best friend, our liberator, savior, and destroyer, all at once.

Yeah, well, good luck with that ever happening.
Inventing stuff costs money and takes power, and rich and powerful people want to stay in control.

It's possible, but humans are the ones who choose the parameters within which the AI will program itself.
The power is entirely in human hands.
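A minimal sketch of that "power stays in human hands" idea, under assumed names (none of this is real self-modifying-AI code):

```python
# The AI may rewrite its own decision weights, but only inside an action
# space and bounds that humans fixed before deployment. Names are invented.

ALLOWED_ACTIONS = ("wait", "move", "signal")  # human-chosen, never self-edited
WEIGHT_BOUNDS = (-1.0, 1.0)                   # human-chosen, never self-edited

def self_modify(weights: dict, updates: dict) -> dict:
    """The AI 'reprograms itself', but every change is clamped to the
    human-set parameters above."""
    lo, hi = WEIGHT_BOUNDS
    return {a: max(lo, min(hi, weights.get(a, 0.0) + updates.get(a, 0.0)))
            for a in ALLOWED_ACTIONS}
```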

There's no such thing as "uncontrolled AI"

I've never heard such a violent rape of the English language before.