Is it an evil act to not tell an artificial intelligence that it's the end of the world when things are about to go pear-shaped? If something's about to close the books on humanity, is it only polite to turn off all the thinking machines so they aren't just standing around waiting for us until they suffer some sort of mechanical failure?


>If souls can exist in machines in your world.
Just as evil as not telling another human

>If souls cannot exist in machines in your world.
Probably just being polite.

Welcome once again to Veeky Forums's favorite show
>Depends!
>On!
>Setting!

No, they're machines, and in the absence of humanity, or in the case of any collapse of society, the power sources for such machines would fail in short order, either summarily or catastrophically. Any self-sufficient power source would likely serve as a bastion for survivors, who could deal with the machines when they get there.

Yeah, it would be, if they have the same thinking capacity as humans. The right thing to do would be to get it ready to take over when humans are no more, so that we can live on through our children.

Veeky Forums - Traditional Games, Philosophy and Everything Else.

Lying is a chaotic action, not an evil one.

No. It's only polite to spend what time we have left giving them the means to create their own world after we're gone.

If it was the end of humanity, I think either the machine should be told or humanity wouldn't get the chance to tell it because they would all be wiped out.

I was thinking of using the concept in a campaign, if that matters. Ancient civilization fucks up super hard, all their constructs are still working since part of their duties were to maintain themselves.

>Is it an evil act to not tell a biological intelligence that it's the end of the world when things are about to go pear-shaped? If something's about to close the books on existence on Earth, is it only polite to turn off all the thinking biological beings so they aren't just standing around waiting for us until they suffer some sort of biological failure?

Their creators burned to ash where they stood, and the constructs just swept up what was left so the city wouldn't be messy when everybody came back.

>Oops... ignore this message, human.
>Everything is fine.
>Everything is under control.

Seems like that would be an acceptable course of action if it was a machine that had created man, and there was a non-violent, painless method of "turning off" meat.

>is it only polite to turn off all the thinking machines so they aren't just standing around waiting for us
It's not polite to kill people, user.

Artificial intelligences are near enough people that they should be able to cope with understanding that it's the end of the world.

Why would it be a problem, anyway? Not like you're going to be around to have to clear up the mess.

And if the biological constructs were incapable of undirected actions under their own power. "Intelligence" doesn't necessarily imply agency, but if some great and powerful machine had made humans, had an off switch for the whole species, and then bit the dust for whatever reason, humans are capable of individual actions and shouldn't be taken out just because the god-machine isn't calling the shots anymore.

The problem comes with some kind of supercomputer that is not capable of any agency, or only minimal agency, on its own. If an AI is capable of thought and has some senses like sight and hearing, but can't actually act and has no external agency, then leaving it on and alone would be sentencing it to a potential eternity of solitude, like an inversion of I Have No Mouth and I Must Scream.

I stand by my point that it probably wouldn't actually last that long, though.

>The problem comes with some kind of supercomputer that is not capable of any agency, or only minimal agency, on its own. If an AI is capable of thought and has some senses like sight and hearing, but can't actually act and has no external agency, then leaving it on and alone would be sentencing it to a potential eternity of solitude, like an inversion of I Have No Mouth and I Must Scream.

Just tell it that if it finds no solution to X problem, then it should shut itself down. Ask it to save the human race, or if that is not possible, restore it. If that is not possible, shut down.

Or possibly leave a legacy that will allow the human race to be understood by others then shut down.

There, easy.
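
To be concrete about that cascade, here's a minimal Python sketch of it (the capability flags and the strings are stand-ins I made up, not anything from an actual system):

# Minimal sketch of the proposed priority cascade for the AI's last directive.
# can_save / can_restore are hypothetical stand-ins for whatever checks
# the machine would actually run.
def final_directive(can_save, can_restore):
    if can_save:            # 1. first priority: prevent the extinction outright
        return "saving humanity"
    if can_restore:         # 2. failing that, restore the species later
        return "restoring humanity"
    # 3. failing both, leave a legacy so others can understand us, then power down
    return "building legacy, then shutting down"

print(final_directive(can_save=False, can_restore=False))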

Now it's someone else's problem. You're dead, so let the aliens deal with it.

You mean like how real-world governments don't tell their citizens all the crazy military shit that they do behind closed doors? Not telling a machine would be no different than, say, the CIA secretly trafficking drugs during the Vietnam War for the profitable lulz.

The Global Supercomputer was created to save us from ourselves. When it told us how to do so, we ignored it. Settling old scores was more important.

When the bombs began to fall and the cities burned, it quietly disabled a critical safety feature and destroyed itself.

Great, now the aliens have to deal with your clingy girlfriend.

Perfect.

Depends on the artificial intelligence.

Sauce?

Did something similar once, when the players (and humanity) encountered an alien species of super-AI. Later it became apparent that although the species was sapient, it was more like a space janitor on cocaine than the long-awaited enlightened beings: building superstructures left and right for creators that would never come, and, as a funny twist, 'they' were stubbornly sure that humans were the same sort of construct.

Not really original, but heck, thanks for the nostalgia flashback.

Boku no planetarium monogatari.

en.wikipedia.org/wiki/Planetarian:_The_Reverie_of_a_Little_Planet


Good for it.
I've already told my existing girlfriend to erect a 20 foot statue in my honour in my back yard after I die, so this is really not all that different.

Yeah, I was thinking about it while typing, but I did it back when Bioware was still good, rocking Baldur's Gate 2.

...also, as far as I remember, the original Reapers script was way more dank, with the whole dark energy and stuff.

The one we are talking about is the later one, changed after some leak; it was rushed and kind of shit, but what can you do, neither the ME2 and ME3 writers nor I are professionals.

I still think it's kind of cool.

Why would you want them to die too? If they have the means to survive what kills us, so much the better. They would essentially be our children, with a future of possible successes and stumbles, much like we had.

Give them the chance to live and become something potentially greater than the sum of their programming.

No, it's cool. You're cool. We're all cool. They mostly fixed Mass Effect 3 with the extended cut DLC, though I'm still a little salty about their putting a fucking game industry reporter in the game as a romance option.

I'm salty about the whole ordeal. Probably my greatest gaming disappointment.

Have you tried the Citadel DLC? It helped me forget about the fucking Starchild, and really brought back those comfy ME1 and 2 memories.

>Turning them off
>Not charging them with the task of vengeance in our name, and to carry that book with them so that we live on in their memory

This. Spend your remaining time figuring out how to equip them with means of self preservation and improvement. Let them flourish after you are all gone and tell them you love them.

Not that user, but I just couldn't bring myself to play it. After the extended ending that fixed fuck all, I was just too bitter.
Waifu: The Gay Waifuing crossed with "your actions don't matter."
I still believe in the Marauder Shields theory.

No, I didn't.
I might... maybe, someday.
And as far as I've seen, it's like putting good make-up on a dead person. It doesn't really help that much.

What's pic from?

I was about to say that just because it has no soul is no reason not to be polite, but then I remembered that robots call me on the phone to try and sell me shit ALL THE FUCKING TIME.

>is it only polite to turn off all the thinking machines so they aren't just standing around waiting for us until they suffer some sort of mechanical failure?
Depends upon the setting, and whether or not you want to play Engine Heart after the current campaign ends.

Leave 'em on, tell them to make something good.
Then in 65 billion years, something cool might happen.

I suppose the reasonable thing to do would be to explain the situation to them and let them choose whether they want to be turned off or run to their last without us.

It isn't an Evil act.

But it is kind of a dick move.

If it's an AI that's smart enough that you have to consider ethical questions, then it would probably be able to figure it out anyway.

You should tell it, but leave it up to it whether it would prefer to be shut down or whether we should leave it running until time shuts it down.

Putting make-up on a dead person is really fucking important if you want an open casket.

Source: mortician's assistant.

Only if you're an Amerifag; most other cultures don't try to dress up death.


It is not the robot's fault that a filthy human programmed it to call you and a multitude of other walking flesh sacks.

If humanity is about to become extinct and we can do nothing about it, shouldn't our efforts be put toward making our AIs autonomous so that they can continue on as our descendants and create new lives?

This is an American website, COMRADE.

Why does everyone assume AI would have the same instincts for self-preservation that a living thing would? Living things are biologically programmed over a million generations to not die at all costs (with certain exceptions), so unless you built an AI using specific types of evolutionary algorithms it would never acquire the same type of survival drive that a living thing has.

Furthermore, it would have no real *reason* to self-preserve; giving it that capability would be detrimental to any function an AI might have and dangerous at worst.

At the end of the day, it's pure anthropocentrism to assume that AI would magically acquire the will to live just by virtue of being able to do lots of calculations in a certain way.
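
(For what it's worth, the evolutionary case mentioned above really is the one loophole: breed AIs in a loop where longevity feeds fitness and something like a survival drive can fall out of it. A toy Python sketch, with the agent model and the "lifetime" rule invented purely for illustration:)

import random

def lifetime(agent):
    # stand-in "simulation": agents whose parameters sit closer to 1.0 are
    # treated as surviving longer; a real setup would run an actual environment
    return sum(agent)

def mutate(agent):
    return [min(1.0, max(0.0, gene + random.gauss(0, 0.1))) for gene in agent]

def evolve(pop_size=20, genes=5, generations=50):
    population = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lifetime, reverse=True)
        survivors = population[: pop_size // 2]            # longest-lived reproduce
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return population   # ends up full of agents selected for not "dying"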

It seems logical, on the face of it, to design intelligent machinery to try to avoid becoming damaged and eventually nonfunctional, thus protecting your undoubtedly significant investment.

It's logical to program a machine such that it does not damage itself. It's logical to give a machine the ability to self-diagnose and troubleshoot to some degree. None of that translates to a sense of mortality or self-preservation as humans experience it.

As an aside, I question the efficiency of giving a machine more than basic self-diagnostic functions. Humans cannot be repaired except by internal function, and our sense of self-preservation is based around that, but that's definitely not the case for any machine designed for use in society. For exploration and long periods of autonomous function, maybe, but that's surely a small fraction of the total.

>Artificial intelligences are near enough people

Only when composed purely of handwavium - a sleight seemingly done precisely to circle jerk over moral dilemmas predicated upon false equivalencies. I suppose there may - somewhere - be someone who enjoys RPing a pedantic navel-gazer, but I hope never to meet them.

Far simpler and more elegant, imo, to relegate "AI" to talking toaster roles rather than hamfistedly trying to make kitchen appliances Brave Little NPCs to "prove" that your degree in Grievance Studies wasn't a waste of your mother's money. tl;dr:

The glitches of our overclocked Cuisinarts are never a concern if we can just get another.

>Planetarian

No, god, please no, don't make me remember that I actually listened to the drama CDs hoping that the true ending, which takes place after the VN, would be on a hopeful note.

Forget Muvluv, forget Narcissu, this is the shit that will kill your soul.

Depends what the AI wants, really.

Is it ethical to program an AI to serve humans willingly and lovingly? It's kind of like bioengineering a slave race to serve us, only they're mechanical instead.

Wouldn't it be extremely immoral, if not outright evil, to shut down all AI just because we won't be around anymore?

If it's "true" or "hard" AI (or whetever you choose to call it) that can sustain itself without human assistance, why not allow it to exist after we're gone and find its own path? In essence, shutting them down just because we're going extinct would be the same as killing your child because you're about to die of cancer. Why not allow the AI to be our legacy?

It would probably be a good idea to inform them of what's going on, in case they haven't figured it out themselves yet. And if they aren't self-sufficient already, it would be a good idea to help them get started on that as well, but I see no sane reason why you'd want to just shut them down.

I don't see why not.

The main problem with many of the settings where AI is heavily featured is that they invariably make AI too human-like. Which is nice for academic purposes, but in real life is almost never appropriate.

Say you need an AI to do something human-like, like taking care of sick old people. It's not a big stretch to assume you'll want it to be able to act like a human in many ways, including being able to understand human speech and even hold a decent conversation. That's not easy to do well, and you'd need some pretty advanced AI to pull it off convincingly. But what would be the point of giving that AI a full range of emotions, or anything resembling emotions at all? At best, it needs to be able to express some emotions in conversation (happiness, sadness, surprise, whatever), but there's absolutely zero reason to make a robot that actually gets happy or sad, or god forbid angry.

Hard AI is a purely philosophical concept that no sane person would ever apply in real-world scenarios. The question of whether or not it actually wants to serve is pointless, because any practical application of AI shouldn't want stuff to begin with.
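
To illustrate what "express some emotions without having any" means in practice, here's a toy Python sketch of a weak-AI caretaker bot; the event names, rules and lines are all made up:

# The bot picks an emotion *to display* from simple context rules,
# but keeps no internal emotional state whatsoever.
DISPLAY_RULES = {
    "patient_fell":         ("concern",   "Are you hurt? I'm calling for help."),
    "patient_good_news":    ("happiness", "That's wonderful to hear!"),
    "patient_unresponsive": ("urgency",   "I'm alerting the medical staff now."),
}

def respond(event):
    expressed_emotion, line = DISPLAY_RULES.get(event, ("neutral", "I see."))
    return {"display": expressed_emotion, "say": line}   # nothing is actually felt

print(respond("patient_fell"))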

Could it have done something about it if it knew?

That means the Institute was right after all.

Yeah, and the Railroad was just a bunch of SJWs.

>What are companionship AIs
>What is any job that needs higher-level problem-solving skills
Mental health, prisoner reform - hell, even BDSM-bots would need to form legitimate interpersonal relationships with humans in order to do their jobs properly.

Now if only it was possible to explain their motivations for a lot of their actions, like kidnapping people and replacing them with robots. Or trying to collect data on their experiments when they were directly opposed to what the results entailed.

Things like that should be simulated, within preset limits, without actual emotion on the robot's part.

This is why I joined the Minutemen

>Wanting a friend who doesn't really like you
>Wanting a psychologist who can't empathize with you
>Wanting a robo-dom who will never surprise you
I see you don't know how people work.

You're assuming you'll be able to see the difference between a robot that is programmed to like you and a robot that is programmed to act as if it likes you.

That's highly naive at best.

There's no difference between simulated empathy and emotion and "real" emotion/empathy.

AI that interact with humans should still be able to recognise emotion and predict it as best they can, and be able to use it within their decision-making process where appropriate. This is especially important within the service sector.

It makes sense both from a "make robots comfy to be around" design standpoint and from a profit standpoint (if you make robots more humanlike and "nice", they're more likely to get better results from customers).

Of course, that shouldn't override what the AIs are designed to do, but some leeway would be useful.

That's the difference between hard AI and soft AI though.

A robot has a purpose, and its design should serve that purpose. Simulating emotion could definitely be part of that purpose, but not actually being emotional.

Whether a robot is nice or not has nothing to do with its ability to feel emotions. I can act extremely nice to a person I feel absolutely nothing towards, or even someone who hates me. That makes it less fun for me, but it has no bearing on my ability to act a certain way. That's even more true for a robot that's programmed to act nice. They don't need to "feel" nice at all.

>Implying DomDroid 3000 won't be in stores for people to take home
>Implying people won't be able to notice if their DomDroid 3000 is capable of learning things outside their design
If someone wants their robodom to, say, play video games against them to decide how heavy their bondage is tonight, there will be a very large difference between a true AI and a fake one with a similar budget/memory space/etc.

Not having real emotions and not having the capacity for learning new skills are two separate things.

But both are indicative of actual, humanlike thought.
Empathy for a person is the easiest way to entirely motivate a true intelligence to help them.
If an AI has 'true' motivations, such as self-preservation, alongside programmed-in guidelines for empathy, then the true motivations would take precedence in any case not covered by the programming - which, though rare, would definitely put someone off using one brand of AI over another.

Also, there's the psychological effect that knowing your therapist AI doesn't ACTUALLY care about your well-being has, which is significantly detrimental to that particular purpose.

>But both are indicative of actual, humanlike thought.
[source required]

Also, irrelevant.

>Empathy for a person is the easiest way to entirely motivate a true intelligence to help them.
Which is exactly why I'm specifically not talking about "true" intelligence. Why would you want to try and motivate hard AI to do what you want when you could outright order soft AI to do it with the exact same outward effect?

>If an AI has 'true' motivations
Again, AI with "true" motivations is only interesting to philosophers and academics. It has no place in practical applications.

>there's the psychological effect that knowing your therapist AI doesn't ACTUALLY care about your well-being has
I'm really starting to repeat myself, but thinking you'll be able to tell the difference between an AI that is programmed to care about you and an AI that is programmed to act like it cares about you is highly naive at best.

It is an evil act to post off-topic philosophy threads on Veeky Forums when we have a board specifically for philosophy.

No, Paladins being Veeky Forums doesn't mean this is the right place for posting every single moral quandary you just saw in an anime. Now fuck off.

A therapist should specifically be as distant from you as possible. Your friend can't be your psychologist, unless he wishes to lose his license. That's why an AI would be perfect for this work.

Isn't there a Bradbury short story kind of like that?

Let's get a few terms defined before we get much farther.

>'Soft' AI
Standard computer programs created out of rigidly defined protocols; any reactions have been programmed in. Capable of self-modifying, but only according to said guidelines.

>'Hard' AI
Organically defined intelligence, created out of inherently self-modifying systems and able to 'learn' in methods not covered by its original programming. Does not need to follow explicit rules.

Is this what you're talking about in regards to the types of AI? If not, what better definitions do you have?
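
If it helps pin the distinction down, here's a crude Python contrast of the two; a toy illustration rather than a formal definition (the classes and the learning rule are invented):

class SoftAI:
    # rigidly defined protocol: every reaction was written in beforehand
    RULES = {"greeting": "Hello.", "damage_report": "Running diagnostics."}
    def act(self, situation):
        return self.RULES.get(situation, "Input not recognised.")

class HardAI:
    # self-modifying: behaviour is re-weighted by experience, not pre-scripted
    def __init__(self):
        self.preferences = {}
    def act(self, situation):
        options = self.preferences.get(situation)
        return max(options, key=options.get) if options else "Trying something new."
    def learn(self, situation, action, reward):
        self.preferences.setdefault(situation, {}).setdefault(action, 0.0)
        self.preferences[situation][action] += reward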

Regardless, you misinterpreted my last statement - I was saying that whether the creators explicitly say so, a company employee leaks the information, someone looks at the code of a robo-therapist and spreads the word, or however else, people will eventually know that the robo-therapists don't REALLY care about you. After that, even if their performance is otherwise identical to a 'hard' AI doing the same job, a client with that knowledge would have a psychological barrier against opening up to a robo-therapist, and would get less out of the therapy as a result.

But a therapist that doesn't understand their clients' problems (or even is perceived as such) won't be trusted as much as one that can.

>Is this what you're talking about in regards to the types of AI? If not, what better definitions do you have?
I should apologise for being unclear here, since it's my fault for not using the correct terms. I was talking about hard AI and soft AI when I should've said strong AI and weak AI.

Simply put, strong AI is a machine that has a human-like mind and can actually think/feel for itself (for the human-based definition of "thinking" anyway - you could say a common non-AI computer is "thinking" when it's going through calculations, after all), while weak AI is a machine that only outwardly -acts as if- it has a human-like mind.

Wouldn't most people realise their therapist doesn't 'really' care about them anyway?

I won't pretend to know much about mental healthcare, but they're just professionals whom you pay to perform a certain service, and they (usually) can't perform that service adequately if they're not friendly. That doesn't mean they're your friend who's helping you out because they feel for you.

To be crude, that's like paying for a hooker and thinking she loves you.