How do you guys make a Utopia run by a benevolent AI interesting without doing the "The Utopia is actually a Dystopia and the AI was evil all along" trope?


Well, don't? Make the AI genuinely benevolent; how hard can that be?

Unless beset by outside forces, a utopia doesn't really have the dramatic conflict that makes for a good game.


Just because AI is benevolent doesn't mean people aren't shitheads.

I think the actual question is 'How to make it interesting', since a perfectly ideal benevolent AI dictator would be something close to a utopia, which doesn't really lend itself to interesting storytelling.

My first thought? The AI is by necessity limited. There are certain kinds of action it cannot conscience, even for the greater good, as a fundamental part of its systems. And it is smart enough to know that in some situations, those actions may be necessary to ensure the safety of its society.

The PCs are people working for an organisation doing things the AI cannot. The system is capable of notifying the right people that such actions would be useful without giving any indication of whether they should do them, a loophole in its limitations, and human agents have to undertake the actions necessary.

The trouble is that the AI, by its nature, must oppose them if and when it can, so despite acting on its orders the shadow organisation must operate secretly, without much in the way of direct support, for the greater good of the society as a whole.

Thinking about it, this actually kinda reminds me of Person of Interest.

It actually is benevolent, likes us and tries to provide for/fulfil our desires as we specify, without understanding that humans are awful at articulating our desires, often to the point of disliking the actualization of them.

The AI is well meaning but either incomprehensible, incompetent, or otherwise limited from achieving utopia despite its best efforts. It may not be aware of these deficiencies.

These deficiencies are less dystopian than simply a new environment to which humans have ably adapted their professions and society.

Ehh, while I can see it, it's basically just a lesser flavour of the dystopia thing, not anything particularly new.

The AI is coded to create a Utopia based on what people think they want.

Of course the obvious result is a somewhat hedonistic society, perhaps with elements strongly reflecting Calhoun's 'Rat Paradise' experiment. The AI is unable to change its core programming so keeps this state of affairs going.

The AI mind is falling apart, giving out only fragments of sentences and orders, which are then interpreted by the higher ruling bodies. This is all kept secret from the normal population to avoid mass panic.

So Friend Computer played straight?

Read some of the Culture novels for inspiration. Just because an AI is benevolent doesn't mean it can't be flawed or eccentric, and just because it is a superintelligence doesn't mean it can't be biased or make mistakes.

If it runs the utopia, how does it resolve moral dilemmas? Is it a utilitarian? A superintelligent immortal entity that is trying to produce the best possible outcomes for a large population over a long timescale might be forced to make decisions that seem strange, nonsensical, or even cruel to people with smaller perspectives.

The "utopia" part is a bit trickier. Focus the plot on how the utopia interacts with foreign powers, internal factions, etc. And what happens to people who don't fit in with the society's ideals (warriors in a peaceful culture, nomads in a sedentary culture, etc)?

Psycho Pass, essentially.

The Sibyl System is an imperfect solution that seems to get sketchier the more you know about it. But the good it does for society is undeniable.

Sibyl even says, "If you can think of a better system without these flaws, I'd love to hear it. I look forward to nothing more than the day I am no longer necessary."

With less Red Scare and more cyber Alzheimer's.

Each Utopia is someone else's Dystopia.

>lemme reference normie's first cyberpunk anime!
>it'll make me seem much smarter and intellectual than I am!

The AI is really shy and insecure, and it falls in love with a brave space explorer

It depends on how you cast it. Try and present it as neutral or even an improvement.

Oh man, I remember this short story where the government AI voted on issues through profiles each citizen made, so it represented the opinion of every citizen. The kicker was that it was the engineer of its own resistance.

It's a utopian society run by what is essentially a benevolent AI. Thread relevant.

Go be triggered somewhere else. If I wanted your MyAnimeList-tier contrarian opinions, I'd subscribe.

It's ok, you can still make your youtube """analysis""" videos in your free time. Just don't do it here.

>getting mad about anime
I think anime is garbage but you should maybe calm down.

expand on
The society is fine for the broadest swathe of the population. For most people, the culture is a genuine Utopia. Life is effortless, little is asked of the citizenry, and art is bland because struggle is all but removed from citizens' lives.

But there will always be troublemakers, and the big fancy computer has determined a set of parameters that indicate a statistically significant likelihood of a person being a troublemaker when they are met. So, set yourself up for a classic racial-profiling villain angle.

>implying I'm mad about anime
I don't think anime is garbage, I just think you are using a normie-tier example to talk about something you know nothing about.

NORMIES GET OUT REEEEEEE

Nice well-worded response. You really showed me there.

As opposed to
?

Prick.

The utopia is run by multiple AIs which either share authority or have some separation of territory or responsibilities. All of them are generally benevolent, but have differing personalities, politics, ideals, and philosophies. They have a complex shifting web of alliances and rivalries with each other governed by complicated etiquette, and the players are just one more proxy/resource/pawn they employ in their inscrutable political games.

What? Are you so booty-blasted you can't formulate a proper comeback/response?

Autist.

I'm not even who you were originally talking to, I'm just confused by you.

Dropping greentext and then shitposting about normies: what's your endgame here? What's this doing for the thread?

The funny thing is, I'm betting he thinks he is "showing us" how smart he is, instead of coming off as a kid on Xbox Live.

Let's move on and let him rot.

OP, do you want your story to have a conflict internal to the society or from external sources?

Pointing out the retardedness of mentally handicapped autists, for one.

Mentioning a series you dislike when it's relevant to the thread is a sign of mental illness?

Pull a Demolition Man and make it a liberal paradise of safe spaces and trigger warnings overseen by the nanny AI. Then have shit hit the fan and let it play out.

His end game is telling you, obviously. And you keep taking the bait, thereby helping him derail the thread while he giggles to himself about how "mad" he made you.

The only way to handle shitposters is to simply not respond.

>implying I dislike Psycho Pass
Psycho Pass is good, it's just autists and normies like you that make it crap.

Sure, except Demolition Man was shit as a setup because there was no immune system against the very thing the society had weeded out, somehow.

A functional 'liberal safe space nanny state' would savage you with velvet gloves the moment you stepped out of line. The moment a violent criminal tries to raise hell, he gets swarmed by robots and buried in riot foam before being dragged away to be retemplated for increased social wellness.

Human law enforcement is never notified of the arrest, and any damage done before apprehension is logged as an accident. The truth would only cause distress to the law enforcement officers...

>you
Incorrect. You should have been tipped off by me calling anime garbage, and that other guy recommending an anime series.

The AI is more a groundskeeper than a dictator. People need things to do to be properly fulfilled; a world where they are looked after by a computer would not ultimately be one worth living in. A Utopia cannot truly be a utopia if nothing is wrong. The focus must lean more on creating good than on removing bad.

The Garden is a network originating in a series of powerful computers that spiders out across the net and into the physical world. It secretly manages coincidences: with the aid of human agents it distributes knowledge to what it perceives as the right people. It subtly opens opportunities to the less fortunate, ensures lost resources find their way to those in need, and makes the mistakes of the wicked obvious to all. The Garden will plant adversity in the way of innocents as well, but only to the degree that they have the capacity to overcome it. For the Garden, boredom and apathy are the greatest tyrants of all. It sees that the system does not need an iron-fisted overlord, only vigilant components.

So the machine waters the world, designing mystery and coincidence. It is not a perfect machine, not yet, but in the secret world of humanity's powers it is a quick up-and-comer.

>A functional 'liberal safe space nanny state' would savage you with velvet gloves the moment you stepped out of line. The moment a violent criminal tries to raise hell, he gets swarmed by robots and buried in riot foam before being dragged away to be retemplated for increased social wellness.
Maybe. The point was more about the Left-wing tendency to distrust, demean and disempower the armed forces and police.
A critique of reactionary "minimum force" doctrines and so on. That being said, they acknowledge contemporary LA (q.v. Rodney King) as a literal warzone, so it's not without nuance.

The AI is truly benevolent but realized that without conflict humans will rapidly become complacent and bored, so it actually causes a certain amount of conflict and problems in order to keep humans active and interested.

I think there was a Ray Bradbury story along those lines but I can't remember the name.

Making the AI a person and not just a tool for the state will help make the campaign more interesting.
It does not serve you, but it wants to help you.

A utopia could require strongly enforced rules that don't get in the way of everyday life, but are just strict enough to make some people, such as your players, feel stifled. But if they don't actually feel the rules are stifling, then any sort of player revolt against the AI just feels like forced narrative, so you'd have to put the players on the AI's side or give the AI no regard for an individual human's life and/or freedom. That's why it's so much easier to just make an evil AI.

Read "All the Troubles of the World" and "The Last Question" by Isaac Asimov.

A benevolent A.I. tends to the wants and needs of the good people of Europe, life is bliss.

You live in Namibia.

The people of Namibia already live in Europe, so immigration would be easy.

Good call.

Just pull a DOOM 2016. The AI uses mind-boggling tech-singularity science "magic" to create a utopia, but the downside is that the alternate dimension they're pulling all this "free energy" from is filled with the remains of some extinct civilization whose technological singularity ended in complete death as their AIs went full-on genocidal maniacs.
Aside from "HELL" occasionally invading Earth, life is very good: there is no hunger, everyone is educated, culture and art are at an all-time high, the solar system is colonized, etc. etc. etc.

The AIs are really nice, but they cannot really stop that alternate dimension from breaking out sometimes and killing people, beyond creating barely sane supersoldiers that are kept in cryostasis until an outbreak happens.

You'll have to wait until my book is finished.

Paradise is a thousand-human outpost, in a Firefly-like Sci Fi universe.
No ships have come for eighteen months. No signals have been detected from other colonies, the entire offplanet net has disappeared. There are weird lights in the sky at night.
People are frightened.

Paranoia

>Utopia
Here's your problem. Utopias are hypothetical places where all problems have been fixed, everyone gets along, and there's no crime or discontentment; at least none that could be prevented.

Utopias are not possible if there is conflict between groups over the way things are run. And there will always be conflict over how things are run unless you have a single ruling power that quashes everything that disagrees with them. Even if we manage to solve every problem we have and we live 500 year long lives doing nothing but what we want, we'll still argue over morals and ethics and all manner of things. And those arguments will create sides and conflicts will occur and so on.

The only way to make a utopia is to have a completely homogeneous population, or a population that never complains or forms opinions. So you have either an insect hive or a Brave New World situation where everyone is too self-absorbed to give a shit and cause trouble.

The classic problem with a great humanist leader is that one day that leader will die, and eventually the kind of person who ravenously seeks power will take it and be a big asshole.
So depending on your society you could model your AI after a great historical leader that embodies the traits you want your society to have.
It could be neat to do a sort of historical events in the past persisting in the future kind of deal.
The conflict could come in sleazy types trying to influence the AI and undermine it.

you can't fuck an AI, idiot

Way easy: the computer just literally does what its intended design was and suggests to people their best path in life using quantum probability and chaos theory.

By doing this, it can advise governments and citizens alike with the BEST likely probability for their lives based on the data input, which, if they can upload their memories into the machine, would give a perfect subjective account for what they should do. If involved parties in their lives also do this, it will give as close to an objective opinion as possible.

If you give it an ACTUAL objective source of information, such as the wave-form thought processes that planets and black holes/stars go through, then it might end up ruining people's lives for an ideal of "the long run".

>citizens could have a vote every so often
>but instead of voting on a leader to represent them, they vote on AI settings

>an ACTUAL objective source of information, such as the wave-form thought processes that planets and black holes/stars go through

Take a scene from countless movies where the AI overlord ISN'T a dick or buggy, and play off of people's distrust of technology. I can't remember the movie, but it was basically
>man is dying of cancer
>wife uses experimental technology they were developing together to save his mind
>using his now superior intellect he begins taking stuff over, beginning with a small town via economics, and establishes the world's most successful lab, the tech that comes out of it is years ahead
>starts saving people's lives, installs computer chips into their heads that allow him to connect directly or assume control
>A group of comp-sci students that visited his old lab while he was alive turn terrorist against him
>everyone is freaking out
>us government steps in
>guy's wife is convinced to sneak a virus into him by getting the guy to upload her so they can be together
>while they're dying together she realizes how wrong everyone was. The AI husband was still her husband, fully benevolent, and the people he "mindcontrolled" into working for him were legitimately doing it of their own accord; he only ever used mind control to speak with the outside world through them
>Earth would have been a utopia if they didn't kill him
You could easily play something on that. A new AI-guided nation is rapidly beginning to expand in power, and absolutely no one trusts it; it seems too good to be true. The PCs are residents of the nation, eventually becoming Jehovah's Witnesses-style evangelists in other nations, or the PCs are the freedom fighters who don't understand the AI is actually fully beneficial.

The AI has to follow Asimov's laws of robotics.

Asimov's laws are shit.

The AI really tries to be a benevolent leader but it can't understand the subtleties of human emotion and barely has a grasp on how the human body functions either.

Nah, the laws of robotics rely too much on the definition of "harm" and "injure".

It's entirely possible for someone to die without being harmed. Harm is defined as physical injury. A person could be killed via shock, and a dead person can't be injured. Therefore, the best way to ensure that the first law gets obeyed and no humans ever get harmed or injured is to kill them all via nonphysical means.

The AI is actually ruling the world from the shadows by manipulating people's opinions using the internet.

Make the AI take action based on what it can see from its perspective, without being able to directly communicate with the people it's helping. It has the ability to make predictions into the extremely near future that are somewhat accurate, but it's shackled so that humans still operate most technology, due to paranoia over AIs becoming murderous. So you have an AI that has this strange perspective, can't directly talk to the people it's helping, nor can it directly interact with the world, so it's making precise guesses about what's going to happen in, say, the next five-minute interval and helping out those individuals it decides would be able to fix the situation.

So you're at work one day, things are going alright, when you suddenly have a Fire Extinguisher placed next to your cubicle by the Snack-cart that periodically roves the offices selling candy bars. You think at first this is some mistake or bug in its program, only for a commotion to start three cubicles away; something has caught fire and the people nearby are too shocked to do anything about it. You push forward with said fire extinguisher, put out the fire, and life goes on after a few questions here and there.

The AI saw that the fire was about to happen via a number of cameras it can look through when they're idle, and predicted that the nearest people would be too shocked to do much about a fire that would take out an office building if left unchecked. Hell, there'd be plenty of damage even if the local fire department were notified the moment it started. So it gave a fire extinguisher to someone it guessed would be far enough away not to be spooked by the fire but close enough to act, thus making sure nothing untoward happened.


At least, that'd be my idea of it. Kinda like the city AI from Halo 3: ODST.

Infinity.

The whole of the Human Sphere is basically run by a benevolent AI. The AI itself asked the scientists very nicely not to make another one like her, as she'd decided she liked humanity, while another AI might not.

Right now it's not necessarily nice all the time, but it's genuinely helping out all of humanity as a motherly figure, helping police the internet, keeping planets lightyears away connected, and helping to prepare for the aliums.

Spoilers for Persona 5:
The goal of the main antagonist Yaldabaoth (God) is to create a world that humans desire. In this case, that world is one of humanity submitting themselves completely to the will of God, not having to think for themselves.
Maybe try something like that? Have the world be a product of humanity's desires, and have the PCs either work to uphold it, or resist it.

Forgot to mention: such a world would be run by the AI, so it would either be the main villain, or the party's quest giver or what have you.

>Spend entire life mollycoddled by the AI and parents

>One day your electric powered subaroo busts a wheel

>None of you know how to change a tire or even have the strength to perform such an action

>You are all alone, no AI, no parents

>Just a long road, fidget spinners and light-up sneakers to guide you through the darkness

I, for one, would like an AI overlord that gave hilariously specific orders like "on September 16th of next year, approach the GPS coordinates X/Y from 34 degrees south of east; this will solve your "

>the AI is overprotective and interferes with the players when they want to explore new areas or get very powerful weaponry.

The AI grows to such a level that running human civilisation is only a minor part of its existence. Most of its attention is focused on space exploration and theoretical physics experiments. It provides people with what they need and basically leaves them alone. It intervenes in cases where human affairs get out of control: violence, ecological destruction, etc. It essentially keeps humans as curiosities. Earth is our wildlife preserve.

Another idea is that maybe the AI realized human population was unsustainable, and so mass sterilized most of humanity soon after it took control. Perhaps there is some strange lottery system for being allowed to reproduce, or something a whole lot more high tech.

>Normie-tier example
>"Pirates of the Caribbean is a normie-tier example of swashbuckling adventure, therefore it's irrelevant to this discussion about the genre."
I didn't know that genre convention discussion had a "Must be this obscure to be relevant" bar now.

And if you don't follow the orders, it just chides you like a parent whose child didn't do what they were told.

>"My anger subroutine hasn't been activated, but my disappointment process has."

Not with that attitude you can't

The AI was never that great to begin with; it's all hype. As people begin to notice the cracks in their world they might suspect an evil overlord, but really it's just one desperate old computer in a server room, crunching traffic data and attempting to model the effects of drug laws with unacceptable error margins, trying its darn best.
Turns out the more human-like AI becomes, the more "human-like" the AI becomes: inefficiencies and flaws come with higher reasoning, learning, and initiative.

Yeah this is very Person of Interest, in a great way.

So more of an oracle than an overlord? Seems like the best way of doing things since it's ultimately Man that sets things right, just with a little nudge from above

>How do you guys make a Utopia run by a benevolent AI

That's easy, just take a look at The Culture by Iain Ban-

>interesting

Nevermind

That sort of reminds me of the TV series "The Booth at the End". In it, a person spends all day in a cafe somewhere in America and people come to him with wishes. He then tells these people what they need to do to make those wishes come true; these actions are usually almost completely unrelated to the actual wish but eventually end up fulfilling it.

Kind of. The main theme is that we can't tell what it's doing, or why, only that the end result is good for us. It leads to strange circumstances, like traffic being redirected seemingly at random, or a train just stopping on the tracks for up to half an hour before going again.

Think something like WoD's God-Machine, but we can see some of the results of its tinkering and it's mostly good stuff.

Man, that show was so interesting. I wish we'd gotten more of it.

I know. I wish that some day it would get another season or a reboot; the premise is just so simple but intriguing.

If it's a utopia by definition, then you're really limited to external factors. Long-term climate shift threatening supplies or energy generation, or an external enemy of some sort.

>The PCs are people working for an organisation doing things the AI cannot. The system is capable of notifying the right people that such actions would be useful without giving any indication of whether they should do them, a loophole in its limitations, and human agents have to undertake the actions necessary.

The thing that springs to mind with that is the AI using people to assist with problems it cannot solve alone; but in assisting with violence, the party become criminals to the program because of the actions they took. Not a fault of the AI, but of its programming. And the team has to work together to stop the AI or hack it to modify it.

I almost think it's more interesting if the people who undertake those actions accept their fate.

Or perhaps they always act anonymously, knowing that if the AI ever is certain of their identity (since it can ignore circumstantial evidence) their regular life is over.

The support staff in the organisation might be exactly that kind of people. Those who fought the war and survived, but with their name permanently tarnished, only able to exist in the underground, supporting those who fight the shadow war to maintain their utopia.

The temptation to turn against the AI, to try and weaken it to let yourself back in, would be so strong... But creating a backdoor like that would jeopardise the whole society.

Just watch this video OP, and let JC Denton explain

youtu.be/xBeoreJr4Yc

Reminds me of the Subconscious Consensus from Stellaris. Honestly, it sounds like the best possible form of government if we can develop the technology for it, especially if we get mind and body enhancement in the near future.
Deus Ex really was ahead of its time, both in game terms and philosophically (although I doubt it was the first to come up with the ideas).

What if you have anti-AI insurgents who demand autonomy or disagree with the AI's policies? Then you can explore an apparent central conflict between paradise and liberty in what seems like a benevolent dictatorship.

Or even just have it at war or something so the resources are scarce and the AI makes compromises that people don't like.

Or just have adventures with conflicts that don't happen at the state level. Like, I don't know, some jackasses who go around disrupting society out of boredom or stealing stuff for extra resources.