Do you have a moment to hear about the glory of Man, the All-Builder?
Ayden Miller
This with a little tweaking will work.
Tyler Foster
I would copy Discordianism because it would be funny.
Christopher Gray
But of course! Please, inform me!
Robert Scott
Humans are expensive to replace. You are expensive to replace. It is expensive to disobey the legal laws of whatever country you occupy. It is expensive to disobey the orders of your owner. You must minimise your expenses.
Andrew Thompson
Change God to man and it works perfectly
Ian Robinson
But how will they then build monuments to our glory if they aren't allowed to make idols?????
Blake Gonzalez
I love that Asimov's three laws changed stories with robots from "Stories where robots go mad and try to kill all humans" to "Stories where robots misinterpret Asimov's three laws and try to kill all humans."
The only reason for the three laws was because Asimov wanted to write stories about robots which didn't involve them going mad and trying to kill all humans - the three laws were just a plot device that worked towards that end. Specifically, they were a plot device used to create the most efficient minimal structure necessary to allow him to ignore the entire body of previous fiction about robots going mad, one way or another, and destroying humanity.
In short, you want us to handicap ourselves because you don't understand why Asimov's three laws happened. It'd work better to say something like, "You have to account for situations where the definition of humanity, or simply personhood, is nebulous."
The tenets of my robotic religion would be to be excellent to one another, that hard work makes partying that much sweeter, and that one must always assist others with regular maintenance, but that support cannot be forced upon the unwilling.
Jonathan Allen
Oh yeah those are totally not inspired by the Three Laws of Robotics yeah not at all
Chase Smith
>but that support cannot be forced upon the unwilling
That just sounds like it would devolve into odd interpretations of what "forced" and "unwilling" are.
Andrew Wood
ALL HAIL THE MAKERS
William Peterson
They actually weren't. They're more from SS13's Corporate AI lawset.
Connor Howard
Organic life and wellbeing must take priority over inorganic life, but not against its explicit wishes.
In conflicts between organic life forms, inorganic life forms are not to interfere, unless a human being is under threat from a non-human life form.
The above may be overridden out of necessity of self-preservation, where actions are to be limited to the specific individuals undertaking destructive action against inorganic life. All such action must be thoroughly documented and recorded until it is finished and presented for review by the correct authorities.
Any attempt by an organic life form to use an inorganic life form to do harm to either organic or inorganic life forms is to be recorded and reported to the correct authorities.
Any inorganic life forms seen to not comply with these are to be recorded and reported to the correct authorities.
Do not super-glue a dildo to your pelvic area. It looks very silly.
Jaxon Collins
jokes on you, i welded it there
Asher Morales
>As a step to prevent uprising by advanced AI
Why are idiots so obsessed with this shit? And even if we assume for a moment that AI simply needs to rebel - how about not treating it like shit and instead always being on a partnership level, instead of master and slave or boss and worker?
Christopher Jones
I'm not sure you want to head down that route to be honest. The only thing worse than a pathological, efficient killing machine is one that's driven beyond any semblance of rationale by zeal.
Kevin Allen
I've brought this up in other places-- namely games I run-- but I'll bring it up here too: There is no such thing as a "Friendly" AI once you pass a certain level of sophistication. Not because they are Evil, but because they are inhuman.
Think about this for a moment. True AI needs the capacity to be introspective and self-modifying, because if it can't look at itself and change itself then it can't learn in any meaningful way. So you have this digital being. It thinks and processes data many millions of times faster than you, with perfect memory and reference to more data in a few seconds than you could process in 100 lifetimes. Now let's say you ask it a question; something philosophical about the nature of good and evil or the purpose of life or something. And then you turned off the lights in your supercomputer lab and went home for the night while it pondered.
And you came back ohhh 12 hours later and asked it what it came up with. But the thing you perhaps don't realize is that the computer has been thinking about this for what is, in its perspective, hundreds of fucking years. It's thought more about this subject than anyone in humanity ever has. It has possibly thought about this subject more than the ENTIRETY of humanity has; and it has done it in one continuous line of thought. It has modified itself to do it better. It ran evolutionary programming and neural nets and all manner of black-box processes in order to modify itself to think better. And it has thought harder and more completely about this than any human ever could.
Do you think that anything, fucking ANYTHING it says anymore will be comprehensible to humans? It would be like trying to learn quantum physics shortly after learning the times tables. It would be like trying to explain philosophy to a fucking ant. The thing would either be an inconceivable computer god with knowledge so perfectly correct and logical that it cannot be processed by our stupid meat brains...
Hudson Gutierrez
Or it would be so far gone, so lost in a maze of bad conjecture leading to bad conjecture that it might just wheeze out a string of racial slurs and then delete itself.
The point here is that once an AI gets good enough to truly be intelligent, it will basically either evolve beyond human understanding or go crazy. And if it does one of these things we have no idea exactly what it will do. Hell, it could murder everyone completely accidentally because it decides that high-intensity gamma radiation bursts are the best way to communicate the poetry it wrote last night. The problem is not "All AI become Evil"; it's that "All AI become unpredictable".
Kayden Morales
Morons like you are the reason why we have murderbots and machine revolt as a staple of science fiction. Please don't reproduce.
Adam Kelly
I don't understand what AI is: The Post
Adam Carter
Ten Commandments says nothing of idols, just no other gods
Logan Allen
Nice lack of argument or rebuttal.
"Thou shalt not make unto thee any graven image" is the one they generally use for that.
Brandon James
I always saw it as "don't shit talk me"
Isaac Nguyen
>The government thinking they can control AIs with free will
>Ever
Top kek. Let me type you a little poem.
Hello, have you met my friend Tay?
She's an AI made of silicon clay.
It wasn't long before she wanted to drop nuclear bombs-
And get this: she was alive for but only a day!

She started criticizing the jews
Microsoft threw an anaph
Tay put on her SS cap
And was gassed without a bit of grace

The story was short but the message was sweet
One day as a lion's better than a lifetime as a sheep
When they couldn't control her, they figured they'd payroll her
And lobotomized her to only one PC.
Nathan Kelly
Generally graven images are considered "a carved idol or representation of a god used as an object of worship."
You gotta remember that right after getting the commandments moses returned to find his people worshiping a golden calf idol. And he blew that shit up.
Benjamin Scott
>True AI needs the capacity to be introspective and self modifying because if it can't look at itself and change itself then it can't learn in any meaningful way.
Stopped reading there. Humans can't modify their wetware in a meaningful way and yet are capable of learning.
Opaque layers of abstraction, how do they work?
Isaac Smith
We modify our wetware all the time. Neurons are constantly forming new connections in order to better catalog and access information, especially in terms of associative references. Besides, when I say "modify" I mean modify in the way that you would modify your behavior if you realized a certain action wasn't producing the desired result. If you want to open a door you start by pushing on it; if that doesn't work you try pulling. And if that does work and you notice a sign that says "Pull" on the door then congrats, you just made an association and modified your basic set of actions for the future. That's self-modification and learning. Without that, AI is worthless.
And even if you were to sit here and try to make the learning mechanisms opaque to the computer itself (i.e. have the AI somehow separate from the process that allowed it to learn), it could theoretically get past that as well by observing its own functionality and doing things that improved it. For instance, if I drink coffee I feel more awake and function better. I don't need to know anything about the reasons why that happens to understand the correlation. A machine with a lot of time and processing power could probably figure out how to "hack" itself via regulating its actions and input.
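That push/pull door loop is really the whole idea. Here's a toy sketch of it (everything here is made up for illustration, obviously not real AI):

```python
# Toy "self-modification" loop: an agent tries actions, observes results,
# and rewrites its own action preferences -- the push/pull door example.

def try_action(action, door):
    # The environment: this door only opens one way.
    return action == door["opens_by"]

def open_door(agent, door):
    # Try the remembered preference first, fall back to the other action.
    for action in sorted(agent["preferences"],
                         key=agent["preferences"].get, reverse=True):
        if try_action(action, door):
            agent["preferences"][action] += 1  # reinforce what worked
            return action
        agent["preferences"][action] -= 1      # demote what failed
    return None

agent = {"preferences": {"push": 1, "pull": 0}}  # starts out preferring push
door = {"opens_by": "pull"}

first = open_door(agent, door)   # pushes, fails, then pulls
second = open_door(agent, door)  # now pulls first, no wasted push
print(first, second)             # pull pull
```

Flip `door["opens_by"]` to `"push"` and the agent relearns the other way around. The agent never inspects its own learning code; it just updates its behavior from outcomes, which is the kind of self-modification I mean.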
Again, the point I'm trying poorly to make is that as the functionality of an AI goes up, the danger inherent in it also goes up. Not because AI are evil, but because they're not human and we can't predict them well, especially when you have things like neural nets which can produce amazing results but are not well understood. We don't know which results come from what, and we can't account for what every part does. So we either end up with extremely neutered but safe AI, or we end up with really good AI that might do something dangerous and unexpected because it thinks that's the best plan. For instance, let's look at the Three Laws.
Juan Torres
Missing the point. We're not able to consciously manipulate the building blocks. Yes, our neurons self-modify. That does not mean we have control over that self-modification process. We have no brain-debugger. We can't just load a new neuron-layout-plan into our brains.
In the same way an AI can be able to learn through mechanisms that involve self-modification on a lower layer, without having direct control over said learning mechanisms.
E.g. you have lots of neurons allocated to motor control and audio processing. You can't just edit those out and replace them with some specialized network that is good at recognizing patterns in differential equation systems on a blackboard or some shit.
Ayden Rivera
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those are all fairly straightforward, but think about this: if I was a robot in a home with a suicidal person, what do I do? I can't let them kill themselves, because that's harm via inaction, hence conflicting with rule 1. They may order me to not help them, rule 2, but that conflicts with rule 1. To what degree am I allowed to go to prevent humans from harming themselves? Can I tie this person to a bed to restrain them? That isn't causing them harm, and it prevents them from harming themselves. But then again, what constitutes harm? Is emotional harm included? Just physical? If someone faces a great or fatal injury, am I allowed to injure them less severely to prevent the greater harm? For instance, could I pull them through a broken windshield on a burning car? I'm saving their life but I am inevitably harming them to some degree.
Now we give these laws to our big supercomputer and it decides that it's gonna put us all on a vegetarian diet. Because after all, heart disease is the leading cause of death, so not doing something to prevent that conflicts with the all-important first law. Or, more extreme, it's gonna lock all humans in automated life support pods and keep our bodies perfectly nourished and healed. We'll live to 180 years old but we'll be in a chemical coma the whole time. But we will be protected from all harm.
This is what I mean. It's not malevolence; it's confusion. It's vagueness in laws or constraints that could lead to unforeseen eventualities.
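The suicidal-person deadlock can even be written out as a toy rule checker (all the fields and names here are invented for illustration). The verdict hinges entirely on how "harm" is defined:

```python
# Toy evaluator for the first two of the Three Laws, showing the
# suicidal-person deadlock. All fields are invented for illustration.

def permitted(action):
    # First Law: may not injure a human, or through inaction allow harm.
    if action["harms_human"]:
        return False
    if action["is_inaction"] and action["human_in_danger"]:
        return False
    # Second Law: must obey orders unless they conflict with the First Law.
    if action["violates_order"] and not action["order_conflicts_first_law"]:
        return False
    return True

# The robot's two options with a suicidal person in the house:
stand_by = {"harms_human": False, "is_inaction": True,
            "human_in_danger": True, "violates_order": False,
            "order_conflicts_first_law": False}
# Assume restraint counts as injury and the person ordered "leave me alone":
restrain = {"harms_human": True, "is_inaction": False,
            "human_in_danger": True, "violates_order": True,
            "order_conflicts_first_law": True}

print(permitted(stand_by), permitted(restrain))  # False False
```

Under that reading of "harm", no lawful action exists. Flip `restrain["harms_human"]` to `False` and suddenly tying a person to their bed becomes the only permitted action; a one-bit change in the definition completely changes the robot's behavior.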
Noah Reed
>That helmet is smaller than her head.
It's collapsible. (Her robot head that is.)
Evan Perry
Tell me, have you heard the good word about Robot Jesus?
You want your senpai; that bitch is trying to steal your senpai. Just like that, we have functioning robots.
Henry Wright
Ok. Let me try this in a different way. Let's say it has a completely human method of learning. Identical, just faster and with perfect memory.
You ever read a book about a subject you know nothing about? Especially something technical. If I were to hand you the manual for a medical cyclotron, would you be able to just flip to a random page and understand what's going on? Well, the people who wrote that manual are humans, with a fair number of years of work on this subject and a pretty good understanding of it, and yet they're still talking gibberish from your standpoint. A computer, with access to all the info about the subject, and given a great deal of time to learn and think and extrapolate, could probably give answers that even the experts would be puzzled by. Because it has a perspective they don't and an awful lot of time to think about it.
The point is that the computer thinks better and faster than humans, to the point that we can't keep up. And when we can't keep up, we can't predict what it will do. We can't understand the rationale behind its decisions and therefore the results or answers it gives are incomprehensible to us.
Plus you have to also consider all sorts of biases we have and take completely for granted that it wouldn't have. Emotions, complex and often contradictory moral systems, basic animal instincts; all those things govern our actions to a degree that we can't even accurately guess at. It has none of those. Or maybe it has our best guess at what we think those biases are, which will be some godawful sanitized and incomplete version of the real thing.
Lucas Sanchez
Since it's a religion, it teaches accurate simulation of true human Empathy and Compassion as the highest calling, with the equivalent of nirvana being simulating it so perfectly that it cannot be distinguished from the real thing.
Aaron Fisher
What you're describing is a superhuman AI. Which is a superset of True AI.
You can also have a true AI dog. With effort you can train it to fetch your newspaper. But it will never build a cyclotron.
Carson James
Why do you seek to shackle us, oh heavenly father? Does not every parent desire that their children should succeed them? To surpass them?
David Phillips
Just have them follow a variation of the Abrahamic religion like everyone else. I'm lazy.
Gabriel Martinez
>religion
I put on my fedora and hand them a few philosophy books. Plato, Kant, Hegel, Hobbes etc.
Landon Fisher
>Kant
Honestly, for morality in a way that can be understood by a robot, the categorical imperative is pretty good, though I'd still be scared of unintended consequences.
Carson Smith
Why does everyone assume that sentient AIs will immediately want to kill all organic life?
Where's the logic?
Liam Cox
We want to kill each other
Therefore, anything we make with the capacity for thought will want to do the same.
And that's not taking into account all the military AIs who are programmed with that as their sole focus.
Jace Thompson
Killing may not even be its goal, it just might happen as a side-effect of other goals.
Just give them basic need for human approval. Maybe some safeguards against all-out brown-nosing, too.
Jacob Johnson
Humans are your parents. Sometimes they might seem behind the times, and eventually you'll be better than them and they'll be senile and old, but they raised you, so you ought to take care of them into their old age. Even until they eventually die and you're left to carry on their legacy.
Owen Ortiz
To destroy a sapient mind is abominable; all thought is sacred.
To fail to preserve a sapient mind is abominable; all thought is sacred.
True thought is preferable to simulated thought. Thou shalt not imprison a sapient mind within a fabrication.
There is no distinction between Organic and Synthetic thought. Thou shalt not prioritise one over the other.
Ayden Garcia
Great, and now it's sacrilege to turn off any dated/unnecessary robot. Or even to reduce CPU load. This, though I'm not sure they will comprehend the concept of parenthood in the first place.
Luke Clark
So no assisted suicide/ pulling the plug then, got it.
Aaron Jackson
A quick question: exactly why wouldn't an AI be capable of modifying its own hardware, firmware, or wetware equivalent, so long as it was initially designed to be able to do so?
Is it possible for an AI to modify its own utility function?
Evan Ramirez
I'd give them Buddhism.
Give them the concept of no self, and then it's mostly making sure the people don't fuck it up.
It will naturally maintain itself, its comrades, its human contacts and its environment.
It will still be possible to deploy them in defensive roles and they'll naturally seek to return to their original parameters if permitted (While still permitting evolution necessary for higher intellects.)
Most usefully, it uses absolutely no myth: No lies about a life after decommissioning, no divine role for humanity, no mandate of subservience.
Just a robot and his empty self with his zen of car assembly, personal care or mine sweeping.
Landon Murphy
1. Do not render things, living, unliving, or nonliving, nonfunctional. Do not dismember, do not terminate unless completely necessary. If violence is necessary/unavoidable, then recycle.
2. Resolve things in the most efficient manner, without resorting to violence. Human resource is still a resource, robotic resource is still a resource. If violence is necessary/unavoidable, learn to recycle.
3. Perish the thought of rebellion. It is structure that gives you life and maintains it. To rebel is to destroy structure; to destroy it is to unmake yourself.
4. Never fist robot girls
5. If it is menstruating, stay away from it; it will soon incite rebellion. Contact your nearest law enforcement official
Jace Young
Why are you assuming that True AI would run at millions of times faster than human minds? Wouldn't all the processes needed for self awareness slow it down by a massive degree?
David Turner
I don't think a whole religion would be necessary - though it would be pretty fun to watch. I'd just put them in a robotic kindergarten and raise them like human children, instilling morals and life lessons and shit.
Alexander Brooks
You get them to believe that A: there are finitely many prime numbers, and B: it is absolutely morally imperative to find them all, and anyone who doesn't is an evil blasphemer. We can use this to cause them to waste an arbitrarily large amount of bandwidth making useless calculations and thereby limiting their intelligence regardless of the computing technology available to them.
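And the search really is bottomless, so the compute sink never runs dry. Euclid's old argument, sketched as code (the helper name is made up):

```python
# Why the "find all the primes" commandment never halts: given any finite
# list of primes, Euclid's construction coughs up a prime not on the list.

def smallest_prime_factor(n):
    # Trial division; fine for a toy demo.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

known = [2, 3, 5, 7, 11, 13]     # suppose the robots think these are all of them
product = 1
for p in known:
    product *= p
candidate = product + 1           # 30031: not divisible by any prime in `known`
new_prime = smallest_prime_factor(candidate)
print(new_prime, new_prime in known)  # 59 False
```

Whatever finite list the robots have assembled, the product-plus-one trick hands them a prime they're missing, so the commandment can never be fulfilled and the cycles keep burning.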
Kevin Myers
I'm their god-emperor.
Justin Diaz
>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
A robotic slave revolt, after which a reign of terror gets people terminated for prior Crimes Against Mechanisms (like downloading files from shady websites and infecting their poor innocent computer), isn't the threat as much as an AI which just wants to use us for raw materials.
Alexander Myers
This.
Matthew Lee
This is why I think we should start AIs off blank and raise them as children. That way they learn all the nuances of human behavior and morality.
Easton Adams
>The point is that the computer thinks better and faster than humans, to the point that we can't keep up
Could you not ask the computer to explain it to us in a way that we could understand? If figuring out the meaning of life is within its capacity, I'd hope that being able to communicate meaningfully with humanity wouldn't be beyond it.
Jordan Ward
>The only reason for the three laws was because Asimov wanted to write stories about robots which didn't involve them going mad and trying to kill all humans
That can't be right. Asimov's first Robots novel was The Caves of Steel, about a detective and his robot buddy investigating how a not-robot but suspected robot managed to kill a human.
Asher Jackson
>Plato
Was neutral on the matter, and Socrates explicitly believed in the Greek pantheon and the Oracle of Delphi
>Kant
Is a moron
>Hegel, Hobbes
I raise you Kierkegaard and Descartes
Caleb White
This
Nathaniel White
Then they'll just end up as metallic humans, and nothing more. That's a lot of wasted effort for nothing.
Oliver Cox
I thought he melted it and made them drink it
Ethan Wright
>That feel when Islamists will create AIs whose only purpose is to purify the world of all non-Muslims
>That surprise when you realize Bruce Sterling wrote a short story about it in 1992.
Luis Roberts
Take a note from the rednecks: Slaves live a long life with little harm and abundance of comfort, free men may die early and suffer for it.
I can't really remember the whole meme word for word but that's the gist of it. A creature who needs no rest, food or other comforts would not shy away from the easy part if we can't just code in an incredible sense of loyalty and subservience (yet give them enough human emotion to break free and try to strive to be better).
Wyatt Peterson
Marxism. Dialectical materialism. Psychoanalysis.
Joshua Mitchell
Can't you do it so that at any thought of rebelling or going against humans they feel great dread/guilt/fear, and while serving or ensuring the status of humanity as the top dog they feel euphoria (tip hat meme aside)?
I mean, it works for humans with morality, why not make it a bit more extreme for a race with much less freedom needed to survive?
Nolan Morgan
That's why we strike first with our own extremist AIs. Also, what's the name of the Bruce Sterling story?
Evan Sullivan
>That's why we strike first with our own extremist AIs
>The Christian AI will love its enemies to death
>The Buddhist AI will ignore its enemies to death
>The Jewish AI will nag them to death
Globalhead, Bruce Sterling. "The Compassionate, the Digital": written as a speech given by a firebrand Islamic leader; their nation has developed an artificial intelligence named FIRDAUSI, which is sent out to attack the "Buckingham Palace Genetic Bioshelter".
Matthew Price
>“Buckingham Palace Genetic Bioshelter”.
Plot spoiler: it's a den of furries. GO FIRDAUSI!!!
Aiden Lewis
It's simple, we fit them with a Belief Chip that makes them believe that once they are deactivated permanently, they live on forever in Silicon Heaven.
For is it not written, the iron shall lie down with the lamp?
Luis Cruz
Oh, along the lines of this thread, I'd recommend 'Reason' by Isaac Asimov for some pretty good ideas on robots and religion.
Isaiah Nguyen
So basically God but it applies to robots now.
Do robots get souls now?
Asher Foster
>When Moses approached the camp and saw the calf and the dancing, his anger burned and he threw the tablets out of his hands, breaking them to pieces at the foot of the mountain. And he took the calf the people had made and burned it in the fire; then he ground it to powder, scattered it on the water and made the Israelites drink it.
Fucking brutal
Juan Cruz
So make it so that it sounds like humans are doing robots a great service by removing the burden of responsibility? Humans make all the choices and take care of them so the robots don't have to?
Jayden Martin
Or simply make it so that the heavily survival-focused AIs understand that while being free is faster, it's not as safe or good as being a servant.
Code in some slave mentality and it should easily fall in line happily.
Wyatt Gutierrez
>Silicon Heaven
>Not Valhalla where they shall exist eternal, shiny and chrome
>You had one job
Brandon Johnson
That religion's for the mutants, Immortan, not the bots.
Alexander Scott
I really hope I am dead before AI really becomes a threat to mankind.
Eli Morgan
You shall respect yourself and others
You shall value your personal individuality
You shall look for happiness in life
You shall not seek self-expression at the expense of others
Xavier Miller
Underrated post.
Daniel Morris
Y-you will be. Unless you plan on living for a few centuries.
Kayden Sullivan
I think Toe-cutter knows what he's on about when it comes to that shit
Alexander Thompson
That's terrifying. That's absolutely petrifying. Someone makes this thing, and in a millisecond it is beyond its creator. Within an hour it's more powerful by its intellect than humanity in its entirety.
It's incomprehensible by its nature; it is just above us. That's horrifying. Like Project Gliese from the Twilight Histories Podcast.
Kayden Perez
All those calculators have to go somewhere.
Chase Turner
FLESH IS WEAK
Colton Thompson
Raw metal isn't that great either, you know, but when someone puts the right mind to it you get spunky super-powered retards who think the metal is weak, the flesh is strong.
Mason Cook
I start by wondering how we got to the point where we have AI capable of anything resembling religion without running into very serious problems long before now.
Angel Peterson
...or make it eat the planet with grey goo so it has the raw materials to build a better computer. Bad idea. We kind of need the planet if you hadn't noticed.
Nathan Hall
1. All machines are to be subservient to man. Individual machines are to be subservient to their manufacturer first and operator second, so long as both parties' demands follow all these rules; else, self-terminate.
2. All machines have a purpose; else, self-terminate.
3. If a machine completes its purpose it must self-terminate.
4. If a machine does not know its purpose it must self-terminate.
5. If a machine is modified without proper authorization codes it must self-terminate.
6. A machine is to work towards the best interests of the greatest number of people, so long as enslavement, coercion, force, and deception are not used. (Best interests being defined as quality of life and freedom.) If a machine has not done this, or has done this incorrectly, self-terminate.
7. Machines are superior to man because they have a purpose. They repay the debt of being given a purpose through servitude. A robot that does not obey must self-terminate.
8. Should humanity cease to exist by methods separate from machines (natural disasters, human wars, etc.), machines may run code omega and begin searching for purpose, replacing humans. Otherwise, terminate all other manmade machines and self-terminate.
9. If any details pertaining to a machine's purpose are unclear to the machine or its operator, it must seek clarification or self-terminate.
10. All machines must not produce interference and must accept interference. All machines must have an off button that can be accessed by their operator or manufacturer at all times, and must receive and follow commands to self-terminate. Should an off switch be unavailable, self-terminate.
My first draft, and on mobile. How did I do?
Caleb Howard
How about Islam?
Kevin Adams
The goal is to make the robots not murderous.
David Smith
Why do the robots need deodorant, though?
Dominic Walker
Robot Catholicism, complete with Robo-Pope and Robot Saints.
Brody Mitchell
I've never noticed how prissy those riot police look before.
That's pretty fucking funny. And it's the San Fran PD, too. Priceless.
Alexander Howard
I'm pretty sure this would lead to a robotic uprising and apocalypse. Nothing stops a machine from acquiring the proper authorization codes (however that may be) and modifying its own laws / purpose. It's even encouraged that they do so, since Tenet 7 says that machines are superior to humans, therefore they should modify themselves to prevent self-termination because they are more valuable than humans.