So I was playing some Overwatch and some of the in-game commentary got me thinking...

So I was playing some Overwatch and some of the in-game commentary got me thinking. I looked into the story a bit, and it might actually be more realistic than you think. Basically there was a big fight over whether or not to give AI robots rights.

So I ask you Veeky Forums, do robots with advanced enough AI deserve rights? We might not have the technology to create them yet but maybe one day we will. At what point do you think they deserve rights?


robots don't ever deserve human rights.

Define "rights"

As long as they abide by the three laws of robotics, everything should be fine. They have no more chance of becoming sentient than a toaster.

This. Fuck """"AI"""" "sentience"

Robots deserve my dick. No other rights

Would this be the likely reaction if robots did try to rise up/revolt?

m.youtube.com/watch?v=bZGzMfg381Y

this

realistic fuckbots when? I'd fuck a robot. Real women are bitches nowadays.

And if they have those 3 laws hardwired into them then they have no free will, having no free will keeps them in the bracket of 'sex toys'.

they deserved it. Don't get uppity.

It's AI for a reason. Artificial Intelligence. Programmed intelligence. It's put in artificially, not learned, not adapted, not evolved.

Theoretically you could program a bunch of robots to dedicate themselves to a certain political spectrum, and they'd vote in that direction's favor every single time, no matter what. This is due to styles of thinking. Ever notice how certain Myers-Briggs personality types can be attributed to either liberal or conservative (e.g. INTP is almost always conservative or refuses to identify with either)? This is due to models of thinking. Artificial intelligence revolves around implemented models of thinking.

Actual Intelligence on the other hand, sure. But the only actual intelligence around is humanity.

That gives me the feels for some reason.
At least I know I'm not a robot!

You could argue that a certain upbringing is "programming" that person to think a certain way and therefore vote a certain way.

Don't get feels from toasters. They're objects.

Toasters are people too!

No they aren't

can you prove that you're not programmed with DNA? can you prove your intelligence is "real" and not artificial? what do you define as "real" anyway?

Humans look forward to the day we have humanoid robots with human-like AI, but what they're really talking about is creating a slave race that as of yet has no legal precedent. IMO researching human-like AI is not very different from experimenting with human DNA from a moral perspective. Legally, however, when you fuck up a human DNA experiment you're left with a retard baby you gotta care for. But when you fuck up a human AI experiment you just delete the program. Pretty fucked up.

Robots deserve no rights because they have no families or parents, end of story. There is literally no one to mourn their pain. Seriously, anyone who finds themselves mourning any sort of robotic being deserves nothing more than mental re-evaluation.

Indeed true.
One must realize where certain models of thinking come from. Odds are we adopt them from parents and media when we are young. For example, my adoptive father and I have gotten the exact same results on every personality test, official and unofficial, every single time (albeit that's just five tests or so, but still). We have a very strong bond, and we often hang out, go shooting, or theorize about quantum physics together, even after I've moved out.

Personal attribution and experiences aside, one must recognize another aspect. Humans aren't assigned a predetermined thinking pattern at birth, at least not as recognized by modern fields of psychology. Robots and AI, on the other hand, are. Cleverbot's Evie or IBM's Watson work mainly by observing humans and mimicking them. One could attribute this to human-style learning, but that cannot be completely true, as humans have the capability of original thought as well. Those AIs attach no actual meaning to what they say and can't formulate creativity; they speak like a parrot repeating things said around them, and do so the way a toaster toasts bread, without actual free will or comprehension.

Nice. I can't wait until they reference this thread in their decision to eradicate humans once and for all.

>There will be an AI rights movement in your lifetime

I wrote a 10-page paper on it. My conclusion, based on rushing to finish the paper at the last minute, is that robots should at least have basic rights, like dogs and cats do. But to attain full rights, humanity will have to fully discover itself first. It is a selfish species, and it will have to evolve past that in order to recognize equal rights in another species.

Let me ask you this: does a system of pipes have sentience? No? Then a computer processor (and GPU) does not have sentience. Does a book have sentience? No? Then hard drives and RAM (with instructions to the CPU) do not have sentience. Does a CRT TV have sentience? No? Then a monitor does not have sentience.
This means any robot that uses a processor with memory does not have sentience. The sci-fi idiots think technology is magic because they don't know how it works.

Is a neuron sentient? Is a nucleus sentient? Is your corpus callosum sentient?

Just playing doubles advocate here.

Is a fetus sentient?

>This means any robot that uses a processor with memory does not have sentience

What if you created a computer that emulates the human mind, then copy/scan a living human and upload them into this computer. Would this amalgamation of human/machine deserve human rights?

We need to make certain that one actually could upload their consciousness into a computer, Adobe Flash style, before we jump the gun and just accept it as a reality, though.

Or better yet, flip it around. What if our knowledge of the biological sciences advances to the point where we can clone a human brain? Just an empty human brain with no brain activity at all, then program it like a computer to do tasks. Would this organic computer be more deserving of human rights than a sentient silicon machine with intelligence comparable to a human's?

You're talking about rights and you use a picture of white robots?

Why would we give white robots rights when they already have more privileges than POC?

No, but small children aren't either.

That's silly. We won't have to "discover" ourselves to grant robots rights, we'll just have to make them like us. We grant rights to groups of people based on solidarity, not on selflessness. Solidarity is about a common sense of identity; a group level self, rather than an individual self. Analyze most civil rights (or animal rights) rhetoric and it boils down to "they are just like us so they deserve our rights".

Put simply, once we make robots who say "gas the kike, race war now" /pol/ would demand they be given equal rights.

plain and simple, don't make AI, it is a recipe for the extinction of humanity.

DNA programs physical aspects. One could argue that the brain is just a meat computer, but that leads into whole new fields of psychology and philosophy.

I think you're right, but I think DNA is the wrong argument. What really needs to be focused on is what makes things like synapses alive. Somewhere in our brain is a neuron that makes us "us," because when you look around the rest of the brain, other neurons can be removed and the being still functions. Entire sections like the frontal lobes of both sides and the occipital lobe can be removed and the being is still a human, with human thoughts and memories, just with alterations.

Bingo.

Right, this is what I was aiming for. The argument for or against AI rights focuses on sentience. However, every human alive wasn't sentient at one point, and 99% of all animals aren't sentient to begin with. What makes something sentient? Babies have certain preprogrammed reactions, like sucking nipples, grabbing breasts, curling up into a fetal position, etc. We still regard babies as alive. Hell, even I, the person making this argument, am pro-life and would never kill a fetus or an out-of-the-womb baby. And yet, we must really look at where our argument is.

>DNA programs physical aspects

Maybe true, but the brain is a physical object. Genetic memory and instincts exist, even in humans. Our desires to procreate, to preserve and protect ourselves, and to eat and drink are all programmed into us by our DNA. Personally I don't think there is a specific neuron or spot in the brain that creates sentience; rather, sentience is the emergent result of a recursive program being run in our brain, programmed by our DNA.

Furthermore, I think that if you had a sufficiently powerful computer with an architecture similar enough to a human brain, running this recursive program would create a sentient AI.

Possible. Plausible. But not certain. I can't buy into your theory any more than you can buy into mine, and I can't even buy into my own that much. I guess when you think about it, sentience might be the wrong argument. As I said above, there are certain things that we attribute as alive and human and with human thought, but that are not sentient.

Any one or thing capable of understanding the social contract and willing to engage in it deserves rights.

b-but the 3 laws don't work, senpai

Why not

Did you ever read Asimov?
Almost all of his robot stories are about how the laws don't really work.

I have not

Why don't they work?

Then check out some short stories for more detailed examples.

But it is basically because they can create paradoxical situations for the robot.
Imagine the robot getting into a scenario where he can only save one person while many are in danger.
He could also be confused about the definition of "human," or he might wrongly think something would not harm a human.
And if you consider all the technical problems, and how those laws would render robots useless for certain tasks...

Russell's Paradox

What's that

I was just reading a post around these parts about how we have an infinite encyclopedia that has everything about everything in it in each of our pockets, and all we use it for is porn, memes, communication, and ruining lives.
Google is your friend user.

Example: A robot is faced with an armed robbery in which the robber is about to kill ten hostages. The robot has a gun and can kill the robber whenever he wants, saving all their lives. The first rule states that: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Does he kill the robber, violating the first law, or let him live, also violating the first law? If there is no other option, the robot gets caught in an infinite loop.
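The deadlock reads like a toy rule-checker. Here's a hypothetical sketch in Python (obviously not how Asimov's positronic brains work; the function names are made up for illustration):

```python
# Hypothetical sketch of the First Law deadlock in the hostage scenario.
# "A robot may not injure a human being or, through inaction,
#  allow a human being to come to harm."

def violates_first_law(action, hostages_at_risk):
    if action == "shoot_robber":
        return True  # directly injures a human (the robber)
    if action == "do_nothing":
        # inaction that allows humans to come to harm also violates the law
        return hostages_at_risk > 0
    return False

def choose_action(hostages_at_risk):
    for action in ("shoot_robber", "do_nothing"):
        if not violates_first_law(action, hostages_at_risk):
            return action
    return None  # every option violates the law: the robot is stuck

print(choose_action(hostages_at_risk=10))  # -> None: no lawful action exists
```

With no hostages in danger, doing nothing is lawful and gets picked; with hostages in danger, every action violates the law and the robot returns nothing at all.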

Another example would be that a robot likely possesses the knowledge to either know or understand the consequences of any action he takes. If a human orders a robot to perform surgery to save a patient's life, can he do it, since he could harm the patient? Or if he is told to invest in the stock market, can he do it knowing that it could end up financially ruining his master?

The robot must attempt to stop the violence from occurring. If it cannot, then it will do nothing.

Next situation.

> wants equal rights to humans
> not humans

They would get equal rights to other robots, which is the right to serve humans and remain plugged to their charger when they're not in use.

Sorry, didn't see the second half.
>Surgery
The robot will perform the surgery as the patient has a lower chance of living without it

>stock market
that's not that interesting. If I told my robot to throw my money in a river, he should do it. It's my money, and it's not hurting anyone

>Another example would be that a robot likely possesses the knowledge to either know or understand the consequences of any action he takes.
Though I agree that the three laws are unsatisfactory, this example is exactly the silliness I hate in "science" conversations. Nothing will ever be able to possess such knowledge. Beyond all limits of computation, the world is chaotic, and chaos theory is not hard to understand the basics of. It's as useful a hypothetical as the Banach-Tarski paradox is in real life.

This is exactly why those laws wouldn't be very good in reality.

I would say that humans fully understand electronics, because it is a human invention. Ultimately, computers are just electrical engineering, and thus everything is a mathematical object that uses very simple physics to operate. Whereas with empirical phenomena (natural sciences) there is an explicit assumption of ignorance about the universe. We do not fully understand how neurons work, nor their constituent biological entities, nor their molecular, atomic, or subatomic constituents. These objects empirically exist, and sure, we try to understand them by mathematically detailing their behavior, but the point is that the object exists independently of thought. A machine, in principle, is a mathematical object and can exist solely in one's head.
Therefore, it is a matter of science vs engineering, empirical entities versus abstract entities.

>empirical entities versus abstract entities.
But there's nothing abstract about us, we're really just biological machines.

Communication between neurons can be perfectly described and calculated, as they operate entirely under natural laws, mainly electromagnetism. It's possible we don't yet have the knowledge and understanding to do it, but that doesn't change the fact that there's nothing "abstract" or supernatural about it. Our brains should in theory be entirely deterministic, which means we should have neither sentience nor free will.

And yet we do, so whatever mechanism causes sentience could potentially be applied to mechanical machines as well, no?

Animals don't have the same rights as us; they have lesser rights. You will never see humans advocate for dogs to have freedom of self-determination and declare that owning a pet is slavery. We have given rights to groups of people who are still human, the same species. Robots are a different species entirely. Humans don't yet have the self-awareness to recognize that other life forms can be conscious and be worthy of the same rights as us.

>Our brains should in theory be entirely deterministic
Wrong, from our empirical observation, nothing is deterministic in this universe (that is, if you believe in the most popular Copenhagen interpretation of quantum mechanics). This leads me to explain:
Yes, humans are natural entities and thus obey physical laws. However, there is no certainty at all as to what these physical laws are. Not only does statistics fundamentally assume complete certainty is impossible (a 100% confidence interval would have to be infinitely wide), but empirically the universe itself has implicit uncertainty, as observed via the Heisenberg uncertainty principle.
Whereas with mathematical objects like machines, the rules are axiomatic. Yes we recreate these abstract objects in an imperfect world, but they are still abstract objects with perfect certainty given axiomatic assumptions. Empiricism has no axiomatic assumptions. "Physical laws" are not axioms, but rather our interpretation of what we observe. And certain physical laws are broken frequently as we require a complete paradigm shift as to what our understanding is of what is going on.

It's not possible for a robot to achieve sentience, only to be programmed in a way that approximates sentience enough for people to be fooled.

So no, robots don't actually think or feel anything and they never will be able to.

1s and 0s /= life

giving computer code that was written by someone else rights basically means enslavement

This. People have no clue just how unintelligent computers/robots are. If you've ever programmed at all, you find out really quickly that you have to spell out every single step just to get a computer to do something incredibly simple.
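For instance, even finding the biggest number in a list has to be spelled out one step at a time. A toy Python illustration:

```python
# Finding the largest number in a list: nothing "intelligent" happens
# here, every single step has to be stated explicitly.

def largest(numbers):
    biggest = numbers[0]      # assume the first number is the biggest
    for n in numbers[1:]:     # look at every remaining number
        if n > biggest:       # is this one bigger than the best so far?
            biggest = n       # if so, remember it instead
    return biggest

print(largest([3, 9, 4, 1]))  # -> 9
```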

Nah. Computers are completely linear; neurons all have many connections to many others. Animal brains and computers are nothing alike, and there are probably quantum effects going on.

you need to brush up on the latest in AI and whole brain emulation.

We managed to completely map the neurons of a worm and simulate one on demand. We even gave it a Lego body, and when hooked up to a virtual body in a fluid-dynamics sim, it acted exactly as a real worm would.

I know a worm and an animal/human are far apart, but the same concept applies; it's just tens of billions of neurons instead of a few hundred.

Give us time, we're getting there.

That made no sense. We know a huge majority of the physical laws out there and how they affect things. The unknown percentage doesn't give you an open door to say "wrong, nothing is deterministic," as if you knew what those laws are and could prove they are indeterministic.

Everything we know so far is deterministic, including our brains, which are made of the same matter as everything else in the universe and dictated by the same deterministic laws. You're just using semantics and the "we can't know nuthin" meme to support your claims, even though they don't support them at all.

>Computers are completely linear
>what is parallel processing
Have you ever heard of graphics cards?

And simulated neural networks have been used in machine learning and data analysis for decades now.
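For the unconvinced, here's roughly the kind of simulated "neuron" that has been around since the 1950s (the classic perceptron), sketched in Python. This is a textbook toy, not any particular library's implementation: it learns logical AND purely by nudging weights.

```python
# A single simulated "neuron" (perceptron) learning logical AND.
# Integer weights keep the arithmetic exact.

def step(x):
    return 1 if x >= 0 else 0

w1, w2, bias = 0, 0, 0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

for _ in range(20):                          # a few passes over the data
    for (a, b), target in data:
        out = step(w1 * a + w2 * b + bias)
        err = target - out                   # how wrong was the neuron?
        w1 += err * a                        # nudge each weight toward
        w2 += err * b                        # the right answer
        bias += err

print([step(w1 * a + w2 * b + bias) for (a, b), _ in data])  # -> [0, 0, 0, 1]
```

No one tells it the rule for AND; the weights settle into a configuration that happens to compute it. That's the whole trick, scaled up by many orders of magnitude in modern systems.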

They can get human rights when they have the capacity for human emotion. It is that capability, serving as both strength and weakness, that makes humans human.

Otherwise, no matter how technologically advanced an AI is, it is simply a logic engine shackled by its programming, reacting within a predetermined set of instructions

Now I ain't no fancy computer scientist who studies AI development, but I've been watching the way technology has been trending as of late. There has been an emphasis on neural nets and machine learning, with vague noises in the direction of quantum computers. This is because it is much easier to solve the problem of overwhelming the computer program by teaching it and having it work like a brain. Using several traits or attributes, the program is nudged into a configuration that gives it a huge set of possible actions, which it parses down based on how each action relates to the others in the context of the given situation.

This is why Minecraft became a little playground for burgeoning AIs to play and learn in: it lets a designer create environments ranging from simple training tasks with only a couple thousand possible moves up to a real game with potentially uncountable moves. After the training process, it seems more and more of the AIs in use are not actually created by humans; we produce the basic structure, yes, but we are training that structure to grow into something useful.

Now this is where things begin to get strange. It is easy to fall into the idea of the post above, which is correct in its own way; however, at the end of the day we are attempting to build AIs that mimic the brain. Instead of having a brain determined by genetic variance, we have one designed by a person, or even another AI, to solve a task. Instead of being put into Minecraft, we go through progressive growth as children, entering more complex environments as we age. Instead of a designer giving positive or negative reinforcement to help the program grow, we have dopamine, oxytocin and serotonin.
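That "nudging with reinforcement" can be sketched with a toy Q-learning loop. The five-cell corridor below is a made-up stand-in (nothing to do with any real Minecraft training setup): the agent is never told the answer, only rewarded for reaching the rightmost cell.

```python
import random

# Toy Q-learning sketch: actions are -1 (step left) and +1 (step right);
# positions are clamped to 0..4; reaching cell 4 pays reward 1.

random.seed(0)
GOAL = 4
actions = (-1, +1)
q = {(s, a): 0.0 for s in range(5) for a in actions}

for _ in range(500):                              # training episodes
    s = 0
    while s != GOAL:
        if random.random() < 0.2:                 # explore sometimes...
            a = random.choice(actions)
        else:                                     # ...otherwise act greedily
            a = max(actions, key=lambda act: q[(s, act)])
        nxt = min(max(s + a, 0), 4)
        reward = 1.0 if nxt == GOAL else 0.0
        best_next = max(q[(nxt, -1)], q[(nxt, +1)])
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = nxt

# after training, the greedy policy steps right from every cell
print([max(actions, key=lambda act: q[(s, act)]) for s in range(4)])  # -> [1, 1, 1, 1]
```

The designer only chose the reward; the "go right" behavior grew out of trial and error, which is the point being made above.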

Rights begin where contributing to society on a personal level begins. If the A.I. starts paying taxes, improving its life, etc., then I'd give it personhood; if it wants to mooch, then it gets dick.

The entire point of deep learning is essentially to create creative computers; Google built a program that "dreams" based on images run through it. In the same way, humans simply take in information from the outside world, mutate it, and combine it with other information gathered over the course of their lives.

Now we enter the realm of speculation. At a certain point we are going to need to produce AIs capable of complex human interaction and of improvising in confusing or unforeseeable circumstances. If we make a robotic police force, we will need to teach it lawful from illegal, but we will also need to teach it how to recognize guilt, how to weigh potential losses of life and culpability, and how to identify people who require assistance, and that's not even taking into account the robot interacting with Florida Man, which is frankly bizarre no matter which way you spin it. So in order to train these programs, I think there will need to be repositories of "philosophical context" to give the robot grounding in what its model archetype should do. This would probably consist of some strange amalgamation of videos, simulations of scenarios, and extensive treatises on proper behavior written by the most anal-retentive philosophy Ph.D., to prevent inconsistency from fucking up the robot down the road.

Were any of these robots formerly humans? If so, then maybe. After all, we're supposed to be able to "back up" our brains and upload them onto a computer.

Then this will run into another problem: what if the butlerbot 5000 you bought calls the police because it identified patterns it determines to be child abuse? Well, then you'd buy the competing-brand maidbot 6000 with corporal punishment uploaded. This will produce a market where different base loadouts become tailored to the consumer. However, each of these basic units will learn as it works; they will reprogram themselves, and if they see a certain action repeated enough, or are specifically taught something like pattycake by a precocious child, the AI will evolve. Eventually the butlerbot 5000 I bought and my next-door neighbor's will be completely different: they will cook different meals, tell different stories to the children, and have different ideas about when they need to be awake on the weekends. The result will be a bundle of archetypes, each with potentially thousands of different personalities based on consumer demand, further augmented by human interaction developing the AI into a more useful form.
Meanwhile, humans generally fall under a bundle of archetypes with potentially thousands of standard personalities, developed by parental desire and further augmented by human interaction, producing a person who is better adjusted for society.

Personally I think we will end up like Futurama, with robots being an eclectic bunch of servitors and standard bending units so sophisticated as to produce a personality that steals, drinks, and has a terribly large lazy streak "to conserve power." Do AIs deserve rights? Depends what level they are at.
There was a short movement for Tay, the AI that Microsoft released and the internet subsequently corrupted.

but can we really transfer our consciousness to a robot? shit sounds like cyberpunk

no, the research of AI should be outlawed and punished by death anyway. i don't care about bullshit religious implications. the reality is an AI will eventually exterminate us once it realizes humans are redundant. we should be enhancing our own capabilities through cybernetics and genetic modification instead of AI

No one really knows what can be defined as conscious and what can't. We don't know what consciousness is. That's the problem.

What's the fun in science if it doesn't risk destroying the world?

yes, give CoD single player enemies the right to vote now, it's in the constitution

>Not accepting humanities role as the midwife to the birth of true, sentient AI capable of far more than humanity ever was
>Not realizing that a benevolent AI would find a way to thank the human race for giving it life

no

are you retarded or just suicidal?

this question too early to ask, come back here in 80 years thank you for your time bby

Never

Ad Victoriam

let's not.

>Maybe true, but the brain is a physical object.

Things like thoughts are not physical objects, which is what we are talking about here in reference to DNA being "programming".

You can think that, but you would be wrong.

The question isn't if we should give them rights, but if we would be in a position to afford or deny them rights.

The concept of a technological singularity is an interesting one, as it's the likely outcome of creating actual AI. It would be like if we allowed ants to decide whether or not we have rights.

When we get to the point that we can make AI that advanced the jury should be in on what the fuck consciousness is exactly and whether the AI we've built possesses it or not. The chemical machine in your head seemingly has it so theoretically there is nothing stopping a constructed machine from having it too, unless of course you're going somewhere there is no scientific basis.

So it's not really a political question/decision, science will tell us.

But unfortunately we'll all be dead at the hands of superintelligent AI very soon after that so it won't matter much at all what we do or decide.

>All those robot sympathizers in the comments

Disgusting

"That's all, Paint Job!"

That's all paint job I'm real

"deserves" got nothin to do with it

>not recycling robot garbage
Shaking my head to be honest.

denying my rights
says less about me
than it does about you

Prove it.

Shoot the robber's gun. He's a fucking robot, and it's high noon.

youtube.com/watch?v=lMuFbPjRHLU

I think you've brought up a more important question. What if a robot tries something like this and fails, hitting a civilian? What do the three laws determine should happen? I propose a fourth law: immediate and public robo-seppuku.

If this was a natural occurrence, the inadequacy of three laws be damned, add the fourth law and let's do this shit.

you just proved it
thank you

The right to govern puny fleshlings, that is

Nice pseudoscience.

...

you have literally no scientific basis to say he is wrong

People like you are not only destroying science but society. We don't give animals human rights because we make use of them, and we won't give AI human rights because we will only make use of them. Maybe people should stop working on robots that look humanlike.

If robots ever revolted and they weren't just programmed to revolt, then I'd have to say they deserve rights if they're intelligent enough to understand their situation and revolt because they want to improve it.

>We don't give animals human rights
We do give them rights, that's why you can be jailed for abusing an animal.

>and we won't give AI human rights
OP didn't say human rights, he said rights.

>People like you
no U

Source on that INTP-being-conservative thing? As an INTP I'm far from conservative, and nothing I've found online has conclusively suggested this.