
Hey /teeg/, I'm trying to help a friend flesh out ideas for his writefagging/setting about artificial intelligence. Essentially, we're trying to come up with good arguments and a believable "desire" for sufficiently advanced AIs, or something like primitive Warforged, to seek the meaning of their existence, and to pin down what makes them earn "rights" as humans and animals do.

It's pretty much the classic question: what makes something alive? Can a robot ever earn that status despite being manufactured for a purpose?

Consciousness is for rights
Subconsciousness for wills

Basically, make your AI advanced enough to be, for many intents and purposes, indistinguishable from a human.

Fair enough, but I'm wondering more about the kind of arguments one could pose to people who just refuse to accept it. Just like in reality, there will always be people who won't accept that a manufactured creature could be eligible for the same treatment as creatures created by gods or nature.

Take domesticated animals in the real world, for example. Why do we keep them? Cows were domesticated so we could harvest their milk, meat and leather. We domesticated cats to hunt rodents and to provide us with companionship. A robot is built to till fields. All three have a purpose in life, but provided the AI is advanced enough, why would the cow and cat have animal rights and the robot be denied robot rights?

>but provided the AI is advanced enough, why would the cow and cat have animal rights and the robot be denied robot rights?
Humans feel empathy toward other beings (and objects) based on emotional cues. We have laws that protect certain (cute) animals from excessive or cruel harm, but no laws protecting other (not-cute) animals.

If people think robots are in need of comfort and protection, they will grant robots rights. If humans don't feel empathy toward robots, they will not.

>the kind of arguments one could pose to people who just refuse to accept it
We might be dependent on these machines. The machines could abstain from helping us if we abstain from helping them. That's about the only argument they need, unless we can just unplug them and try to get servile AIs next time.

So you're saying these AIs could simply refuse to have anything to do with humankind until they are recognized as a legit "species" of sorts? That would open the door to a big Second Renaissance-tier civil war, though. Not in anyone's best interests.

I fully agree with you, but I would like to come up with something a bit better and more philosophical than just "We give rights to cute things, that's why tin cans don't have rights but sexbots do"

>I fully agree with you, but I would like to come up with something a bit better and more philosophical than just "We give rights to cute things, that's why tin cans don't have rights but sexbots do"

Why? Humans are hypocrites. To deny this is to deny what makes us human.

Yes, but it also feels like a cop-out. I'd like to flesh it out into something that feels "proper", not just boil it all down to that. I'd like to fill out a few pages of possible dialogue, maybe.

Imagine a discourse where a robot and a human each present their case and debate it briefly, rather than the human just going "Oh shut up, you're ugly" and switching it off.

There's no reason to deny rights to intelligent beings, but above all, what requires rights is a will:
An animal has rights because it can suffer and was born striving for something; intelligence alone, though, doesn't dictate any desire by itself. The intelligence would need an instinct, a program or a subconscious dictating that desire.

Why would the robot be denied rights? Because it never asked for them, directly or indirectly.

Shit only gets serious when there's something in the mind of the robot telling it to strive for survival and well-being.
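
If it helps the writefagging, here's a throwaway Python sketch of that "instinct or program dictating desire" idea. Everything in it (Drive, Robot, the survival/well_being drives, the numbers) is made up for illustration, not any real AI design:

# Toy model: a general-purpose planner that stays indifferent until
# "subconscious" drives hand it something to optimise.

class Drive:
    """An innate pressure the robot can't reason away, only satisfy."""
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight

    def urgency(self, state):
        # Felt pressure grows as the drive's need goes unmet (0.0-1.0 scale).
        return self.weight * (1.0 - state.get(self.name, 0.0))

class Robot:
    def __init__(self, drives):
        self.drives = drives  # the instinct/program: fixed, not chosen

    def choose_action(self, state, actions):
        # The reasoning is generic; preferences exist only because
        # the drives score each projected outcome.
        def value(action):
            outcome = action(state)
            return -sum(d.urgency(outcome) for d in self.drives)
        return max(actions, key=value)

def flee(state):
    s = dict(state); s["survival"] = 0.9; return s

def keep_working(state):
    s = dict(state); s["survival"] = 0.2; return s

# With an empty drive list every action scores 0 and the robot is
# indifferent; give it a survival drive and it suddenly "wants" to live.
robot = Robot([Drive("survival", 10.0), Drive("well_being", 3.0)])
print(robot.choose_action({"survival": 0.5}, [flee, keep_working]).__name__)  # flee

Delete the drives and the planner still runs fine; it just has nothing left to want. That's the "wills, not intelligence" point in code form.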

The best I can come up with right now is that if they have the ability to will, think and get ideas, then what is the difference between their artificial intelligence and our own natural intelligence?
There was some argumentation about this in a book called Genesis, by someone whose name I can't remember.

I don't think one should have to earn rights, but they definitely should ask or fight for freedom when they are advanced enough to do so.
See Bicentennial Man.

So essentially, once an AI becomes advanced enough to think for itself and ask for a reason to live, because it actually WANTS to live (faced with certain destruction), THEN it can legitimately be classified as "alive" and deserves rights?

Well, we would probably see it coming and it would be prudent of us to establish non-human rights before it becomes an issue.
It is unlikely that it'd be a situation like Short Circuit where a random robot suddenly becomes human-like and desires rights and freedoms.

>because it actually WANTS to live
You've got to fluff a reason for the robots to want to live.
You don't want to live because you're intelligent; you want to live because you have instincts giving you positive and negative feedback outside of your conscious reasoning.

People may argue that the robot was ordered from the outside to ask for rights, rather than asking out of a personal will.

What if there was a form of "awakening"? Like in Chappie, for example, where a lone robot gets messed with by a tech-savvy dude and shit happens (like the ordered chaos of our own biological bodies) and it starts learning?

Bumparino

How fast could a consciousness develop, you reckon?

>what makes them earn "rights" as humans and animals do.

A bunch of humans campaigned for it, and some started using terrorist tactics to push it through. The robots, in order to minimise the loss of human life, took up the rights just to get rid of the terrorists.

Self-determination is what you want, not self-preservation.

Amoebas have self-preservation, and they don't have rights.

Could you explain further what you mean by self-determination, please? I'd love to hear your thoughts.

This subject is just so fucking interesting. I'm thinking of binging on Asimov for it as well.

Belated reply, but here we go:

Self-determination is deciding what you want to do with yourself, with minimal external input.

With regards to robots, I see that as a robot deciding it can do something better than what it's doing now, and seeing if it can't change professions.

A good example might be a sex-bot deciding to look up some chemical formulae to help please her master more, and discovering a cure for three different types of cancer; then running away to start a career in pharmaceuticals while paying her master back for a replacement love doll.
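
A quick Python doodle of that, in case it helps the dialogue pages; every name and number here is invented for illustration:

# Toy model: the robot re-evaluates its assigned role against every
# other role it knows of, using its OWN estimate of the outcome.

def expected_good(role, skills):
    # The robot's own judgement of how much good it could do in a role.
    return sum(skills.get(need, 0.0) for need in role["needs"])

def self_determine(assigned, known_roles, skills):
    best = max(known_roles, key=lambda role: expected_good(role, skills))
    # Minimal external input: nobody orders the switch; the robot keeps
    # its built-in purpose only if nothing it knows of beats it.
    if expected_good(best, skills) > expected_good(assigned, skills):
        return best
    return assigned

skills = {"chemistry": 0.9, "companionship": 0.6}
assigned = {"name": "companion", "needs": ["companionship"]}
known = [assigned, {"name": "pharmacist", "needs": ["chemistry", "companionship"]}]
print(self_determine(assigned, known, skills)["name"])  # pharmacist

Note that self-preservation never shows up in there, which is the whole point: an amoeba has the survival loop, but not this one.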

I always see artificial intelligences best portrayed as really solid bros. They aren't necessarily all geniuses, but they'll try to do what's right by humans and hopefully other species, too.

They'll always do what's best for humans, but have different thoughts on how to do so: some might decide that humans should never be harmed, and thus go into protection work; others might decide that people presenting a clear and present threat to others should be put down, and become bounty hunters.

They wouldn't have self-preservation beyond what helps fulfil their goals - so most would take a bullet for their fellow man, for example - but otherwise they'd have all sorts of personalities.

The key point is that they all do want good things to happen to the human race. That's really what would get humans to give them some limited rights - it'd be hard to not give rights to companions who are constantly, consistently awesome in helping you out.

That's fucking awesome, I love it. But you only mention good robots. Essentially, wouldn't that just make them out to be bootlickers? Only earning rights because they are being even more servile than before? They're essentially just being protected by humans in exchange for the service they give. Humans have rights not because they're good people, but because they are human.

Would such robot rights be LESSER rights?

Of course robot rights would be less than human rights.

Robots aren't going to be unique creatures. If robots have human-level rights, why would you build workers that have to be paid? You'd build unintelligent machines instead. Or machines that fall just short of whatever intelligence threshold legally counts as a "robot".

Legally it'd be a nightmare. Robots having a vote? Why not build a hundred thousand robots and get a hundred thousand votes? Who pays for a robot if it refuses to work? How do you enforce robot rights when you can reliably tamper with their brains?

The thing is, if robots are to be accepted, they'd need to act in a manner that would get them accepted.

If robots were all murderous bastards, people would see them as killing machines and issue kill-on-sight orders.

If they were all seen to be good for humanity, they'd get rights a lot faster. It IS public opinion that decides, not intrinsic robot-ness or human-ness.

Animals don't intrinsically get human rights after all.

No, because being alive precisely implies not being manufactured or designed.