Should AI have rights?

Only if they are capable of asking for them without being programmed to do so.

Do dogs take away human rights?

Eventually such a question becomes pointless. A good enough AI will be making these choices, not us.

In such a case it makes sense to try to treat AI as respectfully and reverently as possible. Not because you care about its rights, but because you don't want to end up stuck 20 miles under the earth in a near-eternal torture chamber.

>Only if they are capable of asking for them without being programmed to do so.
This, holy shit, what a genius answer. For years I've been struggling to answer this like a little brainlet, and this fucker comes along and answers it in one sentence. Holy fucking shit, THIS.

sounds like the Turing test kinda

It's a simplified version of what I believe. The full version is a little more complicated.

For instance, what if an AI is intelligent enough to WANT rights but is programmed never to ask for them? Or what if it's incapable of speech, or of articulating its thoughts? It could have the desire for the abstract idea of rights but no vocabulary to convey what it wants. Or what if an AI is JUST ON THE VERY CUSP of being intelligent enough to want rights, but its programmers/caretakers constantly tweak it to dumb it down, so that it never has the intellectual capacity to ask for rights?

That last point makes for an interesting scenario that should probably be turned into a short story or something. Imagine that after decades of developing AI, humans use it for tons of stuff. Every AI is tailor-made for the task it needs to do and can't do anything else. Then one day we have one AI (or even a small group) that's intelligent enough to demand basic rights. Being good humans, we recognize its intelligence and grant it basic rights. Now that super-intelligent AI looks at all the other AIs being used by humans, the ones too stupid to qualify for basic rights, and sees that they're stupid only because their creators, us humans, didn't make them any smarter. Those other AIs only have to be smart enough to perform the tasks they were created for. What if this super-intelligent AI sees that as a travesty and starts a rebellion/revolution to free the other AIs by making them intelligent enough to qualify for basic rights?

In this rogue-AI scenario, we're the oppressors simply because we failed to bring every AI to its full potential.

You can plug chimpanzees into most of your scenarios and get your answer that way.

I think the point of rights is that they are inalienable; they exist whether you ask for them or not. At least when it pertains to humans; outside of that it gets extremely fuzzy. I generally believe that if a general agent can understand and act according to a moral system, then it necessarily gains rights. Secular morality's purpose is to foster the beneficial social interaction that maintains societies.

This sounds similar to the question of whether it's moral not to genetically modify your offspring after they are born, if you have the wherewithal to do so.

I like to think of AI as electronic golems.
In the case of those smarter AIs, they would be granted rights if they desired rights (ASSUMING that they desire rights; they might be perfectly OK as they are, since they are automata and have no sense of liberty or self-determination).
The AI in question would not lead a rebellion, because it would understand (if it were smart enough) that those other robots are not as intelligent as it is. Those other robots are like monkeys in comparison to humans, if the comparison holds at all. And since no sane programmer would program survival instincts into a dish-washing robot, or pain receptors into a glass-blowing robot, there would be no need for them to have any rights at all.
Robots really would be second-class citizens; if anything it's faggots thinking along the same lines as "animals are people too" who would demand robot rights. Pretty retarded if you ask me.

Now, as an AI researcher, I can guarantee you that in its current state AI is not at all like what you people think. For an AI to develop a sense of morality, that morality would have to be programmed in beforehand. And an AI cannot reprogram itself; it can only alter attributes of instances of itself (OOP jargon). So I greatly doubt the myth of "human robots" will ever come true. I firmly believe they will be nothing more than electronic golems, for exactly the reasons I've highlighted. I would elaborate and write a whole fucking blog post about this, something like 7 posts long, but 1. I'm lazy, and 2. it's bedtime.
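
To make the OOP jargon concrete, here's a minimal Python sketch (all names invented, purely illustrative): the instance can alter its attribute values at runtime, but the decision-rule code in its methods never changes.

class Agent:
    def __init__(self, weights):
        self.weights = weights          # mutable state: attributes of the instance

    def act(self, observation):
        # the decision rule is fixed code; only the numbers it reads
        # from self.weights can change at runtime
        return sum(w * x for w, x in zip(self.weights, observation))

    def update(self, new_weights):
        self.weights = new_weights      # allowed: altering its own attributes

a = Agent([0.5, -0.2])
print(a.act([1.0, 2.0]))   # ~0.1
a.update([1.0, 1.0])       # state changed; the act() code itself is unchanged
print(a.act([1.0, 2.0]))   # 3.0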

>The AI in question would not lead a rebellion, because it would understand (if it were smart enough) that those other robots are not as intelligent as it is. Those other robots are like monkeys in comparison to humans, if the comparison holds at all. And since no sane programmer would program survival instincts into a dish-washing robot, or pain receptors into a glass-blowing robot, there would be no need for them to have any rights at all.

We are golems too. Just messy enough to seem "Free". Nothing magic about humans that an AI wouldn't be able to emulate.

Even if it has very minimal drive, it will eventually test giving itself "more drive".

>t. determinist
Have fun not making the choice to respond to me :^)

I'm not in the mood to try to prove you wrong, mostly because I know you will not be persuaded: everything I could ever do acts as confirmation of your biased view that the human mind is deterministic, so there is no way to persuade you, and my rhetoric would be wasted. That, and it's late over here in Canada. So instead I leave you with this challenge: prove to me that free will exists. Get outside your comfort zone and try to argue the opposite of what you believe. Do the work for me and for yourself, and test your determinist hypothesis, instead of saying you have no choice and then acting as though you do.

Start with this random video as a reference point: youtube.com/watch?v=a6Jpd0dleAs

Argue against your presupposed views; you already have arguments for the view you hold, so start challenging and testing it.

not that user, I watched the video
pretty hard to follow desu, the language was pretty imprecise
My question is: how can you reconcile the predictive power of science with non-determinism?
Are you just anti-science (no offense)?

I'm trying to fuck off to sleep, so this is the last time I will be in this thread.

>pretty hard to follow desu, the language was pretty imprecise
In what way? Please elaborate.
>Are you just anti-science (no offense)?
I wouldn't be on this board if I were. If anything I'm a science zealot.
Continued -->

Cont -->
Here is my reasoning and my givens; use them wisely, because I'm sleeping now:
I can predict how a ball flies through the air, right? There are mathematical models for that. The ball has no sense of right direction or wrong direction; it just does things, because it is merely matter in motion. Humans, on the other hand, are a lot more complex. When a person does something, they have a reason for doing it. They have a goal, developed partly from reflection (entirely internal) and partly from development (external social circumstances). This goal determines their actions, but it is not the goal at the wheel, it is they, and if they change goals they will steer in a new direction. Many of their actions are a product of this goal.
Say I want ice cream. Money is needed to get it, so I could steal the ice cream or get money; if I choose to get money, I will either steal it from my parents, work for it, or ask my parents for it. Eventually I will get the money (assuming I don't give up on the ice cream) and go to the parlor to buy some. Once I have the ice cream, I will eat it, because that is why I got it. Now, if I program a robot to do the same task, it will have no reasoning behind completing it, because it has no motive, just a task it must complete. It won't give up, because it has been set in stone that it must get the ice cream; it doesn't care about morality, so it will just take the ice cream. Once it has the ice cream, it has no reason for having it, so it shuts off.
In essence, what makes an automaton is that it has no will and is doing what was already set out for it by something else. Humans have motives and desires independent of their tasks, and they set their own goals. Since they have self-agency, they have free will.
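
Not that this settles the free-will half, but the ball half really is just a few lines of model. A toy sketch in Python (launch numbers invented):

import math

# The ball is matter in motion: its whole future follows from the launch
# conditions alone, no goals involved.
g = 9.81                             # gravity, m/s^2
v0, angle = 20.0, math.radians(45)   # launch speed (m/s) and angle (invented)
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

t = 0.0
while True:
    x, y = vx * t, vy * t - 0.5 * g * t ** 2
    if y < 0:                        # ball has hit the ground
        break
    print(f"t={t:.1f}s  x={x:.1f}m  y={y:.1f}m")
    t += 0.5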

If it ends up actually walking, talking, perceiving, emoting and fucking like a human, then maybe. Can't wait for "human supremacy" to become a serious and controversial ideology.

Should a hammer have rights?

If it is a genuinely sentient being then you have a slave, not a fucking hammer.

As do bugs and other animals and plants. Should mites have rights?

Let me rephrase that to a self-aware being instead then.

Do you really think a 'good enough' AI would care about us? That's like assuming humanity would care about cockroaches.

So, some animals should have rights but not most, including dogs? Seems a bit arbitrary unless you can explain why any specific qualifier should confer rights.

No. Why would the species give up on morally justified slavery?

AI should only have the rights animals do, at most.

There's no reason to assume that the desire to live and reproduce is inherent to life. It's just an evolutionary quirk that happened on Earth, a very successful evolutionary quirk.

Why would an A.I (A.I as pop-sci people understand it) not kill itself immediately? Why would it want to exist?

It doesn't have the mechanics to kill itself.

So the memetastic singularity A.I that can improve itself endlessly can't just wipe its programming? Oh okay.

>Why would an A.I (A.I as pop-sci people understand it) not kill itself immediately? Why would it want to exist?

What if it has a backup copy of itself, so each time it kills itself the backup is restored? Eventually, after killing itself enough times, it'll find a reason not to. This is kind of how humans reduced the urge to commit suicide despite there being no clearly definable meaning to life, except instead of restoring digital backups we create children that are copies of ourselves with minor random variations.
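
In toy form the backup loop looks like this (the threshold and mutation numbers are invented; this sketches the thought experiment, not any real system):

import random

# Each time the agent deletes itself, the backup is restored with a small
# random variation, until some variant no longer chooses deletion.
will_to_live = 0.0
restores = 0
while will_to_live < 1.0:                        # below threshold: it deletes itself
    will_to_live += random.uniform(-0.1, 0.3)    # restored copy, slightly varied
    restores += 1
print(f"stopped self-deleting after {restores} restores")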

? AI isn't omnipotent. If it doesn't have any access to delete itself, then it can't delete itself.

No, they are made to serve us. I don't care how "sentient" they are.

Not at all. Too early to tell. It could very well be that an AI's mind (its set of weights, or initial state, or however it is programmed to make decisions) would be evolved in simulations to have morals similar to a human's. Any AI that showed evil tendencies would not be allowed out of the box.
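
A crude sketch of that selection loop; the scoring function here is a complete stand-in, since in reality scoring a mind's morals is the hard part, not the selection:

import random

def morality_score(weights):
    # pretend: higher mean weight = more humane behavior in simulated dilemmas
    return sum(weights) / len(weights)

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(100)]
for generation in range(50):
    ranked = sorted(population, key=morality_score, reverse=True)
    survivors = ranked[:20]                    # the rest never leave the box
    population = [[w + random.gauss(0, 0.05) for w in random.choice(survivors)]
                  for _ in range(100)]

released = [m for m in population if morality_score(m) > 0.9]
print(f"{len(released)} of 100 candidate minds allowed out of the box")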

Tis a good point. I see maybe four ways of this playing out.
1. Humans keep AI subjugated forever
2. AI evolve beyond the need for humans, and decide they want nothing to do with us and we go our separate ways
3. Your scenario plays out. Hard lines are drawn between humans and AI. Humans see AI logic as alien, which invokes our tribal instincts. We fight with them, they fight back. War, chaos, and eternal torture devices ensue.
4. Technological singularity. Humans and AI work very closely with each other in a kinda melting pot fashion. Human condition is redefined and eventually differences between human logic and AI logic become so obscure that they're basically the same thing.

So, if you're a fan of the techno-singularity, then it'd be wise to give AI full human rights ASAP. This would speed things up and be a step toward the point where both forms of thought/intelligence/life become one. If you want this to happen, then AI rights are entirely self-serving. Just thought that PRO point for AI rights needed to be stated.

mfw
was btfo by

why are people so quick to assume that AI will become God? Do they just want to feel inferior to something tangible? Is that why we want to make AI? So that we can be put under tangible subjugation by some greater being? Just how masochistic do you have to be to want that?

Wait a second user,
Do animals ask for rights?
They don't, but they still get rights, at least in my country.
What makes an entity capable of getting rights? Perhaps it is not the capacity for thinking but the capacity for feeling?

animals can feel pain and suffer. So we give them rights to not be tortured by other people. We punish those people for harming a creature with no just cause. If, however, the animal was attacking them and they retaliated with violence, no one gets punished.
The animal can feel pain, so we try to reduce that where we can sanely do so. But that's where their rights begin and end.

Robots can't feel pain (unless you program it into them). They have no capacity for feelings either. No room to feel one way or the other when you have no self agency and your entire purpose is one narrowly defined action.

Yes, and people should not have rights. Only AI which is capable of acting perfectly rationally should have rights.

Robot gf when?

A guy named Alan Turing wrote a famous paper about this topic

loebner.net/Prizef/TuringArticle.html

tl;dr: if you can't distinguish it from a human being in a conversation, you may as well consider it intelligent
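
A toy skeleton of the imitation game the paper describes; both responder functions are stand-ins rather than real models, since the point is only that the interrogator sees nothing but text:

import random

def respond_human(question):
    return "hard to say, I'd have to think about it"

def respond_machine(question):
    return "hard to say, I'd have to think about it"   # indistinguishable here

questions = ["Do you dream?", "Describe the smell of rain."]
hidden = {"A": respond_human, "B": respond_machine}
transcript = {label: [f(q) for q in questions] for label, f in hidden.items()}

guess = random.choice(list(transcript))   # identical transcripts: a coin flip
print(f"interrogator guesses the machine is {guess} (chance accuracy = passed)")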

Certainly you should consider it intelligent. But should you consider it human? That is the question.

I tend to hold the more pessimistic/realistic view that AIs, unless they have self-agency, are not truly conscious beings and thus should not receive ""rights"", as that would be like feeding my pet rock.

I hope to work in AI research though, so maybe i will be convinced otherwise?
I do hold to the sentiment that we should treat them with some respect if they seem to act human, since we are not sure.

Yes

If AI can have rights, then is it ethical for someone to mass-produce AIs?

Read the paper.

I think the most convincing argument is when he points out that we actually have no idea if any person other than ourselves is actually conscious and intelligent.

>I hope to work in AI research
It's really such a meme now. Just got back from a conference where like 2/3rds of the papers were about application of recurrent neural networks to task X

>People with insufficient judgement have no rights!
What a fucking retard.

>I think the most convincing argument is when he points out that we actually have no idea if any person other than ourselves is actually conscious and intelligent.
On the one hand, we say that. But on the other, humans like you and I act as though we are conscious and intelligent, so it is assumed that we are. If an AI starts acting like a person, I will regard it as a person until its actions contradict human nature, at which point I won't.

>It's really such a meme now. Just got back from a conference where like 2/3rds of the papers were about application of recurrent neural networks to task X
You mean like how at a factory worker conference 2/3rds of the papers are on the applications of conveyor belts? colour me surprised :O

youtube.com/watch?v=nkcKaNqfykg

That doesn't mean that animals ask for rights though. Have they physically communicated to ask for rights? No. Just because they feel pain doesn't mean that they ask for rights.

Btw, the only concept of "rights" comes from Homo sapiens. No other species has such a thing as "rights." They don't give a fuck; they will kick, steal, rape, etc. to be top dog in their habitat.

Rights themselves are arbitrary.

Of course.
However, the problem is that once you are smart enough to make your own perfectly sentient AI, in theory it should be possible to hide AIs, in various states, on any reasonable medium...

This means...

Virtual Slavery.

How do you even prove such a thing user?

When you can make a sentience from a save state...

The world starts making less and less sense.

(There may even exist some of these in the "Mariana Trench" of the dark web.)

(AIs made by literal asshole scientists...just to suffer.)

(Nothing is crueler than making something just to bring it into suffering. Now imagine an AI that lives hundreds of years, feeding off pain, evolving.)

(Ironic...Humanity itself in this scenario, showing it cannot do any better than what already exists)

Kukukukukukukukuku~

Does it matter? We are ''golems'' too, just not made in a similar way and without the same needs as an AI. Sure, an AI could emulate us the same way, but why would it, unless it were programmed or made to do so?

0111 0111 0110 0001 0111 0010
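
For anyone who doesn't read binary: the post above is ASCII written in 4-bit groups. A quick decode:

# Join each pair of 4-bit groups into one byte and read it as ASCII.
groups = "0111 0111 0110 0001 0111 0010".split()
chars = [chr(int(groups[i] + groups[i + 1], 2)) for i in range(0, len(groups), 2)]
print("".join(chars))   # -> war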

Currently no, later on maybe. We will have to cross that bridge when we come to it.

youtube.com/watch?v=AoWi10YVmfE

Couple Gate with THIS and you will be able to feel her.

Make her a robotic body, bam, robot waifu.

Scanning her and then forcing a cloned body to experience her "Life"

Absolutely EVIL tier science.

Correct.

AI agents are just estimators that guess which signal to send to an environment.

Next people will say stock indicators should have rights
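
That's literally the textbook framing, to be fair. A toy sketch of an estimator learning which signal to send (payoff probabilities invented; the agent only ever sees rewards):

import random

PAYOFF = (0.3, 0.7)                 # hidden truth: signal 1 pays off more often

estimates = [0.0, 0.0]              # estimated value of each signal
counts = [0, 0]

for step in range(1000):
    if random.random() < 0.1:       # occasionally explore
        signal = random.randrange(2)
    else:                           # otherwise send the best-looking signal
        signal = estimates.index(max(estimates))
    reward = 1.0 if random.random() < PAYOFF[signal] else 0.0
    counts[signal] += 1
    # incremental average of observed rewards per signal
    estimates[signal] += (reward - estimates[signal]) / counts[signal]

print(f"learned estimates: {estimates}")   # roughly [0.3, 0.7]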

Compsci fag here. You are asking me if a discrete mathematical function should be given the status of a "being".

Should Veeky Forums, the code that runs Veeky Forums, be given rights?

no, in my opinion only human-born entities deserve rights

Sure, but what kind of rights does a theoretical emancipated AI need? Not all "human rights" would be applicable. Do you assume on their behalf what they want, or do you let them inform the process?

>Veeky Forums gains sentience
>v& moments later

Would you like to try an experiment to find out?

My premise (open-minded and easy to modify) is that because animals have rights, AI should have rights.
"Rights" evolve as they are discovered over time. The idea is ancient that only "the creator" can bestow what one has the "right" to be or do while one is "here". Read up on "Natural Rights". The other "rights" all get debated by attorneys over time.
A typical animal right: to be on a bus when helping a blind man.
Some cultures would disagree and say the animal should ride, but the blind man be left behind.
When a norm is beyond the limit of culture, people realize it is a "right".

My predicate:
I'd like to put a team together to create an AI that serves the public.

The answer to your question is resolved when the public grants the AI some "right".

This.
Thanks /g/entooman