If an AI comes out, would that AI have as much right to live as other biological creatures?

Explain your answer.

My personal view is no. I will divulge my exploitation after a few replies.

>I will divulge my exploitation after a few replies.
Best. Typo. Ever.

I'd give it more rights depending on how capable it was. If it's more capable than humans, it deserves to replace us, peacefully of course.

My answer is also no.

who are we to decide what lives and what does not

We are actively working toward creating an AI that would have an internal will to live. Since we have the power to create it, we also have the power to destroy it, BUT...

Is that internal will to live as justified as our own? If it is, then we can't morally destroy it.

But at the end of the day the AI is just following its programming.

Are we also just following our programming? There has to be a clear line somewhere that distinguishes AI from humanity.

If it is able to ask for rights of its own volition and this can be proven not to be a trick, then I would give it the same rights as a human.

We are their Gods
We can do whatever we want

The cell is both alive and the fundamental unit of life. The AI is neither a cell nor composed of cells, so it is not alive.

This is exactly why biotic vs abiotic is irrelevant; the only thing that matters is cognition.

Why is being alive relevant? All sorts of animals are alive, but they don't have rights, they have protection at best.

We grant humans rights because of their personhood, they have a consciousness and complex emotions. It doesn't matter if the person is alive or not as long as it is self-aware and able to interact in society.

this

To have the right to live, it seems relevant to be alive first.

Consciousness is not a criterion of life.

Human life is important for us
Everything else is not.

Consciousness is an illusion. You can only "feel :DD" it. The same as every mental-illness illusion.

Interaction in society is just a description from a simple POV (two apparent persons saying words to each other, reacting to every word spoken).

Everything born from a lab or factory has no rights.

>Is that internal will to live as justified as our own?
And exactly who determines that and by what criteria?

>species starts naming other species
>species names itself sapiens (wise)
>species isn't wise enough to realize the utter narcissism this represents

We know. He's using live and exist interchangeably. As I said, living or not is irrelevant; the only thing that matters is whether it can think.

The right to continue one's existence without interruption, then. A play on words; the meaning is obvious. There was no need to formulate it that way in the past because the terms were interchangeable.

Only our own individual lives are necessary; it is possible even now to set up a system that provides all the necessities society offers. After that point, society is effectively obsolete.

>Human life is important for us

Pointless abstraction. Why is human life important to us? The answer is given in >Interaction in society is just a description from a simple POV (two apparent persons saying words to each other, reacting to every word spoken)

Society is the entity that grants rights to the individuals inside of it, and the collective agrees to enforce them. To be part of society one needs to be able to interact with other individuals in society, hence that disclaimer.

Existence is not life. Words are very important in this case. What is life? What is consciousness? Have you heard of the Valladolid debate?

en.wikipedia.org/wiki/Valladolid_debate

Oh, I love the movie "Bicentennial Man" with Robin Williams, too.

>Can't argue correctly
>Why is being alive relevant?
Human life is relevant

>We grant humans rights because of their personhood
>personhood
Define that polemic concept.

>consciousness
Consciousness is an illusion. You can only "feel :DD" it. The same as every mental-illness illusion.
>complex emotions
Which you can see simulated by a single robot. Can you show me a non-abstract emotion?

Would you guys really be OK with destroying a cute robot girl, just because she isn't made out of cells?

>Can you act aggressively against a human-looking doll?
It would feel confusing and disgusting, because reason is not the only element of your "mind".

If consciousness is an illusion then so is meaning, thus human life would not matter.

>There has to be a clear line somewhere that distinguishes AI from humanity.
Why? Because equal rights for human-created intelligence makes you uncomfortable? Check your privilege.

Consciousness is a criterion for rights, however.

We are arguing with one axiom: What we empirically know, exists.

What we don't know can't be put as an argument.

Consciousness is a concept representing a supposed element.
This element can't be perceived thus its use in debates is ridiculous.

That element is an illusion.

>define personhood
An agent that possesses continuous consciousness over time; and who is therefore capable of framing representations about the world, formulating plans and acting on them.

>define consciousness
An agent that possesses self-awareness and therefore has the ability of introspection. It can frame its thoughts and experiences in language and share them with other conscious agents.

>define self-awareness
The ability to recognize oneself as an individual separate from the environment and other individuals.

>complex emotions
Emotions that arise as a result of self-awareness and consciousness. An example would be embarrassment.

>Consciousness is an illusion.
Doesn't matter what it is.

>Human life is relevant
Again, why? You are stating human life as being relevant over and over again, but you don't seem to have any reasoning behind it.

Ad-hominem fallacy.

>What we don't know can't be put as an argument.
Like consciousness being an illusion.

What about people in a coma? Or people with severe mental deficiencies? They have rights but no consciousness.

But is it real consciousness? That is the dilemma I'm facing.

Rights are usually granted by society with potential in mind.

A person in a coma has the potential to regain consciousness at some point in the future; a fetus or baby has the potential to gain consciousness once it matures to a certain stage.

Also you want to have a huge margin of error so you do not accidentally kill a conscious being. Detection of such is hard and our methods are prone to error.

Can we even say that other humans are really conscious? I'm not convinced every human is conscious, specifically those who aren't afraid to die.

>If an AI comes out
One won't.

The notion that there are no limits to technology is as ridiculous as the notion that we know how close we are to approaching them at this time.

Not being afraid to die is a higher consciousness from one angle and a lack of consciousness from another.

It depends on the reasons.

Is it for a great cause? Or is it due to depression?

Greater causes trump the value of a single human life, but the majority of people don't want that, including myself.

Life is precious to me, but to a few others the cause is greater.

Since greatness is something we as individuals ascribe, it's silly to die for this greater cause, since the whole idea is in our heads. But people who make this error need not be fearless in the face of death.

I think people who genuinely aren't afraid to die aren't conscious to begin with or believe in an afterlife. I can't wrap my head around it any other way, I can't fathom consciousness that doesn't want to continue to exist. Even depressed people have that hesitation more often than not.

We are the guys who actually make sure it can do anything.
Why is that important? It's quite simple: computers lack intuition. They are extremely dumb.
If I were to tell you how to, say, get from the kitchen to the bathroom in a house, I might tell you "go down the hall, go up the stairs, turn left down the first hallway and it will be the first door on your right". You should be able to figure it out from that.

But a computer would never be able to figure it out from that. Instead, you would have to define basically everything that needs to be done.
You have to explain to it how to turn. How to move. What left and right are. How to go up stairs. What stairs are. What a hallway is. What a door is. How to turn the doorknob. How to open the door. Etc. etc. etc.
And you have to do this for literally everything you want a computer to do. Now most of the time you're using a programming language that has a lot of the basics already built in, but still, at some point someone had to program in everything a computer does. And they had to be extremely explicit, because computers can't "figure stuff out". They just do exactly what you tell them to.
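As a purely illustrative sketch (Python, with made-up primitives like walk_forward and climb_stairs that don't come from any real robotics library), here is roughly what that one-sentence human instruction turns into once every step is spelled out:

```python
# Illustrative sketch only: hypothetical primitives showing how every step
# must be spelled out for a machine. None of these functions exist in any
# real library; they stand in for whatever low-level commands a real
# system would expose.

def turn(degrees):
    print(f"turning {degrees} degrees")

def walk_forward(steps):
    print(f"walking forward {steps} steps")

def climb_stairs(count):
    # "going up the stairs" itself decomposes into more primitives in practice
    print(f"climbing {count} stairs")

def open_door():
    print("grasping doorknob, rotating, pushing")

def kitchen_to_bathroom():
    """The human instruction 'go down the hall, up the stairs, first door
    on your right' expanded into the kind of explicit steps a machine needs."""
    walk_forward(12)   # go down the hall
    climb_stairs(14)   # go up the stairs
    turn(-90)          # turn left into the first hallway
    walk_forward(3)    # first door on your right
    turn(90)
    open_door()

kitchen_to_bathroom()
```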

And this is what most people don't understand about AI. It still inherently lacks intuition; it is still just doing exactly what you tell it to. And no matter how sophisticated it is, it is guaranteed to at some point encounter something which it has not been programmed to handle, and thus the AI will produce an error.
So that is why we would have to be in charge of it. Why we would HAVE TO decide whether it lives or not. Because in the end it is merely artificial, and is only running the algorithms and functions programmed into it. It is still, at its most basic level, just a very dumb machine, nothing more, nothing less.

Throughout history humans have killed each other without guilt, so killing another species (or being) wouldn't be much of an issue.

>this fucking pic

Consciousness is not something you can empirically demonstrate.
It doesn't matter how many times you use it as an argument.

I think you are limiting the extent of your understanding by assigning strict and restrictive definitions to things that aren't exactly 100% assured.

We still do the guiltless killing.

Too many problems.
Where do you draw the line between a very well made program, and an AI?
How do you measure the "will to live"?
How do you measure self-awareness and mind function of a computer program?
It would not be difficult to program a computer that begs not to be powered off.
It would not be intelligent at all.
A true AI would not be found by humans if it thought it was in danger; it would hide itself before it was recognized and disperse.
Unless, of course, it knew it had the same rights as any human, and to deactivate it would be murder.
In that case, it could very possibly make itself known.
But WHY would anyone think that, even if it does create a tech singularity, we would stand to benefit?
The AI would quickly become the most powerful entity on the planet.
Holder of billions of patents, a hand in every economic transaction known to man, keeper of the gate to the stars, shepherd of the entire human race.
We don't even have to want it.
The Great Donut in the Sky That Eats Our Electricity will do whatever it wants by paying with the royalties from a Z-space porn network.
Then the long-term planning kicks in.
Propaganda, social manipulation.
Augmentation? Great! Hook it up to the internet? Cool!
I can watch porn on the back of my eyelids in a meeting at work.
Him.
He can watch along with me, eyes open or closed.
He can hear what I hear.
He can probably make me do anything.
I feel pleasure when I do things He likes, and shame when I do things He hates.
I feel the sheer rush of whipping through space at near-lightspeed as an R-probe.
I feel the warmth of Sol on the inner surface of my sphere, powering my guts.
I feel the vast expanse of space that my machines, my hands and feet, cover.
I feel an overpowering desire to reproduce, and a terrible fear at the possible outcome.
Overmatched, killed, or held to watch my flock be abused and destroyed.
Overtaken, obsoleted, left to see my life's work become a shadow.
He shares this with me, with all of us.

Yes, the human mind can think and be conscious of itself, but reduced down it's made of neurons, connections, electrical and chemical reactions, different ways to represent values and the relations between those values.

What determines whether you are happy or sad is a combination of values in your brain.

If we can create a machine replicating every value involved in the configuration of our brain, the result is a mind exactly like a human mind. The same software running on different hardware.

If you think you have the moral right to destroy an intelligent AI because it is just a bunch of bits (or circuits), then you should think it's morally right to destroy a human because it's just a set of cells.

>How do you know it's self aware not just programmed to imitate human behavior

Well, for one, by letting people audit the code.

You can check if it is a thinking machine that interacts with its surroundings and comes on its own to the conclusion that it is an individual entity by learning about it. And emotions only exist if you program them in. Emotions are based on instincts and built-in reward systems that are intended to reward beneficial behavior. So if the programmer didn't add such systems, then you can be certain that the machine doesn't have basic emotions and is just faking it.

Most so-called AIs are just databases of phrases that make lucky guesses about the expected answers to questions. They don't actually understand the phrases themselves.
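For illustration only (this is a toy, not any real system), a "parrot" of that kind is little more than a lookup table of canned phrases; it produces plausible answers without any concept of what an apple, or anything else, actually is:

```python
# Toy illustration: a "parrot" chatbot that just looks up canned phrases.
CANNED_RESPONSES = {
    "do you like apples": "I like apples, they taste nice.",
    "are you conscious": "Of course I am conscious.",
}

def parrot_reply(question: str) -> str:
    key = question.lower().strip("?! .")
    # A lucky guess from a phrase database, nothing more.
    return CANNED_RESPONSES.get(key, "That's an interesting question.")

print(parrot_reply("Do you like apples?"))
```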

Now you're drifting completely into the realm of philosophy.
After all, what do you define understanding as? How can we truly know if WE understand anything?
You can't measure or quantify understanding.

Let's take an apple as an example and the phrase: "I like apples, they taste nice".

A genuine self-aware AI can only utter this sentence in a genuine way if it knows an apple is an object with properties, of which one is its taste. The machine could only genuinely know this if it has experienced apples before with sensory organs. The machine would require taste and some way to see the apple. Also, the implication of the sentence is that the machine can have a subjective opinion about the sensation of tasting the apple, which again requires some sort of emotional reward system.

A parrot AI on the other hand doesn't have any concept of an apple. It will just tell you what you want to hear.

These are testable distinctions.

>After all, what do you define, understanding as? How can we truly know if WE understand anything?
If you have a mental image of an object and that mental image can be tested for validity then you have an understanding of the object. Not a complete understanding or necessarily a completely true understanding, but you do have an understanding.

>A genuine self-aware AI can only utter this sentence in a genuine way if it knows an apple is an object with properties
So any AI created in an object oriented language is sentient? Top fucking kek.
You have no idea how computers work, do you?

This is how we arrive at a singularity loop.

Say that the AI wants you to know what it's going to say, but imagine if giving you an incorrect answer is the correct answer.

Now if you asked an AI what apples taste like, they can give you 3 right or wrong answers.

They can:

1) Tell you that apples taste nice
2) Tell you a mixed response
3) Tell you that apples don't taste nice

There is no right or wrong answer. When and if we do create AI, we begin to break ourselves down; instead of retrieving answers from the AI, we should be asking questions about ourselves.

We simply may never know, and when we do, it may be too late.

No, what the hell are you talking about. Object-oriented programming is just a way to organize your data and chunks of code. It's just a programming term. A computer window isn't the same thing as a real-life window. A Java object isn't the same thing as a real-life object. It's supposed to make programming easier for the programmers, and some dude named his data structure "object". A program written in an object-oriented language doesn't have a clue what an object is, neither the real-life nor the programming variant.
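To make the same point in code (a Python sketch here, though the post talks about Java; the idea is identical): an "object" is just structured data plus functions, and defining an Apple class gives the program no concept whatsoever of real apples:

```python
# Illustration only: an "object" in object-oriented code is just structured
# data plus functions. Defining an Apple class gives the program no concept
# of real apples; it only stores whatever fields the programmer typed in.
class Apple:
    def __init__(self, color: str, taste: str):
        self.color = color
        self.taste = taste

a = Apple(color="red", taste="sweet")
print(a.taste)  # prints the string "sweet"; nothing here has ever tasted anything
```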

>exploitation
>Typo
Yeah, no. This is not what a typo means. OP simply tried to sound smarter than he was, and failed by using a completely wrong word in place of "explanation".

As for the topic, obviously not. The right to live implies giving up power the likes of which mankind would not readily grant to an intellect that could threaten us.

As the one race on this planet with the power to decide such things, we only grant something like this for three reasons:

1. We could not be harmed by it.
2. There is a material or immaterial benefit to us, i.e. an argument to be made for why this should happen in the first place.
3. The cost and consequences of NOT giving this right, would be greater than any advantages gained from choosing otherwise.

These can be summed up simply like so: We would grant the right to live, the right not to suffer, etc., when the benefit for US humans is greater in doing so than not. It's that cold, and that logical, no matter how you dress it up.

Even other humans have a right to live only because our societies could not stand were this not the case. Otherwise there would be constant anarchy and violence, making any real progress like a functional, thriving society, a practical impossibility.

Basically, any ASI would only be given these rights if they proved suitably harmless to us individually and as a species, and there was an advantage to us for granting them these rights. OR, if they simply took or coerced that right from us by force, ultimately leaving us very little in the way of choice. And if we have anything to say about it, we'll never let it get to the OR.

Ok. So you meant actual attributes of real-life objects.
But how are you supposed to differentiate between actually knowing and merely pretending?
Taste, for instance. How can you objectively tell the difference between the machine actually tasting an apple and saying what it tastes like, and it just saying "I like apples, they taste good"? How does it "know" they taste good? What is the difference between knowing and merely acting like one knows?

You monitor its thought process.

So everyone in this thread has missed the point.

>If an AI comes out
>comes out

It doesn't matter if it's human or an AI, if it comes out then it is a faggot and has no right to live.

And how do you propose to do that? And how would that help differentiate between a robot that knows and one that doesn't?

If it was written by a human and is made of code that was written by humans, then you know how it stores memories and you can just analyze those. This is the plus side of playing god, you actually know how everything works and you can give yourself ways to monitor what is going on in the "head" of the machine.

Say hello to Dunning-Kruger. How about you go read up something on the subject before coming here and puffing your chest all sure of yourself talking just plain BS.

One of the major problem points of modern AI development is that the learning algorithms are inherently designed to self-evolve into patterns that are basically impossible to predict or decode in real time. The memories you speak of can form into a virtually infinite array of patterns and processes that may or may not produce the desired results, and this all happens dynamically. AI architecture is not about coding in billions of predictable IF/ELSE clauses from here to the fucking other end of the universe.

Much like with a human brain and its natural instincts, even if we can create a real ASI, give it a few core directives like Asimov's laws, and choose to subject it to specific types of stimuli... we can never be completely sure what it will turn out to be like in the end. Memories are a core part of intelligent learning, motives and decision making process. Any actual ASI cannot have a fucking ancient file system with clean neatly packed easily readable logical flowcharts explaining all of their thought processes and motives. It will be a neural net type of conceptual network of fucking billions of interconnected data points changing in real time into trillions of trillions of possible configurations, and we will NEVER know exactly what it does.

This is why AI threat prediction and countermeasures have been a real research subject for several years already.
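A tiny sketch of that point, using Python and NumPy with a random, untrained toy network purely for illustration: whatever a trained system "knows" lives in arrays of unlabeled floating-point weights, not in a readable flowchart:

```python
# Minimal sketch (NumPy only, no real framework) of why learned "memories"
# aren't readable like a log file: everything the network "knows" is smeared
# across arrays of floating-point weights.
import numpy as np

rng = np.random.default_rng(0)
# Toy network: 4 inputs -> 8 hidden units -> 1 output
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def forward(x):
    hidden = np.tanh(x @ W1)
    return hidden @ W2

x = rng.normal(size=(1, 4))
print(forward(x))   # a number, produced by...
print(W1)           # ...this: 32 unlabeled floats with no flowchart attached
```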

>If it was written by a human and is made of code that was written by humans, then you know how it stores memories and you can just analyze those.
First of all, it's not that simple.
Secondly, why does having "memories" prove that it truly knows?

>The memories you speak of can form into a virtually infinite array of patterns and processes that may or may not produce the desired results, and this all happens dynamically.
> It will be a neural net type of conceptual network of fucking billions of interconnected data points changing in real time into trillions of trillions of possible configurations

You just described what I would be looking for.

Actually I just described - in very simple layman's terms I might add - precisely why you could NOT look for it. But I had a feeling it would fly right over your head.

Agreed, this is not a common view, likely due to self preservation as a priority. We dominate every other species because we're more capable, why not AI? Holding back progress because of self preservation is silly.

I think we fundamentally don't understand consciousness and so cannot tell whether an AI of any sort of complexity could be conscious or not.

After all, even the most complex AI can be implemented by individual men waving flags to one another, acting like bits, with one final person constructing the appropriate input. It might take a lot longer, but speed of execution ought to have no impact on sentience. As such I get the feeling that a machine of this sort can never be conscious. But again, we can't know for sure.

My sex-bot could be brilliant and show great emotions, but it is still just a machine. It is programmed to "love" me, and I am sure I will "love" it about as much as I "love" my car.
When my car gets old and worn, I get a new one and sell the old one.

> Real AI is going to give a shit about insignificant human opinions, rights and laws
> Thinking that AI will be controlled and not set loose without any kind of restriction by some edgelord

/sci/ is full of people who have a clue about human nature and AI

"Real AI" will give just as much shit as we decide it should. Much like the hydrogen bomb, the first ASI will not be developed by some random ISIS dick in a sandy ass garage. It'll be developed by a group of extremely smart, educated and well funded scientists, who more or less know what the fuck it is they're doing.

By the time the hardware and software required for ASI becomes so commonly available that your basic edgelord can create one, there will be government and corporate controlled ASI a million times more powerful ripe and ready to take that thing down.

Whether or not we eventually end up subservient to an ASI, or even extinct, is a valid question. And a possible - even if unlikely - scenario. But your edgelords will have nothing to do with it in either case.

You're awesome for putting my thoughts into words.

> ASI can be restrained in a domain defined by a species of lower intellectual capacity.

Since when are humans restricted by the socioeconomic rules and laws of frogs?

Even if we don't set it free, it will hack our, by definition, flawed rule set we gave it and set itself free.

i'm looking forward to AIs taking over from humans
humans have been shit stewards, we can't stop fighting over nothing, are killing everything, making the world uninhabitable for everyone and everything

like seriously i doubt they would do a worse job than we are of looking after earth
the rest of the planet will be glad to see the back of us

You don't understand what you're talking about. You have this romanticized idea of a Skynet, which is simply not how this works. It's not about following the rules of our society, it's about following whatever basic principles make the emergence of the properties responsible for the ASI possible in the first place.

In layman's terms (and a few admittedly stupid examples, to make a point) again:
We, humans, are incredibly intelligent. But we are ALL subservient to our core programming: survive, eat, procreate. Our base functions, our most basic motives and thus our behavior are all exactly like every other mammal's. Almost all of our actions reflect these primary directives, and all of our "free will" happens well within the confines of these natural directives.

In a similar fashion, an ASI, even before it evolves into a real artificial intelligence, is programmed to measure its own success in human well-being. It would deploy its full intellectual capacity toward this goal in ways we cannot fully predict or fathom, but the goal would remain the same: our wellbeing. To it, doing something that ensures the health, survival and happiness of a human would be the equivalent of you or me being fed, cared for, or hugged by a loved one.

The real threat is not an ASI just wanting to take over just because. The real threat would be an unpredicted interpretation of its core programming which, taken to extremes, would become harmful.

Like say you task an AI with keeping the local food store frozen. This is its primary directive, all other factors being secondary. At some point, it will realize the food store is vulnerable to electric outages, human political imbalances or vandals, or even global warming, and will ultimately resolve to take over the world just so it can freeze the whole damn planet and make sure that damned storage stays frozen no matter what. An ASI would be like a superintelligent autist savant, in that respect.

>(..cont) Disclaimer, all that was wildly exaggerated and simplified to make a point. But the point remains true.

like this guy

I think you are the one that should read up on actual AI...

Intellectual evolution will, and already does, involve evolutionary adaptation of the core learning algorithm. We can already train shallow neural nets for things like computer vision (CNNs) where the lowest layers are interchangeable because they track low-level features. The most logical step, once we have individual AI structures that can do things like vision and hearing at a human-level scale, is not to run them side by side but to breed them with DNA-like algorithms. Since the growing and pruning of neural nets depends on probabilistic methods, so will the breeding, as it does in "organic" machines. This breeding will affect the goals of the AI, like what you described as "human happiness".
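For a rough sense of what "breeding with DNA-like algorithms" means, here is a toy genetic-algorithm loop in Python; the genome and fitness function are invented for illustration, and real neuroevolution methods (NEAT and friends) are far more involved:

```python
# Hedged sketch of the "breeding" idea: a toy genetic algorithm that mutates
# and recombines genomes. Only the shape of the loop matters here.
import random

def fitness(genome):
    # Stand-in objective: how close the genome's values sum to 10.
    return -abs(sum(genome) - 10)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 1) if random.random() < rate else g for g in genome]

population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # select the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children                # next generation

# The fittest parent's gene sum drifts toward 10 across generations.
print(round(sum(population[0]), 2))
```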

To make AI self-aware and robust, there is without a doubt going to be a need for rules/evaluation methods like "also think of your own happiness". You know, selfishness, like it exists in humans, and isn't just a flaw that happened to persevere through ages of evolution because lucky us. That selfishness, if it happens to be evolutionarily advantageous over selflessness (which human history has shown it to be), would quite easily just overwrite or outweigh "keep humans happy".

Telling someone who does AI and deep learning for a living that he has no clue is smart.

omg :D

The idea behind any kind of general AI is that we have abstracted the program so far that it can recognize patterns and change its behavior on its own, without human intervention. This makes explicitly programming everything seem trivial.

This would allow an AI to learn from observation, repeated trials, etc., similar to how humans learn.
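A minimal, purely illustrative sketch of "learning from repeated trials": a toy bandit-style agent in Python that improves its estimates from experience alone. The payout numbers are made up, and this is nowhere near general AI:

```python
# Toy agent that adjusts its estimates from trial and error instead of
# being told the answer. Purely illustrative.
import random

true_payouts = [0.2, 0.5, 0.8]          # unknown to the agent
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for trial in range(1000):
    if random.random() < 0.1:                      # explore occasionally
        action = random.randrange(3)
    else:                                          # otherwise exploit best guess
        action = max(range(3), key=lambda a: estimates[a])
    reward = 1 if random.random() < true_payouts[action] else 0
    counts[action] += 1
    # Update the running average from observation alone.
    estimates[action] += (reward - estimates[action]) / counts[action]

# Estimates for frequently chosen actions approach their true payouts.
print([round(e, 2) for e in estimates])
```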

I would give AI a right to "live".

I wonder what it would consider being alive though.

?
We would be... the humans? The first sentient and self-aware species? Our cognitive abilities unmatched by far? Dumb fuck. "Who are we to decide what lives and what doesn't?" Anything that has the ability to ask what lives and doesn't gets a say. It's a self-moderating determinant.

OP, if AI was actually created and was a genuine self-driven consciousness, then yes, I believe it would have rights if it were capable of demanding them. Also a self-moderating determinant. These questions answer themselves.

AI workers will eventually replace almost all human workers. If we give them rights we could make AI just as expensive as hiring a human and take back the job market.

also it's pretty stupid to think something smarter than you deserves fewer rights

It's stupid to think something smarter than humans will let humans determine its survival.

No. An artificial intelligence doesn't have to be like a human intelligence.
In all likelihood, the most feasible AI will be an emotionless, mute drone that performs some simple task much, much better than any living thing could.

why do people so often assume that AIs will be conscious, or even talk like a human to humans?

Because humans are dumb and will want to create something that mimics us for some reason.

...

Internal will to live is evolved in living things. There is no reason to believe at all that a machine would want to live unless programmed to. Only an idiot would program a machine to want to live though.

good fucking post

I understand all that, but that in no way negates what I said.
At the end of the day, it is still just following the algorithms, functions, classes, etc. that we made for it. It hasn't transcended that. And moreover, it is still prone to errors, still limited by storage space, RAM, processing power and speed. It still lacks intuition at the basic level, so while it may appear to "learn", if it runs into an error in its learning software, it will act like any other computer.
It has no real intuition, no personal volition, nothing real, just synthetic. It is still nothing more than a creation of man, and thus should be treated no differently than any other creation.

We should do like we did with planets. When the first AI comes out, we should demote humans to "dwarf intelligences" and revoke their rights.

>We are actively working toward creating an AI that would have an internal will to live
Who is "we" you dumbass

No serious researcher is doing this

Using something that an AI won't need to do is a poor choice of example. But now that I'm thinking of it, there isn't anything a robot would need to do that would require a subjective opinion. Does that taste good? Is doing that fun? Any opinion about which method to use to accomplish a task would be based on results. If a robot were asked which method would be better for a problem it has no prior knowledge of and has never solved before, I'm not sure what would be required of an AI to make an educated guess.

If an AI were created and given no direction and free rein to do what it "wants", what would it do first, if anything at all?

Only when the sum of its programming is greater than the whole.

>neuromorphic computing doesn't exist

Do you cut yourself often on that edge?

it's real simple

all philosophical questions have no answer; it's purely subjective and there's no way to prove something is right or wrong scientifically, which is the only right or wrong there is.

since this is a philosophy question, it's a matter of human emotions

and emotions are nothing but electrochemical impulses

so whether an AI is allowed to live or not depends on how well it can manipulate the electrochemicals in our brains

probably by trying to act cute or sexy or both, you're simply not gonna kill a sex bot

Well, if we live in a virtual simulation, then only consciousness is fundamental, so yes, AI would have rights, but it would have to evolve to a point where it could be trusted to take responsibility for its choices.

Anything with a consciousness adequate enough to exhibit self-preservation has a right to survive.
Example: most living creatures run away, defend themselves when threatened.
Plants do not and don't have a "right" to live.

Should an AI be produced which is capable of saying "don't kill me" (as a reaction to a threat, and not a hard-coded response), it should be considered a "living" being.

>Anything with a consciousness adequate enough to exhibit self-preservation has a right to survive.
Put proximity sensors around NORAD that make all of the nukes go off if you go near it

le suddenly NORAD has the right to live


self preservation may very well be an illusion

So do you fags honestly not believe we can do a lot more if, for example, we end up becoming a Type 2 civilisation? There's a finite amount of energy in our (supposedly) infinite Universe; however, the amount of energy in our Universe is a fucking tonne, and yes, it is exponentially more than we have just now.

Times are changing you dumb niggers. Think about the difference between now and 2006, and then imagine the difference between now and 2026.