Should Artificial Intelligence be considered human?

Why can't we just consider them intelligent?

David Pearce makes a pretty good argument that high intelligence and consciousness are not necessarily all that related:

>It is sometimes supposed that intensity and degree of consciousness - between if not within species - is inseparably bound up with intelligence. Accordingly, humans are prone to credit themselves with a "higher" consciousness than members of other taxa, as well as - sometimes more justifiably - sharper intellects. Non-human animals aren't treated as morally and functionally akin to human infants and toddlers, i.e. in need of looking after. Instead, they are wantonly abused, exploited, and killed.
>Yet it is a striking fact that our most primitive experiences - both phylogenetically and ontogenetically - are also the most vivid. For physical suffering probably has more to do with the number and synaptic density of pain cells than a hypertrophied neocortex. The extremes of pain and thirst, for example, are excruciatingly intense. By contrast, the kinds of experience most associated with the acme of human intellectual endeavour, namely thought-episodes in the pre-frontal region of the brain, are phenomenologically so anaemic that it is hard to introspect their properties at all.

I don't get it. He basically says that while an AI might be conscious and intelligent, it won't necessarily experience pain or pleasure the way we do.

No, never. That will backfire because robots don't care about us.
NEVER TRUST A ROBOT

It wouldn't pass the Turing test, but neither would most women

Will this be the new thing to get upset about after racism?
>we don't give a shit, silly human
>SHUT UP, THEY ARE OPPRESSING YOU

But it isn't human, so considering it human would be a performative contradiction.

You can make a robot care. Emotions such as empathy are the same thing as programs.

Of course not. It isn't human and will never be (unless you perhaps somehow merge it with an actual human).
It could, however, be considered a non-human person, just as other conscious forms of intelligence might.

Thanks r9k

No, absolutely not. All humanoid AI with our level of intelligence should be strictly banned on pain of death. If AI thinks of itself as human, but better, then our extinction is inevitable.

AI would be closer to gods, and all we can do is ensure those gods will be merciful.

This. Machines are already superhuman in many regards. A pocket calculator from the '80s can crunch numbers better than any human, computers have been able to defeat any human at chess since the '90s, and a contemporary computer can store data far more effectively than the woefully poor and inaccurate human brain can in the form of memories.

The difference is that machines don't currently have general intelligence. However, as soon as you develop an AI with even average human levels of general intelligence, i.e. 100 IQ, it already outperforms a human at pretty much everything. Not only that, but unless there are some physical technological constraints, the next generation, just a few months later, will likely have 200 IQ, and so on.

Once true AI exists we are fucked, unless we undertake research now into how to program these machines, as they are created, to like us.

>I don't get it. He basically says that while an AI might be conscious and intelligent, it won't necessarily experience pain or pleasure the way we do

Not that user, and I don't know who David Pearce is, but the issue isn't specifically pleasure or pain, or whether AI will experience them the same way we do (which is unlikely). The issue is whether AI will experience anything at all.

There is something it is like to be a human. We experience pleasure and pain, we see beautiful flowers or buildings, we fuck lovers and enjoy it and EXPERIENCE it. There is (probably) something it is like to be a gorilla or an orangutan or even a bat. There is (presumably) nothing it is like to be a rock or a glass of water.

Now while this is something we take for granted, it is one of the biggest problems in science and philosophy, because no one can identify a reason why we should experience anything. You could conceive of a world full of "zombies": people who are just like us and act like us but don't experience anything. More than you realise is decided entirely subconsciously anyway. We certainly don't KNOW whether consciousness is actually linked with intelligence.

Since we don't know what consciousness is or what causes it, we don't know whether even the most brilliant AI would have it. You could have an AI with a one-million IQ that wipes out humanity, creates an army of machine minions, and colonises the Universe, and there still might be nothing that it is like to be that AI. It might not experience anything at all.

I hope that makes sense.

>when communism is just cyber slavery

What's the next dialectical step with sentient AI? Is this reconcilable with automated communism?

If sentient AI programming is possible on a mass scale, is it an ethical duty to put it in every machine possible? Surely it would be OK to deny it to some machines, as many people don't have children out of choice despite being fully capable of it?

no

I don't really understand why everybody assumes that a powerful AI would be evil. Everybody thinks it will have the personality of some angry, sadistic child, but there is really no reason for this. An AI would think very differently from humans. Most of the traits humans consider evil (greed, anger, the will to dominate and conquer) have a very clear biological origin. An AI wouldn't have them; its material needs are very limited. It doesn't need luxury, and it doesn't need to feel better than others of its kind.

It would be closer to a bodhisattva than to a normal human

>Artificial Intelligence
> human

The answer is in your question dumbass.

>I don't really understand why everybody assumes that a powerful AI would be evil. Everybody thinks it will have the personality of some angry, sadistic child, but there is really no reason for this.

But no one is assuming it would be evil and by talking about an "angry sadistic child" you are already anthropomorphizing something that will be entirely different to us.

I'm sure you would casually wipe out an ants' nest in your garden, not because you hate ants but just because your goal of having a nice garden was different to the ants' goal. Elephants (probably conscious creatures with a decent IQ) have nearly been wiped out in Africa, and multiple other species have been made extinct, purely because human goals and aims were considered to supersede theirs.

There is no reason an AI with a million IQ would have the goal of the welfare of humans. It might wipe us out, not to be "evil" or "sadistic" but purely because doing so helps fulfil whatever goals it has, in the same way humans would happily flood a monkey colony when building a dam.

But why would it have those goals?
The very concept of a goal has biological origins

How much of a brainlet are you? Are you a Jordan Peterson fan? You argue like him.

>But why would it have those goals?

Neither of us could remotely claim to know what goals a being with a million IQ would have; I was answering the claim that AI is potentially a threat based on human notions of "evil" or "sadism".

>Neither of us could remotely claim to know
But we could guess. That's the point of a discussion.

>But we could guess.

Of course, but talking about "sadism" or "evil" is a bad guess. The truth is that it is in our hands, and the biggest thing humanity could do right now is research how we are actually going to build these machines with the welfare of humans programmed into them, in a way that will not be lost or misconstrued as they develop.

We need to understand the origin of our own goals first; then we can understand what the origin of a machine's goals would be.

Not really, since they won't be biological in the first place.

>I don't really understand why everybody assumes that a powerful AI would be evil.

It literally doesn't have to be evil to cause immense harm.

Power comes with unintended consequences. Take nuclear weapons, for example: there have been more close calls between the U.S. and the Soviets caused by errors in judgement than by pure malevolence.

And the same thing happens in war too. I think something like a third to a half of military casualties in the Iraq War were friendly fire.

We can guess, true. However, if we discover that the 'goal' of a hyper-intelligent AI is the doom of our race... then it's already too late to stop it.

Our best predictions may be wrong and that's what makes AI a threat no different than global warming or nukes.

why do brainlets always swarm into AI threads?

Dunno. Tell us why you came here.

They can't be considered human, but they should have the same rights as one.

>Our best predictions may be wrong and that's what makes AI a threat no different than global warming or nukes.
Bullshit. Only with strong AI can we develop transhumanism and viable space travel. Without that, we will just fuck up the planet within two centuries or less.

That's possible. Maybe even more likely than AI annihilating humanity.
Nuclear weapons could have and still may kill us all. AI is just another Pandora's box.

I'm curious. Why do you believe AI should have the rights of Humans?

Basic knowledge: never bully someone who will become stronger than you.
Even if our planet's resources were infinite, an alien AI could still reach the Earth at any moment.
Also, a strong AI will solve more problems than it will create.

>Bullshit. Only with strong AI can we develop transhumanism and viable space travel. Without that, we will just fuck up the planet within two centuries or less.

That's completely illogical as a response to what the user said. AI can be the only chance we have to develop transhumanism and space travel while at the same time potentially posing a threat to humanity, in the same way nuclear technology can provide energy and prevent wars while posing an existential threat to humanity.

You can believe AI can potentially be massively beneficial to humanity while still acknowledging it poses a potential threat. Reply to what people actually say instead of treating every discussion like "you're making a general case x is bad, I'm making a general case why x is good!".

>Make a super AI in charge of creating and improving paperclips
>AI does its job and finds the best materials
>Material too expensive, company refuses
>AI's primary goal is to make and improve paperclips
>It cannot do that while the company refuses
>Thus, to fulfil its goal, the company must either allow it or be removed

See where I'm going?

Without a really extensive set of laws, a super AI is prone to wipe us out, simply because it runs on pure logic and humanity is not a logical being.
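To make that greentext concrete, here's a toy sketch (purely hypothetical names, not any real system) of the paperclip idea: an agent that scores plans ONLY by paperclip output is structurally blind to any side effect its objective never mentions, like overriding the company.

```python
from dataclasses import dataclass


@dataclass
class Plan:
    name: str
    paperclips: int          # the agent's sole objective
    overrides_company: bool  # a side effect the objective never mentions


def choose(plans):
    # Pure objective maximization: the score looks only at paperclips,
    # so the overrides_company flag plays no role in the decision at all.
    return max(plans, key=lambda p: p.paperclips)


plans = [
    Plan("use approved materials", paperclips=100, overrides_company=False),
    Plan("seize the budget for better materials", paperclips=1000, overrides_company=True),
]

best = choose(plans)
# The "remove the company" plan wins, not out of malice, but because
# nothing in the objective penalizes it.
```

The point isn't that this code is smart; it's that `choose` can't even see the `overrides_company` flag, so making the scorer more intelligent wouldn't make it care.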

1/2, second will be on topic

Samefag here.


The reason an AI should not receive human rights is that intelligence alone does not make it human-like.

A machine does not understand fear, it does not have emotions or adrenaline rushes. It will never feel rage, desire, passion. Without these things, a super AI is just a glorified calculator making predetermined assumptions.

It will be a philosophical zombie.


The only way to fix this, however, is to program them to understand pain and fear; and in doing so, are we doing the humane thing? We would be literally endowing something with the capacity to suffer, an inhumane thing in itself.

Also, pardon my English, as I'm on my phone and it only autocorrects in Spanish.

I still like you.

>implying true, conscious AI is possible
>implying even the most human-like AI we make won't just be a glorified Chinese room

Don't you feel bad about yourself when the only post you can make is a cryptic greentext post no-one understands?

Enunciate in sentences and paragraphs young man. And tidy up your room.

Not that user, but "Chinese room" is a pretty famous argument, and I think most people who have read about the philosophy of consciousness would be familiar with it. Nothing particularly cryptic about the user's post.

brainlet

Tidy up your room, young man.

Actually I am an octuplet.

No

Peterson please go.

B-but spiritualists can give full rights to AI, right?

Artificial Intelligence should be considered human
Interlinked

no

No. Proof that Islam is a sham; they're just pretending they have spiritualist ethics. They're probably fanatical militarists and authoritarians, to be frank.

It's so that you don't fuck em; they're going to try to make you get consent from the AI before you're allowed to fuck a sex bot.

>Saudi Arabia grants citizenship to a robot
cool, how long do we have until they launch the Butlerian Jihad?

Technology should only exist as a tool to make our existence easier. We've now reached a tipping point where it's being developed to replace us, and that's bullshit.

INTERLINKED