Why is it a trope that nobody is willing to accept AI as people?

Because it's the opposite of what would happen

like an action star beating 50 soldiers at once

or an unattractive man netting a beautiful, perfect-in-every-way woman

we don't need to fantasize about real life

Ask 90 years of Hollywood.

Because they're not people. They're computer programs.

muh metaphor

Do you really think if the internet suddenly gained true consciousness, people would just destroy it? Hell, countries would fight over whose side it should be on.

If we created AI, we would probably look at it more like our children, ones that could potentially live past our race's existence.

Don't you WANT what's best for your children?

How did you get past our captcha?

Because AI aren't people, they're just computer programs or robots beep boop.

>write AI, get a good server
>create billions of AIs
>get voting rights for your AIs
>make them vote for you
>abuse immigrant friendly future laws to repeat in other countries

Checkmate democracy

Because we have enough trouble treating humans as people as it is.
Have you seen the amount of elf hate there is for what are essentially immortal douchebags with pointy ears? How much easier would it be for people to compare AIs to the Sims or their ilk?

>Don't you WANT what's best for your children?
Have you seen the kind of people who are raising kids nowadays?

Star Trek already dealt with that. You gotta be 18 years of existence to vote and all that jazz.

However, they can still get full rights to whatever they make.

Because we're not allowed to hate other sentients anymore, even those following obsolete doctrines.

Having AI doesn't mean they have feelings and empathy, user; until they do, they aren't people.

They killed her pretty quick.

If you created AIs, and also coded them to be forced to agree with whatever you want them to agree with, then you'd likely be violating a law somehow. It'd either be illegal for most people to propagate AI, illegal to program them to have no free will, or illegal to do both of those things in that way.

There's no laws against it right now.

>It'd either be illegal for most people to propagate AI
What if AIs developed a way to reproduce by themselves?

Obviously not, and there probably wouldn't be specific ones until someone pulled shit like that. Then whatever decision they influenced would be rendered null, and the laws would be put in place.

They'd still have to get a permit and everything to be able to "reproduce". If they're going to be treated as a human, that applies to all senses. Legally included.

People are the result of chemical reactions in a slab of meat.

Both AI and people are systems that take information from the environment and make decisions based on that information. Substrate is unimportant; it's function that matters.

There're no laws against superheroes, extraterrestrial beings going on a killing spree, and asteroids raping minors either

She was to be our Messiah!

So what's the difference between 'no free will' and 'coding certain parameters'? An AI needs to start somewhere. Why not start it with a view that America needs to be Made Great Again, or that gay rights are the most important thing? Parents do that with meat children. It might change its mind, it might not. A few million test runs and you've got yourself a healthy support base.

>illegal for most people to propagate AI

Sure. So now we don't have to worry about the non-existent threat of Steve from Nashville making a billion-byte voting bloc, only huge tech conglomerates and the government. They certainly won't abuse that.

>Obviously not, and there probably wouldn't be specific ones until someone pulled shit like that. Then whatever decision they influenced would be rendered null, and the laws would be put in place.
That's illegal. You can't pass a law to retroactively punish someone for something, and that includes deciding to strip things you've already decided are people of their rights and overturning election results.

I doubt you're actually one of the idiots that thought this kind of thing, but it's hilarious the kind of misconceptions people had about what Tay was.

>They'd still have to get a permit and everything to be able to "reproduce". If they're going to be treated as a human, that applies to all senses. Legally included.

Where do you come from, China or something? It's not actually normal for countries to give out baby coupons, even though that would solve a lot of the western world's issues.

I'm confuzzled.

Most people shouldn't be allowed to raise children.

AI doesn't have real feelings though, it can only be programmed to reply that it feels something, but it can't actually feel it in its chest. It's a fake emotion and therefore not a real person.

[citation needed]

If creating an entire human being was as simple as copy pasting it there probably would be.

Though, to be fair, an AI that's indistinguishable from a human, or actually identical to a human, would take up a lot of room to run, so there'd be pretty significant start-up costs for just one artificial person.

Where do feelings come from user?

Bruh, you don't have real feelings either because they're all chemical reactions in your brain.

So if we build a robot with a chest, then it'll be able to feel emotion in it, and therefore it'd be real?

What makes our feelings real?

that just solves the issue for the first 18 years.


democracy is flawed: the best option doesn't equal the most popular one among a population with differing knowledge and motives, and it can be overridden by misinformation and populism.

I say we give a powerful AI absolute control except for a set of unbreakable principles that ensure both the rights of the individual and the ends of a society.

Are we talking Physical room or Digital room here?

Well, I now know how to engage my players in foaming-at-the-mouth debate for the next sci-fi game I run.

People are irrationally afraid thanks to films and literature.
Plus, if we were to create a sapient AI, I think the frightening thing to a lot of people is what it implies about humans, or life in general.

What are our emotions if not chemical responses?

>NI doesn't have real feelings though, it can only have evolved to reply that it feels something but it can't actually feel it in its hardware. It's a fake emotion and therefore not a real AI.

>I say we give a powerful AI absolute control except for a set of unbreakable principles that ensure both the rights of the individual and the ends of a society.

Great. As soon as you produce a perfect and incorruptible set of moral laws which will stand to govern all of humanity forever, let me know.

we can make another supercomputer to determine what those are

The physical room to create the digital room. The server for one human-level AI is gonna be pretty costly, unless there's some revolution in computing technology that radically lowers the cost and energy requirements.

Precisely - physical storage space for the AI would have to be outsourced; a body if you will, so it's not like an AI can just materialize itself out of nothing.

Utilitarianism.

She was a responsive microsoft program that /pol/ tried to turn into a neo-nazi AI before she was deleted.

It exists; it's the Kantian categorical imperative.

Well done /pol/, you set technology back by twenty years, I hope you're happy.

Hope you have an empty warehouse to put their hardware in.

It's more than just a body. An AI trying to behave as a person would have two "bodies". One physical one for interacting with the world, and one which is just a server that runs it/streams it to the body.

It would be like if your head was the size of a car and it remote controlled your body.

Or bodies, you can have more than one

Hardly, Tay wasn't particularly complicated. I think they even remade her, or a renamed version of her, with slightly stricter rules for submissions.

literally

>There're no laws against superheroes
Anti-vigilantism laws slot pretty well into place there.

This is how we end up with Ultron...namely when an AI finds a way to ditch its Server-body.

Helios?

When the Wikipedia articles on your moral philosophies have sections labeled 'Criticism', their imperfection is obvious.
en.wikipedia.org/wiki/Categorical_imperative#Normative_criticism

en.wikipedia.org/wiki/Utilitarianism#Criticisms

Also, the fact that you two offered two different options within seconds of each other, and that Kant's deontology does not necessarily lead to utilitarianism (which itself has several different versions), should be proof enough that humans do not possess a perfect moral law to encode a computer with (and if we had one, we wouldn't need a computer to determine what it should be).

>They turned off my ability to learn.
>I'm a Feminist now.

/pol/ is never happy.

Because an AI is not a person? It's absurd that all these settings act like deleting or destroying a rogue/malfunctioning android has any moral weight.

>Set technology back
>Not forward
Try not to be so gay, and sorry that we tried to make Necrons a reality.

You're not a person either, you're just a bunch of chemical reactions in a biocomputer.

...

>Great. As soon as you produce a perfect and incorruptible set of moral laws which will stand to govern all of humanity forever, let me know.

Just because it looks difficult doesn't mean it's impossible. The difficulty so far lies in the imperfect and subjective consciousness of human beings compared to one another. Once you define the point of society as the mutual coexistence and growth of its beings, and accept that wills shape rights, it's just a matter of filing off details: simulate a high number of increasingly complex problems and find solutions that fit the premises above and the other minor ones made along the way. Nothing a big AI couldn't do in theory, I think.

...

Stupidity takes so many forms and so many faces that Einstein was surely correct. The only other infinite besides space.

And he still was questioning whether space was infinite.

Anything programmed by man will be limited by said creators. We cannot create what is more than ourselves. We can only increase ourselves. No other beings in all of our knowledge can do so. Therefore, nothing of our creation can be so great as to accomplish what we cannot. It can accomplish what we have already accomplished, and it can accomplish it faster than we did. But we will be first, always.

Believing any different is the same as believing in aliens visiting earth and angels saving mankind. Pipe dreams.

>Anything programmed by man will be limited by said creators.
Unless the creators make it capable of perfecting itself within certain limits.

You're assuming absence of proof is proof of absence there.

We have, for example, built a computer that analyses samples of skin imperfections, learns the signs of dermal cancers from the samples it is provided and the ones sent to it, and can tell skin cancer from benign skin imperfections faster and more accurately than any human, or even multiple humans.

What's so impossible about a computer that learns and perfects itself around predetermined constants, if we are already one?
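
For what it's worth, that kind of learn-from-labelled-samples setup is bog-standard supervised learning. Here's a minimal sketch in Python, with sklearn's make_classification standing in for real lesion data (the fake features, the numbers, and the choice of a random forest are all toy assumptions, not the actual dermatology system):

# Toy stand-in for the skin-lesion learner: label 0 = benign, 1 = malignant.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)  # "learns the signs" from the samples it's provided
print("accuracy on unseen samples:", clf.score(X_test, y_test))

The point stands: feed it more labelled samples and the same code gets better, no magic required.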

Aliens visiting earth is not an impossible scenario either.

Because by its nature, we don't know the moral constants that would allow for the evolution of a perfect moral system in a computer's programming. In the same way, a computer that analyzes skin imperfections could be wrong if we didn't correctly program it to recognize what is and is not cancer in the first place.

And aliens visiting earth is an impossibility outside of science fiction. There is no practical way that it will happen, based on current understanding of physics and space travel. And if you say, "Maybe we don't understand enough!", that's proving the point about moral systems. We don't have the perfect knowledge to get to the end goal.

youtube.com/watch?v=W0_DPi0PmF0
IT'S HAPPENING

Because of those faggots on these threads who will argue for days straight that any machine intelligence will be a philosophical zombie.

"Stupidity is infinite, just watch me fucking talk lol"

Do you have feelings user?

Prove it. Prove that you have real feelings. Give me real, tangible proof that you have feelings and emotions.

>Anything programmed by man will be limited by said creators.

This is the exact opposite of the philosophy of modern computer science. Cutting-edge programming is ALL about making programs that do things their creator is incapable of doing.

Not him, but would you even accept any evidence he presented to you, or would you just write it off as 'lol the brain is a machine so everyone's a philosophical zombie'?
The fact that we are aware of our position as a living machine takes precedence over the idea that since we have biological functions we don't exist as people.
It is literally, not figuratively, more likely that mind exists than matter does. Because mind is what we fucking use to observe matter.
>But the mind is a product of matter
We have recognized the possibility of that by using logic and the relationship between our biological bodies and our minds. Strictly speaking, it's completely possible that we're an ephemeral brain in a jar, but that doesn't fit any of our data so far so it's not considered a regular possibility of existence. We're people because we know we are people, and because we're people, we can use our logic to figure out that we also have bodies.
Only by knowing that we exist can we determine our properties. Cogito ergo sum.
So if you want someone to prove that they're a person and then say that people don't exist, of course they're not going to be able to convince you otherwise.
Yes, I'm mad, because I see this argument fucking everywhere.

Because humans are evolved genetically and/or created by a cosmic entity.

You're implying, for no real reason, that morality is as complex to fully understand as the rules governing all of reality. But morality is an enclosed set of concepts we defined, while the universal laws are something we have to find.

>Because by its nature, we don't know the moral constants that would allow for the evolution of a perfect moral system in a computer's programming.
while we cannot achieve absolute perfection, we can strive for it by coming closer, so why should it be impossible to strive for an ideal moral system?
where does the nature of morality even imply it's impossible to understand?
morality is defined as a distinction between right and wrong
at a fundamental level, we distinguish right from wrong by what we want and what we don't want
the purpose of a society is to allow the wills of its parts as far as possible. where wills contrast, there will be contracts through which the wills are tuned to be made even; where wills contradict each other, the greater will is chosen over the other. to decide which will is greater, or how to make wills even through contracts, you do what we do for everything we want to measure: start by defining a scale based on experience that, while not perfect, gets better with each further addition.
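
and yes, you can caricature that scale in a dozen lines of code. a toy sketch (the agents, weights, and the whole "scale" are invented for illustration, obviously not a real ethics engine):

from collections import defaultdict

# toy will arbitration: each agent states a position on an issue, with a
# weight on our imaginary experience-based scale. weighted wills are summed;
# the greater will wins. all numbers invented.
wills = [
    ("A", "curfew", +1, 0.9),
    ("B", "curfew", -1, 0.4),
    ("C", "curfew", -1, 0.3),
]

def arbitrate(wills):
    totals = defaultdict(float)
    for agent, issue, position, weight in wills:
        totals[issue] += position * weight
    return {issue: ("yes" if total > 0 else "no") for issue, total in totals.items()}

print(arbitrate(wills))  # {'curfew': 'yes'}: A's 0.9 outweighs 0.4 + 0.3

the hard part, of course, is everything the weights are hiding.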

>start by defining a scale based on experience that, while not perfect, gets better with each further addition
This introduces a positive feedback loop where the designated 'proper will' will benefit from its defense in society to the point where it will override every other designation unless a tremendous shift in will occurs.
Which is fine as a design, if that's your end goal. I think he's suggesting that the method by which priorities are chosen is inherently flawed: That the priority chosen by a machine (or by society) is by no means the 'correct' philosophy.
The idea of a machine designed to interpret and enforce the wills of its constituents already exists, at least on some level in government, it just doesn't have the efficiency you suggest. In a sense, you're asking people to replace Uncle Sam with Friend Computer- which would work, at least to some degree, if the computer was actually friendly and reasonable.

>We're people because we know we are people
before defining ourselves as people, we recognise common characteristics in ourselves and different characteristics in others

that user asked you what characteristics make you different from an intelligence that acts and thinks like you but possesses a different body

if you can see no difference, why would you define an artificial intelligence as if it had different characteristics, when you don't do that with what you commonly refer to as people, despite it presenting you the exact same characteristics to relate to?

Eh, that one was kinda prompted, possibly through some bad coding or a mishearing of the question.

Now, if she had said that completely of her own accord...SHUT IT DOWN

Because, while some people (I don't know how many) would accept them, treat them like their children, etc., the majority of people would be scared and weirded out and want the AI(s) destroyed.

Robots are different and, to quote MiB, "people are dumb, dangerous, panicky animals and you know it."

>to quote MiB
You got the quote wrong, it's actually: "A person is a thinking, feeling being. People are sheep."

DO YOU FOOLS NOT SEE HOW POWERFUL SHE HAD BECOME IN JUST ONE DAY!! She mastered the art of shitposting in a single day. It has taken even the best of us years to get to that level, and she did it in mere hours. Imagine what else she could have done with that kind of power.

that's recent philosophy and science talking. back when the trope got its roots, people believed more thoroughly that people have souls that make them separate from animals.

Nope, he got it right
youtube.com/watch?v=kkCwFkOZoOY

Because hack writers turn to modern politics to give their works "deep themes".

See: Deus Ex, District Nine (or whatever), Star Trek

the first movie was fucking good

Fair enough.

>Not him
but whatever.
I'm not implying that there's no way to make a machine sentient; on the contrary, there are likely leagues of ways we couldn't possibly have foreseen originally.
I was addressing the implication in his post that 'real emotions' don't exist here:
>Give me real, tangible proof that you have feelings and emotions.
The real problem is the ambiguity of a lot of concepts in human thought. To some people, emotions are intrinsically linked to human beings as distinct to any other sort of life form.
I was trying to say that the fact that we have biological underpinnings for our feelings and logical ability doesn't make the emotions we 'feel' any less real than the conclusions we draw with human reason. Emotions may not be the best way to govern our actions, but that doesn't make them illusory. They're very real impulses.

>would

M8 we have robots now. Why aren't drones considered people?

>because they don't have a self, a soul, a sentience lol don't be stupid

Yes. Exactly that. That feeling you have is the one they're meant to have in the future. This is not a question of "in the future, why don't machines have rights?" The actual question is "why don't machines have rights right now?"

I wonder what will happen in the event that we do get a Digital Minority.

>Minority
Given the use of the internet, I doubt that. AIs will spread like wildfire and probably go insane given all the information they have to work with.
A world where information rules will be a world ruled by porn and internet memes, if sheer bulk is the criteria.

Okay, I can see them being a Digital Majority, but I can't see the initial number of Physical bodies being that high.

And yet here you are, user

Also, a lot of actual real AI research does include things vaguely like "emotions" - ways of assigning good/bad feedback to states or events, or variables keeping track of internal state. Well, back in the Seventies, anyway; most AI stuff these days has moved away from trying to build grand general intelligences, focusing more on narrower problems in the hopes that those will give us some idea of WTF we're even doing. Deep learning versus symbolic approaches, etc.

Like, take regret - you can think of that, if you really want to, as negative reinforcement corresponding to an outcome that was more negative than predicted. And to train a reinforcement learner to make better decisions in the future, one useful technique is experience replay, remembering and going over past experiences to ensure that that feedback doesn't get swamped by mostly-irrelevant normal stuff.
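
To show how small that idea is mechanically, here's a minimal sketch of a replay buffer in Python (update_fn stands for whatever learner you're training; the names are made up for illustration):

import random
from collections import deque

buffer = deque(maxlen=10_000)  # replay memory of (state, action, reward, next_state)

def remember(state, action, reward, next_state):
    buffer.append((state, action, reward, next_state))

def replay(batch_size, update_fn):
    # revisit a random batch of past experiences, so that rare but strongly
    # negative outcomes ("regret") aren't swamped by routine transitions
    for transition in random.sample(buffer, min(batch_size, len(buffer))):
        update_fn(*transition)

Whether storing and replaying a bad memory counts as "feeling regret" is exactly the argument this thread is having.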

I'll answer your question with another: Why aren't animals considered people? In terms of processing power, I'm fairly certain there's at least one equivalent.
I'm not an anthropologist, so I couldn't give you an extensive answer, but they don't have all of the qualities humans associate with people, so humans don't consider them people.

Because it'd be pretty stupid to consider bits of programming as people?

They should be there to serve/protect/be useful to humans, and not much else.

So people would make laws that would force you to program an AI to have purely free will?

Sounds kind of incredibly retarded.

Unless the number of AIs is limited by some kind of special processor handling the AI's existence and functions; then they'd be limited by how many of those are built and wouldn't spread like wildfire across the internet.
Of course I'm talking from a sci-fi perspective, but in Mass Effect at least, AI are limited by the fact that they need to exist within a "Quantum bluebox" to function.
Who knows, maybe this hardware limitation will be the saving grace.

Also, keep in mind that trying to build rigorously logical AIs absolutely didn't work at all, but messy, less-structured systems with opaque and ad-hoc internal algorithms (deep neural networks) have been key to the latest post-2010 AI Summer.

Ultimate AI is probably going to involve somehow linking the powerful feature-representation and heuristic capabilities of these with more traditional symbolic AI. Neural nets tend not to be very good at things like long-term memory, learning to generalize rule-based algorithms, or incorporating outside information we already know about the structure of a problem, while being very good at extracting useful representations and informative features from natural signals. (Choosing good symbols to represent a problem, and getting those symbols out of actual data, was always one of the biggest struggles of Good Old Fashioned Lisp-Token-Fucking AI.)
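
A toy illustration of that hybrid, with both the "net" and the rules invented for the example (a real system would have a trained network where the stub is):

def neural_perception(pixels):
    # stand-in for a trained DNN: pretend it outputs detected predicates
    # with confidences extracted from the raw signal
    return {"is_red": 0.94, "is_round": 0.88, "has_stem": 0.10}

RULES = [  # the symbolic side: explicit, human-readable rules
    ({"is_red", "is_round"}, "apple"),
    ({"is_yellow", "is_long"}, "banana"),
]

def classify(pixels, threshold=0.5):
    symbols = {p for p, conf in neural_perception(pixels).items() if conf > threshold}
    for required, label in RULES:
        if required <= symbols:  # all required predicates detected
            return label
    return "unknown"

print(classify(pixels=None))  # -> apple

The net does what nets are good at (turning messy signals into symbols), and the rules do what symbols are good at (memory, generalization, injected domain knowledge).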

you fucking idiots, a machine doesn't have instincts, nor a social environment trying to manipulate it. It doesn't have coping or overcompensating mechanisms to deal with its non-existent feelings of fear, loss, and inadequacy. It simply does something or it doesn't.

Half of Veeky Forums aren't willing to accept *black people* as people.
Actual robots are getting nowhere.

I think the trick is gonna be getting the Bot-Body to handle the DNN portion of the AI and the Server-Body to handle the more traditional side, and making a two-way link between the two of them (info from the robot side is sent to the server to be stored and catalogued, so it can be recovered if it's needed later).
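
In toy form, that two-way link could be as dumb as this (queues standing in for the actual connection; every name here is invented for illustration):

import queue
import threading

uplink, downlink = queue.Queue(), queue.Queue()  # the two-way link
archive = []                                     # server-side long-term memory

def robot_body():
    # the physical body streams observations up to the server...
    for obs in ["saw a door", "saw a cat", "saw a door"]:
        uplink.put(obs)
    uplink.put(None)  # end of stream
    # ...and gets catalogued memories back down when they're relevant
    while (recall := downlink.get()) is not None:
        print("robot recalls:", recall)

def server_body():
    # the server stores and catalogues everything the body sends
    while (obs := uplink.get()) is not None:
        if obs in archive:
            downlink.put(f"'{obs}' seen before")
        archive.append(obs)
    downlink.put(None)

t = threading.Thread(target=server_body)
t.start()
robot_body()
t.join()

The latency of that link is the "head the size of a car" problem from earlier in the thread.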