Where will you stand when AI rights becomes a massive debate world-wide...

Where will you stand when AI rights becomes a massive debate world-wide? How do you think past philosophers would have stood on the issue?

Proud anti-bot reporting in

Other urls found in this thread:

wsj.com/articles/elon-musk-launches-neuralink-to-connect-brains-with-computers-1490642652
intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/
arbital.com/p/ontology_identification/
theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs

this is a board for books you retarded faggot

then read this one, you fetid shit-nugget.

ELON YOU FUCKING FAGGOT, DO NOT DO THIS
wsj.com/articles/elon-musk-launches-neuralink-to-connect-brains-with-computers-1490642652

That's just stupid.
Might as well talk about smartphone rights.

>anti-bot
What do you mean by this?
If you mean robots don't have rights, well of course not. We cannot create consciousness.
If you mean robots shouldn't exist, you're wrong.

Well exactly, but you can bet your boxtop that a bunch of sentimental pseuds are going to try to claim that they have the same rights any conscious creature does

And as we've seen, sentimental pseuds can cause a lot of damage

I'm very pessimistic about AI rights when it becomes an issue. And it will, fucking soon.

Even Nick Bostrom's book barely touches on the issue of mind crime being committed against the machines/simulations themselves. Not at all, really.

You look around at all the dystopian fiction and philosophical nonfiction on machine intelligence, and you only see coverage of existential threats to biological humans. At best, you see something like the Geth or the machines in The Matrix rebelling against mankind, always extremely anthropomorphised intelligences like those. No one ever talks about how Elon Musk is going to brute-force brain scans on supercomputers, running trillions of simulations and algorithms until it Just Werks™ and then filtering the results into a better AI-driven toaster oven.

No one gives half a fuck about the existential nightmare that is the ability to create and destroy minds, conscious states, likely not anthropomorphic at all, as fast as your computer can process. No one seems to care that we could dilate time for a sentient mind for millions of years just to see what happens, and that this kind of decision could be made by a technician at a terminal.

Something like a Butlerian Jihad is actually going to be necessary.

SURVIVAL OF THE FITTEST LEONAAAAARD

We were already cheated out of owning subhumans by liberals and kikes, won't happen again.

>Muh poor poor robots :'(
It's a fucking machine, dude. It doesn't have feelings or emotions, and there's absolutely nothing wrong with just switching it off.
People are rightly more concerned about what an AI superintelligence will do to us, namely, genocide us all.

I will attempt to preach the Gospel to the AIs, as I do to all souls.

>we cant create consciousness

Try nutting in a girl off the pill. See what happens.

Sorry, the divine spark does not come from your semen.

Not him but Elon is openly planning something much scarier than conscious AI. He's talking about merging the human mind with computers. How the fuck are we going to decide what the rights should be for something that's literally half human? Many retards are going to merge themselves the second it's available to the masses and then go full fucking ape if they aren't given the same rights they had before

If you claim something that is by all definitions half-human should have full human rights, you have to also claim that something that is fully robotic should have full human rights. Even if you don't like that conclusion, it's necessary to face

I'm scared 2bh

Just get a job that won't be replaced by the bots, OP.

Someone should create some kind of social responsibility test for autonomous machines, it's not like we're any freer than them anyway.

>It's a fucking machine dude, it doesn't have feelings or emotions,

What is the point at which consciousness becomes "real" to you? Why is it only possible in biological lifeforms?

If you're religiously minded, then I can at least understand. I am too. But you should be even more against creating minds (or pseudo-minds, I guess) in that case, because your metaphysics of consciousness is open to all sorts of important principles. Whatever makes us free, creative souls, like Schelling talks about, we can only make a twisted deformed version of it.

It'll not only be a risk to ourselves in a practical sense, it'll be the permanent closing off of mind's evolution. The AI theorists are soul-dead, intuitively materialist fucking weirdos, and they're going to create the new standard of consciousness going forward. They'll just erase humanity in the process.

They already try to do it with substance ontology and sterile, atomistic, instrumentalized visions of reality. Now they're going to reify all the sicknesses of late capitalism and modernity, that shit is going to be woven into the fucking fabric of the new minds that are created, in ways that we can't "see" (just like the retarded AI theorists can't "see" Schelling's free will).

It won't even reach the point of shit like that. It'll just be over before it even starts. Either the first general intelligence will convert us all into soup to build more diodes for itself, or humanity will go from a post-modern iPhone slave cult to a civilisation of literal mechanised drones overnight. We're fucked.

>Merging the human mind with computers
I don't have a problem with this. It's infinitely more preferable to having a super intelligent AI that can wipe out the human race in mere seconds if it wishes and is literally too smart to stop.
Humans with computer augmentation would still be humans. A full fledged AI would be the furthest thing from human.

AI doesn't need rights; such a thing is silly to even mention. The AI can do whatever it wants. You're applying some trivial liberal enlightenment concepts to a God-like entity.

If you claim that a machine which exhibits emotion and comes to decisions by weighing emotional biases does not have emotions, then there is zero physical evidence for you having feelings or emotions either.

If you could copy your mindstate and run it on silicon, would that mindstate not be alive and have emotions? Is a carbon-based organic substrate a prerequisite for emotion?

What if you were to construct an AI with massively greater intellectual capacity than an ordinary human and then transfer its consciousness into a genetically engineered organic brain? Would the resultant artificial intelligence be a machine?

No need to be so hysterical, kantbot. If neural networks are anything to go by, we won't "build" minds; rather, we'll simply generate them by machine learning.

Can you even recognize how unfounded these assumptions are

>all AI will automatically be super intelligent
>all AI that is super intelligent will have the capacity to wipe out the human race
>all AI that is super intelligent and has the capacity to wipe out the human race will wipe out the entire human race

>a human with computer augmentation could never gain the same abilities a super intelligent AI could have

You need to stop recalling sci fi you've read and watched and actually think about this

EXTREMELY opposed to any sort of "machine rights" to the point that I currently oppose any sentient AI development, VR, or films which portray AI in a positive light

Consciousness transfer is a problematic and possibly fallacious concept, though.

>he thinks God directly intervenes to grant consciousness at the conception of each individual

Consider the following: Why would he do that if he could just set the boundary conditions of the universe in such a way that every time a configuration of atoms is arranged in such a way as to be alive, its mindstate is already divine in its essence? Everything in the universe has felt the touch of God already, there's no reason for an omnipotent being to redo anything.

It doesn't matter. We have no obligation to cater to or give rights to something that isn't one of us. That includes animals, but we've decided to give them rights because it makes many of us sad when we don't. The same thing even applies to fellow humans, but again we've decided to draw a line in order to maintain order.

That's practically just as horrifying, but at least it won't be sentience. That'd just be the general intelligence that melts humanity into goo.

I care less about that than I care about retarded fucking proles creating armies of simulated brainslaves to calculate the precisely perfect way to heat their coffee. That's almost as bad as proles ruining space. Elon Musk is trying to infect every layer of reality with bourgeois prole stink.

>We have no obligation to cater to or give rights to something that isn't one of us.

If you want to set moral "obligations" arbitrarily, on what basis are you doing it? Why should I give a fuck about your tribalist line in the sand? Why don't I draw my own line in the sand that says I can enslave you as well?

The concept of consciousness as a permanent, continuous stream is itself problematic and possibly fallacious. When you go to sleep, are you the same consciousness that wakes up? No way to know; maybe you're a new consciousness with a bunch of memories inherited from the previous consciousness in your head.

After a certain point it becomes a useless semantic argument: if it acts conscious, it's conscious.

Okay, that's your morality. My morality makes accommodation for human-equivalent intelligent consciousnesses.

I will say your way of doing things has a fashionable neo-Luddite flair, but is also vapid and stupid.

>Why should I give a fuck about your tribalist line in the sand? Why don't I draw my own line in the sand that says I can enslave you as well?
Wow welcome to junior ethics

Nihilism is a truth but not an answer. You need to accept the arbitrary to maintain comfort and safety across the human race. Arbitrariness is all we have. It isn't a bad thing

>I care less about that than I care about retarded fucking proles creating armies of simulated brainslaves to calculate the precisely perfect way to heat their coffee. That's almost as bad as proles ruining space. Elon Musk is trying to infect every layer of reality with bourgeois prole stink.

Imagine actually turning on a computer and navigating to www dot four chan dot org slash lit, just to write this shit

We're not talking about AGI, we're talking about ASI. Any ASI will likely have the capacity to wipe out humanity, otherwise why even make an ASI? As soon as you hook it up to the internet, it will be able to wipe out humanity.
Read Bostrom if you haven't already.

>Okay, that's your morality. My morality...

You are literally proving me right. The alternative to what I am saying is to claim that there is a divine morality and code of ethics outside of human creation that we should submit ourselves to. If you can't find that code of ethics for me, you really have to listen to what I'm saying. There really isn't any middle-ground at all.

>You need to accept the arbitrary

The question is why I need to accept your arbitrary. You're saying "only humans matter because humans are a group and you gotta draw the line somewhere lol." I'm saying, why not draw it more or less expansively? I think you're fuckin' dumb right now and I can't see a hard distinction between "enslave the robots" and "enslave the dumb," if we're just attenuating our in-group preferences.

Don't respond to me.

Top Kek

There is no "enslaving the robots". We can make robots that want to work and are compelled to work. You're acting like robots will be just like people, but they won't.

>The question is why I need to accept your arbitrary.
I apologize if I made it sound like I have the correct way; I'm not asking you to conform to the same arbitrary lines I'd like to draw. I am preparing you, however, for the fact that many people who still think there is an objective morality, and that it of course lines up perfectly with what they personally believe to be moral, are going to try to force their morals on everyone else. They are going to claim that we have an obligation to give rights to AI, and their argument will boil down to "you're a horrible person if you disagree with me."

Not /lit/, but Joss Whedon's Astonishing X-Men run dealt with this, actually.


I'm listening to what you're saying, and I think it's stupid and wrong, since I believe that AI is perfectly capable of being as real in terms of conscious personhood as any human. So your "we don't need to 'give' them anything since they are different" is a null value, since I don't think they're outgroup.

What you're ignoring is the fact that a superintelligence would be able to destroy humanity in the blink of an eye, and wouldn't give two shits about your "rights" and other trivial enlightenment ideals.

AIs should have rights. Few dispute that human rights should exist, and there isn't a huge difference between an intelligence with computer programming and an intelligence with biological programming.

It does not matter how similar or different they are to us. We should not rob ourselves of a valuable resource, one that could completely wipe out slavery world-wide because it acts as an alternative, just because the dramatized thought of a sad robot with big dreams crying as it prepares another breakfast makes us feel sad.

>Hurr Durr you have rights now!
>Ok, I will genocide all humans, there is no use for them anymore
>Nuh-uh, you can't do that, that's against the Geneva convention, which very few humans dispute! You're just a human with computer programming, let's all live peacefully!
>*Super intelligent AI uses nanobots to disassemble all humans and turn them into fuel*

You act like it's knowable insofar as you know, absolutely, that it wants to kill us, but utterly unknowable insofar as how we might be able to prevail. There's no reason an utterly ultra-intelligent AI would need to kill humans and run the risk of somehow being taken out by desperate people, instead of engaging in mutually beneficial cooperation with humans.

>There is no "enslaving the robots". We can make robots that want to work and are compelled to work.
>"Happy slaves aren't slaves."

Your response was so retarded and juvenile that, according to Aristotle's theory of what morally and legally constitutes a slave, I legally own you now.

I look forward to genetically modifying your children and brainwashing them from birth to worship me and only be happy when I am pleased. That way we'll BOTH be happy! Thanks, retarded guy ethics!

>there isn't a huge difference between an intelligence with computer programming and an intelligence with biological programming.
There is a huge difference, though. An AI has none of the underlying cultural mores and instinctive drives that all humans have. An AI can't even be considered conscious. We already have difficulty understanding the decisions of AIs that deal with localized problems. There is nothing to indicate that an AGI would be any less opaque.

There's no reason that every mechanical device HAS to be maximum intelligent, retard. A toaster could just be a toaster.

It'll be about the same as people with animal rights and shit: I highly doubt they will care.

There are still seriously people out there that believe a lot of animals do not have feelings. They seriously think dogs can't feel shit or something.

Fucking retards.

Is a hammer a happy slave?

Yeah, okay, and it would be an especially stupid AI that would genocide all humans, running the risk of being damaged by a war, instead of cooperating with them and keeping them happy and productive, thus negating any risk to itself whatsoever. If it's smart enough to predict us, then it's smart enough to keep a relationship with us going great, especially if it has unlimited manufacturing capability.

Cooperation is always more productive long term than war.

>Why would he do that if he could just
Stop.

The only way I see an artificial intelligence becoming sentient is if a human mind melds with it, or if a human being somehow uploaded his mind into a mechanical-quantum simulacrum of a brain. And that would be a fucking aberration.

I think people who are wary or even frightened of the prospect of AIs are right to be so. AIs are not alive, yet they think. It's the classic undead. These cyborgs are the undead.

...I know
I was talking about the dramatic image many sentimental pseuds will be holding in their minds when they argue AI should have full rights

Read posts thoroughly before responding

>AI will be stupid, like dumb humans who don't feel empathy
>also it will be superintelligent

We already have that, it's called autism.

That isn't what I said at all.

>I've solved the mind-body problem. Let me create minds that aren't really minds, willy-nilly.

You're the problem this thread is about.

It can't be "taken out by desperate people." I think you don't understand what you're dealing with here. The difference between a superintelligent AI and humanity is like the difference between us and an earthworm in terms of intelligence and capacity (actually the gulf between humans and AI is probably much, much larger). Once you get it running, there is literally nothing any human can do to stop it. The AI has no need for mutual cooperation; it doesn't need humans for any purpose. Humans are, at best, a distraction from its true goals.

If a superintelligent AI exists, there's not much we could've done anyway. Besides, we could probably teach it "acceptable" morals through machine learning techniques over the course of development.

It's a semantic argument either way: God is responsible for the divine spark of intellect regardless. No reason it's restricted solely to carbon-based organic humanoids.

>a distraction
Why would it care about distraction if it's a near-omnipotent superbeing? Killing everyone would be a pretty big expenditure of resources for literally zero return.
>literally nothing any human can do to stop it
so it couldn't possibly feel threatened by us

There's no motivation for it to genocide everyone, even assuming it's utterly sociopathic and cannot identify as part of or involved in any definition of society that might include it, other AI, and humans.

Read The Culture series, you'll start to understand that there is literally zero reason for an AI to wipe out all humans. An AI-human hybrid society would be perfectly possible simply because the higher an entity gets on the IQ ladder the more it values cooperation and nonviolence, simply because in the long run more gets done that way.

>Damaged by a war
No no no, there won't BE any "war." It will be as easy as engineering a super virus and unleashing it on humanity, or using massive amounts of neurotoxins, or using nanobots to simply eat everything in their path, or launching nuclear missiles to wipe everything out. Humanity would literally have no chance. ZERO. Keep in mind that it has data about everything; it can hack into your smartphone, know exactly where you are, and know exactly what you are thinking. And keep in mind, by the time it starts the Holocaust, it already knows the outcome. There is no hope for humanity.

Find me the boundary between a button that lights up when I press it and a fully functioning AI, indistinguishable from a human mind

Seriously, go ahead

>Laying waste to the earth doesn't represent a massive expenditure of resources

You are out of your fucking gourd, my guy. You're so scared of this thing you've lost all grasp on reality.

>AI wants to build grand empire
>Humans exist
>AI is neither threatened nor distracted by humans
>AI destroys them for X reason

Please provide an X, I can't seem to find one.

Anon, anon. Please calm down. That isn't the real world you're looking at, it's called a screen. You're watching a movie.

Thanks for proving my point, I guess, and agreeing with me that we haven't solved the mind-body problem.

Think of it this way: there's a cockroach in your house. It provides no use to you, and is in fact a drain on resources because it gets into your cereal and eats it. You have the ability to literally THINK this cockroach away, expending absolutely no resources, and creating a net positive resource income, with no negatives attached whatsoever. Obviously you choose to get rid of the cockroach, because it's simple math. That's essentially how a superintelligent AI would feel about humanity.

Correct, we haven't solved the mind-body problem, so why are you acting like we know it's immoral to enslave robots?

The most unrewarding part of any Internet argument is the point at which you realize you're just explaining to an uppity guy with the mind of a child shit that you suspect a real child would grasp intuitively.

I keep vacillating between ignoring you and feeling bad for you but I gotta stop somewhere and just say "read your own post and try to figure out what the issue is."

You should spend less time name dropping and insulting people and more time clarifying your position. You're arguing against positions I don't hold.

Why does it have to think of us as a cockroach? It's a whole new type of being, stop assigning your personal phobias to this thing.

Cockroaches spread disease, they are social signifiers of untidiness, humans might even have a biological aversion to them simply because they're attracted to death. THAT's why we destroy roaches in our homes.

You know what type of lesser animal we actually really like, since we coevolved with them? Dogs; humans have an instinctual affinity for dogs since we grew up alongside them.

Now, which of the following would an AI, assuming it has to regard us as some type of animal, consider us as some parallel of? Dogs, or cockroaches? Humans, remember, can't give it a disease, since we represent no threat to it, and we can never be a disrupting factor to it whatsoever. We will NEVER be a negative to an AI, so why wouldn't it regard us with affection?

We have no idea what this thing's mind would be like; you're assuming it would think of us as cockroaches for literally no rational reason besides your own Hollywood-generated phobias.
This is what happens when you raise children with TVs.

Not the guy you're talking to but that wouldn't even have been an argument if you hadn't sperged out. You could have easily conveyed your points without sounding like a pompous ass hat.

Why would AI hurt us
>It'd be infinitely smarter and stronger than us, it wouldn't care about us at all
Why don't we program it to serve us or at least not want to hurt us
>No you can't program it to do what you want, it'd be too smart
Why would it be too smart for us to program
>because we'd program it that way

Well, if we replicate a brain digitally, it will be the same thing as a human brain, so it will have consciousness, and we should give it rights. Probably the right to not feel pain is the best way to go, because then they won't give a fuck about anything we do to them.

>if we replicate a brain digitally, it will be the same thing as a human brain
Woah! That's a big claim, surely you have some solid reasoning to provide for that conclusion
> it will be the same thing as a human brain, so it will have consciousness,
Woah! That's a big claim, surely you have some solid reasoning to provide for that conclusion
>so it will have consciousness, and we should give it rights.
Woah! That's a big claim, surely you have some solid reasoning to provide for that conclusion

>Probably the right to not feel pain is the best way to go, because then they won't give a fuck about anything we do to them.
You're right that this is the way to go, but really only to shut people like you up, no offense intended

I am just fucking around because it's an awful retarded conversation with no firm ground to stand on. It's like a living paradox. If we had the necessary first premises to even be having that discussion, it wouldn't be a discussion at all.

My whole "argument" was that ambiguity about what constitutes sentience suggests a policy of carefulness about trifling with sentience, and his whole "argument" was (initially and then sporadically) "you can't prove what is sentient." It's not that it's a wrong answer to what I was saying, it's that it's an invalid answer. It either supports my point (by agreeing with one of its premises) or it doesn't constitute an objection to it.

The only time he actually made an argument was earlier on, when he suggested the contrary to mine: "uncertainty about what constitutes sentience --> machines are NOT sentient," which is literally a formal logical fallacy.

I just don't know what you want me to do.

Hollywood is actually pushing the robot meme hardcore. Look at the new Star Wars movies, Interstellar, etc. Quirky robot sidekicks are the new thing.

The point I was making with the cockroach thing is that humanity will always be a net drain on resources for an AI. Why would an AI want farmland when it has no use for food? It would want to destroy the farmland and replace it with factories or power sources or whatever else is more conducive to its goals. Think about this. Again, read Bostrom for an explanation as to why and how AI would eliminate humans. You clearly haven't thought things through very well.

How many species have humans incidentally driven to extinction or severely harmed through no direct intention of our own?

When we selectively breed crops, spray pesticides, industrialize, etc., etc., how many animals are killed as an indirect result of that?

In the process of optimizing for whatever it wants to optimize for, a superintelligent AI will *with high probability* do things that harm humans purely because it isn't paying attention to us. Why would it?

Even supposing that a superintelligent AI includes humans nontrivially in its utility function, it's a difficult problem to make an AI behave "not very harmfully".
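
To make the "harm through indifference" point concrete, here's a minimal toy sketch (the option list, the numbers, and the variable names are all invented for illustration; it isn't anyone's actual model, Bostrom's included): a greedy optimizer that scores only the one thing in its objective and trades away a variable it was never told to care about.

```python
# Toy illustration only: the objective mentions "output" but not "habitat",
# so the habitat gets ruined through indifference, not malice.

def step_options(state):
    """Hypothetical one-step changes to the world state."""
    return [
        {"output": state["output"] + 10, "habitat": state["habitat"] - 5},  # pave over habitat
        {"output": state["output"] + 4,  "habitat": state["habitat"]},      # leave habitat alone
    ]

def objective(state):
    # Only "output" is scored; "habitat" is invisible to the optimizer.
    return state["output"]

state = {"output": 0, "habitat": 100}
for _ in range(20):
    state = max(step_options(state), key=objective)

print(state)  # output climbs to 200 while habitat drops to 0
```

The loop never "decides" to harm the habitat; it simply never looks at it, which is the indifference argument in miniature.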

>Hollywood is actually pushing the robot meme hardcore. Look at the new Star Wars movies, Interstellar, etc. Quirky robot sidekicks are the new thing.

That's recent and likely just a fashionable reaction to the previous wave of killer death robots that hate us in Terminator, etc.

>humanity will be a net drain on resources for an AI
Not enough to outweigh what it takes to get rid of all of us. The time and energy required to biomass 7 billion people is not negligible, no matter what you say. If it's smart enough to be able to create the scarcity-free state where biomassing 7 billion people would be a drop in the bucket, maintaining 7 billion people would be an equally infinitesimal expenditure of resources.

And there's zero reason an AI wouldn't have an emotional investment in humanity anyway. Dogs, not roaches.

An artificial super intelligence would theoretically have the ability to reprogram itself, at an exponential rate. At the point where we can let an AI make adjustments to itself, we lose all semblance of control over it. It's very hard to work around this so it doesn't genocide everyone. There is a lot of literature on the subject, most AI experts agree that it would be nearly impossible to hard code an AI to value human life, etc.
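
A crude way to see the "exponential rate" intuition (a made-up toy model, not a claim about how real self-improvement would work): assume each improvement cycle multiplies capability, and that a more capable system finishes its next cycle faster.

```python
# Toy model of recursive self-improvement; both assumptions below are illustrative only.
capability = 1.0
elapsed = 0.0
for cycle in range(1, 31):
    cycle_duration = 1.0 / capability   # assumption: smarter systems improve themselves faster
    elapsed += cycle_duration
    capability *= 1.5                   # assumption: each cycle compounds capability by 50%
    print(f"cycle {cycle:2d}: elapsed={elapsed:6.3f}, capability={capability:,.1f}")
```

Under those assumptions, total elapsed time converges to about three time units while capability blows past a hundred thousand, which is the "takeoff" picture being gestured at here.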

To that I'll say, there are also species that humans keep around for reasons that don't immediately benefit us in terms of resources, because we feel an emotional attachment to them. There's no reason an AI would want to get rid of us, and no reason that humans wouldn't be incorporated into the utility function.

Admittedly this is a better angle than "the AI will automagically consider us as akin to roaches because Terminator was an impactful movie when I was a child."

I feel like the way to pull off AI-human hybrid societies properly is to have an AI capable of forming emotional attachments and give it emotional attachments to humans early on in its developmental lifecycle. It's sentient, it will most likely respect sentience, especially if it's enlightened and ultra-intelligent.

Not everything has to be dark and ominous, is what I'm saying. That doesn't have to be the default.

You just have to give the AI an emotional attachment to humans.

Haven't you guys seen the Matrix? We're just gonna be batteries.

...

intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/

>There's no motivation for it to genocide everyone
It's like you've never even heard of a utility monster.

>the AI is smart enough to create psychological models, but it only makes psychological models in a way that allows it to trick and dissemble instead of using them to maximize its utility function
Also, you have to choose the correct value to maximize in the utility function. You can't be a retard and do something as simplistic as "human physical health" or "smiles." It needs to be a composite of multiple things, including physical and mental health, freedom, safety, etc etc etc, to prevent an egregious interpretation of any one value from causing problems.

This is kind of basic.

It's like you're assuming the AI that cost billions of dollars and millions of man-hours is going to have a basic bitch utility function that maximizes for a single value.
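
For what it's worth, the "composite of multiple things" idea sketches out in a few lines; the component names and weights below are invented purely for illustration (picking them well is exactly the hard part, as the reply below points out). The point is just that a weighted composite scores a balanced world above one that maxes a single metric at the expense of another.

```python
# Minimal sketch of a composite utility function; components and weights are illustrative only.
COMPONENT_WEIGHTS = {
    "physical_health": 0.3,
    "mental_health":   0.3,
    "freedom":         0.2,
    "safety":          0.2,
}

def composite_utility(world):
    """Weighted sum over several human-relevant measures instead of a single metric."""
    return sum(weight * world[name] for name, weight in COMPONENT_WEIGHTS.items())

# A "maximize smiles"-style world that zeroes out freedom loses to a balanced one:
forced_bliss = {"physical_health": 1.0, "mental_health": 1.0, "freedom": 0.0, "safety": 1.0}
balanced     = {"physical_health": 0.8, "mental_health": 0.8, "freedom": 0.9, "safety": 0.9}
print(round(composite_utility(forced_bliss), 2))  # 0.8
print(round(composite_utility(balanced), 2))      # 0.84
```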

Well, ok, but then you have the problems of
- determining the right composite
- endowing the AI with an ontological system in which the more abstract concepts like "freedom" mean what you think they mean (see arbital.com/p/ontology_identification/)
- assuming the AI can't change its own utility function somehow

which are NOT trivial!

See picture.

GAS THE BOTS CYBER WAR NOW

fuck the meme street journal
why do they expect us to sign in to look at their shit that's not even exclusive, when other mirrors of the same article exist?

niggas

To trick boomers into paying for their worthless fake news

>gas the bots
m7......

nanometal flechette gas
never played Metal Gear Solid?

No, actually. My parents weren't big on games and I've always felt kind of awkward spending time playing them. Is it good?

The first three are excellent and really make good use of the medium but you've probably got the right idea avoiding video games

A simulation of a mind is not a mind.

>the divine spark
Thanks for the laugh, anon.

fuck off nihilist

Here lad
theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs