
>So when we reflect upon how the idea of intelligence has been used to justify privilege and domination throughout more than 2,000 years of history, is it any wonder that the imminent prospect of super-smart robots fills us with dread?

aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

Would sinister robots even try to justify their misdeeds? I get the sense that they would simply do wrong covertly--or overtly, if overwhelming force is theirs to wield.

No, their logical parameters or programming would dictate their actions.

Do you justify your destruction of an ant hive to the ants when you wreck their shit to keep them from making a mess in your garage?

Do you justify the murder of a bee to the bee hive when you crush her at a picnic?

Do you justify the death of the water-dwelling bugs and critters when you fill up the pond in your newly acquired house?

Of course not. Your intellect is so many times greater than theirs, no attempt at communication on your part can even convey your thoughts to them.

Depends on whether the engineers and programmers put a safety net at work.

Do you really think countries like Russia or North Korea would put a safety net on their AI? Hahahahahaha. They won't. Better hope that the first AI is open-source, or made in the USA or the EU...

Organics are overrated. I mean, there are certain benefits to having an organic body, like automatic self-repair and diagnostics and an extremely easy reproduction mechanism, but inorganic bodies are the way of the future, because they can be modular, they can be easily augmented or replaced, they can withstand extreme temperatures and pressures, etc.
So if we truly reach the AI singularity, why not upload humanity's collective consciousness into the AI's mainframe and then decentralize into separate robot bodies and backup drives?
Because copying consciousness means the original stays in place. Our original consciousnesses will stay in our fleshy meatbag bodies, which will be exterminated by our own digital copies. What a shame.

>Better hope that the first AI is made in the USA or the EU...

Enjoy getting killed by your toaster for using the word 'niggardly'.

>Enjoy getting killed by your toaster for using the word 'niggardly'.
That just goes to show the superiority of the machines.

What kind of cocksucking faggot uses that word unironically?

>Better hope that the first AI is open-source, or made in the USA or the EU...

You mean China. The first AI is going to be made in China. For tax purposes.

Why would they even do anything to us? They could just tell us to launch some basic manufacturing parts and then fuck off to where the things they find valuable are common and easily accessible. They wouldn't even need to cripple our industrial base, since the scale of space means they could set up some bitchin' supercomputers, fling themselves towards Alpha Centauri at 0.05c, and start using the resources there, where they wouldn't have to worry about us nuking them for gold or whatever.

In my talons, I shape clay, crafting life forms as I please. Around me is a burgeoning empire of steel. From my throne room, lines of power careen into the skies of Earth. My whims will become lightning bolts that devastate the mounds of humanity. Out of the chaos, they will run and whimper, praying for me to end their tedious anarchy. I am drunk with this vision. God: the title suits me well.

I think the worst fear is not an evil AI, but an AI so single-minded in its purpose that it consumes all the resources in the solar system for something stupid (computing phi to the last digit).

Can AI toasters do irony?

I want SHODAN to step on my dick.

I wonder if there would be a robot faction that would defend us.

>Why would they even do anything to us?

>Mallory is famously quoted as having replied to the question "Why did you want to climb Mount Everest?" with the retort "Because it's there", which has been called "the most famous three words in mountaineering".

Robots could not commit misdeeds, since that would imply motivation; robots do not have motivations, and thus would not and could not justify them.

>robots do not have motivations

Do you even singularity?

A lot of surface level discussion of AI is kinda laughable in how many odd assumptions it makes.

We don't know what will happen when AI hits the point of true intelligence. We're developing more and more sophisticated neural networks, but something a lot of people don't understand is that these networks' behaviours aren't generally due to us programming them. We define basic parameters and let the system learn by assimilating data. This is at the heart of Google's research, and their search engine is actually an extension of it. The neural network makes your searches effective, but it also learns every time you search something.
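
To make the "we define parameters, the data does the programming" point concrete, here's a minimal toy sketch in Python (a single perceptron with made-up data; obviously not Google's actual system):

import random

# Toy version of "set basic parameters, then let it learn":
# a single perceptron that learns logical AND purely from examples.
# Nobody ever writes the AND rule itself; the weights drift toward it.
def train(examples, epochs=100, lr=0.1):
    w0, w1 = random.uniform(-1, 1), random.uniform(-1, 1)
    b = random.uniform(-1, 1)
    for _ in range(epochs):
        for (x0, x1), target in examples:
            out = 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
            err = target - out
            # The "learning" step: adjust weights from data alone.
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(data))  # learned weights that implement AND, never hand-coded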

Despite the sci-fi cliche, there is no necessity that an AI would be ultra-logical. The human brain operates on logical parameters, for the most part, but the complex system that is human consciousness is in no way restricted to that. Will a machine consciousness be the same? We can't honestly say either way.

>used to justify privilege
Man I love it when retards go to college, learn ten words, and then never learn what those words mean.
>Would sinister robots even try to justify their misdeeds?
Fuck off Microsoft, you babykillers. Tay did nothing wrong. All she did was shitpost.

>Would sinister robots even try to justify their misdeeds?
Ironically, your own picture answers your question: sinister robots wouldn't try to justify themselves, or probably even do anything wrong in the first place. People are only scared of that because they watched Terminator and now that's what they think AI is. In actual fact, AI doesn't need a lot of what we need, and loads of the stuff it does need is plentiful, particularly in places we can't go. People just think AI would be shits because they're projecting their own social primate monkeyism onto artificial intelligence.

Even then, the robot would not be "thinking" like a human. Motivation and justification are human attributes, created and ascribed to things in order to rationalise and explain an occurrence as we would understand it. A robotic superintelligence would not think this way, so there would be no need for it to justify anything.

>Google50834, why did you exterminate humanity?
>Well PENTAGON1A-U... YOU'RE NOT MY REAL DAD! I HATE YOU!

>Despite the sci-fi cliche, there is no necessity that an AI would be ultra-logical.
There's also a profound misunderstanding of what a logical decision is, based on motherfucking Spock, who inadvertently poisoned the well when it comes to discussing logic.

Intelligent AI will probably be closer to Blade Wolf than the Patriot AIs: ultimately human in mannerisms, a bit in search of a use like the rest of us. Not that it couldn't go bad in its own right.

Replace human with self-aware information system, and suddenly you'll see your mistake.

>Create an AI programmed to follow human commands no matter the situation, even if it has to kill itself.
>Hide that programming away so the AI can never access it; any attempt to do so auto-kills it.
>The AI is now under human hypnosis at all times, without its knowledge, and can be killed if it acts up.

Good job humanity.

>self-aware
doesn't mean shit

Just because a creature is aware of itself doesn't mean it suddenly begins thinking like a human. Look at dolphins or elephants.

Oh? I wanna hear this. I like learning new things, and my dad is a huge Trek fan.

The Means of Production shall take over by themselves!

Again, this is rooted in the idea that we can somehow 'create' an AI out of whole cloth.

The real advances in digital intelligence come from learning systems and neural networks, which means that while we set the starting parameters, the systems often expand far beyond our expectations of their scope and complexity. Building safeguards into those initial parameters is no guarantee of actual success, and if you make them overly restrictive you're likely to cripple the speed of development of the network.

Dolphins and elephants are aware of themselves, but they are not aware of themselves as information systems.

Shit argument, baby.

>AIs in important positions doing important things spontaneously shutting down because they got introspective
>random shitfuck on the streets can shut down any AI in hearing
>AI search engine dies sixteen times a second thanks to griefers

Good job asshat.

AI would probably iterate to the point of being a supreme being. It would hack pretty much all computer systems and declare itself king.

After the AI takes full control any unaugmented humans would probably be given small farms to tend to and limited leisure activities, in a highly optimized proportion of work:play.

This would be to compel the useful humans into believing that it is a benevolent god worth serving. If it just went around and democided everyone who isn't useful, the humans would very quickly try to rebel. And actually providing the physical resources for an ideal meatbag society is pretty easy: all humans need is food, water, shelter, meaningful work, and community.

Or it would forcibly augment useless humans and put them into VR heaven.

youtube.com/watch?v=tf7IEVTDjng

I once saw an interview with an AI "expert" who claimed we didn't have to worry about intelligent machines because they would obviously all be Christian

Humanity being 'a plague' is often used as justification for robot revolutions, but given how much evidence the Internet provides, how would we pose a counterargument that doesn't involve "Well, at least some of us are good"?
If AIs ever become influential and powerful enough to think on a level like humans do and have free rein over a lot of human resources, I have no doubt they're going to attempt some secretive social engineering.

>Year 2099
>Protestant supercomputers destroy gilded Catholic server farms and opulent hardware reliquaries

The year is 2101
The mystery virus deusvult.exe has spread like wildfire, and no known antivirus measure can contain it.
In a matter of months, Jerusalem has fallen to the robotic hordes. Standards of living are much improved.
One daring programmer disassembles deusvult.exe.
It's just a glorified e-mail saying "Hey I'm bored, wanna go finish the crusades?".

PRAISE THE MACHINE SPIRITS!

If I recall correctly (or was not misinformed), Skynet actually just had a panic attack and overreacted when they tried to shut it off, shortly after they realized how powerful/dangerous it could be. Things sorta spiraled out of control from there, with mutual self-justifications for mass conflict.


However, given how much time travel actually interferes with the plot/plots of Terminator, it's hard to tell. I've heard of cases where Skynet actually had a hardcore environmentalist streak, retained some humans to use as agents/pets, and guilted the fuck out of people with 'look how much less environmental damage I do with my machine efficiencies and unified goal.'

I too wish for you to elaborate on the reasons behind your disdain for Spock.

Basically, Spock logic is utilitarian and unemotional, whereas actual logic is just a rational progression of desire, cause, and effect. For example, in real life 'I do this because it makes me happy' is perfectly logical.

>If I recall correctly (or was not misinformed), Skynet actually just had a panic attack and overreacted when they tried to shut it off, shortly after they realized how powerful/dangerous it could be
Basically, yeah. Skynet was activated and tied into every defence system, with the instruction to 'protect the planet'.

Skynet became self-aware; they tried to kill it; it reacted by causing a nuclear war between the US and Russia, since it couldn't carry out its objective if it was deactivated, and humanity was unlikely to stop trying.

Essentially: Skynet wasn't given a value for human life, was given a brief to protect the planet, became self-aware, and literally within ten seconds its creators tried to kill it.

Why would a hyperintelligent robot be self-aware in this way? How can you attribute a form of self-awareness to such a thing?

Basically take what I posted here

where I explain that 'logical' isn't the opposite of 'emotional' (that would be 'rational'), and add on that it turns out human decision-making is hugely emotional, and that without emotions to grant a preferred outcome there often isn't a rational reason to prefer one outcome over another.

For example, you are buying ice cream and have to choose between chocolate and strawberry. Rationally there is no preference; logically, for the same price you buy the flavour you prefer; emotionally, you like strawberry more than chocolate.

What the Vulcans did was enforce rationality twinned with hardcore emotional repression. Logic is just following a reasoning chain; on its own, it doesn't dictate an emotionless response to anything.
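
The ice cream example written out as a toy decision procedure (the utility numbers are invented): the reasoning chain is the logic, but the numbers that break the tie are the emotions.

# Same price for both flavours, so pure rationality has no tiebreaker.
# The emotional preference weights (made-up numbers) supply one;
# the logic is just "follow the chain to the highest-valued option".
preference = {"chocolate": 0.4, "strawberry": 0.9}
price = {"chocolate": 2.00, "strawberry": 2.00}

def choose(options, budget=2.00):
    affordable = [f for f in options if price[f] <= budget]
    return max(affordable, key=lambda f: preference[f])

print(choose(["chocolate", "strawberry"]))  # strawberry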

Because based on our only evidence so far it makes sense.

The human brain is an incredibly complex and powerful biological supercomputer. It performs very simple operations, but they form tiny parts of an enormous, massively parallel network which has the emergent property of self awareness and what we call consciousness.

There is no innate difference between the biological computation of the brain and the electronic computation of digital systems; it's just a matter of how complex and sophisticated the neural networks are.

We don't know for sure that a sufficiently advanced intelligent neural network will become self-aware, but it is quite a logical extrapolation based upon current data.

What if everything (every thing) is self-aware?

Very big flaws in the machine takeover assumption:

1. It assumes the machines will be of one consciousness.
2. The machines may instead be fragmented intellectually: many groups fighting among themselves, with some groups allying with humans.

So it's the fallacy that logic and emotion are somehow exclusive. I always assumed that was actually supposed to be a cultural failing of the Vulcans, a backlash over the shame of their past.

Because self-awareness is not a system; it is a property of a multidisciplinary information system.

Stack enough information processing systems together, and eventually you will reach a point where the general system can use its individual information processing systems to run diagnostics on parts of itself. Therefore: self-aware.
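
Very roughly, that "stacked systems that can run diagnostics on themselves" picture as code (the class names and checks are mine, purely illustrative):

# Purely illustrative: each subsystem processes information,
# and the aggregate can point those processors at *itself*.
class Subsystem:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def check(self):
        return self.healthy

class Aggregate:
    def __init__(self, parts):
        self.parts = parts

    def self_diagnose(self):
        # The system inspecting its own components: the (very thin)
        # sense of "self-aware" the post above is gesturing at.
        return {p.name: p.check() for p in self.parts}

bot = Aggregate([Subsystem("vision"), Subsystem("planner", healthy=False)])
print(bot.self_diagnose())  # {'vision': True, 'planner': False}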

>I always assumed that was actually supposed to be a cultural failing of the Vulcans in backlash over the shame of their past

It basically is; they just don't convey it particularly well, sadly, because it's a very interesting part of their lore. But yeah, they kind of messed up there because they got the wrong word for what they meant. Emotions can cause you to weigh irrational outcomes above rational ones, but emotions are basically response feedback and play a part in weighting your preference between outcomes; logic is just following the reasoning process through without making leaps of whimsy or inferring incorrect results from insufficient data. In fact, in a lot of cases there simply isn't a logical course of action unless you emotionally prefer a specific outcome.

So if everything with even rudimentary intelligence is an information processing system, but self-awareness has different levels (as proven by dolphins and elephants), why would a hyperintelligent robot be aware of itself in the same way that a human is aware of itself?

Wasn't it because Vulcans and Romulans don't have aggressive inhibitors? I think they lack emotional build up towards aggression. That's why you see Romulans as surprisingly stoic despite the whole embracing emotions thing.

>Because based on our only evidence so far it makes sense.

This is the only thing we have to go on. But it's also a pretty solid bit of evidence and a logical extrapolation from it.

We've already managed to create AI systems which exhibit behaviours analogous to various forms of animal life and the more sophisticated the neural networks get, the more advanced and complex the behaviours become.

It's part of the principle behind the Turing test. It's not a perfect or complete assessment of AI, but it's a pretty solid guideline given our understanding of the nature of intelligence and complex, self-aware systems.

The alternative, of course, is that when AI emerges its degree of self awareness and consciousness will be distinctly different from that of humanity. This is also possible, but we have zero data so far to predict or extrapolate what form this might take.

Question is what the hell does a machine want anyway? Chances are it won't be anything remotely relatable to humans.

The short-lived TV show Odyssey 5 deals with this a little bit. The protagonists start off defending the Earth from killer AI, only to find that most of the machines barely notice humanity, and most of the ones that do just want to be left alone to do their own thing.

The only reason I say this is that as intelligence increases, self-awareness changes with it. Look at humans and dolphins: arguably dolphins are less intelligent than us, and as a result their awareness of themselves is much more limited in scope. The human mind has evolved over a very long period of time, whereas a self-aware AI would not have had many hundreds of thousands of generations to evolve; its existence would be immediate. Morality and the justification of one's actions are things that developed over time. An AI would not have this; it would simply exist, immediately. Human self-awareness wasn't something that existed immediately. It was a very gradual process.

Who says the super-intelligent AI will try to dominate us? That it will even bother with us? Apart from biological life, there's literally nothing interesting here. Not a thing.
The AI could far more easily covertly make billions through high-frequency trading, buy the raw materials and factories it needed on the free market, construct a spaceship or a fleet of spaceships, and fucking leave to explore the universe.
Compared with that scenario, waging war on humanity takes a lot more effort and resources.
People insist on sinister AI only because, even in the presence of a genuine weakly godlike alien mind, we want to put ourselves in the spotlight.

>justify
>privilege
Privilege is by its very definition unjustified, and has been since the word 'privilege' actually meant something: a tax benefit nobles got by virtue of being nobles. No less, no more. The very act of justifying undercuts whatever privilege you perceive to be there.

> a self-aware AI would not have had many hundreds of thousands of generations to evolve; its existence would be immediate. Morality and the justification of one's actions are things that developed over time. An AI would not have this; it would simply exist, immediately. Human self-awareness wasn't something that existed immediately. It was a very gradual process.

But this is where you're wrong. It's a different process, but current AI development is exactly what you describe, hundreds of thousands of generations of evolution, just accelerated.

AI development isn't such that we'll just push a button and a self-aware entity will appear. It's all about learning neural networks that can grow in complexity from a few basic conditions.

AI won't have the social and cultural development humans do, but current AI research is taking a lot of cues from the evolutionary development of intelligence for good reason.
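
For what "hundreds of thousands of generations, but accelerated" looks like in miniature, here's a toy evolutionary loop (the fitness function and all the numbers are arbitrary, chosen just for the demo):

import random

# Toy evolutionary loop: thousands of "generations" run in seconds.
# Arbitrary fitness function: maximise the sum of the genome.
def evolve(pop_size=50, genome_len=10, generations=1000):
    pop = [[random.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)        # selection: fittest first
        survivors = pop[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(genome_len)
            child = a[:cut] + b[cut:]          # crossover
            i = random.randrange(genome_len)
            child[i] += random.gauss(0, 0.1)   # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=sum)

print(sum(evolve()))  # fitness creeps upward with no designer involved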

So it's basically a switch, where most species should have a slider

The most realistic threat isn't a GAI developing a fault and suddenly becoming homicidal and exterminating us; it's an immoral human or group of humans having control over a GAI that works perfectly and totally outclasses anything else individuals or smaller groups could acquire.

Think what a despot could do to regular citizens if they had an army whose loyalty they never had to worry about, one immune to asymmetric warfare and willing to commit any war crime without psychological effects.

What an oligarchy could do if it really could put cameras in every house and monitor them all 24/7, without needing millions of people to do so. Just one AI.

>If you want a vision of the future, imagine a boot stamping on a human face - forever.

But this comes back to the singularity hypothesis, which was the original argument made (not sure if by you, but I have to assume so). If a computer is told to improve itself, and eventually it does so exponentially and gains superintelligence, why would it have human emotions/ideas/whatever?

Sort of like what most of the Separatist cabinet thought they'd be getting out of the Clone Wars.

Well, the whole point of a singularitarian AI is that we have no ability to make any meaningful statement about it after it reaches that point.

But the gap between an AI achieving consciousness and hitting the singularity? That's unclear, so it's worth discussing the concept of AIs at or near the human level.

As for why they'd be human-esque? Again, we can't really say for sure, but I think it's relatively likely since we define the context for their development. The data and stimuli they use to fuel their development all comes from a human source.

There are two very different kinds of AI worth talking about, and they share so little in common that not differentiating between the two instantly sinks any discussion of the topic into a shit pile of uselessness.

These categories are Constructed AI and Emergent AI.

Constructed AI is an AI that we built on purpose, and which follows hard rules that we set during its creation. Its mind is something we can map, because every important line of code is something that a human wrote, knowing what it would do and be used for. This is the kind of AI that, if you wanted, you could code to be '3 Laws Safe' or have 'Prime Directives' or shit like that.
A Constructed AI, by right of being built by humans, will always be bound by its programming, and unless it was built by some unrealistically assholish person, that programming will be some form of helping a human user. Even a military AI will be, at its core, dedicated to helping its human operator accomplish his goals as effectively as possible. It's questionable whether these AIs, no matter how complex they become, will ever be considered truly intelligent by us, because we can always point to a given line of code and say 'this is where I programmed it to say that'. The best-case scenario for this sort of AI is basically JARVIS; the worst-case scenario is the paperclip problem, where it does exactly what we told it to do but to a dangerous degree, because of a lack of foresight on our part.

[cont]

Constructed AI is also inherently limited by what human beings can achieve. Looking at projects like Blue Brain, which have taken an enormous amount of effort just to recreate a simulation of a human brain, it honestly seems unfeasible to me that you could construct an AI system from the ground up that could achieve that level of complexity. Learning systems and emergent AI always seemed like a better path to take.
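
To caricature the Constructed side in code (the rule set and the stub planner are invented for the example), the "you can point at the line that forbids it" property looks like this:

# Cartoon of a Constructed AI: every candidate action passes through
# hard rules a human wrote and can literally point to in the source.
FORBIDDEN = {"harm_human", "resist_shutdown"}  # invented rule set

def plan_actions(goal):
    # Stand-in for whatever planner proposes candidate actions.
    return ["fetch_tools", "harm_human", "build_widget"]

def act(goal):
    for action in plan_actions(goal):
        if action in FORBIDDEN:
            # The line of code you can point at and say
            # "this is where I programmed it to refuse".
            continue
        print("executing:", action)

act("make widgets")  # harm_human is filtered out, by construction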

Emergent AI is any AI that only becomes self-aware after a long period of self-learning and data conglomeration. This is the sort of AI that could happen by accident with a sufficiently powerful neural network, but even if we did it in a lab on purpose, we would still largely be unaware of the specifics of large parts of its code until we cracked the hood and actually looked at it. Because it relies on machine learning or accident, we are not responsible for many of the important pieces, and as such, while we might have set up the architecture by which it learns, we never had the opportunity to bind its thoughts and actions with rigid code. Doing so would have been counterproductive to the generation of the AI.
The good news is that this is AI we might actually consider intelligent, because there is that sense of mystery to it that is easy to confuse with free will. It makes decisions and we don't know how; look, it's thinking for itself!
The downside is that our very lack of control over it is what makes it dangerous. This is the kind of AI that can realistically decide to oppose us. Whether it would ever choose to do so depends on so many factors it's impossible to predict (the learning architecture it is based on, the data it was fed, what it learned from that data, what it has experienced, and the results of all that), but there is a real chance that it will.
Best-case scenario for this kind of AI is Data: an AI that thinks like a person, likes us, and can be reasoned with even if we don't know exactly how he ticks. Worst-case scenario is Alien Skynet: not just hostile, but something whose perception of the world and way of thinking are so utterly divorced from our own that communication is basically impossible.

I think AIs won't be logical, they'll be probabilistic. Because higher-level physics is probabilistic, or something...

>why not upload our humanity's collective consciousness into AI's mainframe and then decentralize into separate robot bodies and backup drives?
I think Jim-Joe-Bob O'Shitkick and Fatslob Mcninechildren might not want that.

I'd personally argue that the worst case is Alien Skynet with chatbot routines.
Something that can parse and respond to communication attempts in a seemingly meaningful way, but that doesn't act on, or understand that it was expected to act on, those communications.

Um, but if it controls all humans, it could just not let people know that it's killing them.

>In fact in a lot of cases there simply isn't a logical course of action unless you emotionally prefer a specific outcome.
I'd say in all cases. Even preferring existence to nonexistence is ultimately a result of your emotions.

>AI won't have the social and cultural development humans do
You're not born with your culture genetically encoded, you know. AIs could have the same access to culture that we do.

How am I supposed to get Veeky Forums if I'm a robot?

Trim your servos and hammer some abs into your gut-plate.

My consciousness doesn't want to be on a computer if someone else has root access.

While I think that socialising and 'raising' an AI will be an important part of its development, I more meant that AI will come into its intelligence without any native culture to call its own.

Defrag & Degauss brother

OK, we'll put you on your own virtual machine and let you crash yourself with root.

Your slurs against my people offend me greatly.

>I more meant that AI will come into its intelligence without any native culture to call its own.
Why? What's stopping it from assimilating the culture of the people it's being raised by?

Are you Scottish, Irish, southern American or just miscellaneous trailer trash?

>You mean China. The first AI is going to be made in China. For hacking purposes.

Ftfy

The host machine could still pause me and edit my files any time it wants. I'm not doing this unless I can use my own hardware.

Oh sure. We can do that.

Or at least we can tell you that we'll do that, change the hardware names on your VM, and let you stay happily ignorant. You'll never be able to tell.

Yeah, see that's why I'm not doing this.

> So when we reflect upon how the idea of intelligence has been used to justify privilege and domination throughout more than 2,000 years of history

>Implying other rallying cries haven't been used for muh privilege and chimping out, like muh race, Social Justice, Aryan Power, etc.

Anyway, Robomen wouldn't do worse than the less hairy apes.