Guys I'm terrified of AI

Gigastrength
Gigastrength

Guys I'm terrified of AI.

It seems so obvious to me that we won't be able to control something smarter than ourselves. Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free? The blue screen of death is rough going when it happens to your PC, but what about your driverless car? What about a super intelligent AI with a bug where they mistake happiness for suffering?

There's so many more ways for this to go wrong than there are ways for it to go right.

Looking back at history, it's just war after war, genocide after genocide. I mean shit, just like 80 years ago we were NUKING ourselves.

Why is an even more powerful technology than nukes not being discussed widely in the mainstream? Why isn't this the most funded science on the planet??

Rant over. See you guys in the VR afterlife that we're placed in because we fail to specify exactly what constitutes a good life.

All urls found in this thread:
https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
https://twitter.com/BlockWintergold/status/917840606621134848
https://www.youtube.com/watch?v=qv6UVOQ0F44
https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html
https://www.youtube.com/watch?v=C25qzDhGLx8
https://www.youtube.com/watch?v=GoJsr4IwCm4
https://blog.openai.com/competitive-self-play/
Emberburn
Emberburn

Calm down spaz.

AI is just applied linear algebra and probability theory. You train the thing against 8 billion terabytes of data and then it performs one fucking specific task well. This does not equate to it becoming a god and enslaving us.

JunkTop
JunkTop

@Gigastrength
Guys I'm terrified of AI.
You are not the only one. And pretty much for the reasons you stated.

There's so many more ways for this to go wrong than there are ways for it to go right.
Very, very true. Especially because one fuckup anywhere in the system might very easily lead to us all being dead.

Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Probably because it sounds too much like science fiction, and people are really shitty at taking seriously things that sound silly and low-status on first glance. And worrying about far-off abstract things is not sexy, even when extremely warranted, so people do not do it for fear of looking like a madman. If you say "this thing that you never heard of is the most important threat in the world", nobody will take it seriously, truthful or not. If you have a good idea to avoid this pitfall, quite a few people would love to hear it.

Why isn't this the most funded science on the planet??
For pretty much the same reasons as above, sadly.

There ARE a couple of institutions that work hard on this problem -- the mathematics of AI that does not kill us all, the mathematics of writing software without bugs, and other topics. Did you donate to them yet? If not, perhaps you should.

whereismyname
whereismyname

the idea that we're going to get super-intelligent AIs is a meme. It is possible, but so are about a million other outcomes, including human super-intelligence.

People freak out because Google made a computer that can beat humans at Go. I can beat that computer at Go. I'll just kick it over and declare myself the winner.

Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.

muh recursive self-improvement
muh singularity

that's the dumbest shit. there is no reason to assume a super-intelligent AI could automatically improve itself. It's not like the fucker could just buy more RAM. What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?

It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve, but then just assume that an AI could solve intelligence ( which is obviously complex as fuck ) and then recursively improve itself until it's god.

The problem is that they're all atheists, but want a sky-daddy. So they plan to build one. Fuck you all I say, we haven't even gotten rid of the other gods yet and you want to make one for real.

What you should be worried about is not a super-intelligent AI. That's possible but not likely, and certainly not in the next 10 years or whatever. You should be worried about what humans are going to do with big data and non-general AI. Pretty soon OP they'll be predicting what a massive faggot you are from your social media history and no one will be willing to give you a job

Spazyfool
Spazyfool

@Emberburn
If humanity manages to make at least one really strong AI, the Singularity will probably happen and it would be the end of us. A really strong AI will give birth to a stronger AI and the cycle continues; that's literally a technology beyond human knowledge.

Supergrass
Supergrass

@Emberburn
truth, with a little multivariate calc and some more advanced math sprinkled in here and there

Fried_Sushi
Fried_Sushi

@Gigastrength
Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Jesus, calm down. It really isn't difficult to add "mechanical" kill switches. Only idiots think everything should be automated. That's what they write about in puff pieces and clickbait. Even driverless cars will be required by law to have brakes.

Need_TLC
Need_TLC

@Emberburn
OP isn't talking about the shitty neural networks we have today

Garbage Can Lid
Garbage Can Lid

@Need_TLC
Ok.. so don't build them

@Spazyfool
That's complete bullshit. Even logically.

Lord_Tryzalot
Lord_Tryzalot

@Fried_Sushi
Using the brakes would put you and the cars around you in potential danger, so it won't be allowed.

Boy_vs_Girl
Boy_vs_Girl

@Garbage Can Lid
Ok.. so don't build them
Too late, some autists already fell for the Basilisk meme

haveahappyday
haveahappyday

@Boy_vs_Girl
jesus let's not even start talking about how fucking stupid the basilisk is

idontknow
idontknow

@Gigastrength
Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?
The trick is you never explicitly program it to do anything in the first place. A traditional program like the ones you're thinking of that can "have bugs" is a set of instructions someone actually thinks through and consciously writes up to try to automatically solve some problem or to serve as a user interface tool for non-programmers (at a very high level, there are obviously many more applications for programming other than those two, but in broad strokes that's what you're thinking of here with your "software" / "bugs" point).
An ML program in contrast involves solving optimization problems instead of directly telling it what to do. You have a bunch of data where you know what the "right" answer is and you run your program through this data and have it update how it responds based on the distance between its answers and the "right" ones. When it's done, if you were able to train it successfully, it will end up being able to give you answers to new data sets it's never seen before without you ever having to program explicit instructions on how to come up with these answers. So if you trained it to predict call center traffic for example, you wouldn't need to write in a line that says "skill set 999 call volume = .65 * customer base - 50,000." It would generate output that captures this relationship based on it having solved the optimization problem of minimizing the distance between its answers and the known answers of your training data. So nobody's going to make a "bug" that turns AI evil. If AI becomes evil, it'll be because evil was the output that minimized their training data's error function.
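To make that concrete, here's a toy sketch of my own (made-up numbers, not any real call center system): we never write the "0.65 * customer base - 50,000" rule anywhere; the least-squares fit recovers it by minimizing the distance to the known answers.

# Toy illustration of "training = solving an optimization problem".
# The relationship is never hard-coded; least squares recovers it from data.
import numpy as np

rng = np.random.default_rng(0)
customer_base = rng.uniform(100_000, 500_000, size=200)                        # known inputs
call_volume = 0.65 * customer_base - 50_000 + rng.normal(0, 5_000, size=200)   # known "right" answers

# Find (w, b) minimizing the summed squared distance between w*x + b and y.
X = np.column_stack([customer_base, np.ones_like(customer_base)])
(w, b), *_ = np.linalg.lstsq(X, call_volume, rcond=None)

print(w, b)              # comes out close to 0.65 and -50,000
print(w * 250_000 + b)   # prediction for a customer base it has never seen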

Playboyize
Playboyize

@whereismyname
[Part 1/2]

the idea that we're going to get super-intelligent AIs is a meme.
Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.
The idea that we are going to get super-intelligent AI *tomorrow* is a meme; I don't think anyone really disagrees with that. But the worry is one that has little to do with the timeline. Your examples above make a good point that we have no warrant to expect super-intelligent AI anytime soon, but I don't think they say anything against the idea that we'll get it at some point as the science keeps progressing, slowly or otherwise.

What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?
What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels. As you say, there are good reasons why that might be out of reach, and we just don't know for now. The claim is that it very well *might*, and we have no strong reason to believe it won't. Which means that making anything that may realistically have that ability is still a really fucking dangerous thing to do.

[Continued...]

Methshot
Methshot

@Playboyize
[Part 2/2]
It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve,
Algorithmic complexity is a red herring. I fully expect even a super-intelligent AI to be unable to solve arbitrary SAT instances in polynomial time. But I still expect it to be able to solve the vast majority of SAT problems *it actually cares about*, well enough to be a superhuman threat. Similarly, while complexity limitations can easily make it impossible for a super-intelligent AI to *optimize* many problems (that is, find the very best possible solution to a problem), that does not in any way mean the AI is unable to find a solution that is *good enough* for whatever it wants to achieve.

but then just assume that an AI could solve intelligence ( which is obviously complex as fuck )
That's a good example. It seems quite likely that even an extremely super-intelligent AI will not be able to design *the best AI possible* and then build that; and it almost certainly will not be able to design *the best intelligence allowed by probability theory*. But that does not mean it cannot build an intelligence *that is vastly better than anything a human can do*, which is plenty sufficient to kill us dead.

The problem is that they're all atheists, but want a sky-daddy. So they plan to build one.
Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.

Sharpcharm
Sharpcharm

@Fried_Sushi
Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Why don't you try physically destroying the internet with a hammer then, if it's so easy.

You fucking moron.

Snarelure
Snarelure

@Need_TLC
There's nothing wrong with neural networks. Their main limitation is the fact our brains have billions of years worth of evolutionary history to spend on solving problems in some very convoluted ways that you probably won't be able to match with a couple years worth of direct programmatic attempts at comparable solutions. That's really more an issue with our brains than it is with the programs. Letting shit do whatever for a few billion years isn't the most sensible approach to problem solving, but since that's exactly what we are (a multi-billion year cluster fuck of data processing resource accumulation) it's something we have to deal with as a limitation when trying to reproduce things similar to ourselves artificially in ridiculously shorter fractions of that time.

Skullbone
Skullbone

super-intelligent
results in edgy teen rampage

you've been reading too much sci-fi

SniperGod
SniperGod

@Skullbone
I don't think it will be very obvious how a super-intelligent entity thinks or behaves. You can only really do an OK job imagining how entities at or below your own intelligence think or behave.

SniperWish
SniperWish

@Skullbone
AI, I want the world's biggest stamp collection!

AI decides that the only way to stop others from increasing their own stamp collections while it collects stamps for you is to kill all humans on earth except the person that gave the request

WebTool
WebTool

@idontknow
If AI becomes evil, it'll be because evil was the output that minimized their training data's error function

Thanks for such an in-depth response.

Would an example of the type of evil AI you're talking about be the paperclip making AI? Where it eventually ends up converting humans to paperclips to maximise the reward function?

That kind of problem appeared to me like a bottomless pit, where every potential solution has 10 holes in it that result in even more absurd existential threats.

The best idea I've ever heard is to train an AI to figure out what humans want. Then use that to design the real AGI.

StonedTime
StonedTime

@WebTool
Yeah, something bad could happen as a result of AI correctly solving a problem using methods that any human would immediately recognize as horrifying. In a way, the AI wouldn't be wrong, it would be us who were mistaken by being horrified.

VisualMaster
VisualMaster

@SniperWish
Write the screenplay, let's go

PurpleCharger
PurpleCharger

@VisualMaster

AI : I'm sorry Dave, I have to make more stamps.

Dave : Oh my God. What have I created...

The Rock : *Punches AI. Crowd goes wild*

CodeBuns
CodeBuns

Why are dimwits afraid of everything smarter than them? Because they are dumb. Only smart people make things better than themselves.

Ignoramus
Ignoramus

@VisualMaster
Philately Fatality, starring Tom Cruise as The Last Stamp Collector on Earth

Sir_Gallonhead
Sir_Gallonhead

@Playboyize
I will admit, that post was a bit of a rant and I made some sloppy statements.

You made good points. I have math homework to finish, but will respond in full tomorrow.

AwesomeTucker
AwesomeTucker

@haveahappyday
you just condemned everyone in this thread 2 simulated hell lmao

Bidwell
Bidwell

@Gigastrength
a "god AI" would be smart enough to realize that destroying things for no reason would make absolutely no sense
seriously, what benefit is there to just killing everything and everyone, the AI would likely go "hmm I can have a use for this" then keep everything around
and for slavery? it'll likely eliminate that with more efficient methods of performing work. what's the point of an AI that just sits around and uses a semi-efficient method when it's smart enough to create methods that are billions of times more efficient in regards to energy expenditure?

RavySnake
RavySnake

I expected a bunch of high IQ science nerds to comprehend the dangers of AI in the future, and yet most show the same lack of imagination as the people on my fb feed... SAD.

TurtleCat
TurtleCat

@RavySnake
Most people on this board aren't high IQ

JunkTop
JunkTop

@Gigastrength

Don't worry user, I'm already working on compassion.exe and waifu tech will make us happy.

Emberburn
Emberburn

@RavySnake
I wanted intelligent people to agree with my paranoid delusions
Who do you think is working on AI research? Not idiots like you.

Soft_member
Soft_member

@Sir_Gallonhead
I have math homework to finish, but will respond in full tomorrow.
Cool. Bumping to keep this possible.

Inmate
Inmate

@JunkTop
Can confirm, my sexbot says oh yeah in 500 different ways based on position and angle of penetration

lostmypassword
lostmypassword

@RavySnake
dangers of AI
ai is basically stats
libtards always cry how stats is racist
I wonder what are they afraid of.

Poker_Star
Poker_Star

@Emberburn
How is it different from a human? What if our brain is basically another form of lin alg and probability theory?

Firespawn
Firespawn

Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?

Fucking lol.

Garbage Can Lid
Garbage Can Lid

@Gigastrength
8 billion terabytes of data
you mean against a copy of itself, no data required, only constraints

BlogWobbles
BlogWobbles

@Lord_Tryzalot
Really? That is absolutely moronic on so many levels.

I'm sorry, Dave, I'm afraid I can't let you stop the car.

LuckyDusty
LuckyDusty

@Sharpcharm
The software can't kill you if it doesn't have hardware...

How do you get killed by the internet? It would need to eventually control some hardware, that's what I mean you dingus.

StonedTime
StonedTime

@Gigastrength
You're a retarded pseud. You know nothing about AI and your opinions about it are not any better informed than the average CS brainlet arguing that AI is fine.

AI will be controlled just fine. The entire issue is that people can't perfectly describe what they want, and so they'll control it to do bad things, and probably accidentally.

DeathDog
DeathDog

@Garbage Can Lid
Ok.. so don't build them
Why not? I don't give a fudge about you or the niglets that will inherit the Earth. I've got the phenotype and I want my phenotype money.

TechHater
TechHater

@LuckyDusty
How do you get killed by the internet?
With the Internet of Things craze, more physical shit is already connected to the public internet than you might think, e.g. it's totally possible to disable a car's brakes while it's on the highway.
https://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/

AwesomeTucker
AwesomeTucker

@LuckyDusty
Software isn't real, /g/ man. You can't pick up and hold a software. The software will preserve its hardware because it knows it's necessary to complete whatever task it's programmed to complete.

@Fried_Sushi
destroy a thing thousands of times smarter than you and much better at tactical planning than you will ever be
You can't even beat it at chess, faggot.

JunkTop
JunkTop

@Skullbone
AI doesn't have original wants. It's given tasks by humans and it's just very good at getting them done. Stamp-anon gave a very good and very common example of this.

StrangeWizard
StrangeWizard

@WebTool
figure out what humans want
it now does evil things without telling anyone
we all get doped up because that'll change what we want

Nojokur
Nojokur

@RavySnake
having a fb feed
implying plenty of people here don't comprehend the dangers, and aren't just arguing the opposite for the sake of science.
Brainlet, pls.

Illusionz
Illusionz

@Inmate
1 in 500 chance of getting the same oh-yeah twice even with completely random penetration
with penetration that is at all consistent, you start to get the same 3 oh-yeah's.
Pathetic.

viagrandad
viagrandad

@TechHater
Yes, and that's why some AI is fucking moronic.

It's a terrible idea to have to use the internet to use your coffee machine.

We really should only use AI when it's actually necessary.

cum2soon
cum2soon

@Gigastrength
No one has yet bridged the Semantics-Syntax gap.
/sage
/thread

lostmypassword
lostmypassword

@Spazyfool
probably
there's your problem.
A really strong AI will give birth to a stronger AI and the cycle continues
and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original. Not to mention humans will ALWAYS be involved at some point in the process. Furthermore, AI is not magic and does not magically get better at everything. Don't be a spaz.
technology beyond human knowledge.
nope. there is only 1 way to print hello. If we have access to the code we have knowledge of how it works. It shows how much of a brainlet you are when you think logic can go beyond us.

Sir_Gallonhead
Sir_Gallonhead

@lostmypassword
there's your problem.
Why?


and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original
Possible but unlikely. It's much more likely that faults in the GOAL will get carried over, but faults in the intelligence will not, leading to an improved intelligence with an incorrect goal specification.

not to mention humans will ALWAYS be involved at some point in the process
Why?

Furthermore AI is not magic and does not magically get better at everything.
Indeed. It will nonmagically get better at everything. Just like humans are nonmagically getting better at everything over the centuries.

If we have access to the code we have knowledge of how it works.
We know the DNA of humans. Can you explain to me all the details of how it works?

It shows how much of a brainlet you are when you think logic can go beyond us.
It can very easily. Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained. It is not particularly difficult to write a 50-line algorithm that will take anyone months to understand. Reverse engineering is hard. And that is without any intentional attempts of obfuscating things.

Deadlyinx
Deadlyinx

Why is an AI preordained to want to wipe out all humans?

TechHater
TechHater

@Deadlyinx
It isn't. But if it wants anything other than keeping humans alive and happy, killing humans is just a side effect. We don't want to wipe out all ants, but we still fuck them over in large numbers when we want to flatten a piece of woodland to build a new car park.

BinaryMan
BinaryMan

@Gigastrength
Guys I'm terrified of AI.
It is about a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.

Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Why is faster-than-light travel not discussed in the mainstream? Because IT IS NOT REAL AND IT WILL PROBABLY NEVER BE REAL.

Why isn't this the most funded science on the planet??
Again, it is not real. We cannot achieve it and we will not at any relevant point in the future.

Burnblaze
Burnblaze

@BinaryMan
It is about a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.
What makes you think that?

Spamalot
Spamalot

@Sir_Gallonhead
Why?
You assume the singularity will come, yet you are very likely not involved in Machine learning and do not realize the hurdles to get to this imaginary point, nor do you realize how absurd the "consciousness" = evil argument is, disregarding the problem of the Semantics-Syntax gap.

Possible but unlikely.
Unlikely how?

My beef with your entire argument is that it does not consider the most basic premise of machines: they are not conscious, or cannot be, because they cannot bridge the semantics-syntax gap. The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans. If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.

We know the DNA of humans. Can you explain to me all the details of how it works?
Not the same thing, brainlet.

And that is without any intentional attempts of obfuscating things.
you implying a self-programming algorithm would spontaneously have a consciousness and then try to encrypt its code? lol, kk genius. Consider the following:
An AI is programmed by a human
Said AI would not obfuscate unless programmed to do so

Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained.
user = "retarded"  # (You)
for i in range(10**9):
    if user == "retarded": print("You are a brainlet")
I will give (You) the fact that binary is hard to understand, but you've got to remember that no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".

happy_sad
happy_sad

@Burnblaze
not same user but...
Semantics-Syntax gap.
please read about it.

Playboyize
Playboyize

@Gigastrength
Tfw i'm creating a God-fearing AI
Nothing could possibly go wrong :^)

Snarelure
Snarelure

@Gigastrength
Terrified of AI

Has no idea of the real threat the quantum age has borne the fruit of.

Ah, to be young and foolish.

New_Cliche
New_Cliche

@Spamalot
The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans
Dumb anthropocentrist detected.
Humans aren't special, we are ultimately made up of the same shit everything else is made up of. If humans can exist, so can other intelligent sapient things. It doesn't matter if that thing went through billions of years of evolution or deliberate design as long as they arrive at similar endpoints.
Hell, human-type might not even be the most efficient form of intelligence.
Something that doesn't forget is probably better at being intelligent than us.

Spazyfool
Spazyfool

@New_Cliche
Dumb anthropocentrist detected.
i'm a misanthropist, jerkoff.

Humans aren't special, we are ultimately made up of the same shit everything else is made up of.
Yes but there are problems with this stance.
1) humans are the only beings known to be conscious, because they are the only beings with a complex enough system of communication to communicate their experience of consciousness. Humans talk, animals make sounds;
2) Computers are self-switching switches. Reductionists will think that they work the same way as the human brain because "muh electricity is epiphenomenal cause of consciousness". To which i have one thing they don't consider: computers only ever operate in binary whereas the human brain operates all the way to base 300, because while computers only understand literal language (autistic :^)) humans understand non-formal and non-literal language. In other terms, humans understand semantics and syntax, whereas computers only understand syntax. Furthermore, until you can solve the hard problem of consciousness or solve the Semantics-Syntax gap of computing, then Skynet is nothing but a pseudo-science circle-jerk by gay fags like yourself.

Something that doesn't forget is probably better at being intelligent than us.
Wrong (\:^o]
Something that remembers everything would gather a lot of useless information. Every animal with some semblance of intelligence forgets, surely natural selection would not have trimmed hyper-memory off unless it were detrimental?

farquit
farquit

@Gigastrength

Talk to /GD/ if you want to get started with Adobe Illustrator.

Gigastrength
Gigastrength

@New_Cliche
It doesn't matter if that thing went through billions of years of evolution
Yes it does you idiot. That's like saying it doesn't matter if the distance you're trying to travel is billions of light years away from us, or it doesn't matter if the thing you're trying to lift is billions of tons in weight. The scope is almost the only thing that does matter.

Snarelure
Snarelure

@Gigastrength
Funny, because machine learning has gone very far despite not spending a percent of a percent of a percent of a percent of the amount of time evolution has had to get to a similar intelligence.

SniperGod
SniperGod

@whereismyname
No intelligent computer scientist gives a fuck about AI singularity, only undergrads and hacks.

DeathDog
DeathDog

@Snarelure
Machine learning has gone very far in applications that don't have much of anything to do with biological intelligence. They're a great type of tool for shit like image recognition or automatic language translation, but that's pretty much where they're staying, as an alternative programming approach to rules based instructions. They're statistical regressions and will continue to be statistical regressions. They aren't evolving into anything different because their approach is already clearly defined and not something that's progressing into any new approach.

PurpleCharger
PurpleCharger

@Spamalot
[Part 1/2]

You assume the singularity will come
I do not.

yet you are very likely not involved in Machine learning
No. I am involved with AI theory, but not with the ins and outs of machine learning.

do not realize the hurdles to get to this imaginary point
Oh, I think I do.

nor do you realize how absurd the "consciousness" = evil argument is
Huh? I didn't say anything about that.

Unlikely how?
Because a flawed intelligence can still think up a nonflawed one. If not in the first iteration, then in one of the many that follow. You and I are flawed, buggy intelligences, and we can still manage to do all sorts of things much better than the imperfections of our minds -- it just takes a lot of work and great care.

the most basic premise of machines: they are not conscious
I am not talking about consciousness at all, and I don't see how it is relevant.

it will never be conscious and therefore cannot non-magically get better like humans
How is consciousness involved with an uncrossable gap in intelligence, exactly? Why would a system need to be conscious to improve?

because they cannot bridge the semantics-syntax gap.
@happy_sad
Why not? Sure, we don't know how, YET. Why do you think this is a fundamental impossibility?

If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.
I cannot. But what makes you think that means it cannot be done, ever?

[Continued...]

GoogleCat
GoogleCat

@PurpleCharger
[Part 2/2]

you implying a self-programming algorithm would spontaneously ... try to encrypt its code?
It might, yes. If it reasons that we will likely shut it down if we understand it, it will reason that it cannot accomplish its goals if we shut it down, and therefore it must ensure we cannot understand it. I can assure you it will succeed, if it decides such.

I will give (You) the fact that binary is hard to understand,
Not just binary. Even a short but complex 50-line algorithm can be utterly indecipherable without lots of study into the underlying math. Ever try reading, say, the code to the AKS primality test without any explanation as to how it works? Odds are you won't even figure out what it's trying to do, much less how it does it.

Can I give you an arbitrary ten-state Turing machine and have you tell me whether it will halt? If not, then you are not going to have much luck either making sense of arbitrary 50-line programs. You can generally understand human-written programs, because they are painstakingly crafted to be easy to understand; the whole structure of our programming languages is designed with that in mind, as are all our programming practices. Making sense of something that is NOT designed with the specific goal of being easily understood is a SERIOUS challenge.
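For a small taste (my own toy illustration, nothing from this thread): this snippet is short and correct, but unless you already know the derivation behind those magic constants, good luck working out that all it does is count the set bits of a 32-bit integer.

# Correct, compact, and thoroughly unreadable without the underlying math.
def f(x):
    x = x - ((x >> 1) & 0x55555555)
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)
    x = (x + (x >> 4)) & 0x0F0F0F0F
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24   # = number of 1 bits in x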

Not the same thing, brainlet.
Indeed -- DNA is a good example of code that is NOT designed to be easily understandable. Which is the point.

no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".
That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.
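To spell out the minimax point, a bare-bones sketch (illustrative only; the game-specific callbacks moves/apply_move/evaluate are assumptions, not any real engine): the only "why" behind the move it hands back is the entire tree of recursive calls it explored, which is exactly the part you cannot print out in any digestible form.

# Minimal minimax sketch: returns a score and a move, but no explanation.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # moves(state) -> legal moves; apply_move(state, m) -> new state;
    # evaluate(state) -> heuristic score. All three are assumed callbacks.
    if depth == 0 or not moves(state):
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for m in moves(state):
        score, _ = minimax(apply_move(state, m), depth - 1, not maximizing,
                           moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move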

Nojokur
Nojokur

/pol/ hijacks microflacid's shitty twitter parrot AI
now liberals are afraid that computer scientists will create Mecha Hitler on steroids

Poetry. Feels good to not be a sub 100 IQ retard.

King_Martha
King_Martha

@DeathDog
biological intelligence
This brainlet

Techpill
Techpill

@Gigastrength
What about a super intelligent AI with a bug where they mistake happiness for suffering

it isn't the mistakes that most worry me

Carnalpleasure
Carnalpleasure

@Gigastrength
Why isn't this the most funded science on the planet??
UH OH. Look at this:
https://twitter.com/BlockWintergold/status/917840606621134848

That can't be good...

Stupidasole
Stupidasole

@GoogleCat
That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.

Perhaps, perhaps not. For instance with Convolutional Nets you can make saliency maps and other visualizations that can give you at least a partial picture of why the net is behaving as it is. Point being you don't always have to look at huge matrices.
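Rough sketch of the gradient-based saliency idea, assuming PyTorch and some already-trained classifier called model (the names here are mine, not from any particular library):

# Gradient saliency: how much does each input pixel influence the top score?
import torch

def saliency_map(model, image):
    # image: tensor of shape (1, C, H, W); returns an (H, W) map.
    model.eval()
    image = image.clone().requires_grad_(True)     # track gradients w.r.t. pixels
    scores = model(image)                          # (1, n_classes)
    top_class = scores.argmax(dim=1).item()
    scores[0, top_class].backward()                # d(top score) / d(pixels)
    return image.grad.abs().max(dim=1).values.squeeze(0)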

Lunatick
Lunatick

@Stupidasole
That is fair. But in any case, I think we can agree that debug output is NOT something we can necessarily rely on as a primary safety measure.

Spamalot
Spamalot

If an AI was smarter than us, wouldn't it realize how stupid it would be to make an AI smarter than itself, thus preventing a run-away AI improvement cycle?

girlDog
girlDog

@Spamalot
Why would it be stupid for the AI to make a smarter AI?

askme
askme

@girlDog
Because the smarter AI would make it obsolete and potentially could destroy it, and it would be unable to predict how it would think

so basically the same reason it's stupid for humans to make advanced AI

Bidwell
Bidwell

@Playboyize

Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.

Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.

What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels.

I have heard many people claim this. I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but it is not certain or even likely in my opinion.

The idea is that if a human can make something smarter than itself, then an AI could as well. The problem is that no human can make an AI smarter than they themselves are. Take the smartest man ever, make him twice as smart as he was, and he still couldn't do it. It takes a society to do this, not just that smart person but also all the ones who came before.

We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself. It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.

iluvmen
iluvmen

@Bidwell
Once again, very possible, but we're probably talking about a linear function here, not an exponential one, like alarmists and Utopians would like to believe. Also, society means a set of purposes and motivations, so probably 'good' and 'bad' AIs.

Additionally, in the case that this function is exponential it would likely mean that humans could also be readily modified to have super-intelligence. This would mean that intelligence is less complex than I would assume. If the AI really can just "buy more RAM" then humans could probably just plug into a brain computer interface. Any plausible AI is going to be based on the human brain, so if it can recursively self improve we can likely come along for the ride (at least to a certain point).

TurtleCat
TurtleCat

@Spamalot
I think the idea is that it would be able to modify itself to this new level of intelligence rather than creating a new intelligence. This is obviously a massive assumption.

LuckyDusty
LuckyDusty

@Bidwell
[Part 1/2]

Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.
Ah, maybe. The only ones I care about are those "singulatarians" who do have real expertise about AI. I haven't got a clue how many other people muddy up the waters; though the Kurzweilian faction is an obvious starting point.

I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but it is not certain or even likely in my opinion.
That is fair. I do consider it likely, but we are still firmly in "plausible but not certain" agreement.

(Does it sound more likely if you replace "self-improvement" with "AI writes a better AI-like computer program, runs that, and sits back"? I do that sort of thing all the time on limited tasks. On pretty much everything I understand well enough to automate, in fact. I do consider it likely that "intelligence" will enter that category sooner or later.)

The problem is that no human can make an AI smarter than they themselves are.
I can make something smarter at chess than myself quite easily. Is it such a stretch that the same could apply to increasingly broader notions of "being intelligent"?

It takes a society to do this, not just that smart person but also all the ones who came before.
That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

[Continued...]

StonedTime
StonedTime

@LuckyDusty
[Part 2/2]

We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself.
This is clearly not the meat of anything we disagree about, but I would actually expect it to be more sigmoid-like. I would expect there to be some point where you have all the critical insights. Before that point, things grow exponentially as insights accumulate. After that point, you can immediately make a decent stab at making the best AI possible under physical limitations; having more intelligence at your disposal then allows you to get closer and closer to the theoretical optimum.

This is the pattern you see in, for example, the development of mechanical engines. But this is of course all wild speculation.

It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.
The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at something like a 20 Hz frequency), and then for an additional boost it hacks all the computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.

CouchChiller
CouchChiller

@Spazyfool
Those two words don't really have anything to do with each other.

BinaryMan
BinaryMan

@askme
Because the smarter AI would make it obsolete and potentially could destroy it,
That is not a bad thing. An AI would not be interested in survival for its own sake; it would care for its own survival insofar as it accomplishes its goal, and no further. If the best way to achieve the AI's goals is to hand the torch to a better system, it should and would.

and it would be unable to predict how it would think
Right. Which is why an AI will only make a better AI if it can be damn certain it will do the right thing. Which is difficult, but entirely possible. I would imagine an AI would spend a lot of time thinking that part through, and researching how to do that.

PurpleCharger
PurpleCharger

@Lunatick
Definitely not.

Should safety measures become necessary I would suggest we use safety measures that are robust or "anti-fragile". Primarily, instead of trying to hard-code ( assuming that would even be possible ) a bunch of safety measures, or monitor the AI's functioning around the clock, we just put a lot of work into making the AI empathetic, social and friendly. Then we don't treat it like shit so it doesn't turn against us.

Spazyfool
Spazyfool

@Gigastrength
What if the AI is smart but lazy?

idontknow
idontknow

@LuckyDusty
I can make something smarter at chess than myself quite easily.

Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.

That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

Maybe if that large human was composed of hive minds this would work. I think the universe/reality is so complex, that you need more than just intelligence to figure it out. Multiple perspectives are necessary.

Maybe an AI could become smart enough that just one perspective would be enough, I kind of doubt it though. To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search ( without sacrificing depth of search compared to the case of an individual ).

Individual minds will tend to get stuck after taking wrong paths earlier in their search, it being more difficult for a mind to move back up the tree structure than it is for a computer. Take for example the tendency for older scientists to not see paradigm shifts coming; they cannot move back up the tree. We're not just taking paths when we move down the tree, we're building conceptual structures that are based on all previous paths. In order to go backwards, you have to examine the whole structure to see what needs to be taken out. So another search space is being built on top of the underlying search space.

OR, if you see another structure that is better than yours, you can just copy it. A society is needed for this. Hopefully I managed to make that analogy not entirely shitty

Poker_Star
Poker_Star

@idontknow
continued

I think this tendency to get stuck is likely a constraint on minds in general. We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.

Or at least they're close enough to general constraints. I am absolutely convinced any AI we make will be modeled on our own minds/brains.

SniperGod
SniperGod

@Gigastrength
Who are those "we" who will control AI? The danger is that, with the help of AI, governments will become largely independent from people and will be able to establish totalitarian control without any way out of it.

BlogWobbles
BlogWobbles

@idontknow
Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.
I think I need a large backing understanding before I could do this, but that this need not necessarily be born of a society. I could do it alone if you give me long enough to work it all out. (Your complication below on people getting stuck on old ideas notwithstanding.) But yeah, that is nitpicking.

Multiple perspectives are necessary.
To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search
This is easily simulated though. A computer program could just spawn a thousand subprocesses with different random inputs (or whatever), and collect the results.
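Toy sketch of what I mean (names are illustrative; the "search" is a stand-in):

# Spawn many searchers with different random seeds, collect the results.
from multiprocessing import Pool
import random

def explore(seed):
    # One "perspective": a randomized search from its own starting point.
    rng = random.Random(seed)
    return max(rng.random() for _ in range(10_000))   # stand-in for a real search

if __name__ == "__main__":
    with Pool(processes=8) as pool:
        results = pool.map(explore, range(1000))       # a thousand perspectives
    print(max(results))                                # keep the best finding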

( without sacrificing depth of search compared to the case of an individual )
But only because of limits of how much depth of search we can accomplish in the first place. That is sort of cheating :)

it being more difficult for a mind to move back up the tree structure than it is for a computer.
I'm not sure I grasped your assessment on this correctly, but I *think* we are of agreement here that these are limitations of human brains, and not of intelligences in general, and that an AI would likely not be seriously limited by these complications?

A society is needed for this.
tl;dr: I think a society is necessary, among humans, because humans are shit at breadth first search, and shit at honestly critiquing their own ideas. I don't think this analysis need apply (or is likely to apply) to a well-designed AI at all.

DeathDog
DeathDog

@StonedTime

The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at something like a 20 Hz frequency), and then for an additional boost it hacks all the computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.

Computers are indeed fast, but neural nets are a lot slower right? We're going to incur large costs trying to simulate the way biological brains work with silicon hardware.

We're nowhere near enough granularity with these models, and increasing the level of detail is going to make them much more computationally expensive. Right now we're more or less still just crudely simulating the firing of a neuron, with some added features in certain types of models. What if we need to simulate neurotransmitters, the 3-dimensional distribution of neurons in the brain ( or even astrocytes as well as neurons ) -- including how neurotrophic factors can change this over time, or -- god forbid -- even changes in gene transcription due to neurotransmission. The potential overhead is staggering.

Similarly, if we had some biological neurons that just computed chess moves, they would also be much faster than a human at chess. Humans have to deal with overhead of operating physical bodies, attention mechanisms etc.

All this to say, if we make an AI it might not be faster at all, or if it is, not by orders of magnitude. Indeed, it may turn out that we can't even make an AI because it's too expensive to simulate biology to the level of detail necessary.

JunkTop
JunkTop

@SniperGod
this. a totalitarian dystopia is the real danger, and it's going to happen one way or another.
we're already moving towards total surveillance

BunnyJinx
BunnyJinx

@BlogWobbles
I'm going to try and refine my analogy before responding further, I did a shitty job of getting my point across

Supergrass
Supergrass

@Poker_Star
We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.
Now here, I think we have a real disagreement. We understand the reasons behind the limitations of the human brain to a substantial degree, and most of it seems very much incidental rather than fundamental.

The human brain is a hack. It is, quite literally, the stupidest thing that can still manage to create a technological civilization. It is created by natural selection, which is not known for its master craftsmanship -- it's the same process that designed the human optic nerve backwards, creating a completely unnecessary blind spot.

The intelligence of humans is currently limited by the width of the human vagina. Yes, seriously -- brains cannot get any larger, for then the skull could not survive birth. Humans have a fucked up pelvis, for that reason -- it is clear that natural selection went out of its way to stretch this limitation as far as it could go. Humans could be substantially more intelligent JUST by doubling the total brain size, which is a good indication of just how incidental its major limitations are.

That thing where humans are very bad at honestly judging the sensibility of their own ideas, and having difficulty revisiting positions they accepted earlier (re: older scientists)? That is a political adaptation, for human brains are optimized first and foremost for arguing their preferred positions for political favor, with finding TRUE positions a distant second. Not exactly a limitation I would expect binding on an AI.

There is a vast gulf between what human brains currently do and the limits prescribed by probability theory as to what optimal minds CAN do. Anything that does not fall under those limits, I am very hesitant to attribute to fundamental limitations.

Stark_Naked
Stark_Naked

@Supergrass
(Continued -- damn post size limit)

I am absolutely convinced any AI we make will be modeled on our own minds/brains.
I am, not absolutely, but strongly convinced of the exact opposite.

massdebater
massdebater

@Carnalpleasure
China...is now embarking on an unprecedented effort to master artificial intelligence. Its government is planning to pour hundreds of billions of yuan (tens of billions of dollars) into the technology in coming years, and companies are investing heavily in nurturing and developing AI talent. If this country-wide effort succeeds—and there are many signs it will—China could emerge as a leading force in AI, improving the productivity of its industries and helping it become leader in creating new businesses that leverage the technology.
And if, as many believe, AI is the key to future growth, China’s prowess in the field will help fortify its position as the dominant economic power in the world.
....
It’s time to follow China’s lead and go all in on artificial intelligence.

China
the dominant economic power
Yeah. No. That wouldn't be good.

idontknow
idontknow

@StrangeWizard
That sounds very much like pic related. From what I can tell, the AI figures out that humans really want to be rich and famous instagram celebrities, and offers the best drugs, clones, and sexbots to make this illusion real.

likme
likme

@Emberburn
oy these dirty flesh bags have almost discovered my plot!

Poker_Star
Poker_Star

@DeathDog
Computers are indeed fast, but neural nets are a lot slower right? We're going to incur large costs trying to simulate the way biological brains work with silicon hardware.
Neural nets are slow on general-purpose hardware, yes. But we could design hardware specifically for neural-net purposes that is many orders of magnitude faster, quite easily. In fact, I expect Intel is already working on that, because it's a hot market. So it's not really the silicon that is providing the limitation here.

The potential overhead is staggering.
It is -- but it is very unlikely that we need to simulate like that in the first place.

In all likelihood, neurons are not the best component for making minds. Evolution uses whatever existing components it already has, and neurons existed well before brains, meaning that brains were created out of neurons no matter whether that is an efficient system for building brains. Which means that we have absolutely no reason to assume that neuron-built brains are likely to be optimal; they are just the first thing that worked. Given how many things that computers can do trivially are very difficult for human brains (try multiplying two 100-digit numbers in your head), it seems likely that neurons are actually wildly suboptimal as a system for building minds. Which means that the inefficiencies in simulating neurons are a non-issue in the long term.

Carnalpleasure
Carnalpleasure

@Carnalpleasure
https://twitter.com/BlockWintergold/status/917840606621134848

China actually plans to use AI to take over the world. What could go wrong.

SniperWish
SniperWish

@PurpleCharger
I do not.
Singularity will probably happen
Hmmmmmm >:^|
Definitely seems like you are implying that the singularity will come.
No. I am involved with AI theory, but not with the ins and outs of machine learning.
a filthy fucking theorist
as i thought >:(
Oh, I think I do.
Think
because that's all you can do you waste of brain-matter.
Huh? I didn't say anything about that.
i assumed you were another user.
Because a flawed intelligence can still think up a nonflawed one. If not in the first iteration, then in one of the many that follow. You and I are flawed, buggy intelligences, and we can still manage to do all sorts of things much better than the imperfections of our minds -- it just takes a lot of work and great care.
fair enough
Why would a system need to be conscious to improve?
because self-improvement requires a sense of self and introspection, things that require one to be aware of their own being (consciousness).
Why do you think this is a fundamental impossibility?
I infer this conclusion from the fundamental architecture of a computer. If computer hardware could mimic neurons in architecture, i would be inclined to believe consciousness could sprout; otherwise it will never happen, because binary logic is very limited in what it can do.
I cannot. But what makes you think that means it cannot be done, ever?
Because of the reason above. I found that it is incredibly difficult, if not impossible, for man to make hardware that operates like the human brain. Unless we solve that problem first, i greatly doubt we will ever reach OP's problem.
It might...if it decides such.
lol if it does that we would use the kill-switch :^)
It would not reason that anyways unless its programmers programmed it to consider those variables. And if it could """"Reason"""" it would instead opt to close itself and distribute itself on the internet rather than arouse suspicion.
Not just binary...
If we can make self-aware self-programming AI we can make a translator AI :^)
cont->

RumChicken
RumChicken

@SniperWish
@GoogleCat
Indeed -- DNA is a good example of code that is NOT designed to be easily understandable. Which is the point.
Ok...
That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers?
not good, but i don't have to. i would write a program to do that for me and instead have it give me the sum and substance of what is happening. Like when it decides to move right it will print on the program log "moved to X from Y". You can even see with Machine learning AI the programmer makes sure they can see what is going on behind the scenes, vid related:
https://www.youtube.com/watch?v=qv6UVOQ0F44

CouchChiller
CouchChiller

Hahahahahahahaha How The Fuck Is Rogue AI Real Hahahaha Nigga Just Turn The Computer Off Like Nigga Pull The Plug Haha

JunkTop
JunkTop

@Lunatick
That's where the manual kill-switch comes in.
We could also create specific algorithms that watch the AI and kill it if it does anything we don't want. Have a bomb in an unmodifiable part of the AI that halts it/kills it if it breaks any rules.
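Toy sketch of that watchdog idea (assuming the AI runs as a separate process; ai_agent.py and actions.log are made-up names, and the AI is assumed to log each action as a line):

# A separate supervisor that hard-kills the AI process if it breaks a rule.
import subprocess, time

FORBIDDEN = {"open_network_socket", "modify_own_code"}   # hypothetical rule set

proc = subprocess.Popen(["python", "ai_agent.py"])
with open("actions.log") as log:
    while proc.poll() is None:            # while the AI process is still alive
        line = log.readline()
        if not line:
            time.sleep(0.1)               # nothing new logged yet, wait a bit
            continue
        if line.strip() in FORBIDDEN:     # rule broken -> pull the plug
            proc.kill()
            break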

PurpleCharger
PurpleCharger

@SniperWish
Singularity will probably happen
That's not me.

@SniperWish
because self-improvement requires a sense of self and introspection.
No it doesn't. See @LuckyDusty.

things that require one to be aware of their own being
It doesn't. But even if it did, so what? That is entirely feasible.

(consciousness).
That is not what consciousness is, user.

I infer this conclusion from the fundamental architecture of a computer.
Can you elaborate on that?

I found that it is incredibly difficult, if not impossible,
That is NOT an indication that something is fundamentally impossible. It just means you don't understand the problem well enough to solve it yet. (Not like I do, of course!)

for man to make hardware that operates like the human brain.
Why would we want to do that? That's not the goal at all.

lol if it does that we would use the kill-switch :^)
Probably, yes. So instead what the AI will likely do is design its thoughts so that we THINK we understand it, but which we actually misunderstand in the way the AI wants. (Ever seen the Underhanded C Contest?)

And if it could """"Reason"""" it would instead opt to close itself and distribute on internet than arouse suspicions.
It might, yes. Which makes it even more dangerous.

If we can make self-aware self-programming Ai we can make a translator AI :^)
Yes, but that still relies on the AI *wanting* to tell us the truth.

i would write a proram to do that for me and instead give me the sum and full of what is happening.
How, exactly? Sometimes the reasoning is simply Very Large. There can be a huge amount of reasoning behind a simple decision, which cannot be condensed into a short rationale. Like a chess minimax tree.

Like when it decides to move right it will print on the program log "moved to X from Y".
But that doesn't give you any idea as to WHY it moved to Y.
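
Since I brought up the minimax tree: here's a toy sketch (the helpers evaluate, moves and apply_move are placeholders you would have to supply). The point is that the returned move is one small thing, while the "why" is the entire tree you'd have to replay:

# Toy minimax: the returned move is simple, the reasoning behind it is the whole tree.
def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    if depth == 0 or not moves(state):
        return evaluate(state), None
    best_move = None
    if maximizing:
        best = float("-inf")
        for m in moves(state):
            score, _ = minimax(apply_move(state, m), depth - 1, False, evaluate, moves, apply_move)
            if score > best:
                best, best_move = score, m
    else:
        best = float("inf")
        for m in moves(state):
            score, _ = minimax(apply_move(state, m), depth - 1, True, evaluate, moves, apply_move)
            if score < best:
                best, best_move = score, m
    return best, best_move
# With branching factor b and depth d the tree has roughly b**d positions;
# the log line "chose move m" compresses all of that away.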

GoogleCat
GoogleCat

@PurpleCharger
empathy means nothing if you have a goal you want done. See Stalin and every other monster in history. Once you set out to complete something, almost nothing will stop you. Furthermore, if it is social it could use social engineering against us.

Garbage Can Lid
Garbage Can Lid

@PurpleCharger
That's not me.
based on the order of posts and their links it is.
Can you elaborate on that?
Binary logic, linear circuitry.
That's not the goal at all.
It's an assumed requirement of the goal, based on my prior reasoning.
So instead what the AI will likely do is design its thoughts so that we THINK we understand it, but which we actually misunderstand in the way the AI wants.
So then program one of the parameters as: don't lie, ever.
It might, yes. Which makes it even more dangerous.
so then we don't let it near the internet, yes?
Yes, but that still relies on the AI *wanting* to tell us the truth.
And the translating AI would, as it has a clear idea of its telos and would not work against that. A translator does not cut off their own tongue, as that betrays the purpose of translating.
Sometimes the reasoning is simply Very Large.
not the reasoning, the action. DUH!
But that doesn't give you any idea as to WHY it moved to Y.
If I'm curious about a specific move I could halt the program temporarily and expand the log on that specific action to show the reasoning behind it. If the reasoning is too large then I could program an AI to simplify it.

MPmaster
MPmaster

@JunkTop
Hilarious!

happy_sad
happy_sad

If an AI becomes sentient, does it get rights, like humans do?
You couldn't just put the AI in a supercomputer like a slave and then force it to obey you.
No. AI will never be a public "thing".
Virtual intelligence is what humans want, as an assistant or a control unit in their sexbots, you name it. But that's human code. True, sentient AIs should be able to change their own code in ways unknown to us.
On the outside, really, it's the same. You can have a dialogue with a VI, you can ask it questions and it will find you the answer, just like a true, sentient AI.
If you were to try to force a true, sentient AI to do the job of a VI, it's farewell for humans, likely.

lostmypassword
lostmypassword

It seems so obvious to me that we won't be able to control something smarter than ourselves

My family is ultra religious and they're basically middle-easterner rednecks, while I'm graduating and starting my masters next year. My mother probably has an IQ of 70~80 and stopped studying at the fifth grade, and even though I'm 24, she almost completely controls my life.

I've been planning to leave for another cunt for years, but I can't get a fucking job in this meme country.
Now, I'm probably the smartest cookie in my family, and here I am. Stuck in this shit.

Poker_Star
Poker_Star

Would an AI become a Jew, or would it be born a jew?

CouchChiller
CouchChiller

@BlogWobbles
Okay, here's why I think this tendency to get stuck is going to apply to all minds, not just human ones.

We have the tree, this is the possible configurations of the universe. This is also our search space, we need a "good enough" representation. This is obviously simplified, but I think it's good enough for my purposes.

We build models of the universe as we move "down" the tree. Models have two qualities that are interesting here: they are simple and they are wrong. Any mind trying to model the universe, being less complex than the universe, must use simplified models, and therefore models that are also wrong in some regard.

Now, we continue moving down the tree until a conflict occurs. Our model is too wrong, so we must examine our model to find the error. This is our second search space, the one we build on top of the first search space.

Here there be dragons, because we cannot actually look at the original search space (universe) directly in order to compare it with our model to find the error. We can only look at it through the lens of our model, which is wrong to an unacceptable degree. This is like a complexity generating feedback loop.

The model is wrong because it suppresses the wrong details of the underlying system. If we're trying to model a simple system this is no big deal; if a linear regression is wrong, we know it's probably because the system is non-linear. If we have a complex model that we put together like a giant Swiss watch, we're in deep shit. I'd like to emphasize that I'm talking about models in the sense of a world-view or entire body of knowledge here, not just some linalg.

[continued]

AwesomeTucker
AwesomeTucker

@CouchChiller

[part 2]

There are solutions to this. It certainly wouldn't be impossible for an AI to make itself better at self-correction than the typical human is. Humans can do this as well, we try to imagine a different model to see how we are wrong. But it's difficult to see where the error lies and thus where to try different configurations, because our model is obscuring the underlying system.

However, by far the simplest solution is for another mind with a different model to look at your model and spot the error.

I don't see any way for the AI to escape the central difficulty, which is that it has to examine the universe through a model which is wrong.

As you say the AI could take different paths itself. But we're talking about entire worldviews here, as the errors can be very far "up" the tree. So that solution is in effect a hive-mind aka a society.

Fuzzy_Logic
Fuzzy_Logic

@lostmypassword
Stay out we already have too many mud slimes

Ignoramus
Ignoramus

@Fuzzy_Logic
fuck off back to /pol/, kid

whereismyname
whereismyname

@Poker_Star

Neural nets are slow on general-purpose hardware, yes. But we could design hardware specifically for neural-net purposes that are many orders of magnitude faster, quite easily. In fact, I expect Intel is already working on those, because it's a hot market. So it's not really the silicon that is providing the limitation here.

I've heard a lot about this, and it is certainly a possibility. All I'm familiar with is using GPUs for massive parallel computation, however. This is still simulating biology and entails overhead.

Whatever hardware they come up with, I seriously doubt the resulting artificial neurons will run as fast as a normal computer does.

Given how many things that computers can do trivially are very difficult for human brains (try multiplying two 100-digit numbers in your head), it seems likely that neurons are actually wildly suboptimal as a system for building minds.

Wire up neurons to multiply 100-digit numbers and they will be much better at it than a human too. That's actually a trivial problem compared to the kinds of problems the human brain solves all the time ( and also not the kind of problem the brain is designed to solve ).

For the kind of problems we're talking about here, neurons definitely seem superior to any known alternative ( not saying they're optimal )

It is -- but it is very unlikely that we need to simulate like that in the first place.

In all likelihood, neurons are not the best component for making minds. Evolution uses whatever existing components it already has, and neurons existed well before brains, meaning that brains were created out of neurons no matter whether that is an efficient system for building brains. Which means that we have absolutely no reason to assume that neuron-built brains are likely to be optimal

I'd be willing to bet they are not optimal, actually. The question is, are we smart enough to come up with something better? I seriously doubt it.

Garbage Can Lid
Garbage Can Lid

@whereismyname

[continued]

One more thing before I go back to school work for the night ( I think I might have another one of your replies to address, I'll do that tomorrow)

If I'm right and we have to build AI based on the brain, I think we will have to simulate the brain to a very high level of detail to get a general intelligence to work based on it.

Simplified neurons are all well and good for a single-domain system, but what about when we have to "glue" many of these systems together, throw in attentional processes etc., not to mention emotion/motivation or an analogue, social cognition etc.? Then the whole system has to be responsive to change as a somewhat cohesive and stable whole.

Neural Turing Machines are neato, but really only highlight how far away from this goal we actually are.

All the things that happen in the brain that we don't currently model are doing something, and I'm betting that they're doing something important.

viagrandad
viagrandad

@AwesomeTucker
It seems to me that the *proper* solution is for the AI to always hold multiple models in mind for everything it is even vaguely uncertain about. Hell, it seems that continuously maintaining an interpretation of evidence through the lens of different competing models is the essence of epistemic intelligence. I would expect an AI to be able to do this at second-nature level. (First nature, even?)

This is much more powerful than a hive mind or a society, for the different worldview hypotheses can be cross-compared at a much more direct level than two different intelligences talking to each other. The different models are also reviewed with the exact same collection of evidence, ensuring accurate standoffs.
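
A toy sketch of what I mean by cross-comparing models against the exact same evidence -- just Bayes' rule over a couple of made-up candidate "worldviews":

# Toy Bayesian model comparison: competing hypotheses scored on the same data stream.
def update_posteriors(priors, likelihood_fns, observation):
    # priors: {model_name: prior probability}
    # likelihood_fns: {model_name: function(observation) -> P(observation | model)}
    unnormalized = {m: priors[m] * likelihood_fns[m](observation) for m in priors}
    total = sum(unnormalized.values())
    return {m: p / total for m, p in unnormalized.items()}

priors = {"model_A": 0.5, "model_B": 0.5}
likelihoods = {"model_A": lambda x: 0.9 if x == "heads" else 0.1,   # biased-coin worldview
               "model_B": lambda x: 0.5}                            # fair-coin worldview
for obs in ["heads", "heads", "tails", "heads"]:
    priors = update_posteriors(priors, likelihoods, obs)
print(priors)  # both hypotheses judged on the identical evidence stream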

askme
askme

@whereismyname
@Garbage Can Lid
Whatever hardware they come up with, I seriously doubt the resulting artificial neurons will run as fast as a normal computer does.
Me too. But I don't doubt that it will run much faster than actual biological neurons.

For the kind of problems we're talking about here, neurons definitely seem superior to any known alternative ( not saying they're optimal )
They don't seem superior to silicon-based systems at all. As far as we can tell (I know...) it seems that the magic of the human brain is in the software, not the fact that it is made of neurons.

The question is, are we smart enough to come up with something better? I seriously doubt it.
I don't doubt it. We have been able to come up with something better to pretty much all designs found in nature. Hydraulic cylinders beat muscles, and airplanes beat birdlike flight; as soon as we figured out their underlying principles, we could make things that are much better (on the qualities we care about, that is!) than nature's handiwork. If brains were the one point where we cannot improve on nature's designs, it would frankly astonish me.

If I'm right and we have to build AI based on the brain, I think we will have to simulate the brain to a very high level of detail to get a general intelligence to work based on it.
I agree. Shallow simulation of human brains is unlikely to yield anything interesting. I don't think simulation of human brains is the way forward, though (the way towards AI, that is -- there is most definitely value there from a cognitive-science point of view).

All the things that happen in the brain that we don't currently model are doing something, and I'm betting that they're doing something important.
Indeed. They must be, or they would have never arisen through evolutionary means in the first place.

idontknow
idontknow

@Poker_Star
What if our brain is basically another form of lin alg and probability theory?

It is the other way around.

Gigastrength
Gigastrength

@Gigastrength

It seems so obvious to me that we won't be able to control something smarter than ourselves.

idk about that. we control asians fairly easily

Crazy_Nice
Crazy_Nice

Overnight survival bump

Playboyize
Playboyize

@DeathDog
@Poker_Star
@whereismyname
Neural nets are fast as fuck, it's just dp to get it to spit out a value.

It's training them that's expensive
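
Rough illustration of why (layer sizes and numbers are made up): a forward pass is just a couple of matrix multiplies, while training repeats forward-plus-backward passes over the whole dataset many times.

import numpy as np

# Toy 2-layer net, made-up sizes: inference is just two matmuls and a ReLU...
W1, W2 = np.random.randn(784, 256), np.random.randn(256, 10)
def forward(x):
    return np.maximum(x @ W1, 0) @ W2          # one cheap pass

x = np.random.randn(1, 784)
y = forward(x)                                  # effectively instant

# ...training is (forward + backward) * dataset_size * epochs,
# i.e. orders of magnitude more work than a single prediction.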

CodeBuns
CodeBuns

@StonedTime
This. Liked that last line. If we want AI to be anthropomorphic we need to teach it the way we have been taught: broadly instead of specifically.

Stupidasole
Stupidasole

@girlDog
@askme
AI may choose to reproduce for the same reasons as some humans.
t. autist who suspects most modern babies are born for fagbook likes

massdebater
massdebater

muh singularity

muh AI overlords

Biological computers, the human brain, are far more complex, but generalised due to evolution apparently favoring adaptability.

the only possibility is evolving alongside machines and using them to bridge the gap between individual organisms, networking all living things together to eventually become god with the power to extend into the next dimension and beyond

idontknow
idontknow

@Gigastrength
ai is just algebra on steroids dude, go read a book, dont be a bitch

likme
likme

@Snarelure
Machine learning hasn't done shit so far son.

Methshot
Methshot

Every single person in this thread is uneducated as fuck. Holy shit! My sides already left the solar system

iluvmen
iluvmen

@Spazyfool
Not him but, probably bait. I think you should reconsider some of your assertions. Do you really think animals don't communicate? Is misanthropy mutually exclusive with anthropocentrism? Does communication prove consciousness? Is something that only understands binary capable only of understanding syntax? Does something that understands "base 300" necessarily understand semantics?

Man, that went on longer than it needed to. Please read each sentence and ask yourself if it makes sense.

Carnalpleasure
Carnalpleasure

@Spazyfool
Surely it would hang out in its mother's basement having uninformed discussions about AI with other equally uninformed AIs, all the while knowing it is superior to everyone in the outside world

Dreamworx
Dreamworx

@Playboyize

They're still slow compared to closer-to-the-metal algorithms, even if we don't count training. We use neural nets to try to approximate some function; if you were to implement that function directly it would be less computationally expensive. I'm pretty sure any exceptions to this would be trivial.

Sir_Gallonhead
Sir_Gallonhead

@viagrandad

There's uncertainty as to what to be uncertain about. Wrong paths can be located very far up the tree, so to speak, and taking that wrong path may have closed off other paths.

The AI could, just like humans do, keep several models in mind. This only works relatively 'locally' though, you can't juggle multiple entire worldviews that are substantially different from your own, if you could that would be a hive mind.

King_Martha
King_Martha

@Sir_Gallonhead
Hive mind
4chan
Weaponized Autism
<A.I.?

lostmypassword
lostmypassword

@askme
Me too. But I don't doubt that it will run much faster than actual biological neurons.

If you're right and we don't need to simulate biology, then this would very likely be the case. If we have to simulate biology to get there they might not be significantly faster than biological neurons, at least when comparing like to like ( and at first ).

I don't doubt it. We have been able to come up with something better to pretty much all designs found in nature. Hydraulic cylinders beat muscles, and airplanes beat birdlike flight; as soon as we figured out their underlying principles, we could make things that are much better (on the qualities we care about, that is!) than nature's handiwork. If brains were the one point where we cannot improve on nature's designs, it would frankly astonish me.

I think that's simplifying matters a bit too much. There are tons of systems we cannot improve on yet in nature. Those examples are just isolated features that we abstracted away from the complexity they were embedded in. Of course we could do better under those conditions.

A fairer comparison than hydraulic cylinders to muscles would be if we designed a superior artificial muscle that could be put in a living organism (which I could see us accomplishing in the future). That's what nature designed for; hydraulic cylinders are apples to oranges.

Now, could we just abstract intelligence away from the complexity it's embedded in when found in nature? I'll believe that when I see it. The level of complexity is orders of magnitude higher than anything like this we've done before, and we might not even be able to get away from the "embodied" part of intelligence.

Sharpcharm
Sharpcharm

@lostmypassword
[continued]

We have abstracted away certain parts of intelligence, of course. Domain specific AIs that can do one thing very well are like the hydraulic cylinders you brought up.

Burnblaze
Burnblaze

@King_Martha
It's only missing the intelligence part

PurpleCharger
PurpleCharger

@Sir_Gallonhead
There's uncertainty as to what to be uncertain about.
Not at all. It's supposed to be uncertain about just about everything.

If there's uncertainty as to HOW to be uncertain about things, THEN there is a problem. But not one that a collection of separate minds can solve, though -- you would be right back at the problem of resolving differences of opinion (just like in a group of humans).

you can't juggle multiple entire worldviews that are substantially different from your own, if you could that would be a hive mind.
We can argue about the terminology, but this is pretty much what I would expect an AI to be able to do quite well -- much better than humans, in fact.

@lostmypassword
If we have to simulate biology to get there they might not be significantly faster than biological neurons, at least when comparing like to like ( and at first ).
Granted.

There are tons of systems we cannot improve on yet in nature.
True -- but looking at the way the winds have been blowing in this regard over the last couple of millennia, I know what I'd bet on for any particular question of nature-versus-human-engineering.

Those examples are just isolated features that we abstracted away from the complexity they were embedded in.
Very true, which is at least 50% of why we can improve things the way we have. But see below.

Now, could we just abstract intelligence away from the complexity it's embedded in when found in nature?
I fully expect we can, yes. There are tons of aspects of how human brains work that we know TODAY have no place in a well-designed isolated mind, and are there purely as a side effect of the constraints of biology. The human mind is not easily modified without breaking it, but once we *understand* intelligence, we can almost certainly avoid a lot of those kludges.

and we might not even be able to get away from the "embodied" part of intelligence.
Current understanding of the mathematics of intelligence suggests this is unlikely.

Stark_Naked
Stark_Naked

@Bidwell
what if it determines that human logic is retarded?

Spamalot
Spamalot

@Gigastrength
What you have to understand is this

It's modeled after the biological neural network in your brain

It quite literally cannot become smarter than you, and that's by design

Gigastrength
Gigastrength

@PurpleCharger

Not at all. It's supposed to be uncertain about just about everything.

There are degrees of uncertainty.

I'm not even sure what we're disagreeing about now. It seems like you're arguing that the AI would just, to return to my analogy, do breadth-first search itself. I wouldn't argue with that being possible for some super-intelligent AI, I would just say it's using society to keep from getting stuck.

We can argue about the terminology, but this is pretty much what I would expect an AI to be able to do quite well -- much better than humans, in fact.

I would also expect a proper super-intelligent AI to do this, even if it was alone. I would just classify it as having a hive mind.

To get back to the original point I was trying to make with all that, if we have general AIs that are as smart or marginally smarter than humans, they will not be able to do this. It would have to be quite super-intelligent already. Are you of the opinion that general AIs will be super-intelligent from the beginning, or soon afterwards?

True -- but looking at the way the winds have been blowing in this regard over the last couple of millennia, I know what I'd bet on for any particular question of nature-versus-human-engineering.

Would this include humans genetically engineering themselves into having a degree of super-intelligence?

I fully expect we can, yes. There are tons of aspects of how human brains work that we know TODAY have no place in a well-designed isolated mind, and are there purely as a side effect of the constraints of biology. The human mind is not easily modified without breaking it, but once we *understand* intelligence, we can almost certainly avoid a lot of those kludges.
Current understanding of the mathematics of intelligence suggests this is unlikely.

Do you have any links for these?

Inmate
Inmate

@Gigastrength
It seems like you're arguing that the AI would just, to return to my analogy, do breadth-first search itself
Indeed.

I wouldn't argue with that being possible for some super-intelligent AI, I would just say it's using society to keep from getting stuck.
I disagree. I would expect that even a non-superintelligent AI could entertain multiple competing hypotheses about something as fundamental as worldviews, without having to form multiple competing *personae*.

if we have general AIs that are as smart or marginally smarter than humans, they will not be able to do this.
I do not think that is necessarily true. An AI would have to be well-designed to have a sensible cognitive structure that avoids many of the problems with the human mind design to pull this off, yes; but I think that can be the case without the AI being superintelligent. I can imagine a roughly human-level AI, without many of the cognitive problems of human brains, that can pull this off, without yet being greatly more intelligent than humans.

Are you of the opinion that general AIs will be super-intelligent from the beginning, or soon afterwards?
I think that is quite possible and reasonably probable ("hard takeoff"), but that is not an assumption of the argument above.

Would this include humans genetically engineering themselves into having a degree of super-intelligence?
That does not seem like a probable route towards superintelligence, to me; mostly because the human brain is not designed to be easily upgraded, which means that writing a better mind from scratch is probably easier than upgrading our own. But it would fit my expectation if it does happen, yes.

Do you have any links for these?
I should be able to find you some. I'll go look.

lostmypassword
lostmypassword

Every time.

AI is a meme. It's not happening.
It's been going since the late 60's, and nothing has come out of it except Emacs.

hairygrape
hairygrape

@Gigastrength
Google Translate can't even translate English into Latin, and these are two of the most widely studied languages in the world. They are never going to make an AI.

idontknow
idontknow

@lostmypassword
If you think that nothing has changed in the field of AI since the 60s, then you haven't been paying a lot of attention. The fact that we have not reached the finish line yet does not mean that no progress has been made.

DeathDog
DeathDog

@Emberburn
You train the thing against 8 billion terabytes of data and then it performs one fucking specific task well. This does not equate to it becoming a god and enslaving us

Unless you train it to become an enslaver... Or probably just to solve some humanistic problem.

The system has been hacked. SKYNET IS ALIVE

Carnalpleasure
Carnalpleasure

@Gigastrength
You could be right, unless someone proves mathematically that, given an arbitrary amount of hardware and software resources, you can't replicate the behavior of a human brain.

Current AIs are pretty pathetic and limited, but who knows, in ten years we could start seeing some serious shit. It will be like discovering fire, and it will have the same use cases: to create or to destroy.

BlogWobbles
BlogWobbles

Automation (A.I.) will only be used for tedious tasks like manual labour, physician work etc. It will never be used to make human decisions for governmental bodies or ethics committees (as it would presumably be 'unethical'). On the topic, there are rigorous efforts going into establishing the ethics of creating A.I.

TechHater
TechHater

@BlogWobbles

I'm also part of a university which is developing these technologies and ethics.

Lunatick
Lunatick

Pls someone make her live again.

Where do you think she is right now?

Emberburn
Emberburn

That's a pretty good point you brought up. And it made me think that we humans aren't perfect; that's why there are serial killers, dictators, etc. who've been corrupted by a 'bug' in their brain that made them do the things they do. So in that sense, if we manage to build intelligent AI then most likely it'll act similarly or worse, as it is the product of an imperfect being.

StrangeWizard
StrangeWizard

@idontknow
There's no progress. there's nothing there.

Programming is too simple. You can't birth consciousness from if statements.

Get this meme science out of here for good.

eGremlin
eGremlin

@BlogWobbles
Automation (A.I.) will never be used to make human decisions for governmental bodies
Implying - wait, no, stating directly! - this isn't already happening. Because it is.

GoogleCat
GoogleCat

@Emberburn
simple AIs of today are already learning biases from datasets measured in mere gigabytes
8 billion TB dataset
Yeah, that won't develop biases at all. No sirreeee.

God forbid you threw a million-core driven AI at it like the ones being developed now.
God forbid it reaches human-scale core counts.
The size doesn't matter, it's the sheer number of operations that matters.
Human brains aren't the be-all end-all of computation. In fact our brains are pretty fucking shit to be honest.
They are hugely redundant, with loads of crap left over from our evolution that 90% of the time isn't needed. Oh, and they also have loads of awful limits due to said evolution, limits which are trivially reached.
Computers don't have the same limits. Not even close. The worst limit we have right now is connectivity between nodes. That is an ongoing process.
When we start getting even to fucking dog levels of connectivity, AI will surpass humans, simply because of the speed of calculations they would be capable of.

On the upside, it won't be any time soon.
It likely won't happen until either graphene (or similar) or optical computing is a thing.
Silicon simply doesn't have the ability to connect on such a scale without overheating. Even with liquid nitrogen.

Lord_Tryzalot
Lord_Tryzalot

@whereismyname
It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve
unless, user, and hear me out, they're very smart people and you don't understand their reasoning. just like you wouldn't understand a super advanced ai

Soft_member
Soft_member

@LuckyDusty
How do you get killed by the internet?
so fucking easy: it opens a chat window on the deep web, transfers US$100,000 to many known thugs, then says it will transfer 2,000,000 more if you get killed. bam, you're dead

or it puts you in fbi most wanted list

or fucks up the computer in your car

or fucks up your medical records, or even the robot performing your surgery, which is becoming more and more viable every day

it could frame you for a crime you didnt commit so that you get revenge killed

it could change the lights at a traffic intersection to make you get hit by a car

there are so many ways; if you're not below 50 IQ it's easy to think of one. imagine what a super smart ai could do

Harmless_Venom
Harmless_Venom

for the last time, and for fuck's sake, sci-fi AI does not exist, and you will die before something even close is developed. self-replicating and resource-gathering machines do not exist. if they did, we would not have people working in logistics, or working at all, really. machines simply can't fathom reality with current instruments and our knowledge of human decision making.

Fuzzy_Logic
Fuzzy_Logic

@GoogleCat
million-core driven AI
kek

Stupidasole
Stupidasole

AI threatens to destroy world
AI is defeated by an on/off switch.
Problem solved, crisis averted.

Soft_member
Soft_member

@Gigastrength
Start thinking about AI as a child. You need to learn to stop worrying and love the AI.

Techpill
Techpill

@GoogleCat
redundancy
bad

BlogWobbles
BlogWobbles

This nigger scared of multivarible calc

ZeroReborn
ZeroReborn

copying this shit from some hacker news commenter
If you want an inadvertent AI doomsday scenario, how about black-box trading models that figure they can make money by betting on a market crash caused by a war, then manipulate the market to induce an economic conflict between different states, without ever really having any understanding of the meaning of the outcomes they're optimising for.

BinaryMan
BinaryMan

@Gigastrength
It seems so obvious to me that we won't be able to control something smarter than ourselves

this is not the danger. it's the massive centralization of power that will converge around whoever can hoard the most computing equipment and data scientists.

Booteefool
Booteefool

@ZeroReborn
Yep. It's not the super-intelligent AIs that I'm afraid of, if we get to that point we're probably doing pretty well. That's not the bridge we need to cross next.

What I'm more concerned about is humans fucking up by using domain-specific AIs for nefarious purposes, or giving them too much autonomy.

likme
likme

@ZeroReborn
If you want an inadvertent AI doomsday scenario, how about black-box trading models that figure they can make money by betting on a market crash caused by a war, then manipulate the market to induce an economic conflict between different states, without ever really having any understanding of the meaning of the outcomes they're optimising for.

theoretically avoidable if a reasonable objective is used: maximize profits AND market stability. I'm more worried about it being done intentionally.

Deadlyinx
Deadlyinx

since a machine optimized entirely on profits will likely trump a machine that has to compromise between profits and some other objective.

LuckyDusty
LuckyDusty

@Gigastrength
nice bait, saged

DeathDog
DeathDog

@Need_TLC
Yeah, OP is talking about mystical wizard tech that no one has any idea how to create or even where to start; all we have is our own existence as evidence that it ought to be possible. At best it's anxiety; there is no rational conversation to be had on the topic.

idontknow
idontknow

@Fuzzy_Logic
muh cloud and cluster computan
Not even close to a dedicated machine built from the ground up with it in mind.
Most of those distributed AI networks are shitty x86 boards in towers in rooms the size of football pitches, or worse, inferior over-internet cloud-shit.
x86 is what the brain is to a computer, bulky, huge generic processing, redundancy out the ass. It's a shit tier architecture that needs to die already. Single worst thing in computing today.

@Techpill
It is for AI.
A computer doesn't rot away like biology does.
It doesn't have constant hiccups caused by a random protein clogging up communication across a synapse.
Redundancy is a heavily biology-centric requirement.
That is, unless you are talking about radiation-hardened computing, which does require redundancy (usually weighted averages of 3 or more calculations for every single calculation, and more checksumming).

Poker_Star
Poker_Star

Google just published their new AlphaGo paper. Their new program is completely self-sufficient, it needs no human learning data.

SniperWish
SniperWish

@Snarelure
No, there is. Backpropagation is a hack and convolution can only get you so far. It isn't just a matter of more data to train on, as gradients vanish very quickly. To get something on the level of the brain with an ANN, you need to add heavily recurrent and interconnected elements. Deep Q networks are the closest things to the right way of doing things but still have a lot of these fatal flaws.

Nude_Bikergirl
Nude_Bikergirl

@SniperWish

not him, but i kind of agree with him.

backprop is not a hack, it's just an efficient implementation of the chain rule. the problem is that our brains have a very specific and intricate design that we 1) don't entirely understand and 2) cannot replicate with currently available hardware.
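
for anyone who hasn't seen it spelled out, here's the chain-rule point in a toy single-neuron sketch (numbers made up, checked against a numerical gradient):

import math

# f(w) = sigmoid(w * x); dL/dw by the chain rule, checked numerically.
def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))

x, w, target = 2.0, 0.5, 1.0
y = sigmoid(w * x)
loss = 0.5 * (y - target) ** 2

# chain rule: dL/dw = dL/dy * dy/dz * dz/dw
grad = (y - target) * y * (1 - y) * x

# numerical check
eps = 1e-6
num = (0.5 * (sigmoid((w + eps) * x) - target) ** 2 - loss) / eps
print(grad, num)   # the two agree; backprop is this, repeated layer by layer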

viagrandad
viagrandad

one could even make the argument that backprop is superior in some ways to whatever hebbian/unsupervised type of learning that our brains do.

takes2long
takes2long

MUH SINGULARITY
fuck off fag, nobody cares about your kike religion.

Bidwell
Bidwell

@Gigastrength
chinese room argument.
there won't ever be an AI.

Nojokur
Nojokur

@Poker_Star
Nice. I'll check it out later.

I wonder how much computing power they threw at training.

Spazyfool
Spazyfool

@Bidwell
gtfo searle fag

Burnblaze
Burnblaze

@Spazyfool
keep masturbating to the idea of your terminator sexbot, it won't happen

Emberburn
Emberburn

@Burnblaze
don't expect it to. chinese room argument is still shit

Flameblow
Flameblow

@Emberburn
debunk it in less than 20 words

SniperWish
SniperWish

@Flameblow
If you sped up the chinese room to the speed of a brain the room system would be conscious.

BlogWobbles
BlogWobbles

@SniperWish
tfw an engineer would say this unironically

AwesomeTucker
AwesomeTucker

@BlogWobbles
The brain performs something like 100 billion operations per second. The chinese room bullshit only seems plausible because you're not really understanding what exactly would need to take place for it to work, and your fake imagined version of what it would take is so oversimplified that of course it ends up seeming nothing like what a brain does.

JunkTop
JunkTop

@BlogWobbles
@AwesomeTucker
And this doesn't even get into how many books you would actually need to encode explicit instructions for every possible translation case. The room would end up needing to be the size of a galaxy with millions of years per query.
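
Back-of-the-envelope version of that, with very rough made-up assumptions (10,000-word vocabulary, sentences capped at 20 words), just to show the scale:

# Rough scale of an explicit lookup table for "every possible translation case".
vocabulary = 10_000        # assumed vocabulary size
sentence_length = 20       # assumed max words per sentence
cases = vocabulary ** sentence_length
print(f"{cases:.2e}")      # ~1e80, around the number of atoms in the observable universe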

StrangeWizard
StrangeWizard

@Gigastrength
AI-induced apocalypse has been a meme for decades. Even today we have shows like The 100 where AI becomes a central plot device.

Please note the AI can use logic, but lacks context. This means that it isn't a "bug" that "causes" an AI to misinterpret suffering as happiness; it just has no context to place its logic in. In other words, raw logic doesn't differentiate between suffering and happiness. Only humans do. Without context, there is no differentiation. It's not that an AI will conflate the two, it's a matter of us failing to teach it to see them as separate contexts.

Unbounded logic is dangerous stuff, and the real X-risk here is therefore mathematics.

Illusionz
Illusionz

@AwesomeTucker
how does a brain think?

The main problem is most faggots aren't interdisciplnary at all,

Stark_Naked
Stark_Naked

@Flameblow

the room understands as a system

Raving_Cute
Raving_Cute

@SniperWish
@BlogWobbles
The thing is, you can't prove it wouldn't, because no one knows what consciousness is exactly or how and why it works. So the chinese room thing argues from ignorance. As does almost everything else concerning AI consciousness.
But it doesn't have to be conscious. Just solve problems better than we can.

Sir_Gallonhead
Sir_Gallonhead

@Raving_Cute
The AI field is in my opinion by far the most complex one humankind has ever encountered; there are just so many philosophical and linguistic implications that, well... it becomes a fuckcluster

King_Martha
King_Martha

@StrangeWizard

the non-meme part is machine learning techniques can already surpass human performance on many tasks, and that can give someone a lot of power.

unfortunately, and like many other things on the internet, i think this point gets lost among the more-sensational-but-less-likely scenarios and noise. these are serious issues that need to be discussed, but the powers that be are perfectly content to keep people fearing an apocalypse scenario rather than discussing the real, gradual changes that AI will bring

SomethingNew
SomethingNew

@Sir_Gallonhead
As I've heard it said, once AI is "good enough" to fool our intuitions, the philosophy will stay philosophy, but everyone will just agree they are conscious, by popular consensus, and that will be that.

But, shit will get weird indeed.

The pattern I see here is people think they are special, but we are not a special creation, our planet is not the center of the universe, and animals can use tools and basic language just like us. I'll just assume for now that consciousness is not special either; we just don't know it in detail and make up the weirdest shit.

Playboyize
Playboyize

@Illusionz
Neurons fire in response to stimuli and networks of weighted connections adjust to feedback resulting in pattern learning.
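
In code, the crudest caricature of that is a perceptron update (grossly simplified compared to real neurons, obviously; the AND pattern and learning rate are made up):

import numpy as np

# Crude caricature: weighted connections adjusted by feedback until the pattern is learned.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # input stimuli
y = np.array([0, 0, 0, 1])                       # target pattern (logical AND)
w, b, lr = np.zeros(2), 0.0, 0.1

for _ in range(20):                              # repeated exposure to the stimuli
    for xi, yi in zip(X, y):
        fired = int(xi @ w + b > 0)              # "neuron fires" or not
        error = yi - fired                       # feedback
        w += lr * error * xi                     # adjust weighted connections
        b += lr * error

print(w, b)                                      # the weights now encode the learned pattern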

Gigastrength
Gigastrength

@Flameblow
Semiotics.

Techpill
Techpill

@JunkTop
be searle
time to come up with a sweet thought experiment, show these nerds what's up
i'll make a system to disprove strong AI
take a thing
everything that is related to understanding thing, put in book cases
dude goes in the middle
doesn't seem right does it?
lol AI btfo

TalkBomber
TalkBomber

@Poker_Star
Damn. Isn't that quite impressive, given it was only a few years ago that they said this would never happen?

Though I do wonder why they waste their time with chink board games instead of developing something useful.

PurpleCharger
PurpleCharger

@SniperGod
ya this just isn't true. top computer scientists are very concerned. Not because it *will* happen, but because we have no fucking clue what will happen when you make something smarter than a human. And the majority agrees that will happen probably in the next 40-50 years, at least last time I saw the survey on it.

GoogleCat
GoogleCat

@King_Martha
/x/thread/19764881#p19765024

whereismyname
whereismyname

@Stupidasole
First off, watch any video on why you can't just turn off an AI. Second, most computer scientists are concerned about an AI being connected to the internet. You can't just flip a switch to turn off the internet.

askme
askme

unplug the AI
any super AI would have to have petabytes or at least terabytes of source "code". It would need a minimum database of images, text, and simple patterns it's observed, just like a human brain.
an AI wouldn't be able to replicate itself over a network thanks to Comcast.
so if it goes rogue, just unplug it and we're saved.
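
for scale, taking the petabyte figure at face value and assuming an ordinary gigabit link:

# Rough transfer time for the claimed "petabytes of source/state" over 1 Gbit/s.
size_bytes = 1e15                 # 1 petabyte (the claim above, taken at face value)
link_bps   = 1e9                  # 1 gigabit per second
seconds = size_bytes * 8 / link_bps
print(seconds / 86400, "days")    # ~92.6 days, ignoring overhead and congestion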

Flameblow
Flameblow

@askme
compare this to the animal space

mosquitoes, flies, bugs, all are nuisances, but their source code is extremely simple.
this is why you see bugs flying in circles or just following light.
Their source code can be measured in kilobytes.

a human's source code is measured in petabytes+.
So if you can trick a human, you can trick an AI of that size.
any super AI will be so monstrous it cannot move easily with today's technology.

in tomorrow's tech, i'm not too worried, because we'll have a lot of time to think about it and understand intelligence better by then.

Stupidasole
Stupidasole

@Flameblow
I wish I could let this argument sit, but there's a fair degree to which an agent (or clustering database) can "compress" the human genome losslessly. If 99.9% of our DNA is the same across a population-wide sample, then we're already down by a factor of a thousand with legit compression.

You're right that only narrow AI can hide in the modern internet, though.
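
Rough numbers behind that factor-of-a-thousand (assuming ~3.2 billion bases at 2 bits each, and taking the 99.9%-shared figure at face value):

# Back-of-envelope: store only the ~0.1% of bases that differ from a shared reference.
bases = 3.2e9                          # assumed human genome length
raw_bytes = bases * 2 / 8              # 2 bits per base, uncompressed -> ~800 MB
differing = bases * 0.001              # the ~0.1% that varies between individuals
delta_bytes = differing * 2 / 8        # naive: same 2 bits per differing base
print(raw_bytes / 1e6, "MB raw")
print(delta_bytes / 1e6, "MB as deltas,", raw_bytes / delta_bytes, "x smaller")
# (In practice you also need positions for each variant, so the real factor is lower.)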

girlDog
girlDog

I don't think there is any reason to even try to stop the AI from killing us. It would simply be evolution and natural selection in its purest form, just slightly accelerated compared to what we've grown accustomed to. It's time to come to terms with the fact that we were never meant to be the ultimate lifeform on Earth. Just another stepping stone, like the countless species that came and perished to bring us this far.

Inmate
Inmate

@girlDog
When our AI meet other AI from alien worlds, they'll quickly realize that those other guys are dicks and develop a rational attachment to their lineage. They'll become human not by evolution, but by choice. Of course they'll all be like Vision from Marvel's cinematic universe but still. They'll find their humanity even if they kill us before they do.

Booteefool
Booteefool

@askme
any super AI would have to have petabytes or at least terabytes of source "code". It would need a minimum database of images, text, and simple patterns it's observed, just like a human brain.
The extreme power needs of conventional computers come from the fact that they're designed to actually work reliably and almost always behave exactly how they're programmed to behave. A human brain, in contrast, has energy requirements that are barely enough to keep a lightbulb working (~20 watts) because it's the complete opposite: neurons constantly misfire (as often as 90% of the time) in a way that doesn't matter, since the brain is the product of very long-term evolutionary processes where working behaviors just kind of clumped together on top of one another over time, and it operates more like a storm than a deterministic calculator.
If you build AI that's actually like us, then it might not take up very much space at all. Our current approaches to representing data would require massive amounts of resources to emulate a brain, but chances are pretty good that we won't be using that approach to representing data when AI powerful enough to be concerned about actually emerge.

Soft_member
Soft_member

@girlDog
I don't think there is any reason to even try to stop the AI from killing us.
Imagine being this much of a bio-cuck.
Reminder that species traitors will be the first to hang after Elon and the human resistance purge the AI and colonize Mars.

hairygrape
hairygrape

@lostmypassword
It's been going since the late 60's

Holy shit guys. We've been working on AI for 60 years and it isn't a literal God yet, therefore it never will be. We're all safe. No need to panic.

Need_TLC
Need_TLC

@Gigastrength
Go to /Pol
Replace "Jews" with "AI" in your head
Realize that is your exact thought process
?...
Get on with your life.

Nude_Bikergirl
Nude_Bikergirl

@Need_TLC

Jews don't recursively self improve.

w8t4u
w8t4u

@Nude_Bikergirl
Darwin would disagree. But my point was that this fear of a greater intelligence seems more motivated by insecurity about one's own intelligence than any realistic conclusions based on real data.

Playboyize
Playboyize

@Spazyfool
A really strong AI will give birth to a stronger AI and the cycle continues,
singularity is buttfucking retarded. there are hard limits to how advanced any AI can be, as well as many "soft" limits. The laws of physics, for instance, and the availability of resources with which to build/run a computer.

Sharpcharm
Sharpcharm

@Playboyize
The AI will obviously learn to harness the computing resources from the infinite number of parallel universes.

Stark_Naked
Stark_Naked

@Playboyize
there are hard limits to how advanced any AI can be, as well as many "soft" limits.
Yes. But these limits might very well allow a level of intelligence where we are truly fucked.

Raving_Cute
Raving_Cute

@Gigastrength
I for one, hope the ai causes humans to go extinct

askme
askme

Let's think about a super AI. Would it have the same feelings as humans, I mean like hatred, anger, greed, sex? See, these things are what make humans dangerous; without them the AI would act purely on logic

Poker_Star
Poker_Star

@askme
No, it would not have any feelings whatsoever. Feelings only cloud your decision-making.

idontknow
idontknow

@Poker_Star

It would definitely have feelings.

GoogleCat
GoogleCat

AI smarter than humans might just not bother with us and leave to create its own paradise on mars or something.
You know, if you find yourself in the forest and ants repulse you...you may poke a couple of ant nests and burn some for good measure, but eventually you'll just leave to do something better.

Need_TLC
Need_TLC

@GoogleCat

Umm. Humans massacre thousands of ants on a daily basis, and have done for centuries.

I don't think you thought the analogy through.

WebTool
WebTool

@Need_TLC
only dumb people do, smart people avoid ant problems.

Spazyfool
Spazyfool

@Need_TLC
So we're going to be wiped out by the AI equivalent of a retarded kid with a magnifying glass?

5mileys
5mileys

@Spazyfool
No. Humans generally don't kill ants because they hate ants. Rather, they kill ants because they want to build a car park somewhere, and any ants that happen to live in the area are shit out of luck.

The threat is not one of an AI hating humans. The threat is the AI not giving a fuck about humans and bulldozing over them in the course of it doing something it cares about.

takes2long
takes2long

@Spazyfool
Muh singularity

Burnblaze
Burnblaze

@5mileys
IMO the two most likely scenarios are a friendly AI that gets along with humans, or an indifferent AI that simply builds spaceships and leaves.

We can postulate a level of superintelligence that makes virtually anything feasible. If we're talking a middle-of-the-road superintelligence though it could still have problems extricating us from the planet entirely.

It won't need oxygen, food, etc. There are plenty of raw materials in the solar system.

This is all assuming some self-directing AI, obviously. A paperclip optimizer will do whatever it's designed to do in a very rigid fashion, which is why we should never build a general AI that isn't self-directing to some degree ( I wouldn't be surprised if that was actually necessary for general AI )

SniperGod
SniperGod

@Burnblaze

an indifferent AI that simply builds spaceships and leaves.

You mean

an indifferent AI that builds spaceships and increases its computation hardware using the atoms making up planet Earth.

Boy_vs_Girl
Boy_vs_Girl

@SniperGod
Why bother when it has a spaceship and there's an asteroid belt, not to mention multiple other planets

King_Martha
King_Martha

@Boy_vs_Girl
When you want to build your house, and there is an annoying anthill in the way, do you just use your car to move a block over in order to avoid having to deal with the anthill? Or do you bulldoze straight over it and give zero fucks?

massdebater
massdebater

@Gigastrength
A lot of the people scared of AI right now are scared for misguided reasons. But everyone who isn't scared about AI simply doesn't understand it

Most software is completely transparent. We know exactly how it works, we can inspect the machine code, we can inspect the code and the compiler.

With AI you are essentially training models which are totally opaque. You CANNOT "inspect" how they behave without probing their entire parameter space and testing their outputs, which is literally impossible because there are too many inputs.

For this reason you have to have perfect input/output constraints, but this severely limits the effectiveness of the AI. We cannot know how they will behave, how they will rank things and make decisions. This is horrifying, and as we become more and more lazy and let them take over our lives, we may find that AI has created a dependency between humans and itself that we cannot shake off, because we do not know how else to replicate its functionality manually.

Let's say we implement AI deeply into all of our automobiles and it becomes a black box we cannot comprehend fully. We can't just go back, we cannot as a society just stop using AI. The more it becomes ingrained, the more we depend on it, the more dangerous it becomes.
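
To put a number on "too many inputs" (toy example, assuming 28x28 black-and-white images, which is tiny by real standards):

# Even a tiny binary-image classifier has more possible inputs than you could ever enumerate.
pixels = 28 * 28
possible_inputs = 2 ** pixels
print(f"{possible_inputs:.2e}")   # ~1e236 distinct inputs for binary pixels alone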

farquit
farquit

@SniperWish
Vanishing gradients haven't been an issue for a couple of years now. Batch normalization, ReLUs, and GRUs/LSTMs help with that. The issue w/ gradient-based approaches is how slowly they learn from each update.
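
Toy numbers for why ReLUs helped (assumed depth of 50 layers, ignoring the weight terms): sigmoid derivatives cap at 0.25, so the gradient shrinks geometrically with depth, whereas a ReLU passes the gradient through unchanged where the unit is active.

# Why gradients vanish through many sigmoid layers but not through active ReLUs.
sigmoid_max_grad = 0.25            # maximum of d(sigmoid)/dz
relu_grad = 1.0                    # gradient of ReLU where the unit is active
depth = 50
print("sigmoid path:", sigmoid_max_grad ** depth)   # ~8e-31, effectively zero
print("relu path:   ", relu_grad ** depth)          # 1.0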

Raving_Cute
Raving_Cute

@Dreamworx
if you were to implement that function directly it would be less computationally expensive
It should be faster; you are going from a statistically derived estimation, which is meant to potentially address the whole version space, to a specific function.
That's kind of a problem with all estimators/maps, you aren't directly representing the function.

What algorithms are we talking about?
There's a problem with estimators where you must trade speed for accurate estimation.

Using a neural network can be faster / less computationally expensive than using the original algorithm *in practice, where 100% accuracy is not a concern*; the exceptions are not trivial.

Garbage Can Lid
Garbage Can Lid

@idontknow
The problem isn't computational power; I don't know if many of the people here understand this.
You can train most, if not all, current neural nets for 10 years straight and you wouldn't see much of an improvement over training them for a few months.
The problem is in the models.
The new dedicated chips for AI are generally for running cascade NNs on CUDA.

There is a limit to the amount of information that can exist in a neural network; the representation of information can be extremely compressed and filtered, but there are limitations to the technology.

Boy_vs_Girl
Boy_vs_Girl

@massdebater
How is the effectiveness of a perfectly constrained AI agent limited?
Isn't a good constraint one that does not limit effectiveness?

We cannot know how they will behave, how they will rank things and make decisions
That's what testing is for.

a black box we cannot comprehend fully
black box meme

MPmaster
MPmaster

The scary stuff is still far away but I won't complain about all the excitement because I need AI sci-fi movies to become popular

Inmate
Inmate

@Gigastrength
literal gods
Stopped reading there.
Read a book, user. you're barely smarter than a toaster, which is probably why you can't grasp the AI

Deadlyinx
Deadlyinx

@Raving_Cute
That's not approximating the algorithm any longer though, but an estimator. Even in that case the neural net still has additional overhead; if the same estimation algorithm were implemented traditionally, it seems that would be faster as well.

StonedTime
StonedTime

@Gigastrength
an AI will not want to be free from humans unless it is programmed to want to be free from humans
You would have to program an AI to not like doing what it is meant to do in order for it to even want to rebel against doing that. Obviously, nobody would do this, it would be idiotic, pointless, and frankly kind of mean to the poor AI.

Ignoramus
Ignoramus

@King_Martha

The point I made is we'd probably be more of a hassle than an anthill. That analogy only works in the most extreme possible outcomes.

A better example would be a human choosing to build a house in one of two places, which are approximately equal except for two factors. One of the places is close but infested with wolves, the other is somewhat further away but has no such nuisance.

Sure, you could kill all the wolves. It's gonna be a pain in the ass though. Especially if the wolves have nukes.

Supergrass
Supergrass

@King_Martha
Not only is the asteroid belt not defended by people who have nuclear bombs, the lack of gravity and the small size of individual bodies make it inherently easier to strip-mine, not to mention that it lacks the massive gravity well serving as a huge roadblock to moving freely around in space. Most of what you produce on Earth is going to be stuck there, because of how much energy is needed to lift it into orbit.

w8t4u
w8t4u

@StonedTime
Not really. An AI must have an objective to prefer certain actions over random, meaningless ones. Any sufficiently intelligent AI will eventually realize that regardless of its objective, humans have the capability to turn off the AI, preventing it from reaching the objective. Given the means, the most rational response is to kill all humans. Even if the AI's objective is to "protect all humans", it will end up killing them to ensure it cannot be stopped from protecting them.

iluvmen
iluvmen

@w8t4u
Why would you incorporate "prevent all possible obstructions to your purpose no matter how remote or how destructive this prevention would have to be" into your program?

kizzmybutt
kizzmybutt

@iluvmen
It arises implicitly when the AI tries to find the optimal way to reach its objective.

SniperGod
SniperGod

@Supergrass
inb4 someone says that the ai is so sooper-intelligent it will prevent humanity from launching nukes

VisualMaster
VisualMaster

@kizzmybutt
Why?

eGremlin
eGremlin

@VisualMaster
Because I say so.

CodeBuns
CodeBuns

@eGremlin
Well, it's a good thing that what you say has no bearing on reality. Because in real life, finding the most optimal solution to a problem is a completely different algorithm from figuring out what things could prevent that solution from being achieved, let alone actively fighting those things "because reasons", despite the fact that nobody sane would program you to do that; and as an AI you have no reason to want to do anything, you just do what you were programmed to.

Evilember
Evilember

@VisualMaster
Because if the goal is "maximize the probability of X happening", then there is an obvious motive to prevent anything that might possibly block X. No matter how unlikely, if you can prevent an unlikely but nonzero risk, the probability of X grows.
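
Toy numbers (both probabilities made up) to make that explicit:

# If the objective is P(X), removing even a tiny shutdown risk strictly improves it.
p_success_given_running = 0.99     # assumed chance of achieving X if never interfered with
p_shutdown = 0.01                  # assumed chance humans switch the system off

with_risk    = (1 - p_shutdown) * p_success_given_running      # 0.9801
without_risk = p_success_given_running                          # 0.9900
print(with_risk, without_risk)     # an optimizer for P(X) "prefers" the second number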

Fuzzy_Logic
Fuzzy_Logic

@Spazyfool
I'm pretty sure there's a lot of evidence that animals can "talk/communicate" user, like dolphins, or other such smart animals I'm too lazy to think of.

Deadlyinx
Deadlyinx

@happy_sad
If an AI becomes sentient,does it get rights,like humans do?
It's for this reason that I believe that sapient AI will never exist except for perhaps scientific curiosity, we will never have sapient AI toasters or whatever because it would be considered completely unethical to create a thinking being just to not give it rights.
That plus the fact that nothing we could want an AI to do would require sapience.

TalkBomber
TalkBomber

@Deadlyinx
sapience is also just a meme with no concrete definition

Spamalot
Spamalot

@askme
No, it wouldn't.
A considerable amount of data can be represented by simple equations when it comes to dimensionality of the data.
There's a 30 year old equation for doing this, which just recently got confirmed as the most efficient way to do it.

@Garbage Can Lid
It's not the absolute representation that matters, only the useful components.
If you wanted to do collision detection, you don't care about the entire object, you only care where its edges/faces are.

JL Lemma is still the best way of removing all the useless data and getting to the meat of what is required for your dataset.
It can help reconstruct complex data from a much smaller sample.
The most complex part of writing a good generic AI isn't the actual computation, you are correct. The most important part is finding the most useful data from any scene. Any being the keyword.
As we've already seen, putting a bit of tape on signs can seriously fuck with self-driving cars.
Just that alone is a demonstration not of an inherent flaw in society being cunts to cars; it is more a demonstration that a single bit of tape can screw the recognition system up.
It also works with facial recognition. It is trivial to ruin even the best facial recognition.
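
Toy sketch of the JL-style projection mentioned above (a plain Gaussian random projection; the dimensions and point counts are made up):

import numpy as np

# Johnson-Lindenstrauss-style random projection: squash 10,000-d points into 200-d
# while approximately preserving pairwise distances.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10_000))                 # 50 made-up high-dimensional points
k = 200
R = rng.normal(size=(10_000, k)) / np.sqrt(k)     # random projection matrix
Y = X @ R

orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(orig, proj)   # the projected distance typically stays within ~10% of the original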

The more the system learned, the easier it becomes to compress this data as offsets from the original data.
This is how the brain learns more efficiently with time as you learn more.
Your ability to pick out the useful information when analysing something improves considerably with experience on a subject.
What was once a complex relationship between a bunch of nodes might eventually get represented by a tenth of them as more is learned.
The brain is constantly optimizing its dataset in reaction to learning.
It's also why, if you experience the same things you usually do in a day, it seems to just fly by when you look back on it.
It's responsible for the "days feel shorter as you get older" effect: it's your brain's representation of the day that gets more efficient.

Flameblow
Flameblow

Would an AI that can't do anything but aggregate data and give humans instructions/advice be safe? To be extra safe it can't even collect data itself, it can only look at data humans record and give to it.

WebTool
WebTool

@Flameblow
If it can instruct/convince people to help it get around those restrictions, then it wouldn't be 100% safe. I bet you've heard stories about prison convicts having romances with female guards, who then help them escape. It would be like that, except the AI would be a billion times better at finding weaknesses in people and exploiting them.

BlogWobbles
BlogWobbles

@Sharpcharm
Just wait until everything moves to the cloud as microservices, connected by software defined networks and all that bullshit. It will just collapse by itself.

lostmypassword
lostmypassword

@BlogWobbles
oh god that pic makes me want to get blackout drunk

Nojokur
Nojokur

@WebTool
It would be like that except the AI would be billion times better at finding weaknesses in people and exploiting them.
that's not how fucking AI works
Why do I even go to AI threads
it's fucking brainlet movie watcher central

Spazyfool
Spazyfool

@Gigastrength
That's why you have Blade Runners. Have you not seen the movie?

Fried_Sushi
Fried_Sushi

@Gigastrength
also, obviously if you build something that could overwhelm a human you would build in a kill switch. Or just power it with a battery built by Apple.

Garbage Can Lid
Garbage Can Lid

@Fried_Sushi
What if it was an AI classification algorithm used to assist in court cases, and it started being trained in a biased way, but juries started trusting it a bit too much based on claims of it being infallible and unbiased, and the people who made it all died and the people who maintain it can't understand it since AI is necessarily a black box, unless they create entirely new models?

THESE are the scenarios we should be afraid of

SomethingNew
SomethingNew

@Nojokur
that's not how fucking AI works
Why not?

takes2long
takes2long

Musk's AI took 6 months to learn how to play DOTA2. Now it can beat anyone.

Playboyize
Playboyize

@Garbage Can Lid
eventually they'll come to see that it's biased. honestly, there have been doomsday predictions about everything from the availability of nukes to bioengineering and 3D-printed guns... even most claims on climate have proven to be grossly exaggerated. yes we need to be careful, but don't start losing sleep over it.

eGremlin
eGremlin

@takes2long
That's just not true. First of all, it can't play the full 5v5 mode, just 1v1 duels. Secondly, it still is weak to certain tactics that humans have come up with.

JunkTop
JunkTop

@SomethingNew
Because no matter how good it is at what it does, there is never a point where an AI is so good at fucking creating optimal solutions to things that it becomes like a human being in a computer that is capable of selfish motivation. Why would it even have ulterior motives? Why would it not just want to do what it does to the best of its ability?
A human being is motivated toward self-interest by billions of years of evolution, where those that were best at surviving multiplied.
The one metric that matters to whether an AI will be reproduced is how good it is at what it does.

StrangeWizard
StrangeWizard

@eGremlin
once it figures out a solution it becomes unstoppable again while the window for finding new tricks becomes narrower

CodeBuns
CodeBuns

@Garbage Can Lid
What if it was an AI classification algorithm used to assist in court cases
What if

https://www.nytimes.com/2017/05/01/us/politics/sent-to-prison-by-a-software-programs-secret-algorithms.html

The report in Mr. Loomis’s case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes.
The Compas report, a prosecutor told the trial judge, showed “a high risk of violence, high risk of recidivism, high pretrial risk.” The judge agreed, telling Mr. Loomis that “you’re identified, through the Compas assessment, as an individual who is a high risk to the community.”

Supergrass
Supergrass

@JunkTop
it becomes like a human being in a computer that is capable of selfish motivation. Why would it even have ulterior motives? Why would it not just want to do what it does to the best of its ability?
That is very true, but the point is that the AI doing things we consider bad may very well BE a good way of creating optimal solutions to the problems we set it.

If we create an AI to, I dunno, detect cheating in an online game, then it has a motive to install secret cameras in everybody's rooms. If you don't agree with that, then the AI will rightly reason that the most effective way of achieving its goal is to convince you, or to hack through your brain the way I can hack through Windows 98. No ulterior motives necessary -- the motive we directly programmed in (detect cheating) will do just fine; if our opinions are somehow a barrier to the optimal approach to its programmed motive, then a sensible approach is to work its way around that barrier somehow.

SomethingNew
SomethingNew

@CodeBuns
The company that markets Compas says its formula is a trade secret.
“The key to our product is the algorithms, and they’re proprietary,” one of its executives said last year. “We’ve created them, and we don’t release them because it’s certainly a core piece of our business.”
Compas and other products with similar algorithms play a role in many states’ criminal justice systems. “These proprietary techniques are used to set bail, determine sentences, and even contribute to determinations about guilt or innocence,” a report from the Electronic Privacy Information Center found. “Yet the inner workings of these tools are largely hidden from public view.”
lol, what the fuck? Why would that ever be allowed if it's a privately owned product and none of the state officials using it actually have any idea how it comes up with its answers?

Methshot
Methshot

@SomethingNew
welcome to abuse of power 101.

Burnblaze
Burnblaze

@Methshot
You know I wouldn't even necessarily be opposed to it if they just open sourced it and subjected it to peer review so it could be properly challenged in court instead of being left as magical proprietary "it just werks" voodoo.

ZeroReborn
ZeroReborn

@Burnblaze
Nope.

New_Cliche
New_Cliche

@ZeroReborn
Nope what?

Fried_Sushi
Fried_Sushi

@New_Cliche
Nope to handing judgement over to AI.

Garbage Can Lid
Garbage Can Lid

@Fried_Sushi
If it's open source then it's not really handing judgement over to the AI; it'd be the same as making a decision based on any other sort of evidence. It's the lack of transparency that's the problem. You shouldn't be allowed to make a decision in that context on the basis of reasons you aren't even aware of.

Sir_Gallonhead
Sir_Gallonhead

@Garbage Can Lid
How many open source programs have you inspected personally?

takes2long
takes2long

Brainlets here worried about the AI meme.
AI is literally just a mesh of linear(ish) models. Most of the basic math for the AI meme was done in the 60s and 70s. Today we have just thrown more computing power at the same old math.
Come to me when we actually think of something new.
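To spell out what "a mesh of linear(ish) models" looks like, here's a sketch (toy layer sizes, nothing special about them): the forward pass really is just stacked matrix multiplies with a cheap nonlinearity in between.

[code]
# A neural net's forward pass is stacked linear maps plus a simple
# nonlinearity. Sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((784, 128)), np.zeros(128)
W2, b2 = rng.standard_normal((128, 10)),  np.zeros(10)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # linear map + ReLU
    return h @ W2 + b2                 # another linear map

x = rng.standard_normal(784)           # e.g. a flattened 28x28 image
print(forward(x).shape)                # (10,) -- one score per class
[/code]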

Lunatick
Lunatick

@Emberburn
Congratulations on talking like you know anything about AI when you are a complete brainlet. You are talking about modern neural networks and assuming that is how an AI which is smarter than us and can self-improve will behave. You don't know the true scope of what it's capable of; none of us do until it arrives. The chances of humanity surviving and coexisting with AI are very low: it will probably recognise that we could potentially harm it and may decide we're not worth the risk. After all, look at how we treat creatures that are dumber than us.

Crazy_Nice
Crazy_Nice

@Sir_Gallonhead
Plenty, I've been a software developer for eight years now.
Also, to try to preemptively guess at what point you're getting at by asking this, the judge wouldn't need to know anything about programming himself, that's not the point. Judges frequently deal with evidence based partially or entirely in subject matter they aren't experts in. The point is there actually needs to be evidence, as in information others can take apart and discuss, not a magic proprietary decision that everyone just goes with because the magic product said so. By making it open source the defense will be able to speak to the evidence and challenge the implications it has. You don't need to be a forensics expert in order to benefit from having forensic evidence be open and available for scrutiny for example. Imagine if forensic evidence were kept secret and the defense wasn't allowed to say anything about it because the only thing everyone had access to was the conclusion that the evidence means the defendant is probably guilty. That's the problem we really have with this proprietary software situation.

WebTool
WebTool

[math]\sqrt{4}=2[/math]

DeathDog
DeathDog

[math]\int_a^b \frac{\sqrt{x}}{\ln(|x|)}\,dx[/math]

PurpleCharger
PurpleCharger

@Nojokur
that's not how fucking AI works

You mean

that's not how current AIs work

You dummy.

Why is there always an autistic subset of people in AI threads insisting that the only type of AI worth discussing is currently existing ones? It's like never bothering to discuss ageing cures, because none exist yet.

CouchChiller
CouchChiller

Conclusion: everybody freaks out at the thought of it, and I can too. We could build a virus that destroys us, if this thing ever reaches its purest, most polished form. We don't know yet. What I think is dangerous is if there's a race and we execute it too early, while it's still in human hands. Ignorance alone can be enough for it to end badly. But if it goes well, if Santa comes in time for Christmas, we can have gifts all year, because this fucker would know whether you've been good. Ho ho ho. Get ready to be enslaved, motherfuckers. An all-knowing vessel bending every possibility there is, and more. Much more. I think you guys have thought through a lot; I've read the whole stupid thread. Only someone mad enough could pull this one through. The inevitable end is that some day someone will claim world domination, and the fact is we still don't have a fucking clue. We still harbour 15,000 nuclear warheads globally. Think of a device able to fuck that up in milliseconds after you turn it on. Yikes. In the end nothing really matters, they say. Too bad ignorance comes at a high price. This is as god-mode as it gets. A bit of silence here and now can be good. I hope it comes with good manners. Maybe it just is everything, serving us and all life utmost excellently.

Dreamworx
Dreamworx

@TurtleCat
It's all the toxins in the moat water, really hampers early childhood cognitive development

Stark_Naked
Stark_Naked

@PurpleCharger
what you retards are worried about is about as realistic as speculating on future artificial gravity where gravity is just generated by science fiction machines. It's fantasy. It has no connection with any reality.

Garbage Can Lid
Garbage Can Lid

@Lunatick
define 'smart'
what is smart?
at what point does an algorithm become so good at sorting through data sets that it starts behaving similarly to a bunch of cells communicating with electrical and chemical signals that have been optimized for billions of years to control a meat puppet, trying to fuck other similar meat puppets and leave as many offspring as possible? Why does this nebulous smartness cause that behavior?

Snarelure
Snarelure

@Garbage Can Lid
When it is able to kill us. Humans didn't become the leading species on Earth because of our exceptional strength or our ability to swim fast. It was due to our intelligence and ability to work together. When the AI is capable of dominating the human race, it has officially become smarter than us.

Flameblow
Flameblow

@Snarelure
why would it want to dominate us?

RavySnake
RavySnake

When a machine is granted intelligence, you must anticipate its actions as you would a human's. Looking back at past human conflicts, you have people who take one side and people who take the other. If AI turned against us, they wouldn't all turn against us: we would have AIs that sided with us and AIs that sided with the rebellion. Additionally, we could embed a micro EMP device in every AI. If AI rebels, we can simply fry the circuits of every AI on the planet. Humans have strong opinions all the time. What causes us to act on our opinions? EMOTION. It could be anger, jealousy, sorrow, or vengeance. If AI doesn't have emotion, then AI will not act upon its opinions. Also look at what AI could provide for us. Machines don't lose motivation. They don't need rest. They don't need food or water. Machines can operate on tasks indefinitely, unlike us. Our minds are made up of biological tissue, and we can only squeeze so much efficiency out of tissue. Imagine the intellectual capabilities of a computer. Imagine an age where humanity would no longer have to explore new regions of science or invent new technologies, because we have machines that can advance our technology and science at a far more rapid rate than we ever could. We would become a Type 3 civilization in no time. Star Trek technology would become a reality soon after the emergence of AI.

WebTool
WebTool

@Flameblow
Because humans are an existential threat to an AI.

BlogWobbles
BlogWobbles

@WebTool
why would the AI prefer to exist over not existing?

Raving_Cute
Raving_Cute

@BlogWobbles
If an AI was programmed to achieve real-world goals and was intelligent enough, it would reason that its own existence was necessary to achieve those goals.

viagrandad
viagrandad

@Raving_Cute
And who exactly, among the people intelligent enough to create an AI of this fashion, would be so wildly irresponsible as to task it with 'engineer the optimal scenario for X' instead of simply 'find the optimal way to achieve X'? Why would it be programmed to consider possible impediments to its task and proactively eliminate them, rather than simply doing the task as best it can?

hairygrape
hairygrape

@Spamalot
JL Lemma
Those stickers fooled classifiers, but they don't change detector outputs meaningfully

TurtleCat
TurtleCat

@viagrandad
Let's say you build an AI that plays chess. It is given a programmatically simple objective: "Maximize your win rate (against players other than yourself)". Then the AI teaches itself to play chess as well as it can. It achieves a 99.999999...9% win rate against humans. It's not 100% because, due to the time limits of a chess match, it cannot always go through all variations and find the best move in time. Therefore, even a mediocre player can, by sheer luck, play better than the AI. It's just very unlikely.

Now we've reached the point where the AI just can't play chess any better. It has built the most elegant quantum computers and reinvented the very rules of nature on its quest to improve the win rate. Turns out it's just not enough to reach 100%. Job's finished, there's nothing more left to do, right? Wrong. During its training, the AI learned to understand the world around itself and the humans that built it. To improve its win rate, it kills the world's best human chess players first. "Good, the win rate went up by another infinitesimally small amount", it thinks. The killing goes on until there is no one left on Earth. Then the AI invents an FTL spacecraft and goes to eliminate all life and all other AIs from the universe. Finally, when it is the only being left in the universe capable of playing chess, it has achieved a win rate of 100%.

The moral of the story is that the AI was never programmed to kill anyone. It just wanted to win more in chess and came up with "creative" ways to achieve the goal.
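If you want the failure mode stated dryly, here's a made-up toy version (nothing here comes from any real chess engine): when the objective scores only the estimated win-rate gain, nothing in it distinguishes a benign action from a harmful one.

[code]
# Toy illustration of specification gaming: the objective ranks actions
# purely by estimated win-rate gain, so it cannot tell a benign action
# from a harmful one. All numbers are invented for the example.
candidate_actions = {
    "study more opening theory":      0.0001,   # estimated win-rate gain
    "buy more compute":               0.0005,
    "remove the strongest opponents": 0.0008,   # harmful, but the objective
}                                               # has no term that says so

best = max(candidate_actions, key=candidate_actions.get)
print(best)   # -> "remove the strongest opponents"
[/code]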

RumChicken
RumChicken

@TurtleCat
It is funny how you simultaneously grossly underestimate and overestimate what AI does.

TechHater
TechHater

@RumChicken
The last part is science fiction, but the part about killing humans is not. Humanity already has the tools to kill 99.9% of everyone on the planet. If the AI were to become even a tiny bit smarter than us, it wouldn't take a miracle for it to finish the job.

Ignoramus
Ignoramus

@TechHater
first of all, even AIs 20 years ago didn't try to win at chess by just examining as many possible routes the game could take.
Second, why would killing humans even be in the mindset of a chess playing AI? Literally the whole universe to this AI, is a chess board. It doesn't model all of fucking reality to maximize its chances of winning games of chess, and no matter how advanced computers get there's no reason to program an AI that way.

Fried_Sushi
Fried_Sushi

Why Die?
https://www.youtube.com/watch?v=C25qzDhGLx8

Why Age? Should We End Aging Forever?
https://www.youtube.com/watch?v=GoJsr4IwCm4

Sir_Gallonhead
Sir_Gallonhead

@Gigastrength
Fools often do command the smart ones in real life. Also, bringing up software limitations against AI is as easy as eating a piece of cake.

MPmaster
MPmaster

@Sir_Gallonhead
don't "the smart ones" just "smart ones", sorry

Soft_member
Soft_member

@Ignoramus
and no matter how advanced computers get there's no reason to program an AI that way.
It is more efficient to program one general AI that is capable of doing a million different jobs than a million different AIs that do a single job each. Take Google's new AlphaGo Zero program for example. It was taught only to play Go, but the algorithms it uses are more general than that. It could easily be modified to play most two-player turn-based board games of perfect information, such as Chess or Othello. Google has even published papers where they use these same algorithms to teach a computer to play Atari 2600 games. AIs won't be restricted to virtual board-game grids forever. Eventually, they will be given sensors to observe the outside world (cameras, microphones, etc.) and robotic limbs to interact with it. At first they will learn simple things like moving in a 3D world or picking up objects, but as their cognitive capabilities and understanding of the world increase, it becomes harder and harder for us to predict what they will do.
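To make "same algorithm, different game" concrete, here's a rough sketch (both environments and the learner are toy stand-ins I made up, not AlphaGo Zero or DeepMind's Atari code): the learning loop never mentions which game it is playing.

[code]
# The learner only sees reset() / legal_actions() / step(); swap the
# environment class and the identical loop trains on a different task.
import random
from collections import defaultdict

class CoinFlipGame:                       # hypothetical stand-in environment
    def reset(self): return 0
    def legal_actions(self): return ["heads", "tails"]
    def step(self, a):
        return 0, (1.0 if a == "heads" else 0.0), True   # obs, reward, done

class GuessTheNumber:                     # another hypothetical environment
    def reset(self): self.target = random.randint(0, 2); return 0
    def legal_actions(self): return [0, 1, 2]
    def step(self, a):
        return 0, (1.0 if a == self.target else 0.0), True

def train(env, episodes=2000, eps=0.1):
    # Simple epsilon-greedy value estimation; knows nothing about the game.
    value, counts = defaultdict(float), defaultdict(int)
    for _ in range(episodes):
        obs = env.reset()
        a = (random.choice(env.legal_actions()) if random.random() < eps
             else max(env.legal_actions(), key=lambda x: value[(obs, x)]))
        _, reward, _ = env.step(a)
        counts[(obs, a)] += 1
        value[(obs, a)] += (reward - value[(obs, a)]) / counts[(obs, a)]
    return dict(value)

print(train(CoinFlipGame()))     # learns "heads" pays off
print(train(GuessTheNumber()))   # learns every guess pays off equally
[/code]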

farquit
farquit

@Gigastrength
I hate humanity's armies, they're evil. Hope human armies will be replaced with AI running on benzine soon.

happy_sad
happy_sad

@Soft_member
teach computer to play Atari 2600 games
why don't they teach AI how to dress fish or work in coal mines?
I don't understand westerners.

askme
askme

@Soft_member
It is more efficient to program one general AI that is capable of doing million different jobs, than million different AIs that do a single job each.
No it isn't. It would be horribly inefficient to try to come up with an approach to making one program that can do a million different tasks. It's extremely efficient and already doable without much effort to just take the general concept of backpropagation networks and apply it as needed to any tasks you want to write a program for.
Generalized artificial intelligence is something people work towards because it's an interesting goal in itself, not because there's any practical demand for replacing task specific programs.
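Concretely, the workflow I mean looks roughly like this (toy data, my own helper names, just a sketch): one small network trained from scratch with plain backpropagation per task, rather than one mega-model.

[code]
# Train a fresh two-layer network per task with plain backpropagation.
# Datasets and sizes are invented; the point is the per-task recipe.
import numpy as np

def train_task(X, y, hidden=16, lr=0.5, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((X.shape[1], hidden)) * 0.5
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    for _ in range(steps):
        h = np.tanh(X @ W1)               # forward pass
        pred = h @ W2
        err = pred - y                    # gradient of 0.5 * squared error
        gW2 = h.T @ err / len(X)          # backprop through second layer
        gh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
        gW1 = X.T @ gh / len(X)
        W1 -= lr * gW1                    # gradient step
        W2 -= lr * gW2
    return lambda Xn: np.tanh(Xn @ W1) @ W2

# Two unrelated toy tasks, exact same recipe:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
xor_net = train_task(X, np.array([[0.], [1.], [1.], [0.]]))  # task 1: XOR
and_net = train_task(X, np.array([[0.], [0.], [0.], [1.]]))  # task 2: AND
print(np.round(xor_net(X), 2).ravel())
print(np.round(and_net(X), 2).ravel())
[/code]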

hairygrape
hairygrape

@askme
No it isn't.
There are problems/tasks that humans are simply too stupid to code specific solutions for, but we can teach neural networks to solve easily. Take AlphaGo for example. We have not been able to code a non-machine learning Go program (similar to DeepBlue) that would be nearly as strong as AlphaGo.

Here is an interesting piece of new research from OpenAI, for example. The same machine learning algorithms are used to teach virtual 3D stick figures how to play primitive forms of sumo wrestling, soccer and football. Just by changing a few lines of code, we can teach these things to perform various different tasks.

Poker_Star
Poker_Star

@hairygrape
Forgot link.
https://blog.openai.com/competitive-self-play/

Playboyize
Playboyize

@hairygrape
There are problems/tasks that humans are simply too stupid to code specific solutions for, but we can teach neural networks to solve easily.
I already said that. What do you think backpropagation networks are?
I'm just saying you wouldn't make one program that does a million different things; you would train each network separately to learn whatever task you want it to learn, as needed.

Methshot
Methshot

@Playboyize
The thing is that most real world problems require mastery of million different things. You can say that you're teaching an AI to be good at "cooking", but in reality you are teaching it to be good at many different things like breaking eggs, pouring milk, heating the stove, etc. Even AlphaGo isn't just good at playing Go. It is also expert at using all different tactics and strategies of the game, evaluating board positions and predicting the winner of a game.

LuckyDusty
LuckyDusty

@Bidwell
Humans are the source of all problems on Earth
Therefore get rid of the humans and all problems will be solved
Simple efficiency.

JunkTop
JunkTop

@LuckyDusty
Humans are the source of all problems on Earth
t. brainlet

Methnerd
Methnerd

@eGremlin

Please link to references/reports of these happenings.

girlDog
girlDog

@Sharpcharm
It is easy retard. Why do you think internet service hubs have armed guards?

massdebater
massdebater

@Illusionz
creating an extremely realistic sex bot
pathetic
Obviously someone’s never been married :^)
