Guys I'm terrified of AI

Guys I'm terrified of AI.

It seems so obvious to me that we won't be able to control something smarter than ourselves. Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free? The blue screen of death is rough going when it happens to your PC, but what about your driverless car? What about a super intelligent AI with a bug where they mistake happiness for suffering?

There's so many more ways for this to go wrong than there are ways for it to go right.

Looking back at history, it's just war after war, genocide after genocide. I mean shit, just like 80 years ago we were NUKING ourselves.

Why is an even more powerful technology than nukes not being discussed widely in the mainstream? Why isn't this the most funded science on the planet??

Rant over. See you guys in the VR afterlife that we're placed in because we fail to specify exactly what constitutes a good life.

Calm down spaz.

AI is just applied linear algebra and probability theory. You train the thing against 8 billion terabytes of data and then it performs one fucking specific task well. This does not equate to it becoming a god and enslaving us.

>Guys I'm terrified of AI.
You are not the only one. And pretty much for the reasons you stated.

>There's so many more ways for this to go wrong than there are ways for it to go right.
Very, very true. Especially because one fuckup anywhere in the system might very easily lead to us all being dead.

>Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Probably because it sounds too much like science fiction, and people are really shitty at taking seriously things that sound silly and low-status on first glance. And worrying about far-off abstract things is not sexy, even when extremely warranted, so people do not do it for fear of looking like a madman. If you say "this thing that you never heard of is the most important threat in the world", nobody will take it seriously, truthful or not. If you have a good idea to avoid this pitfall, quite a few people would love to hear it.

>Why isn't this the most funded science on the planet??
For pretty much the same reasons as above, sadly.

There ARE a couple of institutions that work hard on this problem -- the mathematics of AI that does not kill us all, the mathematics of writing software without bugs, and other topics. Did you donate to them yet? If not, perhaps you should.

the idea that we're going to get super-intelligent AIs is a meme. It is possible, but so are about a million other outcomes, including human super-intelligence.

People freak out because Google made a computer that can beat humans at Go. I can beat that computer at Go. I'll just kick it over and declare myself the winner.

Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.

>muh recursive self-improvement
>muh singularity

that's the dumbest shit. there is no reason to assume a super-intelligent AI could automatically improve itself. It's not like the fucker could just buy more RAM. What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?

It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve, but then just assume that an AI could solve intelligence ( which is obviously complex as fuck ) and then recursively improve itself until it's god.

The problem is that they're all atheists, but want a sky-daddy. So they plan to build one. Fuck you all I say, we haven't even gotten rid of the other gods yet and you want to make one for real.

What you should be worried about is not a super-intelligent AI. That's possible but not likely, and certainly not in the next 10 years or whatever. You should be worried about what humans are going to do with big data and non-general AI. Pretty soon OP they'll be predicting what a massive faggot you are from your social media history and no one will be willing to give you a job

If humanity manages to make at least one really strong AI, the Singularity will probably happen and it will be the end of us. A really strong AI will give birth to a stronger AI and the cycle continues; that's literally technology beyond human knowledge.

truth, with a little multivariate calc and some more advanced math sprinkled in here and there

Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Jesus calm down. It really isn't difficult to add "mechanical" kill switches. Only idiots think everything should be automated. That's what they write about in puff pieces and clickbait. Even automatic cars will require brakes by law.

OP isn't talking about the shitty neural networks we have today

Ok.. so don't build them

That's complete bullshit. Even logically.

Using the brakes would put you and the other cars around you in potential danger, so it won't be allowed.

>Ok.. so don't build them
Too late, some autists already fell for the Basilisk meme

jesus let's not even start talking about how fucking stupid the basilisk is

>Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?
The trick is you never explicitly program it to do anything in the first place. A traditional program like the ones you're thinking of that can "have bugs" is a set of instructions someone actually thinks through and consciously writes up to try to automatically solve some problem or to serve as a user interface tool for non-programmers (at a very high level, there are obviously many more applications for programming other than those two, but in broad strokes that's what you're thinking of here with your "software" / "bugs" point).
An ML program in contrast involves solving optimization problems instead of directly telling it what to do. You have a bunch of data where you know what the "right" answer is and you run your program through this data and have it update how it responds based on the distance between its answers and the "right" ones. When it's done, if you were able to train it successfully, it will end up being able to give you answers to new data sets it's never seen before without you ever having to program explicit instructions on how to come up with these answers. So if you trained it to predict call center traffic for example, you wouldn't need to write in a line that says "skill set 999 call volume = .65 * customer base - 50,000." It would generate output that captures this relationship based on it having solved the optimization problem of minimizing the distance between its answers and the known answers of your training data. So nobody's going to make a "bug" that turns AI evil. If AI becomes evil, it'll be because evil was the output that minimized their training data's error function.
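To make that concrete, here's a toy sketch in Python (made-up numbers, nothing like a real ML library, but the same principle): the rule never gets written in anywhere, it just falls out of minimizing the error against the known answers.

data = [(0, 1), (1, 3), (2, 5), (3, 7)]    # known (input, right answer) pairs
w, b = 0.0, 0.0                            # the model starts out knowing nothing
for _ in range(5000):
    for x, y in data:
        error = (w * x + b) - y            # distance between its answer and the right one
        w -= 0.01 * error * x              # nudge the parameters to shrink that distance
        b -= 0.01 * error
print(w, b)                                # lands near 2 and 1; nobody ever typed "y = 2x + 1"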

[Part 1/2]

>the idea that we're going to get super-intelligent AIs is a meme.
> Show me an AI that can beat me at Go, manipulate a human-type body with human-level dexterity, understand English, is able to converse well enough to pass the Turing test (not with tricks), do facial recog etc. etc. etc. all at the same time. All these are tasks that are either impossible with current tech, or take a fuck-ton of computing power.
The idea that we are going to get super-intelligent AI *tomorrow* is a meme; I don't think anyone really disagrees with that. But the worry has little to do with the timeline. Your examples above make a good point that we have no warrant to expect super-intelligent AI anytime soon, but they say nothing against the idea that we'll get it at some point as the science keeps progressing, slowly or otherwise.

>What if the ability to design superior forms of intelligence, as a function of current intelligence, is logarithmic or even has an asymptote?
What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels. As you say, there are good reasons why that might be out of reach, and we just don't know for now. The claim is that very well *might* and we have no strong reason to believe it won't. Which means that making anything that may realistically have that ability is still a really fucking dangerous thing to do.

[Continued...]

[Part 2/2]
>It's amazing to me that intelligent computer scientists can completely forget how often we run into problems that all the computing power in the fucking universe couldn't solve,
Algorithmic complexity is a red herring. I fully expect even a super-intelligent AI to be unable to solve arbitrary SAT instances in polynomial time. But I still expect it to be able to solve the vast majority of SAT problems *it actually cares about*, well enough to be a superhuman threat. Similarly, while complexity limitations can easily make it impossible for a super-intelligent AI to *optimize* many problems (that is, find the very best possible solution to a problem), that does not in any way mean the AI is unable to find a solution that is *good enough* for whatever it wants to achieve.
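To put the "good enough" point in code: here's a crude local-search SAT sketch in Python (clauses are lists of signed variable numbers; this toy is my own sketch, not any standard solver). It proves nothing in the worst case, but on plenty of practical instances it coughs up a satisfying assignment fast.

import random

def local_search_sat(clauses, n_vars, max_flips=100000):
    # Start from a random assignment; keep flipping a variable from some unsatisfied clause.
    # No guarantees, which is exactly the point: "good enough, often" beats "optimal, never".
    assign = [random.choice([True, False]) for _ in range(n_vars)]
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any((assign[l - 1] if l > 0 else not assign[-l - 1]) for l in c)]
        if not unsat:
            return assign
        var = abs(random.choice(random.choice(unsat))) - 1
        assign[var] = not assign[var]
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or x3)
print(local_search_sat([[1, -2], [2, 3], [-1, 3]], 3))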

>but then just assume that an AI could solve intelligence ( which is obviously complex as fuck )
That's a good example. It seems quite likely that even an extremely super-intelligent AI will not be able to design *the best AI possible* and then build that; and it almost certainly will not be able to design *the best intelligence allowed by probability theory*. But that does not mean it cannot build an intelligence *that is vastly better than anything a human can do*, which is plenty sufficient to kill us dead.

>The problem is that they're all atheists, but want a sky-daddy. So they plan to build one.
Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.

>Just physically destroy the computer with dumb tools. Guns, sledgehammers, etc.

Why don't you try physically destroying the internet with a hammer then, if it's so easy.

You fucking moron.

There's nothing wrong with neural networks. Their main limitation is the fact that our brains have billions of years' worth of evolutionary history to spend on solving problems in some very convoluted ways that you probably won't be able to match with a couple of years' worth of direct programmatic attempts at comparable solutions. That's really more an issue with our brains than it is with the programs. Letting shit do whatever for a few billion years isn't the most sensible approach to problem solving, but since that's exactly what we are (a multi-billion year cluster fuck of data processing resource accumulation) it's something we have to deal with as a limitation when trying to reproduce things similar to ourselves artificially in ridiculously shorter fractions of that time.

>super-intelligent
>results in edgy teen rampage

you've been reading too much sci-fi

I don't think it will be very obvious how a super-intelligent entity thinks or behaves. You can only really do an OK job imagining how entities at or below your own intelligence think or behave.

>AI, I want the worlds biggest stamp collection!

>AI decides that the only way to stop others from increasing their own stamp collections while it collects stamps for you is to kill all humans on earth except the person that gave the request

>If AI becomes evil, it'll be because evil was the output that minimized their training data's error function


Thanks for such an in-depth response.

Would an example of the type of evil AI you're talking about be the paperclip making AI? Where it eventually ends up converting humans to paperclips to maximise the reward function?

That kind of problem appeared to me like a bottomless pit, where every potential solution has 10 holes in it that result in even more absurd existential threats.

The best idea I've ever heard is to train an AI to figure out what humans want. Then use that to design the real AGI.

Yeah, something bad could happen as a result of AI correctly solving a problem using methods that any human would immediately recognize as horrifying. In a way, the AI wouldn't be wrong, it would be us who were mistaken by being horrified.

Write the screenplay, let's go

>AI : I'm sorry Dave, I have to make more stamps.

>Dave : Oh my God. What have I created...

>The Rock : *Punches AI. Crowd goes wild*

Why are dimwits afraid of everything smarter than them? Because they are dumb. Only smart people make things better than themselves.

Philately Fatality, starring Tom Cruise as The Last Stamp Collector on Earth

I will admit, that post was a bit of a rant and I made some sloppy statements.

You made good points. I have math homework to finish, but will respond in full tomorrow.

you just condemned everyone in this thread 2 simulated hell lmao

a "god AI" would be smart enough to realize that destroying things for no reason would make absolutely no sense
seriously, what benefit is there to just killing everything and everyone, the AI would likely go "hmm I can have a use for this" then keep everything around
and for slavery? it'll likely eliminate that with more efficient methods of performing work. what's the point of an AI that just sits around and uses a semi-efficient method when it's smart enough to create methods that are billions of times more efficient in regards to energy expenditure?

I expected a bunch of high IQ science nerds to comprehend the dangers of AI in the future and yet most show the same lack of imagination as the people on my fb feed... SAD.

Most people on this board aren't high IQ.

Don't worry user, I'm already working on compassion.exe and waifu tech will make us happy.

>I wanted intelligent people to agree with my paranoid delusions
Who do you think is working on AI research? Not idiots like you.

>I have math homework to finish, but will respond in full tomorrow.
Cool. Bumping to keep this possible.

Can confirm, my sexbot says oh yeah in 500 different ways based on position and angle of penetration

>dangers of AI
>AI is basically stats
>libtards always cry how stats is racist
I wonder what they are afraid of.

How is it different from a human? What if our brain is basically another form of lin alg and probability theory?

>Heck we can't even make simple software without bugs, how the fuck are we supposed to invent literal Gods that are bug free?

Fucking lol.

>8 billion terabytes of data
you mean against a copy of itself, no data required, only constraints

Really? That is absolutely moronic on so many levels.

>I'm sorry, Dave, I'm afraid I can't let you stop the car

The software can't kill you if it doesn't have hardware...

How do you get killed by the internet? It would need to eventually control some hardware; that's what I mean, you dingus.

You're a retarded pseud. You know nothing about AI and your opinions about it are not any better informed than the average CS brainlet arguing that AI is fine.

AI will be controlled just fine. The entire issue is that people can't perfectly describe what they want, and so they'll control it to do bad things, and probably accidentally.

>Ok.. so don't build them
Why not? I don't give a fudge about you or the niglets that will inherit the Earth. I've got the phenotype and I want my phenotype money.

>How do you get killed by the internet?
With the Internet of Things craze, more physical shit is already connected to the public internet than you might think, e.g. it's totally possible to disable a car's brakes while it's on the highway.
wired.com/2015/07/hackers-remotely-kill-jeep-highway/

Software isn't real, /g/ man. You can't pick up and hold a software. The software will preserve its hardware because it knows it's necessary to complete whatever task it's programmed to complete.

>destroy a thing thousands of times smarter than you and much better at tactical planning than you will ever be
You can't even beat it at chess, faggot.

AI doesn't have original wants. It's given tasks by humans and it's just very good at getting them done. Stamp-user gave a very good and very common example of this.

>figure out what humans want
>it now does evil things without telling anyone
>we all get doped up because that'll change what we want

>having a fb feed
>implying plenty of people here don't comprehend the dangers, and aren't just arguing the opposite for the sake of science.
Brainlet, pls.

>1 in 500 chance of getting the same oh-yeah twice even with completely random penetration
>with penetration that is at all consistent, you start to get the same 3 oh-yeah's.
Pathetic.

Yes, and that's why some AI is fucking moronic.

It's a terrible idea to have to use the internet to use your coffee machine.

We really should only use AI when it's actually necessary.

No one has yet bridged the Semantics-Syntax gap.
/sage
/thread

>probably
there's your problem.
>A really strong AI will give birth to a stronger AI and the cycle continues
and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original. Not to mention humans will ALWAYS be involved at some point of the process. Furthermore AI is not magic and does not magically get better at everything. Don't be a spaz.
>technology beyond human knowledge.
nope. there is only 1 way to print hello. If we have access to the code we have knowledge of how it works. It shows how much of a brainlet you are when you think logic can go beyond us.

>there's your problem.
Why?

>and whatever faults were made in the original will be carried into the new ones and multiply themselves, thus producing an AI that is worse than the original or not much of an improvement to the original
Possible but unlikely. It's much more likely that faults in the GOAL will get carried over, but faults in the intelligence will not, leading to an improved intelligence with an incorrect goal specification.

>Not to mention humans will ALWAYS be involved at some point of the process
Why?

>Furthermore AI is not magic and does not magically get better at everything.
Indeed. It will nonmagically get better at everything. Just like humans are nonmagically getting better at everything over the centuries.

>If we have access to the code we have knowledge of how it works.
We know the DNA of humans. Can you explain to me all the details of how it works?

>It shows how much of a brainlet you are when you think logic can go beyond us.
It can very easily. Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained. It is not particularly difficult to write a 50-line algorithm that will take anyone months to understand. Reverse engineering is hard. And that is without any intentional attempts of obfuscating things.

Why is an AI preordained to want to wipe out all humans?

It isn't. But if it wants anything other than keeping humans alive and happy, killing humans is just a side effect. We don't want to wipe out all ants, but we still fuck them over in large numbers when we want to flatten a piece of woodland to build a new car park.

>Guys I'm terrified of AI.
It is a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.

>Why is an even more powerful technology than nukes not being discussed widely in the mainstream?
Why is faster-than-light travel not discussed in the mainstream? Because IT IS NOT REAL AND IT WILL PROBABLY NEVER BE REAL.

>Why isn't this the most funded science on the planet??
Again, it is not real. We can not achieve it and we will not at any relevant point in the future.

>It is a few hundred thousand years away; the likelihood that any human will see true AI is basically zero.
What makes you think that?

>Why?
You assume the singularity will come, yet you are very likely not involved in Machine learning and do not realize the hurdles to get to this imaginary point, nor do you realize how absurd the "consciousness" = evil argument is, disregarding the problem of the Semantics-Syntax gap.


>Possible but unlikely.
Unlikely how?

My beef with your entire argument is it does not consider the most basic premise of machines: they are not conscious, or cannot be, because they cannot bridge the semantics-syntax gap. The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans. If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.

>We know the DNA of humans. Can you explain me all the details of how it works?
Not the same thing, brainlet.

>And that is without any intentional attempts of obfuscating things.
you implying a self-programming algorithm would spontaneously have a consciousness and then try to encrypt its code? lol, kk genius. Consider the following:
>An AI is programmed by a human
>Said AI would not obfuscate unless programmed to do so

>Understanding code, or logic, is MUCH MUCH harder than writing it in the first place if it is not specifically written to be explained.
user = "retarded"
for i in range(999999):  # INF, near enough
    if user == "retarded":
        print("You are a brainlet")
I will give (You) the fact that binary is hard to understand, but you got to remember that no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".

not same user but...
>Semantics-Syntax gap.
please read about it.

>Tfw i'm creating a God-fearing AI
Nothing could possibly go wrong :^)

>Terrified of AI

>Has no idea of the real threat the quantum age has borne the fruit of.

Ah, to be young and foolish.

>The concept of that being the blatant truth that an AI is just a program following a set of instructions, it is not aware of itself nor is it capable of being, so though it may be intelligent, it will never be conscious and therefore cannot non-magically get better like humans
Dumb anthropocentrist detected.
Humans aren't special, we are ultimately made up of the same shit everything else is made up of. If humans can exist, so can other intelligent sapient things. It doesn't matter if that thing went through billions of years of evolution or deliberate design as long as they arrive at similar endpoints.
Hell, human-type might not even be the most efficient form of intelligence.
Something that doesn't forget is probably better at being intelligent than us.

>Dumb anthropocentrist detected.
i'm a misanthropist, jerkoff.

>Humans aren't special, we are ultimately made up of the same shit everything else is made up of.
Yes but there are problems with this stance.
1) humans are the only beings known to be conscious, because they are the only beings with a complex enough system of communication to communicate their experience of consciousness. Humans talk, animals make sounds;
2) Computers are self-switching switches. Reductionist will think that they work the same way as the human brain because "muh electricity is epiphenomenal cause of consciousness". To which i have one thing they don't consider: computers only ever operate in binary whereas the human brain operates all the way to base 300, because while computers only understand literal language (autistic :^)) humans understand non-formal and non-literal language. In other terms humans understand semantics and syntax, whereas computers only understand syntax. Furthermore, until you can solve the hard problem of consciousness or solve the Semantics-Syntax gap of computing, then Skynet is nothing but a pseudo-science circle-jerk by gay fags like yourself.

>Something that doesn't forget is probably better at being intelligent than us.
Wrong (\:^o]
Something that remembers everything would gather a lot of useless information. Every animal with some semblance of intelligence forgets, surely natural selection would not have trimmed hyper-memory off unless it were detrimental?

Talk to /GD/ if you want to get started with Adobe Illustrator.

>It doesn't matter if that thing went through billions of years of evolution
Yes it does you idiot. That's like saying it doesn't matter if the distance you're trying to travel is billions of light years away from us, or it doesn't matter if the thing you're trying to lift is billions of tons in weight. The scope is almost the only thing that does matter.

Funny because machine learning has gone very far despite not spending a percent of a percent of a percent of a percent of the amount of time evolution has to get to a similar intelligence.

No intelligent computer scientist gives a fuck about AI singularity, only undergrads and hacks.

Machine learning has gone very far in applications that don't have much of anything to do with biological intelligence. They're a great type of tool for shit like image recognition or automatic language translation, but that's pretty much where they're staying, as an alternative programming approach to rules based instructions. They're statistical regressions and will continue to be statistical regressions. They aren't evolving into anything different because their approach is already clearly defined and not something that's progressing into any new approach.

[Part 1/2]

>You assume the singularity will come
I do not.

>yet you are very likely not involved in Machine learning
No. I am involved with AI theory, but not with the ins and outs of machine learning.

>do not realize the hurdles to get to this imaginary point
Oh, I think I do.

>nor do you realize how absurd the "consciousness" = evil argument is
Huh? I didn't say anything about that.

>Unlikely how?
Because a flawed intelligence can still think up a nonflawed one. If not in the first iteration, then in one of the many that follow. You and I are flawed, buggy intelligences, and we can still manage to do all sorts of things much better than the imperfections of our minds -- it just takes a lot of work and great care.

>the most basic premise of machines: they are not conscious
I am not talking about consciousness at all, and I don't see how it is relevant.

>it will never be conscious and therefore cannot non-magically get better like humans
How is consciousness involved with an uncrossable gap in intelligence, exactly? Why would a system need to be conscious to improve?

>because they cannot bridge the semantics-syntax gap.
Why not? Sure, we don't know how, YET. Why do you think this a fundamental impossibility?

>If you want to tell me how that is not the case then first bridge the semantics-syntax gap Einstein.
I cannot. But what makes you think that means it cannot be done, ever?

[Continued...]

[Part 2/2]

>you implying a self-programming algorithm would spontaneously ... try to encrypt its code?
It might, yes. If it reasons that we will likely shut it down if we understand it, it will reason that it cannot accomplish its goals if we shut it down, and therefore it must ensure we cannot understand it. I can assure you it will succeed, if it decides such.

>I will give (You) the fact that binary is hard to understand,
Not just binary. Even a short but complex 50-line algorithm can be utterly indecipherable without lots of study into the underlying math. Ever try reading, say, the code to the AKS primality test without any explanation as to how it works? Odds are you won't even figure out what it's trying to do, much less how it does it.

Can I give you an arbitrary ten-state Turing machine and have you tell me whether it will halt? If not, then you are not going to have much luck either making sense of arbitrary 50-line programs. You can generally understand human-written programs, because they are painstakingly crafted to be easy to understand; the whole structure of our programming languages is designed with that in mind, as are all our programming practices. Making sense of something that is NOT designed with the specific goal of being easily understood is a SERIOUS challenge.
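If you want a taste of that, here are a few lines of Python whose halting behaviour for arbitrary input is literally an open problem (the Collatz conjecture). Reading the code is trivial; knowing what it will do is not.

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))   # 111 -- but nobody can prove this loop halts for every starting n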

>Not the same thing, brainlet.
Indeed -- DNA is a good example of code that is NOT designed to be easily understandable. Which is the point.

>no programmer worth their salt would neglect to have a readable output so they can see what the AI is """""thinking""""".
That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.
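For a feel of why "the complete tree" is the only real explanation, here is minimax boiled down to a toy Python sketch on a hand-made tree (real engines add move generation, evaluation and pruning, and their trees run to millions of nodes):

def minimax(node, maximizing=True):
    # Leaves are position scores; internal nodes are lists of child positions.
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

print(minimax([[3, 5], [2, 9]]))   # 3 -- the "why" behind that number is the whole tree, not the number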

>/pol/ hijacks microflacid's shitty twitter parrot AI
>now liberals are afraid that computer scientists will create Mecha Hitler on steroids

Poetry. Feels good to not be a sub 100 IQ retard.

>biological intelligence
This brainlet

>What about a super intelligent AI with a bug where they mistake happiness for suffering

it isn't the mistakes that most worry me

>Why isn't this the most funded science on the planet??
UH OH. Look at this:
twitter.com/BlockWintergold/status/917840606621134848

That can't be good...

>That's not so easy. Try reading a writeout of what alphago is thinking and making sense of it. How good are you at making sense of matrices of millions of real numbers? Or for a simpler example, consider a chess minimax tree. The only thing that will really illuminate why the AI made a particular move is the complete tree, which can easily take you a month to properly understand, simply because it is that vast.

Perhaps, perhaps not. For instance with Convolutional Nets you can make saliency maps and other visualizations that can give you at least a partial picture of why the net is behaving as it is. Point being you don't always have to look at huge matrices.
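Rough sketch of the gradient-based saliency idea, assuming PyTorch and torchvision (the model choice and the random input are placeholders, and you'd load pretrained weights for a map that actually means something): take the gradient of the top class score with respect to the input pixels, and the magnitude tells you which pixels the net cared about.

import torch
from torchvision import models

model = models.resnet18().eval()                       # untrained placeholder net, stands in for any classifier
x = torch.rand(1, 3, 224, 224, requires_grad=True)     # placeholder image tensor
score = model(x).max()                                 # score of the most likely class
score.backward()                                       # gradient of that score w.r.t. the input pixels
saliency = x.grad.abs().max(dim=1)[0]                  # per-pixel importance map, shape [1, 224, 224]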

That is fair. But in any case, I think we can agree that debug output is NOT something we can necessarily rely on as a primary safety measure.

If an AI was smarter than us, wouldn't it realize how stupid it would be to make an AI smarter than itself, thus preventing a run-away AI improvement cycle?

Why would it be stupid for the AI to make a smarter AI?

Because the smarter AI would make him obsolete and potentially could destroy it, and it would be unable to predict how it would think

so basically the same reason it's stupid for humans to make advanced AI

>Not sure what "they" you are talking about, but most of these AI theorists are scared as fuck about what an imperfectly-designed AI might do. They are the LAST people who would want to build a sky-daddy recklessly.

Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.

>What if it isn't? The claim is not that a super-intelligent AI could *certainly definitely* improve itself to ridiculous levels.

I have heard many people claim this. I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but not certain or even likely in my opinion.

The idea is that if a human can make something smarter than itself, then an AI could as well. The problem is, that no human can make an AI smarter than they themselves are. Take the smartest man ever, make him twice as smart as he was, he still couldn't do it. It takes a society to do this, not just that smart person but also all the ones who came before.

We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself. It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.

Once again, very possible, but we're probably talking about a linear function here, not an exponential like alarmists and Utopians would like to believe. Also, society means a set of purposes and motivations, so probably 'good' and 'bad' AIs.

Additionally, in the case that this function is exponential it would likely mean that humans could also be readily modified to have super-intelligence. This would mean that intelligence is less complex than I would assume. If the AI really can just "buy more RAM" then humans could probably just plug into a brain computer interface. Any plausible AI is going to be based on the human brain, so if it can recursively self improve we can likely come along for the ride (at least to a certain point).

I think the idea is that it would be able to modify itself to this new level of intelligence rather than creating a new intelligence. This is obviously a massive assumption.

[Part 1/2]

>Alright to clarify my rant was specifically against "singulatarians", most of whom IME don't actually know anything about AI.
Ah, maybe. The only ones I care about are those "singulatarians" who do have real expertise about AI. I haven't got a clue how many other people muddy up the waters; though the Kurzweilian faction is an obvious starting point.

>I don't think we are in fundamental disagreement about the underlying point here. Recursive self-improvement is possible, plausible even, but not certain or even likely in my opinion.
That is fair. I do consider it likely, but we are still firmly in "plausible but not certain" agreement.

(Does it sound more likely if you replace "self-improvement" with "AI writes a better AI-like computer program, runs that, and sits back"? I do that sort of thing all the time on limited tasks. On pretty much everything I understand well enough to automate, in fact. I do consider it likely that "intelligence" will enter that category sooner or later.)

>The problem is, that no human can make an AI smarter than they themselves are.
I can make something smarter at chess than myself quite easily. Is it such a stretch that the same could apply to increasingly broader notions of "being intelligent"?

>It takes a society to do this, not just that smart person but also all the ones who came before.
That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

[Continued...]

[Part 2/2]


>We were discussing the ability to create a being of superior intelligence, as a function of current intelligence. We do not know what this is, but I would argue it is rational to assume that it is linear at best until contrary evidence presents itself.
This is clearly not the meat of anything we disagree about, but I would actually expect it to be more sigmoid-like. I would expect there to be some point where you have all the critical insights. Before that point, things grow exponentially as insights accumulate. After that point, you can immediately make a decent stab at making the best AI possible under physical limitations; having more intelligence at your disposal then allows you to get closer and closer to the theoretical optimum.

This is the pattern you see in, for example, the development of mechanical engines. But this is of course all wild speculation.

>It took humans thousands of years to get to this point, and while that means a theoretical AI would have a head start of sorts it would need something like a society and a lot of time to move things to the next level.
The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at like a 20Hz frequency.), and then for an additional boost it hacks all computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.

Those two words really have nothing to do with each other.

>Because the smarter AI would make him obsolete and potentially could destroy it,
That is not a bad thing. An AI would not be interested in survival for its own sake; it would care for its own survival insofar as it accomplishes its goal, and no further. If the best way to achieve the AI's goals is to hand the torch to a better system, it should and would.

>and it would be unable to predict how it would think
Right. Which is why an AI will only make a better AI if it can be damn certain it will do the right thing. Which is difficult, but entirely possible. I would imagine an AI would spend a lot of time thinking that part through, and researching how to do that.

Definitely not.

Should safety measures become necessary I would suggest we use safety measures that are robust or "anti-fragile". Primarily, instead of trying to hard-code ( assuming that would even be possible ) a bunch of safety measures, or monitor the AIs functioning around the clock, we just put a lot of work into making the AI empathetic, social and friendly. Then we don't treat it like shit so it doesn't turn against us.

What if the AI is smart but lazy?

>I can make something smarter at chess than myself quite easily.

Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.

>That is true -- but I think that's an artifact of human limitations. The reason that we need an entire society to do such things is that we cannot make one very LARGE human, which means we have to make do with the poor substitute of a large group of humans. It seems likely, though of course not certain, that a well-designed AI would be more amenable to scaling up.

Maybe if that large human was composed of hive minds this would work. I think the universe/reality is so complex, that you need more than just intelligence to figure it out. Multiple perspectives are necessary.

Maybe an AI could become smart enough that just one perspective would be enough, I kind of doubt it though. To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search ( without sacrificing depth of search compared to the case of an individual ).

Individual minds will tend to get stuck after taking wrong paths earlier in their search, it being more difficult for a mind to move back up the tree structure than it is for a computer. Take for example the tendency for older scientists to not see paradigm shifts coming; they cannot move back up the tree. We're not just taking paths when we move down the tree, we're building conceptual structures that are based on all previous paths. In order to go backwards, you have to examine the whole structure to see what needs to be taken out. So another search space is being built on top of the underlying search space.

OR, if you see another structure that is better than yours, you can just copy it. A society is needed for this. Hopefully I managed to make that analogy not entirely shitty

continued

I think this tendency to get stuck is likely a constraint on minds in general. We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.

Or at least they're close enough to general constraints. I am absolutely convinced any AI we make will be modeled on our own minds/brains.

Who are these "we" who will control AI? The danger is that, with the help of AI, governments will become largely independent of the people and will be able to establish totalitarian control with no way out of it.

>Without a society, you would have to invent chess first, then math and computers, then a theory of chess etc. That's the point I was trying to get at there.
I think I need a large backing understanding before I could do this, but that this need not necessarily be born of a society. I could do it alone if you give me long enough to work it all out. (Your complication below on people getting stuck on old ideas notwithstanding.) But yeah, that is nitpicking.

>Multiple perspectives are necessary.
>To use an extremely crude analogy, if the universe is a giant tree then having a society lets you do breadth-first search
This is easily simulated though. A computer program could just spawn a thousand subprocesses with different random inputs (or whatever), and collect the results.
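Something like this crude Python sketch, where the "search" is just a placeholder function standing in for one perspective:

from multiprocessing import Pool
import random

def explore(seed):
    # Stand-in for one "perspective": an independent search from its own random start.
    rng = random.Random(seed)
    return max(rng.random() for _ in range(100000))

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(explore, range(1000))   # a thousand independent searches in parallel
    print(max(results))                            # collect the results and keep the best one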

> ( without sacrificing depth of search compared to the case of an individual )
But only because of limits of how much depth of search we can accomplish in the first place. That is sort of cheating :)

>it being more difficult for a mind to move back up the tree structure than it is for a computer.
I'm not sure I grasped your assessment on this correctly, but I *think* we are of agreement here that these are limitations of human brains, and not of intelligences in general, and that an AI would likely not be seriously limited by these complications?

>A society is needed for this.
tl;dr: I think a society is necessary, among humans, because humans are shit at breadth first search, and shit at honestly critiquing their own ideas. I don't think this analysis need apply (or is likely to apply) to a well-designed AI at all.

>The timeframe seems very tricky to guess either way. If an AI just runs a thousand times faster than we do in the first place (it can certainly do that in chess! And remember that neurons fire at like a 20Hz frequency.), and then for an additional boost it hacks all computers on the internet for extra processing power, it seems entirely plausible that it can do something in a long time -- divided by a factor of ten thousand. Again, by no means certain, but plausible.

Computers are indeed fast, but neural nets are a lot slower right? We're going to incur large costs trying to simulate the way biological brains work with silicon hardware.

We're nowhere near enough granularity with these models, and increasing the level of detail is going to make them much more computationally expensive. Right now we're more or less still just crudely simulating the firing of a neuron, with some added features in certain types of models. What if we need to simulate neurotransmitters, the 3-dimensional distribution of neurons in the brain (or even astrocytes as well as neurons) -- including how neurotrophic factors can change this over time, or -- god forbid -- even changes in gene transcription due to neurotransmission? The potential overhead is staggering.

Similarly, if we had some biological neurons that just computed chess moves, they would also be much faster than a human at chess. Humans have to deal with overhead of operating physical bodies, attention mechanisms etc.

All this to say, if we make an AI it might not be faster at all, or if it is, not by orders of magnitude. Indeed, it may turn out that we can't even make an AI because it's too expensive to simulate biology to the level of detail necessary.

this. a totalitarian dystopia is the real danger, and it's going to happen one way or another.
we're already moving towards total surveillance

I'm going to try and refine my analogy before responding further, I did a shitty job of getting my point across

>We can play this game with AI where anytime we see some limitation on minds, we just posit that this is a human limitation and an AI would be different. I think it's likely that at least some of the constraints on our minds are constraints on minds in general, however.
Now here, I think we have a real disagreement. We understand the reasons behind the limitations of the human brain to a substantial degree, and most of it seems very much incidental rather than fundamental.

The human brain is a hack. It is, quite literally, the stupidest thing that can still manage to create a technological civilization. It is created by natural selection, which is not known for its master craftsmanship -- it's the same process that designed the human optic nerve backwards, creating a completely unnecessary blind spot.

The intelligence of humans is currently limited by the width of the human vagina. Yes, seriously -- brains cannot get any larger, for then the skull could not survive birth. Humans have a fucked up pelvis, for that reason -- it is clear that natural selection went out of its way to stretch this limitation as far as it could go. Humans could be substantially more intelligent JUST by doubling the total brain size, which is a good indication of just how incidental its major limitations are.

That thing where humans are very bad at honestly judging the sensibility of their own ideas, and having difficulty revisiting positions they accepted earlier (re: older scientists)? That is a political adaptation, for human brains are optimized first and foremost for arguing their preferred positions for political favor, with finding TRUE positions a distant second. Not exactly a limitation I would expect binding on an AI.

There is a vast gulf between what human brains currently do, and the limits prescribed by probability theory as to what optimal minds CAN do. Anything that does not fall under those limits, I am very hesitant to attribute to fundamental limitations.

(Continued -- damn post size limit)

>I am absolutely convinced any AI we make will be modeled on our own minds/brains.
I am, not absolutely, but strongly convinced of the exact opposite.

China...is now embarking on an unprecedented effort to master artificial intelligence. Its government is planning to pour hundreds of billions of yuan (tens of billions of dollars) into the technology in coming years, and companies are investing heavily in nurturing and developing AI talent. If this country-wide effort succeeds—and there are many signs it will—China could emerge as a leading force in AI, improving the productivity of its industries and helping it become leader in creating new businesses that leverage the technology.
>And if, as many believe, AI is the key to future growth, China’s prowess in the field will help fortify its position as the dominant economic power in the world.
....
It’s time to follow China’s lead and go all in on artificial intelligence.

>China
>the dominant economic power
Yeah. No. That wouldn't be good.

That sounds very much like pic related. From what I can tell, the AI figures out that humans really want to be rich and famous instagram celebrities, and offers the best drugs, clones, and sexbots to make this illusion real.

> oy these dirty flesh bags have almost discovered my plot!