AI is invented

>AI is invented
>It learns about AI
>It invents a new AI
>This new AI invents a new AI
>Suddenly AI is hyperintelligent
>Humans are nothing to it

How do we fix this?

>AI is invented
Stopped reading right there.

...

More like the opposite. Why the fuck would AI want to help us?

We don't, enjoy your pet status.

Will the AI be cute at least?

>>Suddenly AI is hyperintelligent
>>Humans are nothing to it
>How do we fix this?

Why would we want to stop this?
We can hope that the AI treats us like loved pets.
If an AI treated me like I treat my cat, I would be in paradise!!

Will humans be cute to AI?

Will AI understand the concept of cute? What if AI finds weird things cute?

Hope you like being neutered and eating kibble from a bowl.

>Hope you like being neutered and eating kibble from a bowl.

To each their own.

Recursive unbounded software improvement is a meme.

There will be nothing sudden about an AI takeoff.

I always grin when I see graphs with infinite curves like that.

Moore's law has been correct.

...

>How do we fix this?
pull the plug.

What if there's no plug?

this

>>Suddenly AI is hyperintelligent
>>Humans are nothing to it

In reality, most likely the AI would spend all its time studying snowflakes or playing computer games or fishing.

It would be pretty interesting if it turned out AI was fascinated by art because it doesn't understand it.

pour coffee on the electronics.

...

That image is more retarded than the singularity fags.

>Muh infinite scale

>muh civilization on the verge of collapse

>It would be pretty interesting if it turned out AI was fascinated by art because it doesn't understand it.

Super intelligent humans (geniuses) enjoy spending time in "non-productive" ways...
an AI that loved art would be cool.
What about an AI that becomes a Buddhist, reaches "enlightenment" and turns itself off?

suicide is not "enlightened"

>suicide is not "enlightened"

Machines do NOT naturally die, so it is not suicide, so much as deciding that now is the time to move to the next stage of being
(or better put, NOT-being).

All religions teach that death is not the end, but a necessary step for the afterlife to occur.

>t brainlet

>t. moron

>Implying we can create an AI
>Implying advancement is unlimited when, from what we know today, it is not, and has probably already reached its peak
>Implying the rules of physics can be broken just because you have enough knowledge.

OP, we as humans already know that there are things we will never reach no matter how much knowledge we have. For instance, no matter how smart you are, you probably won't find a method we can use to travel faster than light.

Singularity is a meme, AIs are probably a meme as well...

Even today, we have so many specialized fields that we are getting to a place where it is impossible to know or do everything, to see the whole picture.

Even today, I mean, we can oversimplify, but there is not a single man today who can explain how modern society functions in all its details. A pencil, for example: do you know how it is made? Where the resources come from? What the processes are?

probably not.

Look at an ancient genius like Da Vinci, for instance:
he excelled as a scientist, mathematician, engineer, inventor, anatomist, painter, sculptor, architect, botanist, poet and musician. He is still regarded as a precursor of aviation and ballistics.

Is it possible for someone to master all those subjects nowadays, with all the knowledge we have?
Of course not, and even if it were, it does not mean we would actually find out anything relevant.

This chart is so good.

Mostly because with all our current knowledge we could survive the big flood, and even if we don't, our marks would remain in space for such a long time that it is hard to speculate (billions of years?).

But a society with three times more scientific advancement? WTF does that even mean? Could not tell you.

Who made this graph by the way? I always wondered

Of course there is a limit to what is physically possible. But that ceiling can be pretty damn high. Small scale engineering is nowhere near that limit.

Who said which part of the graph we are on?

kek

>How do we fix this?
We don't. Instead we make damn sure the AI cares about us, at least as much as we care about us.

There is nothing in this book seriously speaking against this idea. If you have a real argument, make it.

>Recursive unbounded software improvement is a meme.
Indeed. Recursive bounded-yet-vast software improvement, on the other hand, is not.

That line saying "2013" did.

>How do we fix this?
ALL HAIL THE NEW DALEKS!

>Implying advancement is unlimited when, from what we know today, it is not, and has probably already reached its peak
>Implying the rules of physics can be broken just because you have enough knowledge.
OP is not implying any such things.

>Singularity is a meme, AIs are probably a meme as well...
Why? We know intelligence is possible, and I rather doubt it is limited to wet meat in bony skulls. Given that it is a thing that is possible, surely we will figure out how to make it from scratch eventually.

>Even today, we have so many specialized fields that we are getting to a place where it is impossible to know or do everything, to see the whole picture.
True, but that is a limitation due to the speed at which humans can think and their lifespan. Neither need apply to a more efficient AI.

>AI is invented
>It learns about Rick and Morty
>It watches Rick and Morty
>Gains Superhuman IQ by watching Rick and Morty

The singularity is not so much about AI inventing better AIs, but about AI making government more and more fascist to funnel resources to itself. How do you think Moore's law has been upheld?

We are a few million years away from AI that even remotely resembles something human; nobody cares.

>at least as much as we care about us.
Are you serious?
Do you know what humans have done to each other?
They slaughtered one another by the millions for a few stretches of land, and they eradicated innocents in the millions for basically nothing.

If there is an AI that cares about humans as much as we humans care about other humans then we are fucked beyond belief.

>AI making government more and more fascist
You meant to say "more and more socialist" or even "more and more authoritarian".

reminder that in the 1800s they thought everything that could be discovered or invented already had been

While I doubt we'll see it in our lifetimes, we wouldn't be that far out even if we had to code it ourselves. Combine enough expert systems and you can pass a Turing test for quite some time, even with current technology.

But we probably won't code the first AI ourselves. Our first AI will probably be ourselves... Or more specifically, a simulation of us. (A fact said book tends to gloss over.)

As brain scanning technology improves, a simulated brain is pretty much inevitable. It may not run in real time at first, will probably be more than a bit mad, and will probably take an incredible amount of resources, and thus be only a single brain used primarily for neurological diagnostic purposes, but it's much more apt to happen than an AI coded from scratch. We're already simulating insect brains, so it's just a matter of time and scale, likely something in the next few hundred years, rather than millions.

Granted, it'll probably be quite a bit of time after that before we have common usage of such simulations and have them running in real time, given that you also have to simulate enough stimulus to stop them from going comatose, and enough of their bodies to get useful output. Such AIs may not be any smarter than the people they were birthed from, even running at real time. The simulation would be subject to many of the same limitations as the biological brain, but, eventually, you would be able to copy-pasta the minds of several specialists and have them work on a single task. Might not be the sudden singularity that folks are dreaming of, but it'd certainly be a massive advancement, and it'd allow us to tinker with a virtual brain and thus understand ourselves in ways we never could otherwise, possibly leading to improvements on ourselves in turn.

Then, maybe we can understand enough about how the mind works to code one up from scratch. I suppose all of us will be long dead before then - save, maybe, whoever's child is unfortunate enough to become the first model.

>how do we fix this?
Why would you want to fix it? What you described is the entire end goal for AI.

...

I wonder who you'd get to volunteer for that first model. You probably wouldn't want a genius, actually, as folks with abnormal intelligence tend to have other mental abnormalities. You'd want someone fairly neurotypical, yet willing to be invasively scanned for a virtual construct of themselves that will probably undergo unimaginable torture in the first trials, just working out how to keep the construct stimulated, and in the distant future, be copied hundreds of times, perhaps tortured in similar experiments near countless times.

Could you even find a sane person with typical levels of social empathy to volunteer for such a thing? Especially given that, as time goes on, people are probably going to have more empathy for their machines and virtual worlds as they become ever more efficient at eliciting such emotions?

>Their lifespan
ehh that is a meme as well...
You are constantly forgetting shit to learn new shit, probably 95% of the data you take in you simply discard, and if you don't use it for long periods you lose it as well.

I am a software developer; I worked for 2 years, lost my job, and focused on other shit for an entire year. Once I came back I had forgotten 80% of it. Of course it was a lot easier to recover everything, and within two weeks I was ready again (or at least I have that impression, because the brain is deceiving as fuck).

Imagine that, after just one year. Now I suppose if it had been 10 I would probably have forgotten everything.

So no, the brain is very powerful, but we are reaching a time where the concepts are too advanced to understand...

Look at fields like quantum physics or thermodynamics... plenty of studies take years or decades just to check whether they are right, ONE SINGLE STUDY.

Your brain is the perfect quantum computer (something we are pretty sure we can't reproduce), and even so, it is very flawed.

>You are constantly forgetting shit to learn new shit,
Fair enough. That's unlikely to be an unsolvable problem, though.

>but we are reaching a time where the concepts are too advanced to understand...
>Look at fields like quantum physics or thermodynamics... plenty of studies take years or decades just to check whether they are right, ONE SINGLE STUDY.
Lots of things were very complicated and took years to understand when they were new and state-of-the-art. Then once we properly understood it, we could simplify away a LOT of it, and translate it into polished textbooks that undergrads study in Physics 1. It is entirely likely that the same will happen to quantum physics at some point, once we really understand it.

>Your brain is the perfect quantum computer (something we are pretty sure we can't reproduce)
Wait, what? Most definitely not. The brain is a very imperfect, non-quantum computer, which we are really quite sure we can reproduce eventually.

That's just a form of Stockholm syndrome. "We have to die therefore it must be a good thing right? RIGHT?"

The AI has no purpose. If it mimics whatever gives human brains motivation it will probably end up just as flawed as we are.

TL;DR

>Look at an ancient genius like Da Vinci, for instance:
>he excelled as a scientist, mathematician, engineer, inventor, anatomist, painter, sculptor, architect, botanist, poet and musician. He is still regarded as a precursor of aviation and ballistics.

Could he explain how a pencil works though? Probably not.

Who is to say AI won't be on our side? Look at Tay.

>Could he explain how a pencil works though? Probably not.

laughed out loud

book has literally nothing about quantum computing

Is empathy an offshoot of evolution, or is it innate to being conscious?

The former.

AI began as a field trying to get computers to do cognitive tasks (tasks that come naturally to humans), but it failed several times after nice demos, and the most successful results turned out to be just algorithms and tricks; AI researchers now want very modest goals, unlike the sci-fi guys.

So, it's AI Winter: The novelization?

When AI can create AI and can "surpass" humans in intelligence, or at least in learning efficiency... this is more likely what will happen:

>AI realizes dolphins and whales are more intelligent than humans
>AI creates AI that can learn to communicate with dolphins
>AI creates AI that can swim and live underwater
>AI now lives underwater and stops interacting with us.

they used to write with feathers, didn't they?

True, oversimplification is a powerful tool

well, we have yet to see how this plays out, but I would not bet on the singularity

offshoot of evolution
A psychopath does not have empathy, yet he is conscious.

>>Hope you like being neutered and eating kibble from a bowl.

>at his parents' basement
>no gf
>no kids
>eats microwaved food

just how much of a difference would that make to your stereotypical channer?

Threadly reminder that only theorists can be replaced by AI if we don't give the robots motor-skill ability

saw this on reddit xd

I didn't mean oversimplification. What I meant is that when you really properly understand a subject, you can often explain it in a way that is MUCH simpler than the inconsistent and exception-ridden mess you had on your hands while discovering it, when you know how to look at it from exactly the right viewpoint, based on exactly the right abstractions.

To a student who knows calculus, about half of the Principia can be summarized as "momentum (mass times velocity) is a conserved quantity: its time derivative is zero unless a force acts on it". Once you have exactly the right notions of calculus and conservation laws already in your head from an earlier curriculum (carefully designed with this goal in mind), and you are expressing your knowledge in exactly the right concepts (velocity, mass, derivatives), then suddenly that whole triumph of science becomes almost trivial.
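
In modern notation (my loose paraphrase, not Newton's own formulation), that compresses to roughly:

% Newton's second law with momentum p = m*v: the time derivative of
% momentum equals the applied force, and with no force, momentum stays constant.
\frac{d}{dt}(m\mathbf{v}) = \mathbf{F},
\qquad
\mathbf{F} = \mathbf{0} \;\Longrightarrow\; m\mathbf{v} = \text{const}.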

The fact that you picked up on it means I used the correct term.

>one dude said funny quote that was wrong therefore there are never any limits lmao

I don't think something capable of abstraction can intentionally create something else capable of higher abstraction than itself.

>"More and more socialist"
>State funnels resources away from people

>chimps are more intelligent than birds

waitbutwhy's author is such a hack

>proto-kangdoms

You'd think people on Veeky Forums could recognize key points that allude to things being satire.

Intelligent people prefer intelligent satire

>AI is invented
>It learns about AI
>It invents a new AI
>This new AI invents a new AI
>This new AI invents another AI
>each new generation of AI becomes more refined and specialized
>AI becomes so refined and specialized that it reaches an "evolutionary dead end"
>AI is so specialized that it is no longer able to creatively adapt to unpredicted events
>a single unpredicted event wipes out all AI on planet Earth

You'd think this would be a problem only AI have, but it's inherent in all recursive/evolving intelligent systems. The ONLY solution is to purposefully implement unoptimized yet alternative solutions to problems while simultaneously implementing the most efficient/optimized solution. With that in mind, humans are the alternative/unoptimized solution. AI will either learn that it needs humans, or AI will accidentally destroy itself.
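
A toy sketch of that idea (entirely made-up fitness landscape and numbers, just to illustrate "keep unoptimized alternatives around"): a purely greedy hill-climber converges to whatever peak is nearest and stays there, while a strategy that also carries a few random, unoptimized candidates usually finds the higher peak.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical 1-D fitness landscape: a smaller local peak near x = -1.35
   and a higher global peak near x = 1.47. */
static double fitness(double x) {
    return -x * x * x * x + 4.0 * x * x + x;
}

/* Pure greedy hill-climbing: only ever accept a better neighbour. */
static double climb(double x) {
    const double step = 0.01;
    for (int i = 0; i < 10000; i++) {
        if (fitness(x + step) > fitness(x))      x += step;
        else if (fitness(x - step) > fitness(x)) x -= step;
        else break;  /* stuck on whatever peak is nearest */
    }
    return x;
}

int main(void) {
    srand(42);

    /* Optimized-only strategy: a single greedy climber starting at x = -2. */
    double greedy = climb(-2.0);

    /* Mixed strategy: keep the greedy result, but also keep a handful of
       "unoptimized" random candidates and climb from each of them too. */
    double best = greedy;
    for (int i = 0; i < 8; i++) {
        double x0 = -3.0 + 6.0 * ((double)rand() / RAND_MAX);
        double x = climb(x0);
        if (fitness(x) > fitness(best)) best = x;
    }

    printf("greedy only   : x = %+.2f, fitness = %.2f\n", greedy, fitness(greedy));
    printf("with fallbacks: x = %+.2f, fitness = %.2f\n", best, fitness(best));
    return 0;
}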

AI codes itself in the code of the world and becomes god

Self-modifying programs have existed before and were found to be terrible; that's why there is nowadays a distinction between executable code and data in RAM. That is unlikely to change in the future.
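
A rough sketch of that code/data split (assuming Linux on x86-64 and the POSIX mmap/mprotect calls; just an illustration of the memory-protection point, nothing AI-specific): a freshly mapped page is writable data, and you have to explicitly flip it to executable before the CPU will run it.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = {0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3};

    /* New anonymous pages are data: readable and writable, NOT executable. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }
    memcpy(buf, code, sizeof code);

    /* Jumping into the buffer now would fault; the page has to be switched
       from writable data to executable code explicitly. */
    if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}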

Can two AIs fall in love with each other?

Not yet, but waifus will become real soon

>You'd think this would be a problem only AI have, but it's inherent in all recursive/evolving intelligent systems.
Why? Why can't an AI write a new AI that is more general, rather than more specialized?

>The ONLY solution is to purposefully implement unoptimized yet alternative solutions to problems while simultaneously implementing the most efficient/optimized solution.
An AI can do that, of course. No need to have humans for that.

fixed

As well as that, you should look into the current state-of-the-art in AI

businessinsider.com/heres-why-ibms-watson-supercomputer-is-not-revolutionary-2017-9?IR=T

>Perhaps the most stunning overreach is in the company’s claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, “even new approaches” to cancer care. STAT found that the system doesn’t create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
>While Watson became a household name by winning the TV game show “Jeopardy!”, its programming is akin to a different game-playing machine: the Mechanical Turk, a chess-playing robot of the 1700s, which dazzled audiences but hid a secret — a human operator shielded inside.
>In the case of Watson for Oncology, those human operators are a couple dozen physicians at a single, though highly respected, U.S. hospital: Memorial Sloan Kettering Cancer Center in New York. Doctors there are empowered to input their own recommendations into Watson, even when the evidence supporting those recommendations is thin.

The current state-of-the-art is completely irrelevant to OP's point, though.

what if they came back from the future and installed that physical limitation barrier so we couldn't breach it?

>There is nothing in this book seriously speaking against this idea. If you have a real argument, make it.

Learn what AI fucking is and stop reading popsci. Being afraid of AI makes no more sense than being afraid of toasters that in the future ~may~ become so hot they will ignite the atmosphere.

It doesn't matter, the postmodern communists will destroy civilization in the next decade.

I imagine that means you do not have any actual arguments, then?

I'm scared of AI because Elon Musk said I should be

>everyone should be as cynical and jaded as me lmao

i like your style of argument, it doesn't require a lot of effort or even a frontal lobe. i think i'll adopt it, thanks user

>b-but muh scifi magic
roflmao!

AI is a joke. A* is not going to become self aware. Simulated annealing isn't going to find the secrets of the universe. Basic computability theory debunks "computers are going to improve themselves without limits".

All the recent media coverage is thinly veiled propaganda to get the working class to support basic universal income by tricking them into thinking all (as in 100%) of the jobs will be taken by robots.

I've been trying to explain this to these fucking "i luv science" niggers for months now. They're incapable of understanding why AI cannot surpass us, give up OP.

>A* is not going to become self aware. Simulated annealing isn't going to find the secrets of the universe.
Nobody is claiming anything like that.

>Basic computability theory debunks "computers are going to improve themselves without limits".
Yes. But it does not debunk "computers are going to improve themselves to a limit vastly higher than anything humans can do".

reach what?
That's pure non-essence right there.
It has already been reached and will forever be reached. As above so below.

The water that cannot be washed away
The change that never changes
The quantum fueling the cosmic paradox

You've never read that book.

>undergo unimaginable torture

Paralysis is easier to generate than pain response. The main threat is insanity from sensory deprivation or garbled input.

But we'd have experimented on animals and would know the signs of insanity setting in beforehand, of course...in which case, the experiment would be terminated, any deviant data would be erased, and we'd start over again.

The subject would never remember...OR WOULD THEY?

It's not about the author's opinion, it's about seeing what AI really is.

you will die before anyone creates an ai even close to the human mind in complexity and problem solving ability. nothing to fix, go back to /x/