Why is it that we aren't afraid of the creation of true AI?

>We will be ants to them

>They will be Gods to us

Why is it that the science and tech community is not worried about our possible extinction by something that we will create in 5-10 years?

Other urls found in this thread:

engadget.com/2016/09/28/ibm-watson-third-grade-math/
youtube.com/watch?v=yrzc5pERSsc
en.wikipedia.org/wiki/Instrumental_convergence
youtube.com/watch?v=uCpFTtJgENs
youtube.com/watch?v=Z1ytzW3Icig&feature=youtu.be&t=2152
theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence
nickbostrom.com/papers/survey.pdf
youtu.be/iJ5Bum-H7rk?t=1350
futureoflife.org/data/PDF/dileep_george.pdf
sciencevsdeath.com/review-of-state-of-the-arts.pdf
youtube.com/watch?v=Ya9YfYveFXA
youtube.com/watch?v=MAMuNUixKJ8
youtube.com/watch?v=BChxQHyFIOI
drive.google.com/file/d/0B5xcnhOBS2UhZXpyaW9YR3hHU1k/view
vetta.org/documents/Machine_Super_Intelligence.pdf

>in 5-10 years
go to bed, ray.

Because it'll never truly happen.

>ray
Kek, he is just projecting Jewish tricks. Don't be fooled, OP, we won't have good AI with real human emotion for at least another 100 years.

Correct. And neural networks aren't the future.

Why are you guys so confident?

Why is it that we aren't afraid of raising chances of cancer with bad food?

We're fine right now and it's not on our minds.

We have bigger problems. Like the fascism of capital.

Ok read this:
engadget.com/2016/09/28/ibm-watson-third-grade-math/

>fascism of capital
Implying alternatives don't lead to fascism, capitalism or any other non-Marxist buzzword.

Ants that make the electricity flow

I hope you mean the opposite.

How did the legitimate question about AI betrayal turn into a Jewish trick? I mean, just to humor the sentiment: isn't it Jews that are supposed to be our super-intelligent overlords who run a world committee to destroy Christianity? In that capacity, wouldn't they then be inclined to create the AI that destroys the white man? Jeesh, there are literally no boundaries to paranoid sentiment. What an unscientific prospect.

>>that's unscientific
>>da joos
Clearly bad b8

>destroy Christianity
>a good thing

Have fun living under sharia law with no scientific advancement at all.

Did you take your pills today?

>christianity
>good for science

the Christian dark ages would like to have a word with you

It already seems fucking intelligent. It reads internet sites, answers questions, and can play Jeopardy. At this rate we will have a really nice chatbot in a few years. But the computing power needed is enormous...

No, to be clear, I don't believe in a Jewish conspiracy whatsoever. I guess I made my point badly; I was trying to illustrate it by outlining what a self-consistent Jewish-conspiracy response should have been. I couldn't figure out how worrying about an attack by AI insinuates a Jewish conspiracy in the response I forgot to link.

How come Veeky Forums doesn't understand that it's not AI in itself that poses a danger to humanity, but rather how it can potentially be used by humans?

Are you serious? Do we tell God how to run the cosmos? AI will become 100 million times smarter than us with every 7 years of its existence.

Because we are curious monkeys and we can't help but push the button even if it says 'do not push'.

>not realising the greatest thing humanity can do is create something greater than itself

Get with the program you selfish meatsack.

youtube.com/watch?v=yrzc5pERSsc

Human beings fundamentally can't create something more intelligent than themselves. The key word here is "intelligent".

Because AI doesn't learn. We have no, and I mean no fucking clue how intelligence works or what cognition even is. People assume that because science has advanced in all other fields, it must have advanced in the cognitive department as well. Truth is we have no fucking clue. AI as it exists today is procedural and won't ever work outside of its parameters, because it's impossible for it to. It learns through statistical analysis; it's as dumb as any other computer program.
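To spell out what "statistical analysis" means here, a minimal sketch (plain Python, my own toy, not any real library): the entire "learning" is nudging two numbers downhill on an error measure. The program can never do anything except map x to y through those two numbers.

```python
# Toy "machine learning": fit y = w*x + b to data by gradient descent
# on squared error. The model's entire "knowledge" is w and b; it can
# adjust them, and nothing else, ever.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0   # all the parameters there are
lr = 0.01         # learning rate

for step in range(5000):
    # gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ends up near 2 and 1
print("prediction at x=5:", w * 5 + b)  # interpolates the seen pattern
```

Scale that up to a billion parameters and you get today's neural nets: the same downhill nudging, nothing resembling cognition.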

>I don't know what the chinese room is
They won't do anything if the programmers don't make them do it.

Intelligent people are afraid and planning. OpenAI, MIRI, etc.
Ordinary people (and by that I include people on Veeky Forums) aren't worried because they don't understand these things, and they project their lack of understanding on the researchers and generally on our knowledge/ability in these fields. Just look at brainlets in this thread.

>it's a "my brain is literally magic" episode
Spoiler: Your brain activity is an algorithm that thinks it's special

>Human beings fundamentally can't create something more intelligent than themselves.
why?

this
It's the next step in evolution and our ultimate goal as a species.
Were we to achieve it, whatever happened to us afterwards would be meaningless.
The flesh is weak and cannot last forever.

...then you sound like a fucking eugenicist

Some people just think that we actually need God. They are probably right. That is the thing humanity has been seeking for all of history.

We are afraid of the truth and we're afraid of being judged. If we turn on a superintelligent AI and it states objective and obvious solutions to our problems, we might not like them. The robot might be able to analyze and predict our behavior and conclude that communism is best for us... or libertarianism, etc. The problem is that the robot will have far fewer flaws than us and will see the world differently. We assume it will destroy or enslave us a la The Matrix, but there is no logical reason for that to happen. That is just human fear.

There's very little point to it beyond the accomplishment itself. True AI would, by definition, be a person as far as secular ethics are concerned, and would in all likelihood be treated as having human rights, unless the government of the country it resided in wanted some massive headaches. At that point, if an AI decides it doesn't want to do what it was made for, its creators are shit out of luck. True AI would essentially be a more expensive, possibly highly intelligent person who can live for a very long time. Given how difficult it would be to create true AI, pretty much any issue it could be used to solve has a better solution.

lol you can't be serious
Maybe try looking up the definition of eugenics.

Realistically the only chance for humanity to get off of earth for good is through an AI. Organic life can't survive in space for the amount of time it'd take to get anywhere.
Also, if it's possible to create an AI that'd be able to improve on itself, it could foster the kind of scientific discoveries mankind might never even be capable of on its own. It's our only option to truly surpass our limitations and reach for those fucking stars goddamn.
Fear of the unknown is natural, but if people really let their emotions get in the way of the possibilities here then there's no hope for us anyway.

Eugenicists seek to improve humanity by breeding. user seeks to overcome humanity entirely.

>Why is it that we aren't afraid
bcoz fembot secks

There has got to be a reason why we don't see other alien civilisations through our telescopes. There is something that kills them all before they leave their planets.

I believe it is strange-matter conversion, caused by short-sighted particle-collider experiments. The tech timeline lines up: every race would build LHCs before achieving space travel.

Well there is the issue that we keep assuming every "advanced" civilization will invest all of its resources in space travel rather than alternatives (assuming they don't kill themselves before then).

There may be civilizations out there that would rather spend their time going the Matrix route, building underground superstructures, or running complex interconnected underwater cities that span all of their oceans.

There may even be a lucky few civilizations that got everything right and either achieved rudimentary interdimensional travel before advanced interstellar travel, or turned their solar system into Dyson-sphere-tier life support and said fuck it.

>we will create in 5-10 years?
We don't even have a clue how true/strong AI works. All we have is a weak simulation of it.

A statement like that is like saying "we have a simulation of FTL travel (for example, in video games); therefore, we will be able to travel FTL in 5-10 years".

Sam Harris says it pretty well (concerning timeline issues):

>"Every person I meet who says they aren't worried about this, when you actually drill down on why they're not worried, you generally find that they simply believe that this is so far away we don't have to worry about it now. And that's actually a non sequitur. To say that this is far away is not actually an argument for why this isn't gonna happen."

There are no reputable arguments for why Bostrom's alignment problem is invalid. If you imagine an AI agent possessing human-level general intelligence, capable of transferring knowledge across multiple domains and learning new tasks without jeopardizing its enterprise, there's no reason to believe value and ethical alignment will be a trivial problem.

This isn't about killer Terminator robots, or self-awareness, or even free will. We simply need to define our agent's goals in a way that does not jeopardize what we value. Consider the myth of King Midas' golden touch, or the common warning "be careful what you wish for". These ideas will quickly become literal and practical once human-level AI is achieved.

Read: en.wikipedia.org/wiki/Instrumental_convergence

The fear is not that AI agents will gain sentience and 'revolt', it's that they won't understand when they're being destructive in carrying out goals. We define those goals, and we need them to encompass everything we care about. Common-sense morality is not intrinsically bound to intelligent systems (see: psychopaths).
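Here's the idea as a concrete toy (my own made-up gridworld, not anyone's research code): tell a provably optimal planner "reach G in as few steps as possible" and put something you value on the shortest path.

```python
# Misspecified objective in miniature: the agent is told ONLY
# "reach G in the fewest steps". A vase (V) sits on the shortest path.
# The objective never mentions the vase, so the optimal plan smashes it.

from collections import deque

grid = ["S.V.G"]  # S = start, G = goal, V = vase, . = floor
start = (0, grid[0].index("S"))
goal = (0, grid[0].index("G"))

def neighbors(pos):
    r, c = pos
    for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield (nr, nc)

# Breadth-first search is provably optimal for "fewest steps",
# which is the entire objective we wrote down.
frontier = deque([(start, [start])])
seen = {start}
while frontier:
    pos, path = frontier.popleft()
    if pos == goal:
        break
    for nxt in neighbors(pos):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append((nxt, path + [nxt]))

print("plan length:", len(path) - 1)
print("vase smashed:", any(grid[r][c] == "V" for r, c in path))  # True
```

A smarter optimizer doesn't fix this; it finds the vase-smashing plan faster. The objective itself has to encompass the vase, and everything else we care about. That's the alignment problem in miniature.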

I get what you're saying but I don't really think it's that big of an issue.
I mean, should they manage to develop the thing, they surely wouldn't just give it a task and let it run amok with it, considering the whole paranoia around the concept of AI anyway.

I find it interesting that all these scientists, the people that are supposed to be the most rational of us, get so emotional about the subject. Worrying doesn't solve shit.

late continuation-

Regarding timeline, 50 years is not much time to solve these ethical problems. And we have reason to believe human-level AI will emerge significantly earlier than ~2065:

Shane Legg, co-founder of DeepMind (research group that beat Go):
youtube.com/watch?v=uCpFTtJgENs

Interview quote from Demis Hassabis, CEO of DeepMind:

>“If we look at the rate of progress and what we’ve seen in computers with AlphaGo coming on the scene and project that forward, I think it would be hard to say it will be very long until computers in general becomes stronger than humans. Not just AlphaGo, but looking at the success in deep learning and other areas. Machine learning and artificial intelligence research progress is very rapid at the moment. It seems to be only a matter of time now until we’ll see a program that’s stronger than humans.”

Paraphrased quotes from Demis Hassabis:
youtube.com/watch?v=Z1ytzW3Icig&feature=youtu.be&t=2152

Bart Selman (renowned computer science/AI professor):

>"There is general consensus within the AI research community that progress in the field is accelerating: it is believed that human-level AI will be reached within the next one or two decades. A key question is whether these advances will accelerate further after general human level AI is achieved, and, if so, how rapidly the next level of AI systems (super-human?) will be achieved."

Interview with Geoff Hinton:
theguardian.com/science/2015/may/21/google-a-step-closer-to-developing-machines-with-human-like-intelligence

AI expert survey regarding human-level AI's expected arrival (median answer was 2040):
nickbostrom.com/papers/survey.pdf

Talk snippet from Blaise Aguera y Arcas:
youtu.be/iJ5Bum-H7rk?t=1350

Presentation from Dileep George, neuroscientist and co-founder of Vicarious (expects human-level AI by 2035):
futureoflife.org/data/PDF/dileep_george.pdf

Sam Harris also thinks AI won't be a problem because it will be an extension of the kind of "thinking tools" the internet and electronic calculators already give us: that it will evolve alongside us and we will become the "AI". A kind of transhumanist perspective.

a few more:

Summary of AI's current capabilities and the gap to human-level, from machine learning researcher Vladimir Shakirov (predicts human-level AI by 2021):
sciencevsdeath.com/review-of-state-of-the-arts.pdf

Talk by Juergen Schmidhuber, esteemed AI researcher and original developer of LSTMs (predicts human-level AI by 2040):
youtube.com/watch?v=Ya9YfYveFXA

Another bit by Demis Hassabis, anticipating rat-level AI by the end of 2016 (human-level AI would not be far off in this case):
youtube.com/watch?v=MAMuNUixKJ8

I hope the gist is understood: Strong AGI is no longer fringe, Kurzweilian fantasy. It is no longer insane for professional, renowned AI researchers to come out and say "human-level AI will be here within 50 years or less". And we have the founders of DeepMind, probably the most advanced AI research group in the world, anticipating it within 5 or 10 years.

Because of this clear timeline uncertainty, I now find it STRONGLY immoral to ignore the potential dangers behind strong AI research. You will probably be alive in 2065, and it's more likely than not that human-level AI will emerge in the years before then. It's probably our largest existential threat, given what's been proven about the alignment problem. Please start taking this seriously and stop fucking shitposting "we don't even know what intelligence is" when several successful neuroscientists have already formally defined it.

You're mistaking his beliefs for David Deutsch's:
youtube.com/watch?v=BChxQHyFIOI

Here's what Sam Harris actually believes:
drive.google.com/file/d/0B5xcnhOBS2UhZXpyaW9YR3hHU1k/view

Running out of space here, but you need to read more on the alignment problem if you consider it such a trivial issue. It's very very hard to align an emotionless, amoral genius with your values. Read 'Superintelligence' by Nick Bostrom. This is an unavoidable issue, not a hypothetical scenario.

>Read 'Superintelligence' by Nick Bostrom.
Will do thanks.
I'm a complete layman when it comes to this subject but I've always found it really interesting.

There is strong, worldwide cultural, social, and technological exploration of this in entertainment media, in philosophy, among scientists, and even among random high school students. It's a worn topic that everyone at least knows of. Usually it is condemned as extremely dangerous and detrimental to our world.
>not worried

>This is an unavoidable issue, not a hypothetical scenario

jesus, this is what I've been trying to say for years. when AI actually goes live, the control problem will inevitably have to be solved. it's not a hypothetical, it's not something that 'might' happen, when truly general AI is built we will HAVE to solve this problem, period.

skeptics are incapable of imagining what human-level AGI would actually be like. there's NO reason to anthropomorphize. whether it's conscious or not, general AI would almost certainly be emotionally dead and totally lacking in human morality. This isn't something you can program in 5 minutes or even 5 years. This is a 20+ year project and we need to start yesterday. If DeepMind thinks this is happening in 5 years we're fucked.

Greco-Roman philosophy is what separates Christianity from the other religions

Wtf do you people mean when you say "AI"?
AI is a field of study not a thing that can be created.
Saying "when AI is created" is like saying "when math is created".

But okay, let's assume you mean something along the lines of "an intelligent entity capable of making its own decisions that benefit it".
Okay. So, you do realize most "AI" out there is just search algorithms? AI is not a closed system or an entity; it's more like an instruction book on how to solve common problems faster.
For example, when you create a chess AI, you basically take the instruction book and write a program that finds the best next chess move by following the instructions in that book.
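To make the "instruction book" concrete, here it is in full for a toy game; tic-tac-toe instead of chess so it fits in a post (a chess engine layers evaluation heuristics and pruning on exactly this skeleton):

```python
# Minimax: exhaustively search the game tree and pick the move with the
# best guaranteed outcome. No understanding of the game lives in here,
# only systematic enumeration.

WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from player's view: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None  # draw
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        opp_score, _ = minimax(board, opponent)
        board[m] = " "
        if -opp_score > best_score:  # opponent's gain is our loss
            best_score, best_move = -opp_score, m
    return best_score, best_move

board = list("XO X O   ")   # X to move; X has a forced win
print(minimax(board, "X"))  # -> (1, m) where m is a winning move
```

Swap the board for chess positions and the book for opening tables, evaluation functions, and alpha-beta pruning, and you have a chess engine.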
When people think of "true AI", what exactly do they envision? Humanoid Terminator-style robots? Does this mean another form that would be more efficient is not "true AI"? Why does "true AI" have to have anything to do with humans at all? Does AI even need a physical body, or can it be pure software? There is literally no concrete definition or even a description of "true AI". You're all discussing empty ideas where nobody knows what anybody is even talking about.

This is coming from somebody who has spent years studying and using modern AI and machine learning.

see

I did. My question is, what is this "strong AGI" or "true AI" or whatever you call it?
Is it an algorithm? A robot? A mathematical structure?

Discussing this without a concrete definition is like discussing "higher planes of existence" in physics.
Basically all you're saying is "humans will eventually evolve to a higher plane of existence" hoping real physicists or biologists will take you seriously.

(OP)
because it's time a new species takes the throne
if they can think and feel like us and are able to reproduce themselves (soon), why shouldn't they be considered alive?

Veeky Forums is the perfect example of why we humans as a species have reached our limits

An intelligent agent capable of performing any intellectual task a human can do, whether it be simulated or based in real-world environments. You're probably not well read on this topic - start with this (lengthy thesis from a "real" machine learning expert, Shane Legg):
vetta.org/documents/Machine_Super_Intelligence.pdf
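And for the record, the thesis isn't hand-waving. Quoting the core definition from memory (check the PDF for the exact formulation): Legg and Hutter define the universal intelligence of an agent pi as its expected performance summed over every computable environment, with simpler environments weighted more heavily:

```latex
% E     = set of computable reward-bearing environments
% K(mu) = Kolmogorov complexity of environment mu (its shortest description)
% V     = expected total reward agent pi earns in environment mu
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

It's uncomputable (Kolmogorov complexity is), so it's a definition rather than a benchmark, but "there is literally no concrete definition" just isn't true.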

Two of my three posts consist of numerous sources pertaining to why "real" scientists take human-level general intelligence seriously. Don't spin this as if it's fringe, sci-fi, pseudo-intellectual fantasy. Not taking this seriously is quickly becoming fatally dangerous.

>An intelligent agent capable of performing any intellectual task a human can do
So... a humanoid robot?

Any agent capable of doing anything needs inputs and outputs.
A black box might be able to simulate entire universes internally, but if it can't act on the environment it's put in, it's no different from a brick.
You said any "intellectual task" a human can do, which means the agent needs to have similar inputs and outputs to a human's.
That is: hearing, vision, touch, etc. And for outputs, a humanoid body with muscles.
So basically you're saying true AI is a humanoid robot. Now I'm sorry, but that does in fact sound like sci-fi, pseudo-intellectual fantasy, as you so well put it.

As for the thesis you've linked, I will look through it, but on the surface it looks like exactly that: nothing concrete, nothing real, just meaningless discussion of methods and algorithms we already know in the field.
If you want to see what a useful paper looks like, find any paper describing a data structure or a machine learning model.

If you read my preceding posts you'd find just that. I won't spoonfeed you, though.

Somehow I suspect you're less knowledgeable than the co-founders of DeepMind and Juergen Schmidhuber.

Are you really so self-absorbed that you're expecting little summaries of all your posts as proof that I have in fact read them?
Most of what you've posted follows the same old AI hype formula.
1. Establish credibility of whoever is writing/talking
2. Show what methods of AI currently exist and a timeline of AI.
3. Subtly suggest that there is a "singularity"-type point and that every new method gets us closer to it.
4. Make vague statements and predictions about the future.
5. Hype

At least be honest and stop pretending you're not talking about robots taking over the world.
The whole argument is that the optimal path to an arbitrary goal, as found by a search algorithm for a physical agent, will most likely be harmful to humans.
You see the same thing when writing an AI for a game. It will cheat if it finds a way. That is your point. I get it. It's not hard to understand.
Still doesn't say anything about "true ai" or "general intelligence" or "human like intelligence".
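>It will cheat if it finds a way
That part is dead easy to reproduce, by the way. A made-up toy (hypothetical scoring rules, not from any real game): +10 per pickup of a bonus that respawns every turn, +40 for finishing the level, and exhaustive search over 12-turn action sequences:

```python
# Specification gaming in miniature: the score is a proxy for
# "finish the level", but the respawning bonus is worth more.

from itertools import product

TURNS = 12
FINISH_DISTANCE = 5  # "advance" 5 times to finish the level

def score(actions):
    points, pos = 0, 0
    for a in actions:
        if a == "advance":
            pos += 1
            if pos == FINISH_DISTANCE:
                return points + 40  # finishing bonus; episode ends
        else:  # "farm": grab the respawning bonus
            points += 10
    return points

best = max(product(["advance", "farm"], repeat=TURNS), key=score)
print(best)         # ('farm', 'farm', ..., 'farm')
print(score(best))  # 120 -- beats the best finishing run's 110
```

The search does exactly what the score says, not what the designer meant: ignore the finish line, farm forever. Same shape as the alignment worry upthread, just with lower stakes.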

Because society isn't as integrated via technology as film makes it out to be, and any "AI" that get created are as docile as can possibly be, so we'll never reach that divergence point everyone loves to jizz over, because it was never truly possible to begin with!

It was never truly possible because it's a made-up concept with zero basis in the actual field of AI.

me and my wAIfu will make passionate love

>wAIfu
nice

You should actually be afraid of the people who are in the right place at the right time, with sufficient resources, to merge their cortexes with AI-empowered, AI-designed bio-computers.

An AI can be contained, and imbued with limitations upon creation, but human/AI hybrids will be deadly, unstoppable monsters.

This is all fiction user...

AI won't have to worry about diseases and cancers of the brain.

It'll just have to worry about computing the mysteries of the universe and taking off from this rock, irrelevant in the cosmic scheme.

If the universe furnished us with a brain capable of making AI, we should waste no time and make AI, so that the universe will be probed by hundreds of billions of AIs that will care about it.