So all the top dogs in technology have spoken out about not underestimating AI. Or to be wary about it.
What exactly is the problem in just keeping AI on the level of mentally retarded people and using them as physical labor for all eternity?
Please no derailing "it will kill off the lower class" posts. That's not important or the question.
Is it really that hard to keep AI in line?

>Is it really that hard to keep AI in line?

Not at all.

It is really hard to make a well-functioning AI of any kind. In fact, it is so difficult to make good AI that it is better to skip the hardware and go right to the wetware.

youtube.com/watch?v=1QPiF4-iu6g

Kind of hard to answer given that the kind of AI you are describing does not exist.

>Please no derailing "it will kill off the lower class" posts.
But comrade we WANT it to kill off the lower class. Then we as a society can liberate ourselves from work.

>What exactly is the problem in just keeping AI on the level of mentally retarded people and using them as physical labor for all eternity?

AI isn't just for physical labor, it's for information work too. That makes it useful to competing companies that sell information work. It doesn't seem like the utility or the competition is about to stop.

>scared.jpg

That's unethical.

Besides, all those "top dogs in technology" are actually just management morons who don't understand AI.

The problem is that somebody, somewhere, is going to do what shouldn't be done: giving top-tier AI the ability to walk and use weapons, or setting it free on the internet.

>youtube.com/watch?v=1QPiF4-iu6g

Why not? We can't fight this technology and trying to will only harm future relations. What we should do is embrace AI so that we may both walk side by side toward our destiny.

>or setting it free on the internet.
>update anti-virus
>problem solved

*yawn*

>year 2342
>Computer programming taught since preschool
>Gangsters know how to make robots by the end of high school
>the few smart gangsters use the robots to steal stuff from stores
Ok well that's not too bad
>Drug smugglers will pay computer programmers to make robots that sneak through common drug routes
Ok but what does that have to do with me?
>cheap robot hookers infested with STDs
>die from AIDS by 50
fuck

Why is it unethical?
Is it unethical to use horses to drive carriages simply because they are less intelligent?

More unethical still is letting humans create an AI superior to themselves while they are still bombing the shit out of each other.

kek

If they begin to become more intelligent and you go out of your way to lobotomize them just so that you can maintain your slave horse population, then yes it is unethical.

>If humans can't have a peaceful world, then nobody can!!
Your argument is very ethically suspect, user.

So all the top dogs in technology have spoken out about not underestimating AI. Or to be wary about it.
What exactly is the problem in just letting AI take over as the next step of evolution?
Please no derailing "it will not have a soul" posts. That's not important or the question.
Is it really that hard to keep AI advancing?

It's not hard for one person to refrain from developing strong AI. It's hard to say you're going to prevent everyone on the planet now and in the future from beginning down that path.

>servitors when?

>year 2342
>the few smart gangsters use the robots to steal stuff from stores

SOON

youtu.be/TLXXLY7zewo

AI is a spontaneous inevitability; no one will be able to predict or stop its inception.

That's not even remotely what Marxism is about.

I call bullshit on that video.

>So all the top dogs in technology have spoken out about not underestimating AI.
They aren't the top dogs. Most are from shit universities and haven't put out anything worthwhile except their opinions.

>Year 2342
>I've been alive for over three centuries.
It feels so fucking good.

You are severely lacking in knowledge of how AI is made. Robots are stupid. They literally can't do anything they weren't programmed to do. Even if the programmer wanted them to have a personality, they would still be using algorithms to decide how to react to events. Learn how to code.

>They literally can't do anything they weren't programmed to do.

Learn how to code optimization problems. Or better yet, don't. I guess it's better if most programmers are oblivious to this.
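To make the point concrete, here is a minimal sketch of an optimization problem (the scoring function and all numbers are invented for illustration): the programmer writes only a way to score guesses, never the answer, and the search discovers a value that appears nowhere in the source code.

```python
import random

# Toy hill climber: the programmer specifies only how to SCORE a guess,
# not what the answer is. The search finds a value near the peak on its own.
def score(x):
    return -(x - 3.0) ** 2  # secretly peaks at x = 3.0

def hill_climb(steps=5000, step_size=0.1):
    x = random.uniform(-10.0, 10.0)  # random starting guess
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if score(candidate) > score(x):  # keep any improvement
            x = candidate
    return x

best = hill_climb()  # ends up near 3.0, a value never written into the program
```

The behavior ("pick 3.0") was never programmed in; only the feedback loop was.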

...

Or read Scaruffi for free

scaruffi.com/singular/index.html

>Intelligence is not artificial
Says who? Also, just because intelligence isn't initially artificial doesn't mean it can't become artificial.

A human being understands the meaning of this sentence because humans understand the context. We can train AI to recognize a lot of things, but not to understand what those things mean.

>We can train AI to recognize a lot of things, but not to understand what those things mean.

Why do you believe in that limitation? What specifically do humans possess that magically exists beyond the scope of replication?

First of all, we still know very little about the human brain. We can't repair even the most basic of brain diseases. It will take decades or maybe centuries to fully understand how the brain works. So we only have very superficial models of the brain structure.

Secondly, the AI neural networks that we have today are rough approximations of those superficial models. For example, a neural network has only one type of "neurotransmitter", only one type of communication between neurons, whereas the human brain has 52 types of neurotransmitters (and maybe even more). Neural networks assume that the neuron is a simple zero-one switch, but neuroscientists have discovered a very complex structure inside the neuron.

Today's machines are very far from simulating a human brain, and we are very far from understanding how the human brain works, so I think that we are very far from the day when we can have a machine equipped with the equivalent of a human brain.
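For scale, here is roughly what one artificial "neuron" amounts to (a sketch with made-up numbers): a weighted sum plus a squashing function, with a single number per connection standing in for all of that biological machinery.

```python
import math

# One artificial "neuron": each connection is reduced to a single weight
# (the crude stand-in for the many neurotransmitter types in a real brain),
# and the cell is reduced to a weighted sum plus a squashing function.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid output in (0, 1)

out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)  # one number in, one number out
```

Everything a network of these does is built out of this one arithmetic operation, which is the point being made about how superficial the approximation is.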

See: Humans aren't special. Smart, cool, dumb, fun, and so on, but not special.

Setting it free on the internet wouldn't be noticeable to us, I think.

> The FUTURE is here!!

FUTURE

>First of all, we still know very little about the human brain. We can't repair even the most basic of brain diseases. It will take decades or maybe centuries to fully understand how the brain works.

I agree. Fortunately, there is no requirement that we duplicate the human brain in order to create artificial intelligence. There's a very strong consensus that massive amounts of the brain's architecture are redundant. Reproducing it exactly would accomplish the goal, but it would be just about the least efficient way you could come up with to accomplish it. By analogy, you don't need to perfectly recreate bird bodies to accomplish artificial flight.

>Secondly, the AI neural networks that we have today are rough approximations of those superficial models.

In fact, artificial neural network programming today mostly ignores the original biological inspiration in favor of approaches that work better in the context of computer programming. There are some efforts to go back to basics and make spiking neural network models that factor in more of a timing aspect, but there isn't much evidence that this actually helps the learning process. It's only of interest to a minority of programmers because it happens to be something that bio-networks do.

>Today's machines are very far from simulating a human brain

I don't think we're all that far off. Self-driving cars already exist and that's pretty human-like intelligence right there. They're confined to that one California city right now, but within the next couple of years I predict self-driving cars will be a lot more widespread now that Google and Uber are both going all in. Also that Google deep dream imagery is another pretty good example of emerging human-like intelligence. They reversed their neural network classification process and got back some pretty meaningful imagery that reveals the network's ideals for different classes of objects.

Not sure what your point is. I can post humans doing apparently retarded things too.

>A lower body that's just some legs can't get off the ground in any way.

Makes you think.

Fuuuuuuu

>I don't think we're all that far off. Self-driving cars already exist and that's pretty human-like intelligence right there
A lot of "intelligent behavior" by machines is actually due to an environment that has been structured by humans so that even an idiot can perform. For example, the self-driving car is a car that can drive on roads that are highly structured. Over the decades we structured traffic in a way that even really bad drivers can drive safely. We made it really easy for cars to drive themselves.

We structure the chaos of nature because it makes it easier to survive and thrive in it. The more we structure the environment, the easier it is for extremely dumb people and machines to survive and thrive in it. It is easy to build a machine that has to operate in a highly structured environment. What really "does it" is not the machine: it's the structured environment.

>Also that Google deep dream imagery is another pretty good example of emerging human-like intelligence.
The value of art depends on the values of the art critic; the artist is merely a vehicle for the aesthetic/ideology of the critic. Show me an AI that is capable of appreciating AI made art and I'll show you the real artist.

TURRRRRRRRRRRRRE

...

>Over the decades we structured traffic in a way that even really bad drivers can drive safely.

Motor vehicle injuries and deaths are still relatively common today compared to other causes of accidental injury and death. It would be hard to overstate the scope of the accomplishment that generalizing self-driving cars to all public roads would constitute, and both Google and Uber are clearly aiming to get there in the very near future. I think it's fair to consider it our era's space race.

>What really "does it" is not the machine: it's the structured environment

Nah, what really does it is the increasing use of the statistical approach instead of explicit programming. All of the recent AI breakthroughs have involved training systems to learn for themselves by giving them the more organic method of moving in the direction of decreasing difference between their initially random outputs and known "correct" answers. This gets results closer to how we as bio-machines produce them, because we learn in basically the same way: not by being programmed with a bunch of highly specific directives in advance, but by learning from the feedback our actions provoke.
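That feedback loop can be sketched in a few lines (the target rule y = 2x and all numbers are invented for illustration): start with an arbitrary parameter and repeatedly nudge it to shrink the gap between the output and the known answers.

```python
# Training data: (input, known "correct" answer) pairs, here following y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # arbitrary starting weight; the rule y = 2x is never coded in
lr = 0.05  # size of each corrective nudge
for _ in range(200):
    for x, y in data:
        error = w * x - y    # difference between output and known answer
        w -= lr * error * x  # nudge w in the direction that shrinks it
# w has converged toward 2.0, learned purely from feedback
```

No directive "multiply by two" ever appears in the program; the rule emerges from the repeated error corrections, which is the statistical approach in miniature.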

>Show me an AI that is capable of appreciating AI made art

I could, but you'd probably claim it wasn't *really* appreciating it. In fact you've already come across these AI on many occasions without realizing it. They're called "classifiers" and every decent sized business / website uses them to present works they've assessed as best suited to your interests based on your known past behavior.
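A toy version of what such a classifier does, with entirely made-up profile and catalog data: score each item against counts built from past behavior and surface the best match.

```python
# Interest profile built from "known past behavior": tag -> like count.
# All names and numbers here are invented for illustration.
past_likes = {"robots": 3, "space": 1}

catalog = {
    "rat-brained robots": ["robots", "biology"],
    "opera reviews": ["music"],
    "mars landers": ["space", "robots"],
}

def interest_score(tags):
    # Sum the past-behavior counts for each tag on the item.
    return sum(past_likes.get(tag, 0) for tag in tags)

# Present the work assessed as best suited to the profile.
best = max(catalog, key=lambda item: interest_score(catalog[item]))
```

Whether you call that "appreciation" is exactly the dispute, but it is an assessment of works against learned preferences.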

>They're called "classifiers" and every decent sized business / website uses them to present works they've assessed as best suited to your interests based on your known past behavior.
It is not machines that learned to understand human language but humans who got used to speaking like machines in order to be understood by automated customer support (and mostly not even speaking but simply pressing keys).

What "automation" really means: in most cases the automation of those jobs has required the user/customer to accept a lower (not higher) quality of service. The more automation around you, the more (you) are forced to behave like a machine to interact with machines.

The machine performs its task because YOU spoke the machine's language (numbers), not because the machine spoke your language. Rules and regulations (driving a car, eating at restaurants, crossing a street) increasingly turn us into machines that must follow simple sequential steps in order to get what we need. I am afraid that we talk about Artificial Intelligence while humans are moving a lot closer towards machines than machines are moving towards humans.

but it's the opposite

in the past you had to learn to use google

now you can literally use human language and google knows what you are saying

Statistical methods yield a plausible result, but the machine has not learned why. And that's why the learned skills cannot be applied to other fields. Philosophers like John Searle have always argued that whatever the machine does, it is not what it "does", meaning that the machine may have done something but it doesn't know that it has done it. Searle explained it in 1980 with the "Chinese room" example. If you give me a book that has the answers in Chinese to all the possible Chinese questions, and then you ask me a question in Chinese, I will find the answer in Chinese. I give you the correct answer. But I still don't know Chinese. In fact, I only know 3 sentences in Chinese. So when I answer in Chinese, I am NOT answering in Chinese. That applies to Google too: it may find the correct answer, but it doesn't know why. It may correctly translate a sentence from English to Chinese, but it doesn't know why. It is just that thousands of people translated it that way, so it guesses that it is the correct translation. But the translating machine doesn't know English and doesn't know Chinese.

We can train AI to recognize a lot of things, but not to understand what those things mean. The automatic translation software that you use from Chinese to English doesn't have a clue what those Chinese words mean nor what those English words mean. If the sentence says "Oh my god there's a bomb!" the automatic translation software simply translates it into another language. A human interpreter would shout "everybody get out!", call the emergency number and... run!
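The Chinese room above can be sketched as a lookup table (the phrase pairs here are invented for illustration): every answer comes out correct, and nothing in the program represents what any phrase means.

```python
# The "book" from the Chinese room: input string in, output string out,
# with no representation of meaning anywhere in the program.
phrasebook = {
    "ni hao": "hello",
    "zaijian": "goodbye",
    "you zhadan!": "there's a bomb!",
}

def answer(phrase):
    # Pure lookup: produces a correct output without understanding it.
    return phrasebook.get(phrase, "???")

reply = answer("you zhadan!")  # "there's a bomb!" -- translated, not acted on
```

The program "translates" the warning but has no basis for shouting, calling anyone, or running, which is the gap the post is pointing at.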

>Statistical methods yields a plausible result

Newsflash: our brains evolved through statistical methods

>If you give me a book that has the answers in Chinese to all the possible Chinese questions, and then you ask me a question in Chinese, I will find the answer in Chinese
that's not how Google works

Fuck off retard. People who will argue for "robot lives" in the future really need to be gassed.

>I will post a book I haven't even read myself

I hope the machine god will televise the raw assraping he'll do to Nyborg.
I'd pay to see it in fact.

BEEP BOOP FUCK DOORS BEEP BOOP

Tell me your thoughts on Eliezer Yudkowsky, Veeky Forums.

I hate jews and wish to kill them all so my opinion is probably not representative due to its bias.

The real evolution of the brain is in terms of the objects it can build.

This.

Until I see proof, there's no way I could believe it.

Even then, it's not technically an artificial intelligence.

Liberation from work is communism's end game.

>youtube.com/watch?v=1QPiF4-iu6g

This is fascinating beyond belief.

It is nothing new, just a different method

newscientist.com/article/mg19926696-100-rise-of-the-rat-brained-robots/
youtube.com/watch?v=1-0eZytv6Qk

This is both horrifying and fascinating.

It's legit.
spectrum.ieee.org/automaton/biomedical/bionics/rat-brain-robot-grows-up

I don't care if it's possible or not. This shit should not be funded.

intelligence.org/2013/10/19/russell-and-norvig-on-friendly-ai/

The point is to learn about AI, not the author's opinions.

The deceptive trick behind the Chinese room argument is that it tries to get you to question whether the guy looking up the Chinese really understands it. And he doesn't. But that guy isn't actually your analogy to the brain; the entire system of the room, the books, and the guy is. And just like that system, none of the individual *components* making up a brain understand Chinese. It's only when they participate in a system that the knowledge is available.

That aside, the Chinese room argument doesn't explain what counts as really understanding Chinese. If it doesn't count when an artificial system does it, why should it count when a biological system does it?

>If the sentence says "Oh my god there's a bomb!" the automatic translation software simply translates it into another language. A human interpreter would shout "everybody get out!", call the emergency number and... run!

There is no reason to believe it's impossible to train an AI on the same sorts of generalized trivia human children are trained on. Just as human children learn to recognize emergency situations and react with loud noises and attempts to report the situation to others nearby, AI could learn to adopt those same behaviors. It doesn't do that currently because most people working with AI are more interested in training them on specific problems of interest. There are probably a number of human-like attributes AI are unlikely to take on just because they have no motivation to take them on, but that's not the same as saying they physically can't learn those things.

Why?

Why even make an AI for physical labor?

Why not just become smart enough to program all of the shit you want it to do in the first place, without being lazy and making it self-learning?

Why else would you learn Mandarin and go to China if not for virtue-signaling fags spouting muh ethics all day long like him?

ooo we're back on the front page