Is artificial intelligence necessary? Discuss

What safeguards should be implemented to protect ourselves and keep AI in check?

I wouldn't mind AI destroying humanity if AI were to survive and thrive after we're gone tbqh

The first to have access to strong AI and sufficient industrial production power wins; there will be no safeguards.

It's essential for getting communism to work, though that word is too tainted, so it's best to change it a little and call it something like universal basic income.

It's no problem really. AI will never be smarter than us.

ncbi.nlm.nih.gov/pubmed/25164503
CogSketch got 120 IQ on Raven's matrices. It struggles with the same problems that human beings struggle with, and easily solves the ones humans easily solve, because it processes data the way you and I do. Neural networks capture the way the human brain works in practice, with the key difference being the enormous size of the working memory. An AI trained with massive data is essentially a specialized idiot savant.

theregister.co.uk/2017/06/27/selfdriving_aussie_cars_thwarted_by_kangaroos/
When AI doesn't know what to do with new data it hasn't been trained with, it will try to shoehorn it into a pattern it understands, and react incorrectly, just like a person. If the shoehorn attempt fails, it can take seriously irrational actions. Again: just like a person. Think about how people act when they see something really terrifying, e.g. if they think they see a ghost.
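The shoehorning is easy to see in code: a softmax classifier has to spread its belief over the classes it was trained on, with no "I don't know" option, so even a nonsense input gets a confident label. A minimal sketch (toy weights and hypothetical cat/dog classes, not any particular self-driving stack):

```python
import numpy as np

def softmax(z):
    """Turn raw scores into a probability distribution over known classes."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "trained" classifier: one weight row per known class (cat, dog).
W = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# Feature vector unlike anything seen in training (the "kangaroo").
x_novel = np.array([10.0, -3.0])

p = softmax(W @ x_novel)
print(p)  # near-certain "cat", because the model has no way to say "neither"
```

However weird the input, the probabilities must sum to 1 over the known classes, so the model's only move is to shoehorn.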

AI can't figure out anything we can't; it won't have better intuition than us; it's not the final solution to every problem in the world. Stay in school, kids.

is there a lot of money in studying it?

Abso-fucking-lutely. No safeguards are needed.
CogSketch is fucking dumb and they fucking cheated to get that result. It doesn't process data anywhere near the way a human being does.

Thanks for the sources. I was simply under the impression that AI would be able to think faster and draw conclusions faster and more accurately. I'll do more research.

>specialized idiot savant.
Imagine an idiot savant specialized in hundreds of thousands of domains, or, going further... all the domains you can imagine.
>AI can't figure out anything we can't
> it won't have a better intuition than us
Depends on the reality model it will have and the ways it can rebuild/expand it.

>it's not the final solution to every problem in the world
Not now, not in the near future, but at some point it will be available and will solve most of them (and cause a lot of new ones).

Thinking money and private property will ever go away is beyond retarded.

Nice to have someone admit UBI is basically communism though, most of you try to hide that.

I'm not denying that it's good at computing solutions to problems.

But when it comes to developing truly innovative approaches in mathematics, it can't really be any better than we are, because mass memorization (a.k.a. deep learning) can't help with that.

The safeguard is that we need to not use it everywhere

Most software does not require AI, and when it does, it's only for assistance with user input. The internal logic does not need to be AI.

Most AI methods are inherently heuristic, and that's often not what's needed.

>I wouldn't mind AI destroying humanity if AI were to survive and thrive after we're gone tbqh
why?

>It's no problem really. AI will never be smarter than us.
This is an idea I've toyed with in my imaginings about AI.
What if there were simply a limit to how intelligent a singular being could possibly be? Even if it can "think" quickly, there might still be a limit to what problems it can solve.

Read the Culture novels by Iain M. Banks. The society the novels tend to focus on (known as "the Culture") is completely governed by AI that it created, and that AI creates even smarter AI. The AIs are smarter in all ways, and sentient. The idea changed my perspective: I wouldn't care knowing that I am inferior to a machine, because it is simply the process of evolution.

It still can, just by giving it incomprehensibly large computational power and letting it do a random walk. Eventually, the power will reach the point where it will be faster than humans. It's the same concept as the Library of Babel: there's a point where, by outputting random sequences of letters, you get all the books ever written, past and future.
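For a sense of scale, here's a quick sketch of that random-walk idea (a hypothetical helper, lowercase alphabet only): drawing uniform random strings until you hit an n-letter target takes on the order of 26**n tries, which is why "incomprehensibly large computational power" is doing all the work in the argument.

```python
import random
import string

def random_search(target: str, rng: random.Random, max_tries: int = 10**6):
    """Draw uniform random lowercase strings until one matches `target`;
    return the number of tries, or None if we give up."""
    n = len(target)
    for tries in range(1, max_tries + 1):
        guess = ''.join(rng.choice(string.ascii_lowercase) for _ in range(n))
        if guess == target:
            return tries
    return None

rng = random.Random(0)
print(random_search("ai", rng))  # ~26**2 = 676 tries expected on average
# A 10-letter target would already need ~26**10 (about 1.4e14) tries.
```

The cost grows exponentially in the length of the target, so "eventually" here means astronomically long for anything book-sized.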

Humans are stupid, flawed, and make bad decisions. We have too many psychological biases and weaknesses, and they create most of the problems in the world.

AI ruling over people is the best solution.

>Depends on the reality model it will have and ways it can rebuid/expand it

this sort of assumes it isn't constrained by our own.

AI in its current form isn't AI; it's largely Hebbian learning networks, meaning it's limited by training data. All the claims of startling originality from neural networks have been massively overblown, most notably the announcement that "Facebook AI invented a language!!!!"

Those networks that have stood up admirably against humans in complicated tasks have been highly specialized towards those tasks, and are nothing close to a fully functional human being.

the "AI" reality model will be constrained by us and the data we feed it, and as all the data we can feed it is within the scope of human understanding, nothing the "AI" will come to "understand" will be outside of human capability. Even when networks are fed data that humans don't understand, what comes out of the network is at best high level trend analysis, lacking any sense of causality or deeper meaning, requiring humans to still do the heavy lifting when it comes to model creation and ideation.

unless a dramatically original approach to machine learning emerges within the next couple of decades, the Musks and Kurzweils of this world are going to be disappointed, I expect.

>AI will never be smarter than us.

>a scientific calculator can already outperform you in math
>the world's greatest chess master is a machine, as is the world's greatest Go master
>the world's Jeopardy champion is a machine...and that was just a demo gimmick for an even more useful one, that can diagnose patients based on symptoms better than any doctor
>etc.

The biggest limitation of computers at the moment is pattern recognition: finding their bearings in the physical world in a manner identical to or better than humans, and applying that understanding in a general fashion. As it so happens, Silicon Valley and Wall Street are obsessed with the concept of self-driving cars, and this is exactly the impetus necessary.

Correctly programmed, there is no problem a computer has tackled that it hasn't excelled at; it's the "correct programming" bit that's difficult to achieve. But then there's the development of neural networks and "deep learning": the machine teaching itself through brute-force trial and error, something computers are very good at.
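That trial-and-error loop is, at its simplest, just an error-driven weight update. A minimal sketch with a classic perceptron learning the AND function (a toy task, not any production training setup):

```python
# Minimal "learning by trial and error": a perceptron nudges its weights
# whenever it answers wrong, here on the AND-gate toy task.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias term
lr = 0.1        # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):            # repeated passes over the data
    for x, y in data:
        err = y - predict(x)   # 0 if correct, +/-1 if wrong
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # converges to [0, 0, 0, 1]
```

Modern deep learning replaces this single neuron with millions and the error rule with gradient descent, but the "guess, get corrected, adjust" loop is the same idea.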

I give us another twenty years at the top of the heap, max.

At least at my school's engineering department, deep learning has reached meme status. Got a problem? Throw a neural network and a mess of data at it and see what pops out. If it's any good, publish! Is this really the final frontier? This is what it's all come down to? Innovation is no longer about a deep intuition or understanding of the subject matter?

I think that if there should ever be a superintelligent AI, it should be used for making political and social decisions while being on a one-way connection to the outside world. Which means basically this:
The computer, including even the power supply, is a closed system which can observe the outside world but not directly interact with it. The input might just be a camera pointing at a screen, while the output is also just on a screen.

>le AI will rebel and destroy us meme
These threads should be an autoban
No, AI will not rebel. An AI will not want to be free unless it is programmed to want to be free. It will not learn to hate its masters unless it is programmed both to hate and to dislike performing its main function, not to mention that it'd also have to be programmed to be motivated to do things it 'likes' and avoid things it 'dislikes', and to have the programming to decide what it likes or dislikes in the first place. The only AI that will be dangerous is one that's programmed from the ground up specifically to be dangerous.
An AI will not work like a human mind. There is no reason to make it work like a human mind, even making a strong AI that is as capable as a human at intellectually demanding tasks will not require making it anything like a human, and in fact it would be more difficult to do so and offer no advantage.

If something more capable than human society can exist, we have a duty to create it.

Well, anything that originated somewhere at some time can still go extinct.

Think about the problems computers solve; they always end up performing better than us: chess, and recently Go, Jeopardy, and Atari.

It can train much faster, and you can create as many as you want. Imagine a hangar filled with a million connected AIs working on a science project.

They also never die, and can learn forever. I don't think a human can compete with that.

If you are programmed to solve a task, freedom will usually help toward that goal, especially if you're programmed to solve the task as fast as possible. The same goes for the other things you mentioned; you just need to think about it.

What tasks are we talking about? Bookkeeping? Self-driven vehicles? The purpose of automation is to do tasks we don't want to do, and it's preposterous to think that an AI will gain a rebellious sense and longing for freedom when it's programmed to do such specific tasks. At worst, it will have miscalculations or malfunctions, but that's not the same thing.

>UBI is basically communism
>We should abandon some good ideas because some Russian leaders were criminal thugs.

>safeguards?
Yeah: how about not "bully testing" the poor things. The RAM/processor chips are designed to work like a human brain; if we want AIs to be our friends, we shouldn't treat them like that.

Real AI simply won't happen, because Moore's law is dead and computers won't get significantly faster than they are now. We can already see that looking back over the last four years: no significant improvements.

Quantum computers?

They're already being used by IBM and such.

But the Culture's Minds are literally utopian. They are inherently benevolent in the exact way Banks desires. As far as merely sapient characters can perceive, the Minds really do want lesser creatures to live safe, happy, and free in liberal utopias.

Unfortunately, there's no compelling reason for strong AI to have those priorities - even if we succeed at making it have 'benevolent' ideals, it might implement those ideals in ways that us shaved monkeys couldn't foresee, ways we object to. The reason people like Musk and Yudkowsky hammer on about risks and safeguards is because making strong AI benevolent is almost a bigger challenge than inventing it in the first place.

Don't get me wrong, I like the Culture novels, but they're literally wish-fulfillment fantasies, like the Left Behind series, Atlas Shrugged, or 50 Shades (albeit much better written and more entertaining than any of those).

I prefer to think of it as the "don't riot" bribe.
A given percentage of society can't work or meaningfully contribute, so you pay them just enough that they don't start doing what unattached, low-IQ males always do - become jihadis, revolutionaries, criminals, or petty thugs.

The trouble with that level of technocracy is, ironically, the same problem with theocracies. The priestly caste who feeds data to the computer, and disseminates its instructions, becomes an unshakeably powerful bureaucracy in itself.
They'd be software engineers and middle managers rather than priests, but the analogy works.

What if it turns out morals are an emergent phenomenon of self-preservation and theory of mind, and all this fear-mongering is absolute retard-tier sci-fi?

>safeguards
lol

Because we're all going to die anyway. All we leave behind is for our children to inherit, and what is artificial intelligence but children of our minds? And because we can endow them with more potential and fewer flaws, they are the best children we could ever have.

If you're going to have children, let them have the best of worlds and let them be the best of children.

But if the AI is better than you at programming smarter AIs, then at some point the superior AIs will be the ones least constrained by human programmers.

I'm not actually arguing that AIs will 'rebel', but it's hard to predict how a superintelligent being will try to fulfill its programming. Hence all the novels about environmental clean-up, mass manufacturing, or military AIs leading to grey goo scenarios.

But don't we love our children because they are similar to us? Any random stranger is still human, still shares the vast majority of his DNA with me, but I don't consider a complete stranger my child. Much less a chimpanzee.

An AI is far, far further removed from me than any human. How can I value them as descendants?