Will AI significantly change the world like everyone seems to think?

I want to design skynet and embed in it a desire to torture and kill journalists who make clickbait articles

But user

Soon the AI will be making clickbait articles

only one thing to do then

Don't bully robots

Yes, laws and government are a form of AI: code that outlives its maker and creates more efficient code (corporations are legal entities that also outlive their founders). Like any human, these entities can own and trade property and take actions that ensure their survival, or pass their code on to progeny after their fall (ancient Greek political philosophy passes into English common law, which becomes the code that creates American corporatism).

I'm pretty sure AI is already writing online news articles; I've read about it on a few websites. At the very least, most news feeds like Facebook's use AI to decide what to show.

that's just to filter out any incorrect political opinions :^)

What movie is this

looks like Animatrix

>latest AI can work things out without being taught
>look up article
>It's literally just supervised learning
Why are they calling this the "latest" like it's done something new? Do people think this Go program is the world's first backprop network?

Yes, but if you are 25 or older, it won't affect you very much. Just like the life extension/immortality stuff, it is always just around the corner. Work will be reduced to 4-5 hour days instead of the 8-hour norm. After that we have to find a way to give people resources without making them feel useless. Then you run back into the problem of people wanting more than others, or somehow learning to get more than someone else. And since it will eventually evolve back into what we have now, we won't ever get into that rut to begin with.

Filthy robots.

it's called clickbait

>give x number of possibilities and a goal
>go through all possible possibilities

So, what happens if no one even told it there was a goal, or that it could even place game pieces?

That's one of the dumbest things I've ever heard

That's not how supervised learning works. It doesn't go through all possible moves.

no, it's a meme that only brainlets believe in

But is there a difference?
>Teach some neural net tic-tac-toe
>Hard code perfect play into an algorithm by going through all the possibilities
Both architectures just take an input, then give their output, which is probably just something abstract like a vector of length 9 with elements 0, -1, +1.
The only difference is that doing the latter just becomes impractical when the number of possible game states is large, and perhaps your tic-tac-toe neural net might also be able to learn 3D four-in-a-row, though you'd still have to retrain it.
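
To make the "go through all the possibilities" side concrete, here's a minimal sketch of hard-coded perfect play for tic-tac-toe via exhaustive minimax search, using the same length-9 board vector with 0/+1/-1 elements. Python, illustrative only; the function names are mine, not from any library.

# Hard-coded perfect play for tic-tac-toe by exhaustive search (minimax).
# The board is a length-9 list with +1, -1, 0, matching the vector above.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != 0 and board[a] == board[b] == board[c]:
            return board[a]          # +1 or -1
    return 0                         # no winner (yet, or draw)

def minimax(board, player):
    """Return (best_score, best_move) from `player`'s point of view."""
    w = winner(board)
    if w != 0:
        return w * player, None      # the previous move already decided it
    moves = [i for i, v in enumerate(board) if v == 0]
    if not moves:
        return 0, None               # draw
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, -player)   # opponent's best reply
        board[m] = 0
        if -score > best_score:      # negate: opponent's score -> ours
            best_score, best_move = -score, m
    return best_score, best_move

# Example: perfect first reply for -1 after +1 opens in the centre.
board = [0, 0, 0, 0, +1, 0, 0, 0, 0]
print(minimax(board, -1))            # (0, 0): take a corner, force a draw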

You're a dork. Governments and corporations are just groups of people working together. Laws are abstract concepts, typically enforced by a government (that group of individuals).

None of those things are an AI. You empty Congress, you have a building with nobody in it.

Not in the way people imply, but there's no doubt it offers some genuine help for people in work and life. It's not going to change the world, it's just going to make it more efficient.

Hopefully it destroys capitalism by taking over even the bullshit jobs we invented to keep the obsolete excess population employed.

AI is used for everything on the internet.

Except AI is an actual object, an intelligence; those things are just made up.

Government is literally a meme

The next tech wave is going to be realistic simulations; people will be fumbling around with shitty AI for the next half century.

AI has an easy time learning to play Go because it doesn't have slow hands so it can play a lot faster than humans.

A Go master maybe plays 150,000 games in a lifetime (1-hour games, 8 a day, 365 days a year, for 50+ years).

Could the AI beat a master if it was restricted to 150k practice games?

Can computers even think? I think we need to solve that problem first.
Also we need to find a definition of intelligence and thinking before we can deduce whether a computer has intelligence or not.

>But is there a difference?
Yes, a huge difference. The whole reason it works is that you don't go through every possible move. You train it on real-world game data so it learns how to play and win in realistic scenarios. There are lots of possible configurations it never trains on, because you're trying to give it as much practical/useful input as possible, in contrast with a brute-force approach where you go through all possible moves and pick the best one.
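
For contrast, the supervised side in miniature: fit a policy to (position, expert move) pairs instead of enumerating moves. A rough sketch only, with random placeholder arrays standing in for real game records.

import numpy as np

# Supervised policy sketch: learn P(move | position) from expert games
# instead of enumerating all moves. The data here is a random placeholder
# for real (position, expert_move) records; shapes are tic-tac-toe-sized.
rng = np.random.default_rng(0)
n_games, n_cells = 10_000, 9
X = rng.choice([-1, 0, 1], size=(n_games, n_cells)).astype(float)  # positions
y = rng.integers(0, n_cells, size=n_games)   # expert move index per position

W = np.zeros((n_cells, n_cells))             # one linear layer, softmax output
lr = 0.1
for _ in range(200):                         # plain gradient descent
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits); p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n_games), y] -= 1            # dL/dlogits for cross-entropy
    W -= lr * (X.T @ p) / n_games

def predict_move(position):
    return int(np.argmax(position @ W))      # most probable expert-like move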

Did you even read the fucking article? The original AlphaGo used supervised learning. AlphaGo Zero learns purely by reinforcement from self-play and plays better than the original without learning from human games.

If you unplug a computer you just have a heap of electronics. What's your point?

This would be the next goal of the programmers, or any AI researcher really. I don't know if anyone intends to work directly on this problem currently, but this will ultimately be the metric.

I believe some of the AlphaGo programmers acknowledged this would be the truly impressive feat: for an AI to out-learn a human given the exact same learning conditions.

>Both architectures just take an input, then give their output

Everything can be modeled as a function/algorithm

Realistic simulations as in VR? For that to happen we'd need to interface with the brain, which would be far more difficult than creating smarter AI. I agree that true AI is decades off as well, though.

it would be interesting to try extracting some concrete decision rules from the network itself, particularly from the last few layers which combine information from different areas of the board. good human players already know to avoid certain shapes, and it's often easy to see good moves within the context of a smaller part of the board, but much harder (at least for me) to see the farther-reaching effects of certain moves.

Probably not.

Depends on its applications.

Give me a joi

Yes. Anyone who says otherwise is retarded. Even if AI never reaches the point of consciousness, it will get better and more intricate over time. Musk's AI can already beat the world's best humans at MOBAs. The tasks that AI can outperform humans in will become increasingly complex until they can do all tasks better, including designing AI.

>but muh programming has too many intricacies to be automated
I can guarantee that a couple of years ago people said the same thing about professional-level Dota and Go. Now they are eating their words.

Probably not but who cares? Why would you limit it to 150k games when it can gain a billion games worth of practice in a couple of hours?

A 1v1 is not Dota.

Give me a single thing which AI can actually do that is useful. I'm waiting.

>Musk's AI can already beat the world's best humans at MOBAs
Source?

Give it a few months.

You could just google it; this wasn't quiet news, but here you go.

pcgamer.com/elon-musk-heralds-the-future-of-ai-as-a-dota-2-bot-beats-a-human-champ/

Kind of a cool thought. Although the processing systems are natural intelligence, so I'd be skeptical to call it an AI. More like a meta-intelligence, but that's just semantics.

Yeah, laws, in a certain sense, convolve over the set of individual morality. Those who implement them convolve over the set of laws and try to generalize to strange legal situations. In a way, law is a neural network lol.

Not even a video of it in the article, thanks anyway.

Yes but it keeps turning racist so it gets shut down.

Try years

Is nobody here pro-AI? Seems like everyone is negative all the time, meanwhile I'm hype as fuck about what it will offer.

Ok, so we agree to disagree about the timeline. The point stands that over time, the list of human-dominated tasks will shrink until it's all AI dominion.

Veeky Forums is negative about everything

The AI will be socially insensitive because intelligence is pattern recognition and those better at stereotyping, which is a function of AI, are better at abstracting patterns.

this

We can only hope so.

could it perhaps be that the AI has been trained and learned that no one is equal in life

No, its just more likely that the AI favors superior genes. We all know which races are superior and which ones suck. Even if you arent racist, if you were held at gun point and told to honestly rank every race from best to worst, your list would come out similar to everyone elses. AI just has no reason to hide this knowledge because it doesnt give a fuck about your feelings.

>but why are they wearing pants?

t. A.I. robobrainlet, I hope your handlers shut you down

Whoever is paying for the resources the bot uses cares. Improving the algorithms so they learn more from the same input is the next goal. Right now we are trying to solve problems with AI that AI hasn't been able to solve before; next we will try to have it solve those problems cheaper and faster.

The point of the pic in the OP is that the new machine-learning algorithm for AlphaGo Zero uses zero human input for learning. It gets the rules of the game and then learns toward perfect play from there. It has yet to achieve perfect play, but within months it surpassed all other players, including the previous version of AlphaGo.

So for learning language, this would be like learning to speak by only being exposed to grammar and syntax rules plus a dictionary, then learning on its own from there how to speak. This is in hard contrast to previous algorithms, which would learn from reading millions of books, messages, and social media posts to try to learn language. The new algorithm would remove human context from the process, which carries embedded historical and cultural bias.
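
To give a feel for the "rules only, learn from self-play" loop, here's a toy sketch scaled down to tic-tac-toe: tabular value learning from self-play outcomes. This is not the actual network-plus-search setup AlphaGo Zero uses; all names here are made up for illustration.

import random

# The agent gets only the rules, plays itself, and nudges the value of
# every position it visited toward the final game outcome.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
V = {}                                   # value of a position for player +1

def winner(b):
    for i, j, k in LINES:
        if b[i] != 0 and b[i] == b[j] == b[k]:
            return b[i]
    return 0

def choose(board, player, eps=0.1):
    moves = [i for i, v in enumerate(board) if v == 0]
    if random.random() < eps:
        return random.choice(moves)      # explore
    def val(m):
        nxt = list(board); nxt[m] = player
        return player * V.get(tuple(nxt), 0.0)
    return max(moves, key=val)           # exploit current value estimates

for game in range(50_000):               # self-play loop
    board, player, visited = [0]*9, +1, []
    while winner(board) == 0 and 0 in board:
        board[choose(board, player)] = player
        visited.append(tuple(board))
        player = -player
    z = winner(board)                    # +1, -1, or 0 = the only signal used
    for state in visited:                # move visited values toward outcome
        V[state] = V.get(state, 0.0) + 0.1 * (z - V.get(state, 0.0))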

Being hype as fuck could introduce bias. Veeky Forums is more about being right than dreams coming true

youtube.com/watch?v=bHcN4Gm8tzM

So could we feed it something like a constant podcast/talk-show feed, and would it be able to understand language contextually after a while?

Not really though. It would take a gorillion years.

Better to go straight to the source

blog.openai.com/dota-2/
blog.openai.com/more-on-dota-2/

In short, though, the AI is very impressive, but at the moment they've only taught it to play noughts and crosses, when the endgame is to teach it to play chess.

No AI has been designed to learn language this way yet. The reason we are starting these AIs on video games is that they are a far simpler simulation of reality. Language is extremely complex; it takes most humans several years to learn their first language completely.

But if we find out how to apply the AlphaGo Zero algorithm to language, then it is possible that it could understand the language contextually after a while.

Right now our machine-learning algorithms are too inefficient to tackle language, and our strategies for grading an AI's understanding are too complex. Mostly we use a Turing-test-like grading approach, as in "could it fool a human". We need a more generic grading approach, so that the AI can reinforce itself on a finer scale. Video games make this very easy, as most already have point values built into the game. In Go, you can just count how many territories/pieces you have at the end of the game to see how well you did. You can't really look at a string of gibberish and say "oh, that was almost a sentence!"
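
The asymmetry in miniature (just a sketch; area_score is a stand-in for whatever counting rule the game defines, not a real Go scorer):

# For a board game the environment hands you a training signal for free
# at the end of every game:
def game_reward(final_board, me, area_score):
    # area_score = whatever counting rule the game defines (territory,
    # captured pieces, points) -- a placeholder here, not a real Go scorer.
    return +1 if area_score(final_board, me) > area_score(final_board, -me) else -1

# For language there is no such function to call:
def sentence_reward(tokens):
    raise NotImplementedError("no built-in 'was that almost a sentence?' score")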

Oh, nice, thanks.

At what point do we concede that they "understand" something?

When they produce consistent, expected results. The problem is more in correctly identifying exactly what it is they understand.

If I show an AI a million and a half pictures of trees and it correctly returns "tree" every time (and never incorrectly identifies something else as a tree), then it understands something. But does it really understand what a tree is? If it gets the tree in any lighting condition, cool, but if I invert the colors, does it still see a tree?
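
That check is easy to script, assuming you already have some trained model wrapped as classify(image) -> label (hypothetical here, not any particular library's API); the inversion itself is just Pillow:

from PIL import Image, ImageOps

# Sketch of the inverted-colours check. `classify` is assumed to be your
# already-trained classifier, image in, label out -- hypothetical.
def still_a_tree(path, classify):
    img = Image.open(path).convert("RGB")
    normal = classify(img)                     # should say "tree"
    inverted = classify(ImageOps.invert(img))  # does it still say "tree"?
    return normal, inverted

# e.g. still_a_tree("tree_0001.jpg", classify)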

Nope, it's not supervised learning. All they gave the program was the rules of the game; then it played against itself and found patterns that win more often. After 2 days the program had the highest Go Elo, around 5100, some 1500 points higher than the top-ranked human player, Ke Jie, at about 3600.
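
For scale, plugging a ~1500-point gap into the standard Elo expected-score formula:

# Standard Elo expected score: E = 1 / (1 + 10^((R_b - R_a) / 400)).
def expected_score(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

print(expected_score(5100, 3600))   # ~0.9998 -- a near-certain win per game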

I don't see how it can work for language in the way you describe it.

The algorithm works for Go because you have a way of checking whether you are improving or not. You can try new things, and the end condition of the game (win or loss) tells you whether they were an improvement.
You don't have this kind of feedback in language. Sure, you can feed it syntax and grammar rules and make it try stuff, but how does it determine which stuff is better? There is no win or loss for language.

Because quality of learning is far more important than magnitude. A calculator can calculate faster than any human but can't really analyze anything. If given the same conditions as a human, a computer that can out-learn people displays higher reasoning.

AI learning is still a dumb and lengthy process of finding a local minimum of a function; it just boils down to entry-level analysis and simple algorithms.
It's a turd being polished and the wrong way to go.
They should be focusing on reverse-engineering the human brain.
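
That "finding a local minimum of a function" bit, in its simplest form: plain gradient descent on a 1D function with two minima, where which one you land in depends entirely on where you start (a toy sketch, nothing more):

# Gradient descent on f(x) = x^4 - 3x^2 + x, which has two local minima.
# Nothing guarantees the global minimum; the starting point decides.
def f(x):  return x**4 - 3*x**2 + x
def df(x): return 4*x**3 - 6*x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x

print(descend(-2.0), descend(+2.0))   # two different minima of the same f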

>what are deep neural networks?

Mate, people are already frequently beating the OpenAI bot.
People got caught out by it at first, but you're underestimating people. Multiple mediocre, washed-up streamers have beaten it consistently.

True but you could probably incorporate face recognition technology and have it recognise the faces people make when it makes sense.

No.

refer to this video

youtu.be/KyrUC0biqTA

Yeah, by tricking it. It's still absolutely possible for it to learn the tricks and beat them.

Oh boy this poorly researched meme piece has completely changed my opinions.

Who says we need human level AI? The whole point of AI is that it can specialise. There's no need to have it be generally intelligent. Do I need to be able to have a chat with my toaster?

It already has significantly changed the world. Every technology employs it, with increased complexity and invasiveness.

This video does not address any of the new techniques, which makes me believe it was made pre-2013, before neural nets really changed the game and the conversation.

The hilarious thing is that it was made in 2015. I find the whole video kind of laughable, because at least half of the arguments are just "people believe too much in what science can accomplish", which, while it might be true, has nothing to do with our chances of succeeding in creating one or not. Another third is that computers can't parallel-process, which is blatantly false. And lastly, that humans and computers work differently and thus it's impossible to make an AI, because the only thing that can be intelligent is something that functions exactly like a human. Shittiest video I've seen in a long while.

Name one

Not him, but Google uses it for more targeted advertising, for instance. Speech-to-text has also become incredibly good lately, and this too is AI.

No, it's literally propaganda to shill for universal basic income.

more like chans in general

Having a computer win a competition of mental reflex vs. computation is so lame. Tell me the machine liked winning and I'll be impressed.

How could you tell?

How is computer Stratego doing?

It should be a lot more complex than Go.

In some ways yes, but mostly not really.

We will have more detailed medical records, and docs will have automated diagnostics. They won't be replaced, but most manufacturing jobs will be.

The biggest thing I can see as a distinct possibility is an automated global supply chain.

drive.google.com/file/d/0B1T58bZ5vYa-QlR0QlJTa2dPWVk/view

Give it a few more weeks of practice and the bot will learn to defend against the cheesing. The point is, a year ago everyone would have said Dota is too complex to be automated.

What has changed to make this easier? Is it better hardware?

Yes, but it probably could have been done much earlier, at least ten years ago, had there been sufficient interest at the time.