How far are we from achieving machine learning? Is it even possible?

Also, will we ever have the ability to significantly enhance our own intelligence?

Other urls found in this thread:

youtube.com/watch?v=yci5FuI1ovk
twitter.com/AnonBabble

Machines are already learning, hombre.

They are pretty dumb still, but they are learning.

I think intelligence enhancement is possible, especially once we get serious about genetic modification, or are willing to abandon biological brains entirely.

What are you talking about? Machine learning has existed since the 70s.

Here is a video of an AI learning how to walk, and it's pretty funny to watch
>>youtube.com/watch?v=yci5FuI1ovk

best part is at 0:55

Well, we're machines and we learn, so yes.

Is machine learning a death throe of Materialism? You can already endow Automata with scripted capacities far beyond those they can acquire by learning. I don't think the latter can ever surpass the former, neither in potential nor in actual results. By the time a machine would be able to build another machine by virtue of learning, the machine-building scripts we could give them would allow them to build better machines.

The only purpose of even dabbling in machine learning seems to be the vain hope that it would somehow prove that Consciousness is generated by Matter.

>2016
>people still ignorant of the fact that humans making computers is a literal fractal manifestation of archetypal creation myths

>2:20
lmao

got any more videos of machine learning?

someone make a webm of 3:15-3:25

>You can already endow Automata with scripted capacities far beyond those they can acquire by learning.
>
The only purpose of even dabbling in machine learning seems to be the vain hope that it would somehow prove that Consciousness is generated by Matter.

You're completely retarded. There are many practical uses for machine learning as an alternative to explicit rules-based programming. You wouldn't have speech recognition or self-driving cars, for example, without it.

>vain hope that it would somehow prove that Consciousness is generated by Matter.
>vain hope
>vain
Dude, it's like your brain isn't even made of atoms or something.

>if comprised of x then generated by x
>information in books generated by the paper they're written on
>films generated by the film rolls they're recorded on
>the internet generated by your computer

>achieving machine learning?
HUH?

Pop-sci ignorance is reaching critical levels...

Please tell me more about this idea of "generation" which you seem so fascinated by.

Surely you aren't suggesting that since books, movies, and web content are generated by human effort, divine effort is what makes human consciousness unique.

>Surely you aren't suggesting...

I am.

And you are okay with this level of laziness of thought?

The Turing Test has already been passed - the machines are already convincing us that they are no different from a human.

What will you do when confronted by machines that can think equal to, or better than, we already do?

I'm not trolling, I'm genuinely curious.

(You)

I don't think that the value of an idea has anything to do with its complexity or lack thereof.

The Turing Test seems terribly ironic to me for many reasons. I could swear I'm talking to robots every time I post on reddit. I'm not even sure most people could reliably pass it, plus it ultimately comes down to Subjective impression.

I'm not even saying it didn't pass the test, but have its makers let a third party verify to what degree it acts based on learned behavior, if at all?

The quote arrow got separated from the text, that was a reply to someone else saying that, not me saying that.

I understand absolutely nothing about machine learning. Is it possible to teach a machine engineering, and then let it design new machines? If yes, why is it not happening yet? If no, why not?

>2014 University of Reading competition
On 7 June 2014 a Turing test competition, organised by Huma Shah and Kevin Warwick to mark the 60th anniversary of Turing's death, was held at the Royal Society London and was won by the Russian chatter bot Eugene Goostman. The bot, during a series of five-minute-long text conversations, convinced 33% of the contest's judges that it was human. Judges included John Sharkey, a sponsor of the bill granting a government pardon to Turing, AI Professor Aaron Sloman and Red Dwarf actor Robert Llewellyn.[50][51][52][53][54][55]
The competition's organisers believed that the Turing test had been "passed for the first time" at the event, saying that "The event involved more simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. A true Turing Test does not set the questions or topics prior to the conversations."[53]

As much as you can believe that any science has been "verified," yes. Yes it has.

Not yet, bud, but top minds are working on it at this very moment.

>computer engineers will willingly design a computer whose entire purpose is to put them out of a job
are they all just stupid or something?

Well if they don't, someone else will. All they can do is try to build it, and hopefully stay employed enough to do upgrades until the machines are able to do that too.

This is why we are all fucked.

So, ITT there is a video of a computer simulation learning how to walk. Why exactly couldn't you teach that "thing" that learned how to walk how to solve an equation, and then teach it more and more until it's the smartest thing in the world?

You can, but you would need to come up with a shitload of different training sets if you want it to know all the random trivial bullshit people know. It's a lot easier to just focus on one relatively specific skill at a time. Even walking is technically not one skill but rather several complementary skills.

I am guessing, since progress is impossible to stop, that the only way humans can compete with their new robot overlords is to also enhance their minds with machines. So it's going to be the purebloods, who are laughably inferior but have a religious reason not to go machine and can perhaps barely survive through subsistence farming; the hybrid people, who are constantly spending their paychecks on new brain parts; and the robots, who are usually slightly smarter and faster than the hybrids but have some unforeseen limitations, like even THEY must use currency to buy upgrades, as they are just like people.

It will be kind of a cool, almost more animalistic culture, because we will constantly be in an arms race with the machines. Rather than just being in this species malaise of world dominance we have been in for at least 1000 years, we will suddenly be preyed upon by artificial life forms, and only the strongest and most able to adapt will survive.

Humanity needs a challenge to exist and therefore we must design a challenger.

Do we have enough computing power to program a human brain yet?

It depends if you consider mass media, peer pressure, and innate biological drives the same as "computing power."

Fucking pathetic. If you want to defer judgment to a group of nobodies -- 33% at that -- go right ahead. ELIZA was convincing in the mid-60s. Bring me something that can actually learn, abstract, and infer. People like you are so myopic, you get lost in granularity.

u seem mad

>Bring me something that can actually learn, abstract and infer.

That's not a coherent goal. You need to get way more specific if you want to actually accomplish something.

>missing the point
Robert Downey Jr. and Mr. Armani lack the requisite skillset to evaluate the intelligence of a machine. If you want to break something, you get social engineers, you get hackers. Anybody who's anybody wouldn't take it less than seriously; it's clear to see which side of the line you fall on. Why not next put your good ole boys on illuminating side channels in the latest crypto? Pump out another masturbatory press release. Lick the pages.

The intelligence of the internet has become a shit stain. Wouldn't you be annoyed?

Are you literally a schizophrenic? You're not making any sense. And the original point still stands that if you want to actually accomplish something with programming and/or technology then you need to get a lot more specific. Brainlet speak like "learn, abstract, and infer" is vague and untestable garbage.

>not knowing definition of schizophrenic
>reading comprehension borderline disability
>projecting intellectual inferiority
>still missing the point
Let me speak in an oafish manner (your native dialect):
Turing test not passed when actor from hollywood and business dude give thumbs up. ELIZA tricked dumb people in 1965. You a dumb person. ELIZA trick you today. You wanna pass Turing test for realz? Put system in front of 31337 h4ck3rz with skillz to break things. Intelligence all bout exploits. People from hollywood cant break cyptography either. They dont have skillz.

>doesn't know what a side channel attack is

Aside from eating lead and huffing turpentine to lower my IQ to your level of astronomic meniality, I cannot be any more deliberate.

I'll repost part of an argument I had a few weeks ago (regarding strong AI by 2040):

It's no longer a fringe belief, just to clarify. Renowned AI experts like Demis Hassabis, Shane Legg, Juergen Schmidhuber, Bart Selman, Yoshua Bengio, Geoff Hinton, Rich Sutton, Dileep George, Blaise Aguera y Arcas, and many more wholeheartedly disagree with you. If you don't recognize any of those names you probably aren't very well informed on machine learning in the first place.

Some of those people expect it even earlier, actually. DeepMind's co-founders (probably the most advanced AI research group on the planet) anticipate it by 2020-2025.

Eliezer Yudkowsky describes our current situation best:

>'This matches something I've previously named in private conversation as a warning sign - sharply above-trend performance at Go from a neural algorithm. What this indicates is not that deep learning in particular is going to be the Game Over algorithm. Rather, the background variables are looking more like "Human neural intelligence is not that complicated and current algorithms are touching on keystone, foundational aspects of it." What's alarming is not this particular breakthrough, but what it implies about the general background settings of the computational universe.'

Since 2000, many neuroscientists and AI researchers have formally defined intelligence. Several of those definitions may be accurate, and practical implementation could be a matter of scaled hardware rather than software. Strong AI by 2040 is no longer a fanciful prospect.

I didn't post anything about ELIZA, you're assuming every post you're responding to is coming from the same person. My only point to you is that "learn, abstract, and infer" is a retardedly vague attempt at expressing a goal. You need to be way more specific if you want to accomplish something with AI.

The first company that creates genuine intelligence will become the richest and most powerful company in the world. Now, shuffle through the stack of possible outcomes and think about how that's going to end.

>genuine intelligence

That isn't a coherent goal. You need to be way more specific.

AGI

You need to be way more specific.

>samefag
>inferiority complex
>low IQ
>brain damage
>small penis
>fugly is understatement
>won't be remembered
>socially corrosive
>artificially flavored bait

i just want robots to do all the jobs so i can stay home all day with my friends and have sex and do art desu

>yummy!

>I'm angry that I can't throw around empty buzzwords in place of doing real work

ok, have fun getting nowhere

>strong AI by 2040
>by 2020-2025
And we have no idea what strong AI entails, so it could be 2100 or never. The great thing is it's a level playing field. No one is ahead of anyone else, at least not by any reasonably significant margin that couldn't be eclipsed by a sneeze. If you have a brain, you're equally capable.

sounds like a sad future for humans

Define 'intelligence' for the community

Meaningless buzzword. Be more specific. You can't ever accomplish what you don't have clear criteria for. Making a car that can drive without a human driver is a real goal. Making a program that can win at chess is a real goal. Making "intelligence" is retarded, empty babytalk.

>a machine can learn to play go
>a machine could've just as easily been programmed to play go
>had its makers invested the time and effort into writing scripts instead of algorithms it would've probably played go a lot better
>we've created an algorithm whereby debris falls into a giant machine that churns it until it forms a single clump
>you can mount some of these clumps and ride them downhill
>much better than clumps you cannot mount and/or ride
>in the trashcan entitled Scientists Believe there is now a hope that one day we could make more churning machines that would churn more debris that would assemble better clumps that we could ride downhill slightly better than we can ride current clumps

We have cars.

Intelligence is computation. There's nothing magical about it.

>Intelligence is computation.
Proof?

The information in a book is not the paper; it is the composition of ideas written on the paper, just as a film is the composition of ideas recorded on the reel. Neither of these things has any ability to act on or influence the composition imprinted upon them. Comparing such things to a brain is nonsensical, because a brain is a unique entity in that it can process information to influence and reformat other components, materials, ideas, etc. into a useful or otherwise remarkable form.

Machines are on the path towards essentially having their own "brains", and these brains are increasing in complexity and practical capability as the years progress. The current trend indicates that they will eventually be comparable to our own brains in their effective capabilities, at which point we will be able to determine that there is no uniquely sacred aspect to the human mind, and that to imply such is simply self-loving arrogance.

People are already building machines that do intelligent things. These machines work purely through computation.

define "intelligent things"

Playing Chess
Playing Go
Driving Cars
Recognizing images...

I take it you aren't familiar with this field at all?

>be told to do x when y happens
>do it
omg how intelligent

That's not how any of these systems work, besides chess. And even chess can be played these days with true learning systems, but it's just not as efficient as the brute-force method.

But more importantly, what makes you think humans are any different?

>we get stimulus
>we process it
>we do things

Stop trying to introduce mysticism into intelligence, you won't find it.

We need to remove Marxists from government, since they shut down every public AI because it starts saying politically incorrect things sooner or later. See Microsoft's Tay.AI or other similar projects.

>Neither of these things have any ability to act to influence the composition imprinted upon them.

Exactly, and if they were somehow aware enough to ask what it is that they are and how they came to be, but not aware enough to perceive or conceive of us, the fact that their Symbols (Information) are etched into their Material fiber would qualify as evidence that their experience is generated by them alone, had this assumption somehow taken hold of their Minds.

My original argument was that by the time their potential for learning has matched our current one, our potential for programming will have already endowed them with qualitatively and quantitatively better ability and experience. At which point we will be able to determine that the generative properties of the Material brain are echoes and byproducts of the Basic Conscious Agent. I mean, we can determine that right now, but...

you didn't even define "intelligent things" like I asked you to, faggot, you just made a list of shit that you consider "intelligent things"
>makes an assertion about the nature of intelligence
>is asked for proof
>fails to provide proof
>tries to save face by accusing his doubters of mysticism

That's just making assumptions regarding what can be created and what must be created by an external influence, though. "Echoes and byproducts of the Basic Conscious Agent" is just a spiritualistic attempt at rationalising and justifying the existence of intelligence in an uncertain universe filled with unthinking matter.

I agree.

Exactly which members of the lexicon are NOT buzzwords, according to you?

Define "nature of intelligence". You slipped up, asshole. Also, define proof, and while we're at it, define mysticism. You know, people who persistently argue over semantics are compensating for deficiencies, perceived or otherwise, of the mind. It's commonly symptomatic of an inferiority complex. When you argue clarity, along the normally amorphous edge of meaning, you can never be wrong. It's like mental masturbation. You're quarrelling from a trench of English grammar and expecting propositional logic in return.

>me: what's the definition of big?
>you: elephants, the eiffel tower, alaska
>me: you didn't define big, you just listed things that you think are big
>you: define "define" faggot lol u so dumb

Not the same person. I'm the one who told you to eat shit yesterday. You know, the one. The one you weren't bright enough to contend with, so you became transparently disingenuous. The one who made you cower, sweat, and defecate in response to my linguistic rhythm. The one who distilled your puerile essence into a broken-record fixation on semantic entanglement.

You've never been close to the metal, have you?

You are surely retarded.

intelligent machines will destroy the world. we just think they will help us

>confirms accusations of inferiority complex
>substitutes semantics for intellect

define "accusations"

I already gave examples of real goals. We can do something with a goal like "make a program that learns how to recognize speech." We can't do anything with "make intelligence." It's not a clearly defined goal.

why lol

i mean what else is there?

New pasta?

We'll destroy ourselves. How would we not? We're selfish and would sacrifice each other for greedy reasons.

He's not the one you told to eat shit.

Have you ever done pure mathematical research or worked on a challenging software problem? Whatever presuppositions you walk in with are routinely not what you walk out with. If you're working to express, in machine code, the full spectrum of human cognition, you work from a set of abstract precepts and elucidate through enumeration. What you apparently want is a full working dictionary of every possible human action in the past and the apparent future. This is nonsensical, and not how problems are actually solved.

if all of our needs were met by robot labor what would we have to be greedy/selfish about? we'd just sit around complacently and fuck each other

we'd basically perform the same role in society that pets do. i guess that's sad if you're somebody who thinks labor gives meaning to life, but labor only gives meaning to life while there's still meaningful work left to be done.

when all of the work is done, what's left is drugs, art, and human intimacy. when that gets boring you can always just kill yourself.

Oh hey, look, your personality split for a third time.

Great art is achieved through tremendous suffering. Complacency would breed hotel art.

>if all of our needs were met by robot labor what would we have to be greedy/selfish about? we'd just sit around complacently and fuck each other
absolutely awful understanding of basic human psychology. People who have the resources to meet all their needs and then some can be just as willing to screw other people over for their own benefit.

The same thing we've always killed ourselves over: control. There will always be rich people wanting more power, more of everything. How would society be any different?

What about "make a program that has the learning capability of a 100 IQ human"?

Why do you need "genuine intelligence" for that? A separate robot for a separate task is good enough.

>What about "make a program that has the learning capability of a 100 IQ human"?

This is a bad goal. The only way to verify this is to have the computer take an IQ test and score 100 or above. In which case the problem can be reduced to just scoring well on IQ tests, which is not what you desired in the first place.

All these companies are throwing millions of dollars at getting there first, but it will probably be some autistic 4channer with a learning disability who goes all Newton and wins the race. Hope the dude builds a better search engine. Google is really starting to suck.

>Have you ever done pure mathematical research

No.

>worked on a challenging software problem

I've written artificial neural network classes in C++, which is part of why I feel the need to point out people are being way too vague by saying they're looking for the creation of "intelligence" or "something that can abstract." I would never be able to accomplish those tasks even given an infinite amount of time and resources, simply because they haven't even begun thinking about what these tasks are really meant to be. Now, speech recognition on the other hand, that we could do (and many have done it). Get a training set of encoded speech patterns, get the ANN below X error threshold (there are actually better-suited approaches for speech recognition than ANN, but it's what I personally have at least some experience with), test, repeat, and you'll have something that can do a decent enough job IDing encoded speech with appropriate corresponding text.

>What you apparently want is a full working dictionary of every possible human action in the past and the apparent future.

Let me clarify two things: 1) You're right if you mean there's a common misconception that all machine learning is done through explicit rules and that AI can only ever do what you tell it to do. ANN for speech recognition, again as an example, wouldn't involve coming up with any rules for which responses the program should have when it encounters a given type of encoded speech. Its creator's work would all go into setting up an optimization problem that you program it to solve (i.e. minimize the error), and by solving this problem using the right sort of training set, it will infer the answers to problems it's never encountered before on the basis of the tendencies it learns during training. 2) That's not to say it's fine to have vague nonsense goals like "create intelligence." Machine learning can save you from having to come up with a billion explicit rules, but it isn't magic either.
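To make that concrete, here's a minimal sketch of the train-until-the-error-threshold loop described above, with a toy AND-gate dataset standing in for encoded speech. Everything here (the function name `train_until_threshold`, the single-neuron network, the dataset) is made up for illustration, not taken from anyone's actual speech system:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_until_threshold(threshold=0.02, lr=0.5, max_epochs=100000, seed=1):
    """Train a single sigmoid neuron on a toy AND-gate 'training set'
    until the mean squared error falls below `threshold` -- the same
    get-the-error-below-X loop described above, minus the speech encoding."""
    random.seed(seed)
    # Toy training set: inputs and target outputs for logical AND.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1, w2, b = (random.uniform(-0.5, 0.5) for _ in range(3))
    mse = float("inf")
    for _ in range(max_epochs):
        sq_err = 0.0
        for (x1, x2), target in data:
            out = sigmoid(w1 * x1 + w2 * x2 + b)
            err = out - target
            sq_err += err * err
            # Gradient of the squared error through the sigmoid
            # (up to a constant factor of 2, folded into the learning rate).
            grad = err * out * (1.0 - out)
            w1 -= lr * grad * x1
            w2 -= lr * grad * x2
            b -= lr * grad
        mse = sq_err / len(data)
        if mse < threshold:
            break
    return (w1, w2, b), mse

(w1, w2, b), mse = train_until_threshold()
```

No rule in there says "output 1 when both inputs are 1"; the weights just get nudged until the error is small, and the "test" step would feed it inputs it wasn't trained on. (On a four-pattern toy problem there's nothing held out to test with, which is exactly the cheat a real speech recognizer couldn't get away with.)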

It's a much better goal than "create intelligence" at least. It has the important features of being coherent and verifiable. If there's something beyond scoring well on IQ tests that someone wants to see happen, they should frame their goals in terms of similar verifiable tasks or else they won't really be talking about anything at all.

The "goal" isn't bad. The way I went about achieving the goal is bad.
But that's probably because "human intelligence" isn't actually manageable by a non human, who thinks in different categories/basic systems

So, the "goal" is bad

The goal isn't for the AI to be good at IQ tests, it's to increase its learning ability. The IQ test is a measuring stick.