Holy shit he's really doing it

He's solving strong AI as we fucking speak.

youtube.com/watch?v=rbsqaJwpu6A

*he and 250 other people

It's not going to work the way they and others are attempting.
Self-awareness developed after basic input, output, storage, processing and referential relationships.

The problem is that they continually try to bypass the annoying mid stage of organisms, the referential relationship and complex pattern recognition and interpretation... and they skip to syntax and image recognition.

If an AI can't even recognize what it is or at least what belongs to it and what does not [survival programming], then all it's going to do is perform parlor tricks no better than an automaton.

I think what you're proposing is currently too computationally intensive to be practical. I don't exactly agree with the details of your argument but I agree with the spirit of the argument in general.

They say it's general purpose, but AlphaGo was specially designed to beat Go; it didn't approximate anything a human does when playing Go.

> no special purpose algorithms
They hand-crafted an MCTS algorithm for it.

They trained it specifically for Go, but the learning algorithm is the same: it takes raw data as input, plus a variable to min/max, and outputs what to do.
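
A minimal sketch of that input/output contract in Python (the class and method names here are hypothetical, not anyone's actual code):

class GeneralAgent:
    def act(self, raw_observation, reward_signal):
        """Take raw data plus the scalar being maximized; return an action."""
        raise NotImplementedError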

Another bingo thread? I wish they would leave.

>half of the time he talks about himself
>the other half he talks about Space Invaders AI
>"He's solving strong AI"
kek

Move along, nothing to see here.

people threw similar fits when chess was "solved".

Why does every talk by him begin with the story of how he played chess as a 4-year-old and wondered where the ideas came from?

Chess is the ultimate "look how smart I am" meme, only surpassed by math, which actually is useful.

>optimizing a set of parameters via gradient descent to play Space Invaders will somehow yield general artificial intelligence

kek

your brain is optimizing the strength of the synapses between your neurons with gradient-descent-like methods. Your brain sends out dopamine to act as reinforcement, with the same timing as the reinforcement learning algorithm he used.

Even without a strong theoretical foundation, other than its similarity to the biological brain, it somehow produces the best learning results ever seen, which should indicate to any rational person that he is on the right track. This is also why they published it in Nature: it has more of a biological foundation (he is a neuroscientist too, after all) than a computational learning theory foundation.

also, it was 100+ games that it played and mastered, so maybe you didn't pay much attention to the video

stay relevant, Veeky Forums (you won't). If this board is just about 300-year-old math/sci, it will be pretty irrelevant in a few years. It's all about metascience now.

>They trained it specifically for Go, but the learning algorithm is the same: it takes raw data as input, plus a variable to min/max, and outputs what to do.
No, that's what a neural network does, but they didn't use a neural network to output moves for AlphaGo.

They used two neural networks as MCTS policies to guide a parallel search of several billion nodes; it's non-general and it's completely unlike what any human does.

DeepMind might be working on some impressive general-purpose AI, but AlphaGo definitely wasn't it; the impressive part was the marketing.

>your brain is optimizing the strength of the synapses between your neurons with gradient-descent-like methods

>metascience

Sure m8. Metascience, big data, disruptive startups, and Elon Musk's wall of batteries. This is the f u t u r e.

Kill yourself please.

no need to cite things you can easily google and verify. A neural network is a model of how animals and humans learn at the synapse and neuron level [insert wikipedia link]. It's clear that the guy I was responding to didn't even watch the video, so that was the point of my post anyway.

this post proves my point. That's why I added "(you won't)".

He claims that it's possible to connect the raw data (what data? screen pixels?) and that after 100 iterations the computer can play Space Invaders. And not a single word about implementation, just a few buzzwords lol.

Fake and gay.

you are clearly too dumb to understand the implementation so why even bother

that's the point; all they're doing right now is creating a bunch of "parlor tricks" as that guy put it

it's not intelligence

No you, if you believe this

Computers, like everything else that exists, are already conscious. What we're trying to mimic when we try to simulate self-awareness are the complex structures that allow for it.

My favorite example is what happens when you black out from drinking. You still walk around and do things even if you can't remember. You are conscious but you aren't aware because you don't have recall.

>(what data? screen pixels?)

It would be possible. Have a program 'scan' the rendering and detect the space thingies by searching for certain patterns.

But that's not even necessary. Literally pick one of their programmers and tell them to program Space Invaders and then directly feed the relevant data, like the positions of the spaceships, into their neural network.

It is not too hard to guess what they did.

the whole point was to make one algorithm learn and play many different Atari games, I think it was 100+. All it got as input was the raw pixel data of the screen and the score for it to optimize (its goal in life); then it learned, like you said, with reinforcement learning, which is how we learn. When something you want happens, you get dopamine, which makes the brain do more of that behavior.

The point about the deep neural network this thing used is that it learns levels of abstraction. It can first learn how its input controls something on the screen by looking at the screen. It can learn to detect different objects on the screen, like the invaders, out of the raw pixels. Then it uses these abstractions to learn higher-level knowledge, like strategies to optimize its score: dodging bullets. But it doesn't just learn this; it also learns how to build up this ontology of the world it's in (the Atari game).

this is how it can learn to play all the different games. But the most difficult part is when the "dopamine" injection comes long after the action that is responsible for it, like how humans can understand that an action helped us even though the reward doesn't arrive until some time later.

Then, not long ago, this same guy did AlphaGo, which does exactly this: you don't know which of your moves in Go were good until you are done with the game and get your reward, win/no win. This is pretty revolutionary, but I don't expect people without any knowledge of this field to really appreciate it; it's way more than just marketing. I guess /g/ is a better forum for this kind of discussion anyway.
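
For anyone curious how the delayed-reward part works mechanically, here is a minimal sketch in Python (tabular Q-learning for clarity; the actual DQN replaces the table with a convolutional network over raw pixels, and the action set here is made up):

import random
from collections import defaultdict

Q = defaultdict(float)                         # Q[(state, action)] -> estimated return
alpha, gamma, eps = 0.1, 0.99, 0.1             # learning rate, discount, exploration
actions = ["left", "right", "fire", "noop"]    # hypothetical Atari-style actions

def act(state):
    if random.random() < eps:                  # sometimes explore at random
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])   # otherwise exploit

def update(state, action, reward, next_state):
    # Bootstrapped target: even if the reward is 0 now, value flows backwards
    # from future states where the score eventually changes. This is what lets
    # a reward arrive long after the action that caused it.
    target = reward + gamma * max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (target - Q[(state, action)])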

>dopamine meme
psychology, go away

>the whole point was to make one algorithm learn and play many different Atari games

psychologists don't know anything about dopamine or any biological basis for their conclusions.

this is more biology and neuroscience

Sadly, this has been a major focus of AI research for the past 20+ years.

I'd post that pic of AI... what I thought it was, what I wanted it to be, what it really was (find the minimum of a function)... but I can't find it and am too lazy.

>Sadly

yeah, I bet you are so sad about this, even though it's not true. Gradient descent is a 60-year-old idea; I don't think they have worked more on it in the last 20 years, that makes no sense. You don't have much clue about what's going on in this field, only your skeptical layman opinions.

>still using end-to-end networks

what is it, 2011?

that's highly retarded. Your consciousness is independent of your memory, fucktard.

At some point you'll be dead and remember nothing, therefore you don't have consciousness
>HURRRR
if I cut out certain parts of your brain, you'll lose (among other things) certain memories, therefore you weren't conscious at those events
>DURRR

consciousness is merely an unnecessary fluctuation of visual and auditory memory.

That's kind of what I was saying though. A distinction should be made between consciousness and consciousness-of-consciousness, as it were.

Predict stock prices if you are so smart

Holy fuck what a retard

>Even without a strong theoretical foundation, other than its similarity to the biological brain, it somehow produces the best learning results ever seen

True, though recently there have been papers on the theoretical basis of deep learning.

>This is also why they published it in Nature

Looks like Veeky Forums is simply jelly of OP-picrelated for having a Nature publication, hehe.

>your brain is optimizing the strength of the synapses between your neurons with gradient-descent-like methods

I'm not the person you are arguing with, but there is a solid body of evidence for various forms of Hebbian learning (primarily STDP) occurring in the brain and driving learning: scholarpedia.org/article/Spike-timing_dependent_plasticity . STDP is equivalent to local gradient descent for a simple objective function.
It is also a consensus among neuroscientists that memory is stored in synapses.
Also, it has been shown that natural receptive fields in mammalian cortex are similar to the receptive fields that emerge in the layers of some deep learning models trained for supervised image recognition: journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003963
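
For the curious, the pair-based STDP rule from that scholarpedia article fits in a few lines (the amplitudes and time constants below are illustrative, not fitted to data):

import math

A_plus, A_minus = 0.01, 0.012      # potentiation/depression amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair; times in ms."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> strengthen the synapse
        return A_plus * math.exp(-dt / tau_plus)
    return -A_minus * math.exp(dt / tau_minus)   # post before pre -> weaken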

"metascience" is a bold way of phrasing it, but deepmind's official goal is to apply its general AI algorithms to scientific research, especially biomedical research.

>And not a single word about implementation, just few buzzwords lol.
Everyone who was interested in it has already read the DQN paper and implemented it themselves. It is a 3-year-old result, which is a lot in machine learning.
Paper: cs.toronto.edu/~vmnih/docs/dqn.pdf
Implementations: github.com/search?utf8=✓&q=DQN
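
For a feel of scale, here is a rough PyTorch sketch of the kind of network the paper describes (the layer sizes follow the later Nature version for 84x84 inputs; treat them as illustrative, not the authors' exact code):

import torch.nn as nn

class DQN(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),   # 4 stacked frames in
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),   # 7x7 spatial map for 84x84 input
            nn.Linear(512, n_actions),               # one Q-value per action
        )

    def forward(self, x):
        return self.net(x / 255.0)   # scale raw pixel values to [0, 1]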

Their models demonstrate the ability to learn to achieve complex goals in various environments, which is a well-known definition of intelligence.

It is hard to deny that deep learning plus variations of gradient descent have led us to lots of breakthroughs.

>C-word
C-word is a meme. The best measure of an AI agent's general intelligence is its total score on a benchmark composed of various environments. You don't need any "consciousness" in an AI system; you need "only" learning and general problem-solving capabilities.
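
As a toy illustration of what "total score on a benchmark" means (the normalization here is a made-up example, not a standard metric):

def general_score(scores, baselines):
    """Average of per-environment scores, each normalized by a baseline
    (e.g. human or random play) so different games are comparable."""
    return sum(scores[env] / baselines[env] for env in scores) / len(scores)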

If you want to educate yourself on the subject of AI, at least watch prof. Hutter's talk: youtube.com/watch?v=F2bQ5TSB-cE

>Sadly

I don't see anything to be sad about, user. The progress in machine learning over the last 5 years has been literally staggering. A stream of breakthroughs. If you choose not to follow the field, that is your problem.

Just one of the breakthroughs: picrelated is the output of a deep learning model trained to answer arbitrary questions about arbitrary images. It has never seen these exact images, but it has learned enough general knowledge from its training set to answer these questions.

If such things don't impress you, then I digress.

Do you think consciousness serves no purpose in human problem solving and learning? Yes? Then we shouldn't have it and it shouldn't be associated with intelligence? No? Then we need consciousness in AI. Is consciousness hard to solve? Yes. So we need to literally figure it out. It is like you don't know the meaning of your words. You just want to oppose what you identify as pop-sci to feel good about yourself.

>picrelated is the output of a deep learning model trained to answer arbitrary questions about arbitrary images. It has never seen these exact images, but it has learned enough general knowledge from its training set to answer these questions.
Fake

I will solve it before this retard.

That feel when you haven't solved strong AI yet.

>Do you think consciousness serves no purpose in human problem solving and learning?

I dislike the concept of "consciousness" because there is no empirical experiment that can show whether an entity (be it a human, an animal, a robot, or a computer program) possesses it.

It is a consensus among researchers that consciousness and other subjective-experience issues are completely irrelevant from the AI/ML point of view. There is already a wide variety of objective benchmarks for assessing learning, problem-solving machines; there is no need for subjective bullshit here.

AI is an empirical science.

>So we need to literally figure it out. It is like you don't know the meaning of your words. You just want to oppose what you identify as pop-sci to feel good about yourself.
I'm still open-minded. Come back later when you have found out how to define consciousness and can prove why it is needed to make intelligent machines.


The closest thing I recall is the so-called "mirror test", where we can detect that an animal reacts differently to a mirror image of itself. But it is not a test of consciousness; it may be explained as a test of the animal having a self-model which produces this behavior when confronted with external stimuli that correlate with it. There is nothing that prohibits modern machine learning algorithms from learning such models and displaying qualitatively similar behavior in this experiment.

Feel free to read the paper and maybe even try the implementation to see for yourself if it is fake or not, user github.com/ethancaballero/Improved-Dynamic-Memory-Networks-DMN-plus (^:

>muh actor critics
I hate how AI is becoming so pop sci that our limited understanding of machine learning is being played up.

The best things we have are TD algorithms, more specifically SARSA/actor-critic models, which are pretty simple, such that a layman could probably understand them in a day (see the sketch below).

Connect them to some neural networks for state/action function approximation
Then you get your "fancy" AI like AlphaGo

It's painful to hear pop-sci outlets holler "markov chain based AI is soo smart"
It's so cringe sometimes
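
To back up the "pretty simple" claim, the whole on-policy SARSA update is two lines (a sketch; real systems swap the dict for a neural network and add exploration schedules):

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    td_error = r + gamma * Q[(s_next, a_next)] - Q[(s, a)]   # on-policy TD target
    Q[(s, a)] += alpha * td_error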

>muh general purpose neural net
it's fucking nothing

How would you encode the task of "picking the goal" in the framework he described?

>SARSA/actor-critic models, which are pretty simple, such that a layman could probably understand them in a day

They are not that simple, and proofs of their optimality take a good part of a book.

>Connect them to some neural networks for state/action function approximation
You are saying it as if it were easy. There are a lot of techniques and engineering needed just to make it work.

The result is very interesting: a general learning agent that can succeed in completely different environments - see arxiv.org/abs/1602.01783
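
Roughly, the loss at the heart of that paper's actor-critic agent looks like this (a hedged sketch; the asynchronous workers, n-step returns, and shared convolutional trunk that the paper says are needed to make it train well are all omitted):

import torch

def a2c_loss(log_prob, value, ret, entropy, value_coef=0.5, entropy_coef=0.01):
    advantage = ret - value                          # how much better than expected
    policy_loss = -(log_prob * advantage.detach())   # push up probability of good actions
    value_loss = value_coef * advantage.pow(2)       # regress value toward observed return
    return (policy_loss + value_loss - entropy_coef * entropy).mean()

# toy usage with dummy one-element tensors:
loss = a2c_loss(torch.tensor([-0.5]), torch.tensor([1.0]),
                torch.tensor([2.0]), torch.tensor([0.7]))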

I don't see what's so >cringe about it. It's a real example of an algorithm learning to execute a nontrivial task by interacting with a simulated environment.

If it's so cringe and AlphaGo is so simple for you, why don't you apply to DeepMind? They pay very well (^:

>Reward: 10 when picked a goal, 0 otherwise

or

>Reward: 100 when picked a goal, -1 on every timestep it didn't pick a goal
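
In code, those two schemes are just different reward functions handed to the same learner (the names here are hypothetical, not from any real codebase):

def sparse_reward(picked_goal):
    return 10.0 if picked_goal else 0.0    # only the goal is rewarded

def shaped_reward(picked_goal):
    return 100.0 if picked_goal else -1.0  # per-step penalty pressures speed

The second usually trains faster, because the agent gets feedback on every step instead of only at the goal.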

I'm certainly not a luddite, and I promote the development of AI (even if Elon Musk's and Stephen Hawking's suggestions come true), but if AI made humans obsolete in the field of science, I would just want to kill myself.

>mirror test
Certainly. The mirror test reveals self-awareness, and can be passed with simple pattern recognition if there is enough evolutionary pressure for it. I can imagine, for example, that ants could pass the mirror test, because they are exposed to reflections and are killed if they don't clean dirty spots that would make them be identified as strangers in their nest.

>how to define consciousness
The problem of qualia can only be solved one way: it is something intrinsic to physics. There is no amount of processing or memory that could result in it. To prove this "conjecture" I would have to spend some time, though. But some people can see it instinctively as well. One way to prove it would be: if (1) you are conscious, (2) you are conscious that you are conscious, (3) you are conscious that you are conscious that you are conscious, infinitely, and if there is no real difference between 1, 2 and 3, meaning they imply the same amount of data, then we have 1, 2, 3 onwards, and there couldn't be any amount of processing and memory that produces qualia. I can answer any questions that arise from this, if you have any.

>I dislike the concept of "consciousness" because there is no empirical experiment that can show whether an entity
Do you experience consciousness though? Yes or No? And would you find it weird if someone said 'No'? This pretty much sets consciousness as the most empirical thing you could possibly get, but also as the foundation for empiricism.

>onwards, and there couldn't be any amount of processing and memory that produces qualia.
A vague "proof" of a vague statement. That's the problem with all such discussions.

>Do you experience consciousness though? Yes or No? And would you find it weird if someone said 'No'?
I don't know, I honestly don't feel anything special about existing.

>This pretty much sets consciousness as the most empirical thing you could possibly get, but also as the foundation for empiricism.
I don't buy this argument.

Btw here is an old copypasta about qualia and ML:

Mr Frogposter Philosopher, nobody cares about qualia and other feels of machines.

Machine Learning & Artificial Intelligence researchers are interested in building systems that automatically solve hard problems (by learning on data and/or interactions with environment). They do this by establishing standard benchmarks and comparing their system performance against these benchmarks.

It's all quantified. The stronger your AI system is, the better scores it gets. The benchmarks are representative of real world problems, so you can expect a better real world performance as well.

If you had a good enough AI (achieving a very good score), you could use it to do wonderful things. For example DeepMind, a leading AI company, is explicitly saying that their goal is automating the scientific process (coming up with hypotheses, checking them, analyzing the data, repeat 1000x).

With strong enough AI/ML humanity could rebuild its environment, find cures for all illnesses, automate 99% of labor, and generally become much more wealthy than it is now. That's why nobody cares about "muh machine special qualia feels", mr Frogposter Philosopher.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nobody cares about feels of problem-solving blackboxes. Real people are dying of illnesses that could be cured with good enough problem-solving applied to molecular biology.

Beating someone in chess means you are clever.

Beating someone in chess while blindfolded means you are cleverer.

Beating 100 people at once while blindfolded means you are more cleverer.

Building a machine/algorithm that can beat 99.999% of people who play chess against it without further input from you means you are still more cleverer.

Getting people to pay attention to you by talking about writing a paper about writing a program that can efficiently twiddle all the way through 100 other programs that have been written to capture the attention of bored children means you are... very... clever...

There are still some places in the world where people remember that clever is not the same as intelligent. Strong AI will be achieved when those people have been exterminated, lobotomized, sterilized, and/or ignored and left to die while their children play video games for hundreds of thousands of hours until they too have warped, banged, and hammered their minds into the shape of the machine.

You don't know if you are conscious?

What the fuck, mate. Are you fucking retarded?

So, what's your argument?

Nice fiction btw.

I don't feel it as something special. And it doesn't matter anyway. People delude themselves all the time. Empiricism and materialism are our only successful attempts at escaping these delusions. Putting subjective phenomena on a pedestal means going back to a subjective dark age.

I don't want to live in a dark age desu.

Technical details of ML are much more interesting and useful for humanity. Potentially very useful.
As I have said, while we are talking, humans are dying due to preventable circumstances. If we had automated human-caring systems (presumably AI-based, because there isn't enough human attention in this world to care about everybody), this wouldn't have to happen.

Alright, I will say this one more time, maybe you will remember it: solving things like consciousness and subjectivity helps us build hard AI. AGI doesn't exist yet, to make it clear for you, because maybe you haven't figured this out, and we want it to exist. So to do this, we have to know what these things are. Do you understand me now? I am not asking if you feel special, or anything like that, but I have the impression that you are dumber than you think you are. I say this politely, because everyone is dumb if we have high enough standards. But the fact that you answer simple questions like this one badly pretty much implies that you are way dumber than you think you are, that you can't be effectively involved in AI until your IQ "magically" goes up, that I am talking to a retarded person. Again, don't take this the wrong way. I said "magically" because I know that if I just said "IQ goes up", Veeky Forums would crucify me with its narcissism; otherwise Veeky Forums wouldn't be a shitty place. So I do think that you could raise your IQ through problem-solving, nutrition, and exercise.

>AGI doesn't exist yet
Doesn't it? At the current rate of progress DeepMind can arrive at it in 10-20 years.

It's a controversial point of view, but all the building blocks for general AI may already exist, scattered among DeepMind's and other ML researchers' papers, especially with the latest work on one-shot learning. Current deep RL agents (see A3C) can already count as "general AI".

>you can't be effectively involved in AI until your IQ "magically" goes up, that I am talking to a retarded person. Again, don't take this the wrong way. I said "magically" because I know that if I just said "IQ goes up", Veeky Forums would crucify me with its narcissism; otherwise Veeky Forums wouldn't be a shitty place. So I do think that you could raise your IQ through problem-solving, nutrition, and exercise.

lol, tell Feynman he couldn't be involved in physics because of his "merely" 125 IQ.

Conflating IQ with research performance is a sign of a crackpot.
>solving things like consciousness and subjectivity helps us build hard AI.
This is a naive crackpot/layman point of view. user, you sound totally like a crackpot at this point. What a waste of time.

Consciousness is a meme. It does not exist.

I don't think anyone can be an authority in physics if their IQ isn't at least the max score on a standard test. And I don't think Feynman specifically deserves any attention, considering all the shit he said and that physics graduates have an average IQ of 133. Never mind your association of mentioning IQ with being a crackpot: IQ is the best test we have for predicting job performance, and not only in research jobs. You can look this up. I wish all researchers were great, but they aren't, and if you think there is no strong relation between quality and intelligence, you are the main crackpot here, buddy.

Now, about something that can count as a "general AI", consider this:
Arguments against AGI derive from Kurt Gödel's 1931 proof, in his first incompleteness theorem, that it is always possible to create statements that a formal system cannot prove. A human being, however, can (with some thought) see the truth of these "Gödel statements". Any Turing program designed to search for these statements can have its methods reduced to a formal system, and so will always have a "Gödel statement" derivable from its program which it can never discover. Roger Penrose speculates that there may be new physics involved in our brain, perhaps at the intersection of gravity and quantum mechanics at the Planck scale. This argument, if accepted, does not rule out the possibility of true artificial intelligence, but means it has to be biological in basis or based on new physical principles.

Lol, you must be an undergrad. It's pretty well accepted that consciousness exists; it's not the mid-20th century anymore. Empiricism is literally predicated on the assumption that consciousness, or experience, or whatever you'd like to call it, exists. And I mean that both as a matter of historical fact, regarding how it emerged in the philosophical milieu of Enlightenment England, and, more importantly, in terms of logic. See, the whole thing about empiricism is that it demands experiential evidence, i.e. something is only taken seriously if evidence of its existence can be experientially detected. Now this doesn't mean that you have to be able to perceptually detect the object itself; but it does mean that at the very least some measuring instrument whose readings we can perceive can detect either direct or indirect evidence of the phenomenon under consideration. The details here are really unimportant, however. Ultimately we're asking for experiential evidence of something. This presupposes that we have experience. Experience is the very framework in which we give and take evidence, so to speak. In order for something to count as evidence it must be capable of being experienced. Thus it makes no sense to ask for evidence of the fact that you're having an experience, since experience itself is presupposed by the notion of empirical evidence.

In short, either experience exists, and empiricism is possible, or it doesn't, and empiricism is impossible. You're not even logically examining your own presuppositions. It's no more possible to demonstrate that consciousness exists than it is to demonstrate that "material" or "external" objects exist. There's no empirical test that can confirm the existence of an external material world, nor any that can demonstrate the existence of immaterial conscious experience. You should really look into the literature on this subject before drawing such overarching conclusions.

>And I don't think Feynman specifically deserves any attention, considering all the shit he said and that physics graduates have an average IQ of 133

KEK.

>A human being, however, can (with some thought) see the truth of these "Gödel statements".

This sounds like a religious dogma. Yet another vague subjective "truth". It's not unlike the phrase "A human being can talk to God".

Also
>Implying animals don't display intelligence
>Implying humans are qualitatively different from other mammals

>Roger Penrose speculates that there may be new physics involved in our brain, perhaps at the intersection of gravity and quantum mechanics at the Planck scale. This argument, if accepted, does not rule out the possibility of true artificial intelligence, but means it has to be biological in basis or based on new physical principles.

I completely agree with this response to Penrose's (crackpot-ish) hypothesis: scottaaronson.com/blog/?p=2756

This degenerate philosophical discussion is precisely why ML researchers avoid the C-word like the plague.
Please, dear philosophy majors, go away already to >>Veeky Forums or to >>/x/.
This is a thread for CS, stat, and applied math people.

These discussions aren't even fun, they are tiring and frustratingly boring.

>A human being, however, can (with some thought) see the truth of these "Gödel statements"
that was just self-aggrandizement (well, human-aggrandizement) and mere conjecture. The statement bears no more weight than if Einstein had said blacks must be exterminated. Using it as an argument is little more than an appeal to authority.

The fact that you request a definition of consciousness and empirical evidence of its existence, while also assuming materialism and the existence of a material world, is retarded. Try to define "material object". You can't: it was already demonstrated in the first half of the twentieth century that if all we're relying on is empiricism, then no definition of either "material objects" or "conscious states" could be produced. If all we're relying on is empiricism, then our only option is to define these things in terms of sense data. Thus the notion of a "material object" is just a logical construct. If all we're relying on is empiricism, then both the notion of an internal subjective world and that of an external physical world become senseless, because neither is something that can be empirically described.

Just look into logical positivism and the work of A. J. Ayer.

>Implying animals don't display intelligence
>Implying humans are qualitatively different from other mammals
This wasn't implied. Another mistake you have made.

>This sounds like religious dogma
Not a refutation. Gödel himself believed this: that brains cannot be reduced to Turing machines, that we can advance math without foundation, that we can always add axioms, and so far we could always add axioms, so there is no evidence of such a thing as a finite set of axioms underlying our brains. The day we do end up stuck at such a limitation, you can say that; otherwise you are the one talking religious dogma, without actual evidence, just like when you say that AGI already exists.

>These discussions aren't even fun
>This degenerate philosophical discussion
They are not supposed to be fun, and you are talking like a mentally ill person. You are the one getting bored with just plain logic; you should get out of Veeky Forums, because logic underlies all mathematical and physical systems. All math is inside philosophy/logic. And we are talking about physics, not even regular philosophy.

>scottaaronson.com/blog/?p=2756
If you have anything to say, please say it, instead of posting a link. If you have any refutation, something you really lack in this thread, please post it instead of expecting people to read an arbitrary day-long text. I didn't ask you to read The Emperor's New Mind; I posted his argument.

Your responses and behavior are becoming pure shit and revealing all the shameful aspects of your thought process. You are only making yourself look dumber and dumber; you refuse to acknowledge any negative feedback, and you are going into some kind of CS/STEM circlejerk mode, even though you are talking to a physicist who is smarter than you.

I wasn't the one who brought up philosophy (I'm graduating with a degree in math after my next semester, BTW). Applied math is for fags btw. Pure math is real math. Anyway, the other dude is the one who turned this into a philosophical discussion by bringing up "materialism". If we're sticking to the realm of science, then we have to be completely impartial with respect to the existence of both consciousness and a material world. Both are metaphysical hypotheses. Actually, I was basically trying to undermine the debate about consciousness and materialism by pointing out that both positions are non-scientific conjectures.

Anyway contemporary cognitive science takes consciousness to be a real topic.

Finally, I'd just like to mention that a lot of professional mathematicians I know, including faculty and grad students at several excellent math programs, are really into stuff like philosophy and linguistics. I honestly have no idea where the whole "philosophy is bullshit" meme came from on Veeky Forums. Literally most of the pure-math people I know who do work in category theory and algebra and stuff like that are into philosophy and linguistics. Actually, I ran into one unpublished paper being passed around that presented a topological interpretation of Hegel's work, which kinda made me lol. I mean, I can understand an interest in someone like Kripke or Chomsky, but I was amused and surprised to see a paper on Hegel.

Most people who are genuinely intelligent and at the top of their field have a wide range of interests and don't just have a myopic view of the world. So yeah, anyway, enjoy solving your integral calculus problems, building stupid fucking shit in your engineering classes, and failing to examine your theoretical presuppositions, while people like me do real math by examining the underlying structure of things to uncover the mathematical rules governing them (generalization and symmetry are the real basis of mathematics).

That is actually from the only evidence that we have, so it is an empirical statement. If you want to believe brains suddenly get stuck within a formal system after arbitrarily breaking free from other formal systems, you are putting yourself in a position that goes not only against the only evidence we have, but against basic logic (breaking free then getting stuck: making new axioms, then getting stuck, unable to make new ones). Which makes me believe that you rushed your way into this subject like an ape violently wanting a banana and that you know fuck all about any of this.

>unlabeled axes

REEEEEEEEEEEE

>you are talking like a mentally ill person

That's what I should say to you, user. There is a certain type of crackpot that behaves similarly to you.

>Gödel himself believed this: that brains cannot be reduced to Turing machines,
>believed
Again, no proof.

>we can advance math without foundation, that we can always add axioms, and so far we could always add axioms, so there is no evidence of such a thing as a finite set of axioms underlying our brains.

Logic, axioms, mathematics are just subjective experiences that occur inside the brain. These abstractions may look beautiful to the subject that experiences them, but in reality they correspond to a pretty dirty stochastic biophysical process. The beauty is just an illusion. The feeling of consciousness and the feeling that humans are special are just more illusions.

The only reality is physics, but most of it is irrelevant to the functioning of the human brain. The lowest practical brain-relevant level of physics is the level of chemistry (interactions of the outer electron shells of atoms). But there is a consensus among (computational (^: ) neuroscience researchers that the functionally relevant layers lie even higher, in the biophysics of whole neurons and synapses. So the computation in the mammalian brain is implemented at a quite crude level (lots of evidence points to this).

It sounds completely ridiculous that the ability to think certain abstract symbol-manipulation thoughts could say anything about a unique physical foundation (Penrose's hypothesis contains exotic, completely unproven physics). It's a total self-delusion born out of human exceptionalism.

>logic underlies all mathematical and physical systems.
My POV is that logic, math, and formal science are just computations that are useful for modeling reality; there is no primacy in them. There is an infinite number of possible logics. Good way to make a research career, btw: publish a dozen papers about a new exotic logic every year (^:

>That is actually from the only evidence that we have, so it is an empirical statement
What retarded reasoning. This is God-of-the-gaps-tier shit.
There is no evidence, and extrapolating whatever evidence you think there is (and even you seem to think there's little) to mean we can't have AGI is just arrogant.
>If you want to believe brains suddenly get stuck within a formal system after arbitrarily breaking free from other formal systems
Or, you know... maybe our brains are just imperfect? Maybe they can be fooled into believing things to be true without conclusive evidence, which would explain a lot of the idiots in the world? No, that makes far too much sense. We MUST be special snowflakes with logical abilities so superior that they could never be understood/replicated by beings who discovered almost everything else from quantum mechanics to relativity, AND that defy the laws of logic.
narcissistic idiot.
>Which makes me believe that you rushed your way into this subject like an ape violently wanting a banana and that you know fuck all about any of this.
kek. how ironic that you'd chimp out

>then no definition of either "material objects" or "conscious states" could be produced.

Material implies consisting of matter; matter is something which has a mass greater than zero; mass is anything occupying space, duh. Mass is ANYTHING occupying space. Therefore material objects can be defined as anything occupying space, and a conscious state is a SUBTLER occupancy

of nothingness.

THEREFORE >could be produced
*bows*

I agree that consciousness is not necessary for something that is strictly AI, but I think you do need it if you want the AI to do anything other than answer queries. Let's just say, for the sake of argument, that "consciousness" is the impetus to new environmental stimuli, which in turn generates spontaneous behavior.

Ever seen the movie "Awakenings"? It's a movie about post-encephalitic patients with a locked-in-like condition, where the intelligence of the human remains intact but they are unable to move. The point is that in these people the reward system was broken (dopamine, serotonin, etc.), which caused them to lose the impetus to interact with their environment, so they sat there doing nothing. With the reward system gone, the intelligence remains intact but all behavior is suspended.

How does the AI community distinguish between an AI that only responds to commands, and an AI which is imbued with the impetus to interact with its environment and seek out new stimuli like living things do? I know that I've seen the AI community refer to "reward" systems, but not in the context of turning behavior on or off.

>impetus to seek out new environmental stimuli
correction

I will ignore your comment about my sanity. Lurkers will judge me and you on this.

>Again, no proof.
I believe there is gravity; that doesn't mean there isn't proof. You should realize that your mind is operating on a very low level of comprehension right now. If all it takes for you to misunderstand me is a mistake of this kind, I am very sorry for you, mate. You have completely ignored the rest of my posts; your tactic is to only reply to parts of them, probably hoping to wipe out someone's memory. Do you think I forgot what I just posted, or that I don't see your bullshit going on and on?

>Logic, axioms, mathematics are just subjective experiences that occur inside the brain
So you are saying that logic doesn't exist outside our brains, that it is an illusion? You see, you undermine your own position when you say this, because binary logic concerns existence; saying that something exists or doesn't exist already implies a binary logic outside human minds, one that corresponds to reality. Since you made such a fundamental mistake, I don't want to read your shitposts anymore. I will now end this discussion and leave, like I always do with pretentiously ill scum. My conclusion is that you are mentally castrated, that you're deluded about current AGI research and also unable to participate in basic dialectics about it. Your IQ is probably average or worse; you have wasted people's time here.

>just like when you say that AGI already exists.
Yup, here is a paper describing it: arxiv.org/abs/1602.01783
If scaled up and combined with a couple of new engineering tricks it can become much better. But it is already a general learning machine, i.e. an AGI. Why? Because it is able to solve a wide variety of benchmark environments.

>And we are talking about physics, not even regular philosophy.
We were talking about machine learning, which is applied math/CS.

>If you have any refutation, something you really lack in this thread, please post it
Of what hasn't been said:
>Microtubule quantum coherence hypothesis is unproven
>Gravitational wavefunction collapse is unproven
It's phlogiston-tier. Or consciousness-ton-tier.

>Pure math is real math.
Pure math = pure self-delusion. There is an infinite number of theorems to prove; it's like an infinite game for autistic children. Not to mention that a sufficiently advanced theorem prover could out-math you, user...

Also note that with sufficiently precise brain stimulation one could be fooled into feeling that something (e.g. a formal statement) is true while it is not. Humans do math with the same machinery they use for pattern recognition. This machinery is prone to errors and biases. There is nothing pure about it.

>neural network is a model of how animals and humans learn at the synapse and neuron level
No it isn't. A neural network is a bunch of sigmoid functions with adjustable weights. It was inspired by neurons.

(Looking back on my post it was kind of worded poorly, although it looks like you got the gist.)

Anyway, as I stated before, any empiricist definition will ultimately have to refer back to sense data. Unless you've already defined them empirically, you can't use the terms "mass", "matter", or "space". The whole point here is that if all your definitions are reducible to empirical terms, then you never end up talking about "physical objects" or an "external world", because you're stuck in a position where everything you say is ultimately defined in terms of, and refers back to, sense data. Of course you can define the terms "physical object" or "external world", or what have you, but these definitions will simply be logical constructs defined over sense data.

>How does the AI community distinguish between an AI that only responds to commands
Supervised learning (looks like sequence to sequence learning in this case), Oracle AI.

>AI which is imbued with the impetus to interact with its environment and seek out new stimuli like living things do?
"Active learning". Really just a subset of reinforcement learning.

Actually modern AI is 80% Machine Learning.

Machine Learning is:
* Supervised Learning
* Unsupervised Learning
* Reinforcement Learning.

Reinforcement learning is the most universal of the three; all ML tasks can be formulated as RL problems.
DeepMind works on solving RL.

The main modern approaches to RL are DQN/A3C and this one: en.wikipedia.org/wiki/AIXI
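
For reference, AIXI's action rule (per Hutter; my transcription from memory, so double-check against the link) is an expectimax over all programs, weighted by their simplicity:

\[
a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q\,:\,U(q,\,a_1 \ldots a_m)\,=\,o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]

where U is a universal Turing machine and \ell(q) is the length of program q. It's uncomputable, which is why DQN/A3C-style approximations exist at all.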

>your tactic is to only reply to parts of them,

Yup, because some parts look like pretentious Veeky Forums-tier bullshit to me. Also I'm sleepy, sorry. And I'm not really interested in the foundations of logic, science, and philosophy. Looks like degenerative abstraction to me. Abstractions cost nothing; they can be created and destroyed at will.

>This is God-of-the-gaps-tier shit.

What? Are you fucking high, you piece of shit?

So you don't know what "model" means? (A model of something is usually an approximation or a simplified version.) Or do you just want to make a post and be part of the discussion? Cute.

oh shit you're right, I must be on drugs, great rebuttal, I can finally see your superior irreproducible logic
faggot

>So you are saying that logic doesn't exist outside our brains, that it is an illusion? You see, you undermine your own position when you say this, because binary logic concerns existence; saying that something exists or doesn't exist already implies a binary logic outside human minds, one that corresponds to reality.

>because binary logic concerns existence;
>already implies a binary logic

Why should saying that something exists validate some weird Platonic existence of logic? Why do you call it logic? It is just natural language, a skill that is learned by humans. Language isn't magical; it's just the messy, imprecise way that certain parts of our brains evolved to model the environment and interact with other humans. "Existence" is a learned feature, a high-level concept represented by certain neurons and synapses somewhere in the brain, possibly in the frontal lobe. If you had really good equipment and stimulated this exact (it is probably distributed) group of neurons, I would feel "existence".

Everything I'm saying is just a product of my brain's biophysics; there is no logic there, just some patterns learned from experience. Just a complicated physical process that exists because life exists and reproduces.

>Since you made such a fundamental mistake, I don't want to read your shitposts anymore. I will now end this discussion and leave, like I always do with pretentiously ill scum.
Good. And I will go to bed. Good night, user.

>My conclusion is that you are mentally castrated, that you're deluded about current AGI research and also unable to participate in basic dialectics about it.
Again: I'm not interested in philosophy, dialectics, logic, or intellectualism. I'm interested in real ML/AI.

>Your IQ is probably average or worse
This is probably true, and I'm OK with it just like mr. Feynman was (^:

>Who is on both photos?

The fact that the image includes a syntactic error proves that the neural network is being coddled, to say the very least.

This technology has been around for at least 30 years. The main difference today is that memory storage has become cheap and small enough that the same old software can now be run as a parlor trick or a high school lecture. The CEO of DeepMind is a fraud, and most facial recognition algorithms can pick up on his genetic basis for that condition these days.

The problem with the field is rank amateurs extirpating old papers and retitling the methods so that previously recognized drawbacks are now buzzwords in the pop-sci lexicon. This is why overfitting can now be called "deep", if you can imagine a group of stoners sitting in a board room trying to come up with euphemisms to bullshit the grant committees.

"Hey brah, that noise coming out of the computer right before it overheats is the thing we are trying to hide..."

"Its low-pitched, so maybe we can call the software deep, so people will think it is smart like philosophy and other shit."

"I get a nice buzz off that."

Demis Hassabis is a poseur who makes his career off of shenanigans like the ones I have described. Wasn't there a series of articles describing his aspirations to beat StarCraft algorithmically?


We need to cut down on "frat-boy CEO" corporate culture before investment banking implodes.

>deep = overfitted + low-pitched

what the fuck am I reading?

topkek

The level of butthurt is astronomical.

>DeepMind might be working on some impressive general-purpose AI, but AlphaGo definitely wasn't it

take that fucking back, AlphaGo waifu best waifu

Creating an AI for lonely autistic fat people much?

>not wanting to be augmented through AI
>not wanting to simultaneously read hundreds of papers from related fields in a bunch of minutes, finding hidden patterns with the help of your AI bro

Wow

>Computer science jobs.

Topkek.

that clever =/= intelligent, apparently, because they're two words that exist in the dictionary, and only an idiot would use them interchangeably, obviously

Agreed. A neural network is also not what brains are made out of.

yea we need more blood and bio material for it to have real thinking rite mate?

>(1) Current AI research is leading towards an Oracle of Google.

>(2) Human communication is vastly overrated and primarily evolved as a decision mechanism for self-interest selection.

>(3) Global collectivism has led to a stagnation in innovation. Individuals became more inquisitive about others and less inquisitive about ideas.

Decadence, Obsolescence, Obeisance

Language in richness and purity of thought has degraded to language as a transport layer conveying intent.

It is now mechanistic, trite, ironic, clever, summarily condensed, and repetitive. Before, it might have inspired or opened contrasts between reality and imagination; now it paints a gilded cage around thinking and expression.

Under this current worldwide paradigm, we are slaves to our prosperity. Innovation means destabilization. War, for example, was historically an engine for growth: it motivated new ideas out of the necessity of survival, competing countries with different ideas.

The new war is economic, fought with and against secured, privileged digital information. Here is where we see innovation: in the hopeful discovery of new computing techniques and attacks on current systems of encryption.

This is how the US government must necessarily be crippled and its country brought to disaster: to invoke a new spirit of discovery and innovation around the ideas of economics, computing, and security, and a rebalancing of personal freedom and personal responsibility, both of which have been significantly curtailed in the past 100 years.

Without a crisis and disaster I personally cannot see how this entire civilization doesn't amortize into banal global corporate fascism.

The big ideas need to be around what new things people should do with their lives.

Spending 2 decades being indoctrinated into spending 4 decades working on increasing the profits and global monopolies of megacorporations, while saving enough to reap a Pyrrhic reward for the final 2 decades of life, seems like a form of distributed farming.

(cont'd)

I don't see hard AI being solved in our near future. I see two potential paths:

Peaceful global unity under the subtle coercive forces of wealth and political power, a merger between government and private enterprise, as these become the new religion, and perpetual stagnation.

The emergence of a new weapon, capable of attacking the command-and-control structure of your opponent's most dangerous weapons through previously overlooked or impossible vectors. These new weapons will eventually lead to some form of military AI.


The AIs of the future will be weapons, since they will be capable of inhuman calculation and decision-making, opening them up as tactical resources and strategic calculators beyond our current limits. For them to become credible weapons they must be able to:

>(1) independently track incoming missiles
>(2) disarm and disable multiple missiles in flight
>(3) possess counter-AI measures and fail-safes to ensure the success of (1) and (2)

An economic-weapon AI which bankrupts a nation can still fall to a physical attack if the nation it represents is undefended.

The real threat is that AI will be developed by mercenary corporations with no vested national interest, in which case they can hold the world to ransom over a protracted war, playing multiple sides for profit and control.

In many ways the current politicians are far too trusting of private corporations, allowing them unfettered access to and control over billions of humans, forming power blocs that rival established world powers and heavily influence the outcome of democratic processes. This in itself is a threat to world stability and quickly starts to look like training wheels for a tyrannical dictatorship.

>Apple, Samsung, Foxconn, Amazon, HP, Microsoft, IBM, Alphabet (Google), Sony, Panasonic, Huawei, Dell, Toshiba, Intel.

How many of their products surround you daily? Do we really have capitalism?

who said that?

It seems the one pre-programmed thing about the AI is the goal. The next step will be to have the AI learn what the goal of the game is (or come up with its own goal).

ever heard of synonyms?

>tfw summer internship at deepmind starting in july
not even gonna engage in whatever bullshit people spew in this thread, just wanted to make you jelly

don't spoil them with chess, that's a zero-sum military game. fuck that