Are deep learning and neural networks a meme?

Will it provide significant advances in AI and unraveling 'intelligence'?

Isn't it essentially just a fancier algorithm? DeepMind's AlphaGo takes in inputs, uses them (randomly at first) and assesses the result against an expected output defined by a human. It then measures how far away it is from that expected output and adjusts the 'weight' it places on certain inputs, which is just a number value. That's literally all it does. In what way does this replicate human thinking?
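
Concretely, the loop I'm describing is something like this (a toy numpy sketch of gradient descent on a linear model, nothing AlphaGo-specific; the data here is made up):

    import numpy as np

    # Toy version of the loop: random weights, measure the error against a
    # human-defined expected output, nudge the weights, repeat.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # inputs
    true_w = np.array([2.0, -1.0, 0.5])    # defines the "expected output"
    y = X @ true_w

    w = rng.normal(size=3)                 # weights start random
    lr = 0.1                               # how hard to adjust each step

    for step in range(200):
        pred = X @ w                       # use the inputs
        error = pred - y                   # how far from the expected output?
        w -= lr * (X.T @ error) / len(X)   # adjust the weights accordingly

    print(w)                               # ends up close to true_w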

Also, what do you think of Google's recent attempts and its stated goal to 'organize the world's information and make it "accessible" (kek)'?

>literally scanning every book in the world (even against the wishes of the copyright owners) to "feed" to its AI (I'm not too sure how this works, admittedly)
techcrunch.com/2013/05/08/google-book-search-and-the-world-brain-book/

>they expect this AI to have the most accurate prediction powers because it will essentially consume all available information which will help its parameter assessment

Do you think this is a realistic and feasible goal, or is it all a pipe dream?

Does it resolve the Chinese Room experiment?

Yes but there is a point where you put SO MUCH bullshit on top of itself and then ELECTROSHOCK ITS SHIT that it starts to gain self awareness....

>there is a point where you put SO MUCH bullshit on top of itself and then ELECTROSHOCK ITS SHIT that it starts to gain self awareness....

Are you sure about that user? What makes you say something like that?

Guess who?
If Maths = Everything in the theoretical
and
Science = Maths in the physical
Then lets just say that you can stuff anything into that beautiful little summoning circle you call a pentagram/setagram/octogram etc etc.

Neural networks are pretty much how the brain functions anyway

I guess I'm struggling to understand your sentences. So you're claiming that it's provable by math and is therefore possible. Are there any mathematical models that accurately outline how our brain functions?

>Neural networks are pretty much how the brain functions anyway

That does not sound correct and sounds like the memery I referred to in my OP

JUST FUCKING CT/EMS SCAN IT
REEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

>Are deep learning and neural networks a meme?
>Will it provide significant advances in AI and unraveling 'intelligence'?
>Isn't it essentially just a fancier algorithm?

That depends on your stance towards the Church-Turing thesis.

>unraveling 'intelligence'?
It's unravelling all the jobs, that's for sure.

>You will be milked by machines for your bio-essence in your lifetime

What a time to be alive.

hey, no one in the '60s imagined they would need computers to jack off

it's wonderful isn't it?

AI science is about rational behavior, not about """""""intelligence""""""" or """""""""thinking""""""""" (which do not even exist).

>are computers able to think?

>are submarines able to swim?

lmao how can anyone not accept the Church-Turing thesis?

Inspired by how neurons work, but not like how the brain works.

Out of this entire thread it's pretty obvious not a single poster has any clue what they're talking about.

I've studied neural networks in passing and perceptron/SVM models in depth. Can't weigh in yet except to sniff out the BS.

What are you, a behaviorist?

We're probably on the right track to getting to AGI.
The networks are still small with a singular focus on a specific expertise and they aren't provided a rich enough stream of stimuli, so at best they're behaving like autistic savants.
But I see no reason why a rich enough "mixture of experts" with real goals wouldn't be capable of demonstrating intelligent behaviour.

No matter what, they still need human input to verify results and to create error functions. The best thing they can do is streamline results and give the best possible answer for a set of rules, but they can't break free of that. That just isn't possible, and won't be for a very, very long time.

>Are deep learning and neural networks a meme?
No, it's actually one of the older fields in CS. It's just that we're only now getting processing power sufficient to make it useful.

>Will it provide significant advances in AI and unraveling 'intelligence'?
In their current state, no. It takes basically an entire datacenter to get something capable of answering Jeopardy questions.
The field is still less than a decade old in any serious sense, because we haven't been able to practically test any of it until now.
Some smart piece of shit is gonna come up with some clever way of meshing neural nets together in a faster and more efficient way, and, just like everything else in computing, it'll get better exponentially. Then maybe we'll get fancy AI.

Until then it's just image recognition software that requires the incomprehensible reach of google to train.

>Can't weigh in yet except to sniff out the BS.
sounds like you're the type who enjoys sniffing your own farts

fuck you brainlet. At least pretend to have taken a course in ML that is not baby tier undergrad

>studied neural networks in passing
>studied SVMs and linear perceptrons in detail

This literally is the babby undergrad ML course at my school.

Human thinking is fairly heuristic, and I think this method tries to emulate that, even if it's still kind of rough and in its infancy.

But really, if we are going to achieve human-level sentience in AI it's going to be through something like this and it will probably all click unexpectedly.

In the middle of the course right now... so your undergrad covers convergence proofs and dual formulations? Or does it just look at the PLA and SVMs for 2 lectures total?

'Cause you do realize that the SVM was the holy grail of ML for quite some time and is worth studying in quite a bit of detail. Ng covers it for 3 lectures.

All you need to know is this: technology is at a point now where if we could get a person to be perfectly still for a little over a full day, we could have a complete map of every neuron and synapse in their brain, and therefore simulate it virtually in its entirety.

We studied SVMs for about 4-5 lectures. And, yes, we studied both convergence and dual transformations. This is all undergrad material here. Our graduate ML courses are all research-based.

SVMs are worth studying in detail during a babby ML course, mostly because they are the foundation upon which the non-babby stuff was formulated (even if nobody seriously uses SVMs anymore).

>Are deep learning and neural networks a meme?
No

>Will it provide significant advances in AI and unraveling 'intelligence'?
Eventually, but a lot of problems need to be solved before then, and the concepts in use at the moment need to be extended. We are still in the trial-and-error phase. There is a lot left to be learned.

>Isn't it essentially just a fancier algorithm?
It is an algorithm, of course it is.

>Are deep learning and neural networks a meme?

It's most certainly a buzzword.

>Will it provide significant advances in AI and unraveling 'intelligence'?

More "human-like" AI at least. There's still some disagreement in the AI community on what exactly constitutes intelligence.

>In what way does this replicate human thinking?

Human thought and learning come from previous experience, which is what neural networks are attempting to emulate.

That sounds about right since grad courses move about 2x as fast.

My grad course is basically pure learning theory so we're not reaching "sexy" topics quickly but working from the foundations.

>In what way does it replicate human thinking?
Who cares about that? What matters is how good it is at creating smart AIs, and so far it's pretty fucking good.
Who are you going to listen to? The spiteful autists on Veeky Forums who call it a meme because they watched some pop-sci videos on the topic that promise bullshit, or the billions of dollars being invested into this by many, many companies and the already existing phenomenal AIs that have been created using ML?

>implying a supervising AI couldn't do that
How can you just discard such a general method? It's like saying a Turing machine can't "break free".

It's useful and that's all that matters. Whether this technique leads in the next few decades to real AI, no one knows. This is still early days.

That statement is true for 90% of all forums on the internet.

Isn't it possible we overrate our own "intelligence"? In the end, aren't you really just memorizing your way through your own life's activities?

It's not a meme. If you are looking at deep learning and neural nets as just fancy classifiers, you are completely missing the big picture. We have finally figured out how to "program" differentiable architectures. This stuff will go far beyond typical machine learning tasks, like classification or prediction.
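
To make "programming differentiable architectures" concrete, here's a minimal sketch (assuming PyTorch, but any autodiff framework works): you write an ordinary program, with loops and branches, and exact gradients flow through whatever path it actually takes.

    import torch

    # An ordinary little "program" with a loop and a data-dependent branch.
    x = torch.tensor(1.5, requires_grad=True)

    y = x
    for _ in range(3):
        if y > 1.0:          # which branch runs depends on the data
            y = y * y
        else:
            y = y + 2.0

    y.backward()             # autograd differentiates the path actually taken
    print(x.grad)            # here y == x**8, so the gradient is 8 * x**7

That property is what everything from classifiers to AlphaGo-style systems is built on: if you can compute a loss, you can tune every parameter that produced it.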

Can someone explain to me what exactly machine learning is?

I mean, if we want a machine to be the best Bubble Bobble player ever, for example, and we show the machine hundreds of videos of people playing it, how exactly does it "learn" from viewing them?

What is it rewriting in its code, and on what basis?

I COME FROM THE FUTURE

WE HAVE UNLOCKED THE DOOR TO TRUE ARTIFICIAL INTELLIGENCE

BAYES IS THE KEY

GOODBYE

FUGG
:DDDDDDD

You have inputs and a desired output.
If the output isn't the same as the desired output, change something and try again.
If the output is the same as the desired output, remember what you did.
Repeat ad infinitum.

There are tons of algorithms for exactly how to do this.

Here's a good example of a "genetic" algorithm:
youtube.com/watch?v=qv6UVOQ0F44
the "desired" output in this case is "move further to the right than you did last time"

Bio fag here. Should I go into bioinformatics so I can take some badass machine learning courses? I already know some python programming and linear algebra, so I should be fine with the material. The question I'm asking is, is it worth it?

What are they changing?

It sounds all too simple, but I guess the complexity comes from compounding very simple functions a billion times over to get emergent behavior.

After watching that video I realized that the skills don't transfer over to other games it plays. Essentially, through repetition the machine optimizes the best inputs to get to the desired output, BUT it does not resolve the Chinese Room experiment. It does not understand what it's doing, and as a result it cannot develop any transferable "skills".

You notice that it can beat Mario level 1 but not level 2 (see his follow-up video). It didn't learn anything.

...

The Chinese Room thought experiment argues that no matter how advanced an AI gets, it will never achieve consciousness. It may be able to simulate a consciousness perfectly, but it's still just a machine following its programming.

how does this not hold true for brains too?

Literally the only evidence that consciousness exists is its own claim of existence. If it weren't simply the result of intelligence, wouldn't we likely have found a p-zombie by now? Someone whose brain works fine, but whatever causes consciousness isn't happening? (And so, when asked about consciousness, they explain they never really knew what everyone meant by that word.)

I think the reason we have never found a p-zombie is either a) dualism is real, and consciousness is going to be spooky until we have the scientific tools to accurately observe/record the "spirit world" or whatever the fuck, or b) consciousness is the direct result of intelligence. More or less everything is conscious by default, but consciousness as we know it appears when things are put together in such a way as to create intelligence. In that case, when we create AI, it will be able to recognize itself as conscious. It will come up with descriptions of how its sensory perception is different from the raw data (if it's conscious).

>it can beat Mario level 1 but not level 2
Yes, because its inputs are very simplistic. The only thing it "knows" is how far to the right it went. Remember, at first it didn't even know how to move. It is essentially blindfolded to nearly every aspect of the game.

Now, suppose the author put in some real work and made it able to actually recognize dangers onscreen. Then any skills it developed would indeed transfer to the next level. Would you concede it understands what it's doing at that point?

the "chinese room" experiment is an interesting metric but not very useful in the end, I think. for example, if I throw a tennis ball, my dog fetches it and brings it back to me. does he "understand" what he's doing?

>literally the only evidence that consciousness exists is its own claim of existence
It's literally the only thing an individual can know with 100% certainty to be real. Your perceptions are filtered through your consciousness. It's well known that your perceptions are not necessarily a true reflection of reality; see the image. Claiming your mind does not exist is tantamount to saying absolutely nothing exists at all. If you can't trust your own mind then you can't trust anything, because all experience is filtered through it. Up might be down, black might be white; nothing is real, and everything you touch, feel, taste, smell and see might be an illusion.

I think therefore I am is still the starting point for all Human knowledge. You think, therefore you exist, at least in some form.

Machine learning is about learning from data. Broadly there are two paradigms: one where you learn to classify your data directly, and one where you construct a model of how your data is generated and then use that model to classify. These are called discriminative and generative, respectively.

In the first case it is essentially convex optimization in a high-dimensional space.

In the second case it is essentially probabilistic model building.

Hope this helps.
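
Concretely, the two paradigms with scikit-learn on toy data might look like this (a hedged sketch; logistic regression learns the decision boundary p(y|x) directly, while naive Bayes models how each class generates data):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression  # discriminative: p(y|x)
    from sklearn.naive_bayes import GaussianNB           # generative: p(x|y)p(y)
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    disc = LogisticRegression().fit(X_tr, y_tr)  # learns the boundary directly
    gen = GaussianNB().fit(X_tr, y_tr)           # models each class, then Bayes

    print("discriminative:", disc.score(X_te, y_te))
    print("generative:   ", gen.score(X_te, y_te))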

Being able to get results like the video below out of just feeding it large sets of data boggles me, and I think DeepMind is already doing stuff more advanced than this

youtube.com/watch?v=rAbhypxs1qQ

What kind of weird thing is going on in the image? 'Cause I'm not seeing it.

Squares A and B are the same color

I zoomed in all the way to compare the colors side by side and they're different

I don't think it's a meme. I'm not incredibly familiar with the nuts and bolts, but iterated problem solving and prediction is probably going to move us forward until theory catches up and we can actually use explicit formulas to calculate things. Neural networks and these other brute-force, big-server methods are inelegant, but as long as the people using them remember that overfitting is possible (I'm looking at you, economists), we should be okay.

Also, I think that considering this a first step to AI is kind of naive. These methods still require explicit inputs.

Take a sample of one with an eyedropper in paint and check for yourself. They're the same.
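
Or check it in code, if you trust that more than your eyes (a sketch with Pillow; the filename and pixel coordinates are placeholders, point them at wherever squares A and B sit in your copy of the image):

    from PIL import Image

    # Hypothetical filename and coordinates: point them at squares A and B
    # in your own copy of the illusion image.
    img = Image.open("checker_shadow.png").convert("RGB")

    a = img.getpixel((120, 85))    # somewhere inside square A
    b = img.getpixel((170, 160))   # somewhere inside square B

    print(a, b)   # on the standard image, both print the same RGB gray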

I mean, I'm not new to these kinds of optical illusions, but this one really isn't clicking. Maybe it has to do with my poor eyesight.

en.wikipedia.org/wiki/Checker_shadow_illusion

Wikipedia has better pics showing the effect. It's because your brain is hardwired to automatically adjust for differing light levels. It thinks square B is in shadow, and therefore that its true surface must be lighter than the light reaching your eye suggests, so it adjusts how you perceive the squares, despite the fact that they're objectively the same color. This is why you can't trust your perceptions: your brain is actively processing the data and molding it to fit the context.

As though investors and managers chasing fads and copying everyone else means they know what they're doing. They consume the same media as the rest of us.

are you serious user


You're right, but you're wrong in assuming biological neural networks correspond to the comp-sci concept.

I know all that. It is pretty ridiculously ironic... The only thing that any of us really knows for certain, every single one of us knows it for certain, and yet it's something we have yet to figure out any way to get evidence for other than personal experience. Fucking cosmic joke.

Really tho, I do think it directly correlates with intelligence. I'd even go so far as to say that bottom-up image processing (like neural networks or other machine learning methods) might 'perceive' the images it processes. This perception is like the base seed tho; no thinking going on, no memory, no experience that we'd even recognize. To empathize with that level of consciousness is to pretend to be unconscious, as far as we're concerned. But I do believe it IS there, if only barely so.

I almost instinctively dismissed this as a stupid troll like so many Veeky Forums posts, but you actually got me there. Nice analogy.

>and I think DeepMind is already doing stuff more advanced than this
yes, just look at pic related

If they didn't know what they were doing, they wouldn't have billions to invest, don't you think? Or are you a social determinist commie Marxist bastard who deserves a helicopter ride?

We need to define "understanding" for a useful conversation.

This is my definition: understanding is essentially recognizing the underlying rule(s) of something and applying those rules to similar situations.

Yes, if the AI could transfer the skills it learned to other similar games, or even to dissimilar games that share minor traits (for example, knowing how to move a character in a platformer transfers to moving a car around a track in a racer), then I would agree that the machine is displaying some sense of understanding.

It's not a concession I need to make because I am not looking to defend or concede anything. I'm merely making an observation. Do not mistake my position here.

Is the AI creating those images from scratch? How does it do that? The guy doesn't explain anything.

>no evidence any computer 'thinks' in the same way a human is considered to 'think'
No

>evidence that a submarine can traverse water with no difficulties
Yes

what's so good about it?

Is true AI achievable in our lifetimes? I'm thinking David-from-Prometheus-level AI. Let's assume they will never have emotions.

Intelligence-wise, probably, but we will not see such a convincing android, with the facial expressions and all.

A big problem is that we're not able to measure or detect consciousness. The p-zombie issue is a big sticking point for AI, because while we can use Occam's razor to say that, yes, other people probably have true consciousness like yourself, and you are not the only real thinking Human in a world full of zombies that just react to stimuli in an extremely realistic manner, we can't use that same assumption for any AI we create. So, given we can't even tell whether other people are really conscious, how do we determine if a machine is actually thinking or just spitting out answers according to its programming?

We need to understand more about how consciousness arises in Humans before we're able to make any judgements about the possibility of machines having consciousness.

We can tell from what it produces. If a machine makes a movie or song worth a damn, or expresses voluntary dissatisfaction with an order given to it, then I'll consider that it demonstrates a level of consciousness.

oh wow, there's a lot more work to be done than I thought. It's going to be a long time before we have true AI.

AI is already creating music

youtube.com/watch?v=LSHZ_b05W7o

>tfw it's better than the beatles

That isn't a good example, because they're writing over the supposedly AI-generated beat (dubious).

But you're sorta correct anyway
youtube.com/watch?v=CqFIVCD1WWo

Train an AI to describe images.
Then make a new network from that one by sort of reversing the action; this network makes images from descriptions.
Do some ML magic to optimize the whole process and get better images.
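
A grossly simplified sketch of the second half, images from descriptions (assuming PyTorch, with synthetic stand-in "images"; real systems condition far bigger generative nets on learned text embeddings):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Two "descriptions" (one-hot): 0 = "horizontal bar", 1 = "vertical bar",
    # paired with the 8x8 images they describe.
    targets = torch.zeros(2, 64)
    targets[0].view(8, 8)[4, :] = 1.0
    targets[1].view(8, 8)[:, 4] = 1.0

    # Decoder: description in, pixels out.
    decoder = nn.Sequential(
        nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 64), nn.Sigmoid())
    opt = torch.optim.Adam(decoder.parameters(), lr=0.01)

    for _ in range(500):
        desc = torch.eye(2)                    # the two one-hot descriptions
        imgs = decoder(desc)                   # every pixel generated from desc
        loss = ((imgs - targets) ** 2).mean()  # how far from the real images?
        opt.zero_grad()
        loss.backward()
        opt.step()

    print(decoder(torch.eye(2)).round().view(2, 8, 8))

Note the output is produced pixel by pixel from the description; nothing is looked up from a stored database.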

thanks user

So is it creating each pixel of the image so that it matches the description, OR is it just serving up an image stored in its database that matches the description?

I noticed that it's also generating superfluous details (like foliage) that are not in the description. Why would it do that?

Objectively wrong. It was always about the latter, until engineer faggots learned about neural networks and realized that "weak AI" is profitable. There's nothing interesting about machine learning; it's bullshit weak AI, and it's only loosely related to brains and how they work.

"neural networks are how the brain functions"

Computer neural networks do not function like a brain. This is a common misconception. If you kick a computer, it won't send pain signals to the neural network. Also, the neural network won't come to conclusions based on emotion; it will simply do math. Humans use emotion and bias in decisions.

>Computer neural networks do not function like a brain.
This is true. The rest is utter bullshit.

>Also the neural network won't come to conclusions based on emotion, it will simply do math. Humans use emotion and bias in decisions
You can find an abstract model of actual neurons accurate enough to describe emotions, and it will still be based on math. The distinction between "reason" and "emotion" is completely arbitrary and largely based on whether or not we are consciously aware of what led to a specific decision. In fact, the "gut decisions" we make are actually CLOSER to what neural networks do. Also, all the "bias" we might have actually originates from our observations. There is no magic. It's all rational in the end.

The real difference between neural networks and the human brain is in the details and in the complexity. The brain is basically a gigantic neural network that has parts that specialize on different aspects. All of those parts are extremely dynamic, making the network effectively able to constantly adapt its architecture to the problems. Things like

>Language
Appears to be a way to compress otherwise extremely large amounts of data and to communicate them. Extremely powerful (i.e. training a neural network to recognize spiders from images takes a lot of time, while a human may perform very well when you tell him that a spider has eight legs, even if he's never seen a spider in his life).

>Memory
A way to buffer and reconnect observations. This is extremely interesting.

>Emotions
Emotions are basically the human version of a loss function. However, it's way more complex and seems to change the way the whole network operates. Touch a hot plate once, and you'll likely never do it again. This kind of extremely fast learning from literally a single observation is unheard of in standard neural networks (toy illustration below).
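
To stretch that loss-function analogy (a toy numpy illustration of my own, not a claim about how neurons implement it): give one "painful" observation a huge loss weight, and a single experience reshapes the whole model.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = X @ np.array([1.0, -1.0])      # a lifetime of ordinary experience

    # One extra observation: the hot plate.
    X = np.vstack([X, [[1.0, 1.0]]])
    y = np.append(y, -10.0)            # very painful outcome

    def fit(weights):
        # Weighted least squares: each observation counts by its loss weight.
        W = np.diag(weights)
        return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    calm = np.ones(len(X))             # every observation weighted equally
    pain = calm.copy()
    pain[-1] = 1000.0                  # the painful one dominates the loss

    print(fit(calm))   # one sample among 101: the model barely moves
    print(fit(pain))   # same single sample, huge weight: the model reorganizes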

>Also, all the "bias" we might have actually originates from our observations

Factually incorrect. Explain depression and spontaneous anxiety. A person can be sitting still, not viewing anything, and chemicals in his brain can make him feel a certain way due to an imbalance or other anomaly, without any stimulus. This will affect his decision making (for better or for worse, and most often for worse), and there is arguably nothing rational about it.

>The brain is basically a gigantic neural network that has parts that specialize on different aspects.
How do you reconcile this statement with your first statement...
>Computer neural networks do not function like a brain.
>This is true.

>Explain depression and spontaneous anxiety.
Failure. Either because the biology is not working due to a lack of nutrients or injury, or because the brain is incapable of resolving an issue due to traumatic experiences. The latter is common. Say you've been living your whole life with your parents and grew very used to it. Then you move to your own place. Many people are homesick as a consequence. That is simply a reaction of the brain to a new situation that doesn't fit what it has learned so far. The emotional response is likely helping the neurons adapt better to the new situation (often also pushing the person to change the situation instead). Deep down there is nothing irrational about it; it's how the system is supposed to work. Often the new experiences are so grossly dissonant with previous experiences that a resolution is not possible (e.g. PTSD due to war).

Anyway, failure is not exactly irrational; failure is inevitable in some cases. In particular, neural networks, mathematical or not, fail as well, quite often. They are usually not very robust, and getting them to converge takes a carefully designed architecture and also a careful learning process. It's extremely easy to break them by showing them the wrong things. Humans are WAY more robust than that.

I literally explained it, read the post.

I don't see it. You're agreeing that neural networks do not function like a brain, but also that a brain is "basically a gigantic neural network". Is the only difference the size of the network, while you concede they function the same way?

What about this one?

Yes, I literally wrote it; I honestly don't know what else to write. A neural network is just a grossly simplified version of what the brain does. Equating the brain with current neural networks is like equating a computer grid with an adder. Yeah, a computer grid has something to do with adders, but an adder really is such a small part of the whole complex system that it's silly to equate the two.

So an AI is defined as having an AI watching it? Someone doesn't understand recursion.

You do know what else to write because you just wrote it. Chill out senpai

epig

Good thread

The real question is: when will we use CRISPR to build blank-slate brains so we have a REAL network to train with electrical signals, instead of using pussy-ass GPUs to simulate one?

I wonder if we can replicate lower-tier brains (like the brain of a dog or rabbit). It seems like these neural networks are only good at optimizing for the best possible outcome, but they wouldn't be able to replicate limitations, if that makes sense.

>This is my definition: understanding is essentially recognizing the underlying rule(s) of something and applying those rules to similar situations.
Weak definition imo. You could choose not to apply those rules to a similar situation, or even fail to recognize that the rules could be applied. That doesn't mean you didn't understand the game you trained on.

>Chinese Room experiment
Essentially, I think this explains things best. I think we will create scary learning programs, but creating something with its own consciousness would likely be impossible.

After all, what if consciousness is just which question you will ask in a given scenario, and your personality is a reflection of this, as it is seemingly the answers you would give to those questions. Consciousness is the pattern of answers that you specifically give to the questions in the universe and the time within it. Perhaps the only reason we can recognize ourselves is because that is a question that has an answer.

"Are my thoughts my own? Am I asking the Universe if I exist? Is the answer I recieve something like 'You are a divergent of a larger set of (you) - in a vast world of questions and answers, equals and opposites, positives and negatives.' This question terrifies some and intrigues others. If AI asked the same what would it's answer be.

How do you create a program that can just do, in other words 'to be'? Can the created program ask and receive the answer itself, without being told by (you) beforehand?

This is a very deep subject, but I think that creating a separate being would create a paradox in reality, because it is a paradox in math. It implies something could create another one of itself from nothing. Sure, you have the hardware and the software, but you created those from other materials. What do you create consciousness from? Saying that we will have AI soon is naive when science hasn't answered this question. If science hasn't answered it, then how can you program it into its own being? Doesn't this contradict the idea that 'energy' cannot be created or destroyed?