Them lads at Deepmind just solved speech generation with their meme networks: deepmind.com/blog/wavenet-generative-model-raw-audio/

Maybe we won't have another AI winter again after all, huh?

an hour and a half for 1 second of generated speech is not "solving speech generation"

speech is one appendage of a much deeper central mystery
all these big AI developers are trying to build the walls and roof without any foundation

Lol. I love how much of an afterthought the nose model was. It's like they thought "this looks like a talking fleshlight", and then added a plastic nose. kek.

resonance in nasal passages can alter the sound of the voice

>all these big AI developers are trying to build the walls and roof without any foundation
I agree but it's amazing that it sort of works.
Maybe most AI problems aren't as hard as we think.

yeah but you gotta start small

How much you wanna bet that some lab technician stayed late at the lab and stuck his dick in it after everyone left?

Very few people are as desperate and disgusting as you.

I agree, progress is progress, I just get annoyed how people start screaming that AI is right around the corner every time there's a small step in the right direction

techxplore.com/news/2016-08-machines-simply.html

>It is now possible for machines to learn how natural or artificial systems work by simply observing them, without being told what to look for, according to researchers at the University of Sheffield.

SOMETHING'S going on...

no I'd definitely do that as well

nautil.us/issue/40/learning/is-artificial-intelligence-permanently-inscrutable

They don't have to build shit. These things are arguably a little bit sentient already. We will not be able to intentionally create the foundation of intelligence piece by piece. Intelligence appears to be so complex that to do so might take hundreds of years, and even longer to "comprehend" in any useful way how the pieces fit together to produce intelligence. It would take an infinitely greater intelligence to fully apprehend what makes a lesser intelligence tick. This problem seems to be at the very heart of the nature of intelligence itself, not artificial intelligence, specifically.

So what will likely happen is that as this research progresses, one day AGI will be generated seemingly spontaneously. Just as the "quacks" predicted, it will happen so suddenly we will not even realize when it happened.

Because the nature of intelligence appears to be unknowable, the only thing we will be able to get out of artificial intelligence is its effects, never a true understanding of the nature of intelligence. And so we will have no choice but to trust these greater intelligences to bring about effects that we desire, and not say, ones that will cause us harm...

They do that because we simply can't know if it is or if it isn't. It will definitely take us by surprise when it happens. And there are definitely hints it could come soon. But no one knows for sure.

>It will definitely take us by surprise when it happens
What are you basing this on. Are you one of those people that think AI might spontaneously "happen" on the internet or something.

>What are you basing this on.
You didn't read this:

I bet sentience/self awareness itself is some super simple recurrent structure, it's just all those other modules feeding into it, like goals, planning, memory and reasoning that are hard as fuck to get right.

I did and it seems to be based on a dilettante understanding of neural networks and ML.

>simplicity is not simple

that's just like, your opinion, man

Why not just use CRISPR to genetically modify gorillas and then enslave them and have them work as our servants?

Seems more viable than building a general AI via computers. Computers are already as smart as they need to be.

gorillas are only good for carrying shit, we need something that acts like a human

Surely nothing could go wrong.

They already tried something like that in the US a few hundred years back, it didn't work out.

>goals, planning, memory and reasoning

What makes you think these aren't all integral components of it?

...

>CRISPR
STOP THIS STUPID MEME, IT ISN'T EVEN FUNNY ANYMORE

gorillas actually don't have a big enough brain to match our dexterity and fine motor control.

However, mixed biological systems will certainly be a thing, just not very soon.

The reality is that it's probably a lot cheaper to make gorillas or humans than positron-brains and 3d-printed robots.

Humans are (hopefully) out of the picture, so maybe some hybrid/engineered anthropomorphic creature can be used.

Some seem nonessential, like long-term memory for example. Amnesiacs are still conscious and they can reason. This is pure opinion, but I think only short-term memory, some kind of sensory input stream and a way to affect the environment by actions are truly required. Provided you get the structure just right, of course. With such a minimal setup it'd have to be preconfigured, not learnt.

I imagine a digital brain will be more valuable than a physical one since it can transfer and copy itself, but yeah, if the hippies don't stop it and the necessary technology is in play by then, jobs that the AI needs a physical body and/or voice for could just be done by monkeys & apes with AI brain implants

I don't think just memory is enough, it needs some kind of processor to decide its actions before it acts

yeah, this. You still need goals, even if they're only short-term ones.

You are "retarded"

I thought this was a Laboratory Test for fleshlights at first

twitter.com/hardmaru/status/773968758519902208

Honestly who cares? They'll optimize it and it'll become fast enough. The science has been worked out.

>Maybe we won't have another AI winter again after all, huh?

Well, no shit. AI researchers don't need to rely solely on DARPA funding anymore, now that Google, Facebook, and co. are footing the bill.

You are "retarded"

sigh

>sigh
...

im just tryna feel what da head like

does anyone else feel like trying to teach a computer to emulate human thought, with all the heuristics and biases we have, is a totally fucking backward endeavor?

Why not cut out the middle man and just CRISPR the computers?

Cost of gorilla upkeep. Do you have any idea how expensive horses are?

Also, the training itself: it is difficult to train animals to do things.

>it is difficult to train animals to do things.
not when you shove hardware into them.

That was my first thought seeing it. If it was my experiment, I'd get the class's attention and tell everyone I put needles in it, let any frisky folks know they'll have a bad time.

>babby's first artificial neural network

This approach to AI is probably older than you are.

he has a point though, gorillas probably eat and shit a ton

No. They get real humans for testing.

>These things are arguably a little bit sentient already.
I am sure that this is bullshit (or your definition of sentience is bullshit)

sentience actually has no definition

it's just some magical thing everyone claims they are because they experience it

there's no proof at all


much much much more likely is that everything is deterministic and a human being experiences pain only because of electrons travelling down nerves

and that a human being is as sentient as a rock

life is only a complex combination of materials, but still as material as a rock

im intrigued, please go on rockbro

like
what is a rock?
an atom of rock next to an atom of rock

what is human?
a molecule of vein next to a molecule of skin

those molecules are made of rocks

no difference

absolutely everything thats taken as evidence of sentience can be explained by the laws of physics


a person doesn't smile because happiness exists somewhere, he smiles because a torrent of electrical storms runs down the vagus nerve

no one is in love or happy
""""love"""" is a byproduct of an internal essence of wanting to spray semen carriers onto the females reproductor areas


people claim they are "experiencing" something

no one is, it's all deterministic, deal with it

It takes 90 minutes per second *now*. This is a really, really naïve approach to audio generation - it's slightly astonishing that it works at all, let alone as well as it does. Future development may find drastically more efficient approaches.

For instance, take neural style transfer - the original Style Transfer algorithm relied on an expensive, iterated process whereby a random-noise seed was optimized through gradient descent to match a set of features derived from both images.*

However, more recently, innovations like Texture Networks were developed (arxiv.org/pdf/1603.03417v1.pdf), which use a feed-forward generator network to produce the stylized output directly, hundreds of times faster, and enable high-quality neural style processing to run offline on a mobile device. (This is how Prisma works.)
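
If it helps to see the difference in shape, here's a toy NumPy sketch of the two approaches. The "features" are just channel means and the generator is an untrained random matrix, so this is nothing like the real VGG-based losses or the actual Texture Network architecture; it only shows why one method needs hundreds of optimization steps per image while the other needs a single pass.

import numpy as np

def feat(img):
    return img.mean(axis=(0, 1))                  # placeholder for a deep feature extractor

rng = np.random.default_rng(0)
style_target = feat(rng.random((16, 16, 3)))      # pretend: features taken from the style image

# Gatys-style: start from random noise, run many gradient steps on a feature-matching loss.
img = rng.random((16, 16, 3))
for _ in range(500):                              # expensive: iterate per image
    img -= 0.1 * (feat(img) - style_target)       # step along an L2 feature-loss gradient

# Texture-Network-style: one forward pass through a generator (here untrained, just a random map).
W = rng.normal(scale=0.01, size=(16 * 16 * 3, 16 * 16 * 3))
stylized = np.tanh(W @ rng.random(16 * 16 * 3)).reshape(16, 16, 3)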

It is not unthinkable that, now that we know WaveNet works, we may be able to find more efficient ways of generating audio of similar quality.

* (WaveNet is slow for a similar reason - because the net must be re-evaluated for every sample to produce a distribution conditioned on the partial audio observed so far, and you have to do that 16,000 times for every second of audio. They're basically using it as a recurrent network)
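
To make that concrete, the sampling loop is roughly the following. This is a sketch, not DeepMind's code: wavenet_step is a placeholder for the trained model, the receptive-field size is a guess, and the 256-way output just mirrors the paper's 8-bit mu-law quantization.

import numpy as np

def generate_audio(wavenet_step, seconds=1.0, sample_rate=16000, receptive_field=4096):
    """One full network evaluation per output sample.

    wavenet_step: placeholder for the trained model; takes the most recent
    `receptive_field` samples and returns a probability distribution over
    256 quantized amplitude levels.
    """
    audio = []
    context = np.zeros(receptive_field, dtype=np.int64)   # start from silence
    for _ in range(int(seconds * sample_rate)):            # 16,000 iterations per second of audio
        probs = wavenet_step(context)                      # full forward pass for every single sample
        sample = np.random.choice(256, p=probs)            # draw the next amplitude level
        audio.append(sample)
        context = np.roll(context, -1)
        context[-1] = sample                               # feed the new sample back in
    return np.array(audio)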

Also, even if it can *never* be made fast enough to run as software, it could probably be implemented as an ASIC or FPGA if there was demand for it. WaveNet's a 1D convolutional model, which means execution would be highly parallelizable and it could feasibly be laid out as a two-dimensional circuit. DeepMind's paper is annoyingly vague on the exact architecture (how many layers, etc.) but at 16 kHz, as long as it takes less than 62.5 µs (one sample period) to get data through one cycle of the model, you'd be able to generate audio in real time. Since every element on a layer should be able to run simultaneously, and each layer is pretty much nothing more than a great big vectorized floating-point multiplication followed by summation, it seems quite likely that this could be managed. Judging by some figures given, WaveNet's likely to be sixteen layers deep at the very most, and FPGAs can easily run in the tens of megahertz.
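
Back-of-the-envelope version of that timing argument (Python; the layer count, clock speed, and cycles per layer are guesses, since the paper doesn't give the architecture):

# Rough real-time budget for a hypothetical hardware implementation.
sample_rate = 16_000                          # Hz
budget_per_sample = 1 / sample_rate           # 62.5 microseconds to emit one sample

layers = 16                                   # assumed depth (the upper-bound guess above)
clock_hz = 20e6                               # assume a modest 20 MHz FPGA clock
cycles_per_layer = 10                         # assume each layer's multiply-accumulate fits in a few cycles

latency = layers * cycles_per_layer / clock_hz    # time for one pass through the whole model
print(f"budget {budget_per_sample * 1e6:.1f} us, latency {latency * 1e6:.1f} us, "
      f"real-time feasible: {latency < budget_per_sample}")
# -> budget 62.5 us, latency 8.0 us, real-time feasible: True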

It used to be very common to have specialized speech-generation, sound-processing, etc. chips when computers were too slow to handle such things in software; we may yet return to that era with the end of Moore's law.

(It's unlikely that anyone will implement this network specifically in silicon, but some descendant of it might be if a much more efficient method is not found. Speech synthesis of the kind of quality that may be obtained by further developments of this strategy seems like the sort of thing that might be worth including a chip for.)