What is the actual likelihood of the singularity happening this century?

There are some estimates that we'll hit the singularity around 2050. Is that for real, or is it not a good prediction?

Take all the atoms in the universe.
Multiply them by the largest number in mathematics and triple that by the number of possible superpositions of all subatomic particles in a quantum spectrum.

One in that number.

you could have just said infinitesimal. But really tho? There's that small a chance?

There are some people who will tell you it will be in the next 30-40 years, but most people will say that it's likely going to be at least 60, and even that's an optimistic projection.

The 30-40 year people are either seeing something that nobody else is seeing or they're just the next iteration of those people who consistently say that we'll all have flying cars in twenty years.

How would the singularity even be achieved? How could you make everything a state of oneness or wholeness? I can't really see how it would be possible except with black holes. But even then, would it really work?

...

singularity is the new flying cars

it's silly

Soft singularity of human-machine hybrids, BEFORE 2050... best guess around 2030.
The augmented will dominate all human activities.

The AI singularity is just an automated researcher. That's really it. If you can automate science, you can automate the advancement of technology. Automate science & technology, and you have a singularity, called as such because doing these things will also automate the advancement of the automaton itself, accelerating us to theoretically infinite technology.
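
To make that feedback loop concrete, here's a toy simulation (every number is made up, purely for illustration): if each unit of research output also makes the researcher itself faster, capability compounds exponentially.

```python
# Toy model of the self-improvement feedback loop (illustrative numbers only).
# Each step, the system does research proportional to its current capability,
# and that output feeds straight back into improving the system itself.

def simulate(steps: int, feedback: float = 0.1) -> None:
    capability = 1.0
    for t in range(steps):
        research_output = capability              # work done this step
        capability += feedback * research_output  # output improves the researcher
        print(f"step {t}: capability = {capability:.2f}")

simulate(10)  # grows as (1 + feedback)**t -- exponential, not linear
```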

Even if we did invent such a system, there'd probably be a physical constraint built into the technology that keeps infinity at bay, like the material limitations of silicon-based computing. It can only get so good unless it exhibits the creativity to branch into entirely new, out-of-left-field technologies.

The Technological Singularity is the sci-fi scenario where you build an AI capable of improving itself, or of building better AIs, until you get an Artificial Super Intelligence (ASI) that invents amazing shit at such a rapid pace, and man applies it so quickly, that the species and perhaps the world is soon no longer recognizable.

But no, we're nowhere near that. We might have the computational power required within our lifetimes, assuming we finally move away from these silicon binary stacks, but while Watson and its ilk seem really spectacular on the surface, those are really just sophisticated database divers playing word games. We're no closer to making a creative, general problem-solving machine than we've ever been. (Still gotta breed them the old-fashioned way.)

We might get a human brain simulation in our lifetimes (probably not running in real time, at first), but it'd be no more capable of improving upon itself than we are. It would, however, be a staggering medical advancement: we'd be able to diagnose the brain quite a bit better, as well as run tests on it with fewer ethical problems. All that would certainly lead to new technology and potentially improvements in man, but not really a singularity.

It's doubtful we could ever get to the point where we could create an army of such virtual beings to think on things in a way more practical than simply doing it the old-fashioned way. It may not be possible to crunch that kind of dynamic thinking power into a much smaller space using considerably fewer resources. There's some advantage to being able to copy-paste a brilliant mind, but having a bunch of minds working from the exact same perspective on the same problem doesn't necessarily get you anywhere.

So while it would all move you forward, I don't see it suddenly causing the curve to collapse.

The singularity is bullshit because it assumes that there is infinite technological potential. In reality, it seems possible and likely that automatically advancing technology will merely converge towards a limit, with each step requiring exponentially more effort and time than the last. After all, humans are already capable of improving our intelligence and knowledge, yet no singularity has occurred. Invent a better human and they will still struggle with the same problems of practicality.
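
To put a number on that intuition: if each successive improvement yields, say, half the gain of the previous one, total capability converges to a finite ceiling no matter how long the loop runs. A minimal sketch with made-up figures:

```python
# If each improvement yields diminishing returns (here, every gain is half
# the previous one), total capability converges to a finite ceiling --
# a geometric series, not a runaway singularity. Numbers are made up.

capability, gain = 1.0, 1.0
for step in range(50):
    gain *= 0.5          # each step is harder: returns shrink geometrically
    capability += gain

print(capability)  # approaches 2.0 and never exceeds it
```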

Wouldn't that AI need to be able to come up with ideas or have an imagination just to be able to consistently have a goal? That makes me wonder if imagination or ingenuity is a prerequisite of research.

What even are ideas or imagination in general?

That's what I think: we're entering a logarithmic age where it will take longer and longer to make smaller and smaller advancements.

I still don't understand why this won't take 10,000 years. Do people think you could just create a program that would do this?
Like once you have the right combination of 1s and 0s, you've done it, the AI science machine is real. IMO that's like saying pifs is the perfect file system.

Even if you could program learning that an AI science machine could use, you would still have to feed it huge amounts of information, and it would take huge amounts of time to produce an output.

There doesn't even need to be a limit to technological potential. If the difficulty of problems rises faster than the technological advancement that solving those problems yields, then technology will always advance at a decreasing rate and there will never be a singularity.
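
A quick sketch of that scenario (the growth laws here are invented purely for illustration): let problem difficulty rise quadratically while each solved problem adds only a fixed capability gain. Progress never halts, but every step takes longer than the last.

```python
# No hard ceiling required: if problem difficulty outpaces the capability
# each solution grants, progress continues forever but keeps slowing down.
# Growth laws here are invented purely for illustration.

capability, elapsed = 1.0, 0.0
for n in range(1, 11):
    difficulty = n ** 2                 # problems get harder quadratically
    elapsed += difficulty / capability  # time needed to crack problem n
    capability += 1.0                   # each solution adds only a fixed gain
    print(f"problem {n}: total time {elapsed:8.1f}, capability {capability:.0f}")

# Capability rises linearly while time per problem keeps growing, so the
# advancement *rate* falls toward zero -- no singularity, and no limit either.
```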

Never. It's just a sci-fi religion for atheists who still want to believe in something but are too intellectually cowardly to admit it.

Read about thought vectors. Words are unique locations in a high dimensional vector space, where each axis represents some quality, e.g. "hot vs. cold" or "slow vs. fast". You can form higher order concepts through linear combinations of word vectors.
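
A toy version of that vector arithmetic, with hand-made 2-axis vectors (the axes and numbers are invented for illustration; real embedding models like word2vec or GloVe learn hundreds of dimensions from data):

```python
import numpy as np

# Hand-made word vectors on two invented axes: [royalty, gender].
# Real embeddings learn hundreds of axes from text; these particular
# numbers are purely illustrative.
words = {
    "king":  np.array([0.9,  0.9]),
    "queen": np.array([0.9, -0.9]),
    "man":   np.array([0.0,  0.9]),
    "woman": np.array([0.0, -0.9]),
}

def nearest(vec):
    """Return the word whose vector is most cosine-similar to vec."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(words, key=lambda w: cos(words[w], vec))

# The classic linear combination: royalty, minus maleness, plus femaleness.
query = words["king"] - words["man"] + words["woman"]
print(nearest(query))  # -> "queen"
```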

When we get this model nailed down, you will see truly articulate A.I. that not only speaks but processes speech in ways that outstrip even human capabilities.

>Do people think you could just create a program that would do this?
Yeah, that's the idea
>Even if you could program learning that an AI science machine could use you would still have to feed it huge amounts of information and it would take huge amounts of time to produce an output.
If this is all you have, then you don't have a singularity; this is what we have now
the idea is to have a system that determines ways to improve its own hardware/software on its own
these new improvements can be used to discover new improvements even more quickly

the concept is that this feedback loop produces improvements much faster than humans could

>the idea is to have a system that determines ways to improve its own hardware/software on its own

Then I guess what I think is that it will take a very large amount of time for each increment, and each increment will be a tiny advancement, not comparable to the amount of time spent discovering it.

I personally think the social aspect of humans is what made for leaps and bounds of advancement, because we are able to communicate new ideas to each other, and extrapolate fundamentals that we can incorporate into our own processes.

Computers only speak in one language, one so heavily abstracted to fill a particular niche that IMO this sort of fundamental communication is impossible. Operations on bits have completely different meanings and uses in different languages, whereas with humans you don't have that.

>the idea is to have a system that determines ways to improve its own hardware/software on its own
We have that too, just for very narrow tasks (such as Google's search engine network - which has grown so complex I don't think anyone knows what it's doing anymore.)

Computers aren't any more limited than humans in that respect, in the grand scheme. The problem is, even if we have the computational power, we don't know enough about how a brain works to create a similar effect - even if we can do some nifty stuff. We might emulate a human brain one day, but that doesn't necessarily get you a better brain, or one that can be exponentially improved upon quickly, if at all.