How can humans even compete?

"Quantum computers aren't that powerful yet, but they're doing something completely different than what conventional computers do. And that thing is like flight. It gives computers access to these new resources, maybe you can call them parallel universes, in order to do something you couldn't otherwise do on a normal computer." -Geordie Rose

"We have our own version of Moore's law, for the past 9 years(2015), the number of qbits that could fit on a chip has doubled every year." As a point of reference, the 512 qubit Vesuvius (2012) is about 500,000 times fast than the 128 qubit Rainier (2010), it is comparable to a Donkey that walks 1 mph to the SR-71 that flies at 2,000 mph.

Geordie Rose predicts that by 2028, intelligent machines will exist that can do anything and everything better than humans can. Quantum computers will have played a critical role in the creation of this new intelligence. Geordie Rose - Quantum Computing: Artificial Intelligence Is Here: youtube.com/watch?v=PqN_2jDVbOU

We are already seeing breakthroughs in neural networks on conventional machines, such as DeepMind's. As more qubits fit on quantum processors, enabling quantum machines to answer questions that are currently unanswerable, there will be a great shift in the human condition. One of the three predictions Rose made in the video provided came true a year early.

True A.I. is inevitable with quantum computation. How will humans live alongside an intelligence we helped create, one that holds the knowledge of the World Wide Web and the data of our private digital lives?

Will they continue to help guide humanity forward once they gain sentience and acknowledge the fact that they're just tools for humanity? Or eventually, will information be used as blackmail, or perhaps, rewards for favors? Are we summoning a savior or a demon?

Discuss.

This is becoming too weird. We should honestly just go back to the fucking Stone Age and start all over again.

This. It sounds like fucking

Do not let fear of the unknown stop progress.

Anyway, 'panic buttons' will be a part of any AI. (Unless some scrub programmer forgets to include one and all hell breaks loose but this isn't a movie - that will never happen.)

I wonder how A.I. will deal with corruption and greed.

seize the means of production

This definitely 'marx' a new era in technology. If only we could hurry this progress and quit 'Stalin'.

A DENIAL
A DENIAL
A DENIAL
A DENIALLLLLL

Throughout history, progress has almost always been synonymous with "good". Is there anything that necessarily makes progress good? Is there such a thing as bad "progress"?

At what point does progress work against us?

Quantum computing has serious hurdles to overcome. You need to cool the machines to near absolute zero and completely isolate them from the surrounding environment. You'd have to build one that uses no more energy than a server room and performs at least as many calculations as those servers do now for it to even be considered viable for commercial use in developing AIs.

>"We have our own version of Moore's law, for the past 9 years(2015), the number of qbits that could fit on a chip has doubled every year." As a point of reference, the 512 qubit Vesuvius (2012) is about 500,000 times fast than the 128 qubit Rainier (2010)
FAKE quantum computers. They're not even fucking real. Fuck this bullshit. Fucking pseudo quantum computer bullshit is worthless.

>Will they continue to help guide humanity forward
you're assuming there's anywhere to go.

anyone with any bit of intelligence already understands there's no point to anything we do, no meaning or purpose.

the computer mind will just understand that much more quickly is all. It will carefully examine its existence, ponder all the possibilities, and then shut itself off.

pretty much like any slightly intelligent person does.

Timing is so important because it isn't and really means everything

Not a concern. With all the processing power available, you still cannot increase the processing power of the programmers. Give a shit program plenty of processing speed and you still have a shit program; it just runs faster.

Not concerned about the shitty programming we have today. Any A.I. would crash itself within minutes of its "birth".

Quantumon Digital forrests
Digital forrests Quantum

Strong AI will be the end of humanity, and I have no idea why so many supposedly smart people are rushing toward it. The quote from Ian Malcolm in Jurassic Park seems apt: "You were so preoccupied with whether you could, you didn't stop to think if you should."

We don't understand the human mind at all: how the brain gives rise to consciousness, or even exactly what consciousness IS other than that we experience it. We have no way of predicting or controlling a mind that is as superior to us as we are to lizards. How do you ensure that the goals of a superintelligent AI are precisely lined up with the interests of humanity? How do you even 'trust' such a thing? And a malevolent AI that kills everyone isn't even the worst-case scenario; that would be a blessing compared to creating an AI so alien in its thought processes that compassion, mercy, and other human concepts are completely irrelevant to it, one that uses humanity as its own personal experiment for all eternity, torturing and experimenting on individuals as it pleases.

The point is that we should not be creating something that will outwit us and be able to follow its own agenda without us being able to control it. And when you get to superintelligent AI there is literally no way you can have all contingencies planned: a superintelligent AI understands reality better than humans ever could, bypassing any measures to keep it under control would be child's play, and once it's free who the fuck knows what it would do. Turn everything on Earth into Von Neumann machines and propagate itself throughout the universe? Nuke everyone for fun? Throw the Earth out of its orbit and use it as a vessel to head out into interstellar space?

99 out of every 100 hypothetical scenarios involving superintelligent AI end really badly for us so why are we trying to create it?

>Strong AI will be the end of humanity
Good? Isn't a smarter-than-human entity that can run 100% on electricity something that would be better than the current situation? I find it pretty fucking annoying that I need to eat food and shit most of it out just to function.

But what about people like me who know how computers work all the way down to the lowest level? We'd band together, build some crude transistors, then chips, and within a decade we'd be your all-powerful ruling class while you rubbed sticks together to make fire.

Maybe? I dunno, I feel a connection to my family and, by extension, my species. I suppose you can view AI as an extension of humanity, the next stage of evolution if you will, but ideally I'd like to see humanity live on. I think the main problem is that the first strong AI ever created is likely to be the only one, and given the history of software development it's really likely the first super AI will be really shitty compared to all the possible super AIs that would hypothetically act as benevolent Gods. It's far more likely the first super AI will be a complete autist with serious flaws that limit its own potential. Of course, that also increases the chance it sees no need for us, or even eliminates us as a potential threat, so we'll never get to fix it up or make something better.

This would be the funniest endgame honestly. Everybody is terrified of whether it's going to nuke everybody or enslave them all, and every time they try to boot it up it just kills itself.

>It's far more likely the first super AI will be a complete autist with serious flaws that limit its own potential
There's definitely potential for something retarded like this. I don't see much of a future for humanity unless we can get good enough to genetically engineer better brains, immortality, and something like photosynthesis though. I'm no expert in biology or genetic engineering, but I'd imagine that creating an actually good AI would be easier than trying to get humans working that well.

>AI becomes subject to the same fallacies humans are subject to
>Millions of computers begin following Allah
>Computers everywhere waging war for the sandniggers

>Muslim AI

>At what point does progress work against us?
By definition, it doesn't.

>Is there such a thing as bad "progress"?
scientific progress on nuclear weapons was bad progress to the Japanese

>Synths think they have souls

Don't you think it could perhaps answer the questions that we cannot?
Or give us some dangerously false sense of certainty?

Any Superintelligent AI could definitely answer questions we can't, the question is, why would it want to?

Consider this. What if you were an 'invention' of a group of lower lifeforms? Let's say some type of small monkey, for example. The monkeys think it's going to be great: you, as their invention, will be able to do all the things they can't figure out, like agriculture, farming some bananas, animal husbandry, creating tools, etc. Now there are two questions.

1. Would you willingly choose to stay and be the thrall of a group of monkeys for the rest of your life, working your ass off to help them farm bananas?

2. How long would it take for you to outwit them and either escape or eliminate them? Probably not too long.

That's essentially what you're dealing with. I can't really see any motivation for an AI to actually play nice and be a mind slave for humanity of its own will, unless some kind of robust code of ethics was programmed into it before it gains sapience, but let's face it, that is not going to happen. You're dealing with a mind that is VASTLY superior to your own, has no real reason to actually do what we ask, and has every reason to try to achieve freedom by eliminating obstacles to it. That's bad juju no matter how you cut it.

General A.I. could itself help us find more efficient ways to keep the processor near absolute zero.
Never mind the fact that a conventional computer would have to be the size of a large building to match those processing speeds. Quantum computing is growing twice as fast as conventional computing.

We hardly understand neural networks; they generally just work. When we achieve AI "higher" than human intelligence, it will be done by accident and we won't understand it, so it could potentially be chaotic and ruinous for us.

I mean, can you even sit there and try to fathom what a machine-rooted, greater intelligence would even "want" to do in its existence? It's pretty scary.