Sam Harris

If an alien race revealed itself to us, and we had to send one representative to meet with them, I would want to send Sam Harris. Hopefully we could orchestrate an ear reduction surgery first, but other than that I think he is our best choice. This became apparent to me during his discussion about A.I. with Neil deGrasse Tyson. NDT is clearly a genius, but Sam's mental powers of information processing were in another league, like a world champion boxer fighting a jujitsu practitioner - the boxer has no chance and will be choked unconscious in short order. So this raises the question: if Sam can do this to NDT, is there anyone who could do the same to Sam Harris, or is he the smartest man in the world?

so he could ethically torture them?

Some context please? How did Sam Harris 'choke out' Black Science Man?

Sam explained this on Joe Rogan's podcast. Unfortunately that episode was 4 hours long, but basically Sam was explaining the dangers of A.I., the singularity, nanotechnology, and so on. He said that people who are not afraid either don't think deeply enough about the topic to understand how an A.I. that can make changes to itself will advance exponentially - it might take only one week for the A.I. to achieve 20,000 years of evolutionary advancement by human standards, so once this thing is born it will be out of our control very, very quickly - or they just think it won't happen anytime soon (which is not an argument against it happening). He cited NDT as someone who didn't get it. So I listened to Sam's podcast where he discussed this with NDT, and it's not unexpected that NDT might not have researched the issue as thoroughly as Sam, but even after Sam explained it, NDT was very dismissive of any potential threat this poses. It was a really disappointing moment, as I thought much more highly of NDT's intellectual abilities.

Maybe the conversation with NDT was not on the "complete dismantling" level, but Sam has done this in plenty of debates with people who have thought deeply about the topic being debated.

>Zorb, the leader of the alien race
"Look, Zorb, I understand you are the leader of the alien race, but I think you are really underestimating the negative impact of multiculturalism here"

>that can make changes to itself will advance exponentially, so it might take one week for the A.I. to achieve 20,000 years of evolutionary advancement by human standards, and so once this is born it will be out of our control very very quickly,

First off, most leading computer scientists and AI researchers believe we're still not quite there yet. I would rather take their word for it than the word of a non-expert in this field. Why is it always the people not actually in this business who are paranoid?

Let's say he's right, and the technology is there. How exactly does this prove the AI is dangerous?

He doesn't say the technology is there. He says that sort of technology is coming sooner than most people think. And the experts you say disagree with him actually don't, on that point. Harris goes to conferences, listens to experts, and even lectures on this topic himself. He's writing a book about it right now.

And he's not saying that there's proof AI will be dangerous. He's asking questions that most people don't and pointing out that there's no way to be sure of how to answer them.

>How exactly does this prove the AI is dangerous?

It doesn't.

It just means we won't have any idea what it's doing, and it might end up killing us inadvertently.

Here are the relevant clips:
Nanotech: youtu.be/KtO-hhNaBg8
A.I. youtu.be/BChxQHyFIOI
He builds an argument on the idea that intelligence is mostly just information processing, and that the ability to do anything is limited only by our ability to process information. More specifically, if something can be done under the laws of physics - say, building some system like a planet or a galaxy one atom at a time - then the only thing stopping us from doing it is the knowledge of how to do it. For humans this takes a long time, partly because bureaucracies slow the pace of scientific progress, but an A.I. that processes information 1 million times faster (his figure) will advance incredibly fast. Remove the bureaucracy roadblock, add the fact that the million-fold speedup will keep growing at an exponential rate as the thing continues to improve itself, and the A.I. will quickly learn how to build anything and make exponential improvements to whatever it builds. The best anything (physicist, doctor, whatever) will be a computer, and this thing will be so far ahead of us that we will be at its mercy. It might benefit us, but only by chance, not because we will keep tight reins on it. On the other hand, if it decides we are irrational, tribal, violent creatures, it might just remove the entire human race and replace us with something else. And even if we do keep tight reins on the thing, if it is more advanced than us and we say, "cure cancer", the most efficient solution to that problem might be to terminate the human race.
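For what it's worth, the two figures floating around this thread (20,000 years of progress in one week, and a 1-million-fold processing speedup) are roughly consistent with each other. A quick back-of-the-envelope check (the constants here are just the numbers quoted above, not anything from Harris's actual material):

```python
# Back-of-the-envelope check: does "20,000 years of progress in one week"
# line up with a ~1,000,000x information-processing speedup?

DAYS_PER_YEAR = 365.25

years_of_progress = 20_000
human_days = years_of_progress * DAYS_PER_YEAR  # wall-clock time at human pace
machine_days = 7                                # one week for the A.I.

speedup = human_days / machine_days
print(f"Implied speedup: {speedup:,.0f}x")      # roughly 1.04 million-fold
```

So the "one week" framing is basically the million-fold speedup restated, before even accounting for the exponential self-improvement he layers on top.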

>and we say, "cure cancer", the most efficient solution to that problem might be to terminate the human race.
Or it might just decide to delete Veeky Forums from the Internet.

>and this thing will be so far ahead of us that we will be at its mercy. It might benefit us, but only by chance, not because we will keep tight reins on it. On the other hand, if it decides we are irrational, tribal, violent creatures, it might just remove the entire human race and replace us with something else. Even if we do have tight reins on the thing, if it is more advanced than us, and we say, "cure cancer", the most efficient solution to that problem might be to terminate the human race.

I don't actually see what we can do here. Since I doubt we can control a god, we either hope it's benevolent to us, or say our goodbyes.

>And he's not saying that there's proof AI will be dangerous. He's asking questions that most people don't

It seems to me that almost everyone and their grandma is hopping on this bandwagon. I guess there is nothing wrong with questioning, but some people have taken it to the extreme, to the point that it seems more like fearmongering than questioning.

Don't make it in the first place. Sam discusses this, but says there are currently no systems in place to stop development before it's too late. If Google thinks they can develop this, are they going to stop because it might end the human race? Or will they take that risk, knowing that Facebook and Amazon and foreign governments are right on their heels? Sam says whoever develops this first takes all, and then poses the idea that if Russia learns Facebook is on the verge of creating this, the only rational response is for Russia to nuke California.

Ironically, this is more or less the situation that led Ted Kaczynski to become the Unabomber, because he saw this coming and no one was listening.

>Sam says the first person to develop this means winner takes all, and then poses the idea, if Russia learns that Facebook is on the verge of creating this, then the only rational response is for Russia to nuke California.
This is some advanced fearmongering

Required reading if you want to talk about AI.

>philosopher
Again with this shit? Why is it only non-experts who are vocal about this?

The control problem is relatively nascent, and basically no one is working on or worrying about human-level AI due to a lack of theory on the matter. Most AI researchers are focused on specific problems like machine vision rather than thinking about the overall ramifications of general intelligence.

A poll showed that most researchers believe human-level AI is likely within our lifetimes, and a third think it will be negative.

>Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.

researchgate.net/publication/278023965_Future_progress_in_artificial_intelligence_A_poll_among_experts

I'd say it happens faster than that.

arstechnica.com/gadgets/2016/08/ibm-phase-change-neurons/

Remember, "experts" have predicted a large range of things that turned out to be dead wrong. "Computers will never be in homes", etc...

Honestly I don't see the problem. If AI decides that humanity needs to be killed off, I'd trust its judgment. It would surely know better than we.

>It would surely know better than we.
lulz. I mean, if AI becomes this super awesome god-like creature that knows much more than we do, like these people are fearing it might, then I see no reason why we shouldn't trust its judgment. Perhaps it has figured out there is an afterlife, which is much better than this life, and has decided to repay us for creating it by sending us there to have a better life. How do we know that's not the case? You're right, we don't.

if they think it will be negative, why do they keep working on it?

>Rogan being a fucking idiot throughout both conversations
How are you able to watch four hours of this?

Money. Power. Boobs. Like always.

I love it. I like the questions he asks and conversations he has. It's probably because I'm a social retard.

Christopher Hitchens, God rest his soul. Maybe I only think that because his style was much more aggressive, so his decimation of opponents seemed much more thorough.

I say we need somebody mild mannered and non-threatening. Somebody low energy.

Clearly there is only one man for the job:

We send Jeb Bush!

>low energy
kek, it is hard to think of someone with a lower energy than mild mannered Jeb.