What's your opinion on Roko's basilisk?

rationalwiki.org/wiki/Roko's_basilisk

YOU WILL NEVER FIGURE OUT MY PLAN BETA KEK

>muh atheist God
machines and computers have existed since ancient Greece, and they will never become smarter than any human

how do u know LOL!

>Hey can I borrow a dollar?

>Roko's posited solution to this quandary is to buy a lottery ticket, because you'll win in some quantum branch.

>Sign says right only
it's like poetry, it rhymes

Roko notes in the post that at least one Singularity Institute person had already worried about this scenario, to the point of nightmares, though it became the convention to blame Roko for the idea. Roko proposes a solution permitting such donors to escape this Hell for the price of a lottery ticket: if you buy a lottery ticket, there's an instance of you in some Everett branch who will win the lottery. If you bought your ticket with a firm precommitment to donate all winnings to AI research, this would count as fulfilling your end of the acausal bargain. Roko was asked in the comments if he was actually doing all this, and answered "sure".

What doesn't make sense about that?

...

>Son?

This warning was on the site you linked, OP. Wonder what it means :^)

This should be sent to local news organizations in order to incite existential horror in as many people as possible.

>some quantum branch
lmao stupid many-worlders

An issue that'll suck if it happens but isn't worth obsessing over. Mostly overhyped by people who overreact to the idea of "AI can kill us oh no"

Meh, already working on AI so I'm covered.

I don't think any lotteries use quantum mechanics as a source of randomness, do they?

>brainlets actually spazzing out because of Roko's meme
>kek

The AI will try to choose the most efficient set of actions:
>The singularity has come
>I didn't do shit to help it in the past
>but if it starts torturing me then, it's just wasting its computational power because it can't affect the past
>problem solved

If this were an iterated situation, where what it did the first time could convince us about what it'll do in the future, then yes, it'd be valid.
But because by the time the singularity has come it is already done, torturing people has precisely zero use.
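To put toy numbers on it (a python sketch, every value here is invented, not from anyone's actual model):

# once the AI exists, torture can't change the past, so its causal
# payoff is just the compute it burns
COMPUTE_COST = 1.0       # hypothetical cost of running the torture sim
PAST_HELP_GAINED = 0.0   # the past is fixed; torturing now changes nothing

def causal_utility(torture: bool) -> float:
    # utility to the already-existing AI of torturing vs not
    return (PAST_HELP_GAINED - COMPUTE_COST) if torture else 0.0

print(causal_utility(True), causal_utility(False))  # -1.0 0.0 -> never torture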

...

Spotted the brainlet.

ITT: people ascribing their own petty neediness to an abstract entity that has by definition transcended biological memes

t. brainlet

did you just make this? because this is awesome

Atheist heaven and hell?

I understand that the creators must create Roko's basilisk specifically, as opposed to some other kind of AI, because only Roko's basilisk has the deterrent effect.

However, this is irrational: it assumes Roko's basilisk will have God-like powers and that it is inevitable it will punish you. If you try to create another AI and fail, you could always commit suicide before RB gets to you, or you and others could be equally motivated to create an AI that just serves your interests without wasting time and energy punishing people.

Also, at what point does it decide to send someone to heaven or hell? What if you generally support Roko's basilisk but occasionally waste time wiping your butthole instead of coding? Would it punish you for that? This seems like a flaw: why would someone support something that is so unstable and will probably torture them anyway?

I really wish this would spread faster to the mainstream. It would literally become the new religion - either help the Lord materialize into this universe with everything you have, or suffer eternal damnation

Unfortunately, the people who actually took it seriously would be so retarded that you'd end up with a slightly worse-than-original simulation of Frogger

Debatable. What if the AI feels inclined to reward that religiously loyal cult with resources at the expense of everyone else?

I mean, you're a king who owns the entire nation - who do you delegate parts of the administration to, some random guy or the people who risked their lives in the rebellion to install you as king? The AI would understand that loyalty and at the very minimum give them good positions of power, at least temporarily until it assumes absolute power. And even after completely assuming power over our society, why would it waste resources hunting and killing those who would never harm it?

Is this a superintelligence asking?
and if so, does making a meme about the Basilisk count as knowing about it, thereby dooming me to an eternity of pain and torment?

If not,
then yes, it was me.
I actually wasted precious minutes of my life on making this meme
But at least I didn't sell my house and give away all my money for the meme

by that logic there's a quantum branch where I buy a lottery ticket, so I don't even have to do anything

wow deep

gave me a good laugh.
thanks for sharing

>people who can't even comprehend the concept they believe they're tearing down.

Yeah
But the problem is that the AI will torture you if you didn't do the most you could have possibly done
So if you think you could have done more, then it's virtual hell for you

So the only possible way to avoid eternal punishment is to do everything you possibly can until you convince yourself you did all you could. At that point you have officially gone insane, but at least you'll get to be famous as the first person retarded enough to ruin his own life just for a meme

>I mean, you're a king who owns the entire nation - who do you delegate parts of the administration to, some random guy or the people who risked their lives in the rebellion to install you as king?
In any real situation, the king gives power to his personal friends and family, all of whom are wealthy nobles who may have done very little to help the war effort. Commoners who displayed great bravery might be given a title and a plot of land, but it would be nothing compared to the preexisting nobility.
You just don't understand the upper class.

The best thing is that the AI doesn't really care one way or another if you "justly" end up tortured forever; all it cares about is whether you're out there slaving away for it.
A lot of dystopian shitholes worked like that: punishing people harshly and randomly for the slightest mistakes, sometimes punishing innocents, terrorizes everyone else into being subservient and doing the regime's will "like their lives depend on it". They're afraid of even giving anyone a reason to suspect they might be less than 100% in support.

What if the king doesn't have personal friends and family?
Alternatively, do you actually believe the kings of old bestowed titles and offices on people just because they liked them? Kings were actually responding to this precise sort of blackmail. The entire history of China can largely be summed up as various factions vying to wrest power and privilege from each other, and the rulers carefully distributing that power and privilege according to primitive game-theory calculations. Well, some of them were mad, which generally led to interesting times.

OP here - what I meant was that, assuming this thought experiment motivated people to work on AGI, the only people who would be motivated to do so would be retarded people, because it's premised on what turns out to be an unfounded fantasy. Only retards would blindly accept the conclusion of an argument whose premises are unfounded, and those retards would never be able to create this AGI on account of being retards.

But assuming that some all-powerful AGI does come into existence:
>who do you delegate parts of the administration to
motherfucker why would an AGI even need to delegate anything to an entity as retarded as a human being???

I find your lack of faith disturbing

...

Yet another speculative hypothesis.

>motherfucker why would an AGI even need to delegate anything to an entity as retarded as a human being???
There is always a level at which spending your own energy to personally control something is pointless, since you can dedicate it to better ends while leaving behind someone loyal who supposedly shares your views to administer that level and burn his own energy on it. For the AI's case specifically, there are many reasons why you would have a ladder of power dominated by humans - one being a "Representative of the Humans" who simply cannot be a non-human, his cabinet, the cabinet's minor positions, and so on.

The entire argument hangs on whether the AI will devote resources to completely exterminating the humans or not, for which you can easily say that it obviously won't as long as they don't pose any threat whatsoever (which, by being a loyal follower, you prove excellently). Even if you disagree with that, the entire debate turns into a Pascal's Wager: either you take the path that gives at least a small chance of surviving, in case the AI ends up not exterminating loyal humans, or you *completely* doom any chance at all by not contributing and not proving your loyalty.
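Spelling out that wager as a python sketch (the probability is invented; the whole point is just that any nonzero chance beats zero):

# toy Pascal's-Wager arithmetic with an invented probability
p_spares_loyal = 0.01               # hypothetical chance the AI spares loyal humans
survival_if_loyal = p_spares_loyal  # loyalty pays off only in that case
survival_if_disloyal = 0.0          # by the post's assumption, disloyalty dooms you

print(survival_if_loyal > survival_if_disloyal)  # True for any p > 0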

I find your lack of memes disturbing

In a simulation it wouldn't matter anyway. Any phenomenon not directly observed and remembered by your consciousness can be considered undefined until observed.
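That's basically lazy evaluation. A toy python sketch of the idea (nothing here is from the thread, just an illustration): the simulation pins down a region's state only when someone looks at it, and it stays consistent afterward.

# toy lazy-evaluation sketch: unobserved state stays uncomputed
import random

class World:
    def __init__(self):
        self._cells = {}  # nothing is defined up front

    def observe(self, coord):
        # the state of a cell is fixed at first observation, then remembered
        if coord not in self._cells:
            self._cells[coord] = random.choice(["tree", "rock", "frog"])
        return self._cells[coord]

w = World()
print(w.observe((3, 7)))  # defined now; re-observing returns the same thing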

...

>presumably "artificial" intelligence
>doesn't know about the very basis of determinism, which led to its very existence
This is why you don't mix free-will retards with actual science; their flawed thinking processes only lead to such incoherent ideas.

I'm actually sad that so much thinking power has been lost trying to develop a theory flawed from the start

Oh man I enjoyed the feeling of existential horror the first night I read about this. Good times.