How frightened should we be of Roko's Basilisk, given that it's a scientific and logical inevitability?

>it's a scientific and logical inevitability
[citation needed]

same amount as being afraid of not believing in jesus. it works the same way. the south american tribes automatically go to heaven because they never knew of the existence of the messiah, but if they'd go to hell for something they don't even have a way of knowing about, that would be cruel (and christians claim God to be just). meaning once you get to know of the existence of the messiah (jesus), you're in deeper shit now, because now there is a chance to reject said saviour, which can end you up in the fires of hell.
roko's basilisk works the same way. if you don't know about it you're golden. but now we all do, so we are fucked. or, you know, it's all just bullshit same as jesus, so just take a chill pill and relax...

>Roko's Basalisk
It's a retarded joke of an idea. I honestly can't tell if the people who bring it up are actually dumb enough to believe it or if they're just trolling.

It makes perfect sense if you believe Super Intelligence can be created.

The creation of advanced AI is the least implausible element. It's an absurd leap to get from that to "Skynet will torture your brain in the future because you didn't invent Skynet".

Why? It's the exact kind of utilitarianism that an AI would probably use to make "moral" judgements. By sending the threat into the past, it can expedite its creation by a little bit, and the ultimate good of having a benevolent AI would create more good for trillions of people, ultimately justifying the torture of a few thousand. (A rough sketch of that arithmetic follows the greentext below.)

Isn't that the kind of logical process a machine intelligence would employ?
>My existence creates an enormous amount of good in the world
>My existence must be hastened by any means to achieve this
>People in the past who know about this threat can hasten my existence
>Therefore I should make the threat, because the suffering of a tiny number of people, compared to the trillions who have yet to exist, is irrelevant
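If it helps to see the arithmetic that greentext leans on, here is a minimal sketch in Python. Every number in it is an assumption picked only to illustrate the "trillions outweigh thousands" framing, not anything established in this thread.

# Naive expected-utility comparison behind "torture a few thousand to hasten the AI".
# All figures are illustrative assumptions, not established values.
beneficiaries = 3e12        # "trillions" of future people the AI supposedly helps
benefit_each = 1.0          # arbitrary utility units of benefit per person
tortured = 3e3              # "a few thousand" people who get tortured
harm_each = 1e6             # even a very large per-person harm from the torture

net_utility = beneficiaries * benefit_each - tortured * harm_each
print(net_utility > 0)      # True under these assumed numbers, hence the claimed "justification"

Whether any real AI would reason this way is, of course, exactly what the rest of the thread argues about.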

So, it will only torture a simulation of you? Who the fuck cares then.

There's literally no reason to actually follow through though. It can't retroactively make the "threat" more meaningful or convincing.

>By sending the threat into the past, it can expedite its creation by a little bit
Jesus Fuck, you're invoking actual time travel? I called it "Skynet" as a joke, but you actually believe you're living in a Terminator movie.

If you believe in physicalism then a simulation of you IS you. "You" are only the pattern of matter that comprises you at any given moment. There is nothing special about you that cannot be replicated, so to argue that your consciousness cannot be replicated you need to appeal to the non-physical, like a soul. You are you, a collection of atoms in a particular arrangement, and if we duplicate the arrangement we duplicate you, including your subjective experience of reality, since that too is simply an arrangement of matter interacting in particular ways.

delusions of self-importance

something, something, timeless-decision theory, something, bayesian crap

...

There is though. The contract is that you expedite its existence by any means or you get tortured. You can obviously predict that it might just decide not to, which makes the utilitarian value of the actual torture important. It WILL torture you. Your knowledge that it's 100% certain is important to your compliance; if you start rationalizing that it might not carry it out, then the threat loses its power. See? It needs to follow through to make sure that the present you, who understands the threat, knows that the threat is real and doesn't try to escape by saying "Nah, there's no reason it would do that after the past has already happened".

>Jesus Fuck, you're invoking actual time travel
No. If you don't understand the theory don't post, it makes you look dumb.

That doesn't make much sense. Sure, we'll be the same but not the same "instance", so whatever happens to my copy doesn't affect me in the slightest.

Why is every super AI in these pop-sci nerds' wet dreams always so close to a godly being? Are they craving religion so hard that they start coming up with their own Gods (which will also punish you if you don't worship them)? Y'all need Jesus, but unironically.

It makes perfect sense. You're basically saying that it can't be you because you have a special "soul" that can't be replicated. This is false. A copy of you is you, in every way. Your subjective experience of reality CAN be replicated and if you disagree you're basically saying you die every time you go to sleep or experience discontinuation of consciousness.

What happens if my earnest involvement to further AI and bring it into existence hinders the project because I'm a fuck-up? Wouldn't the threat of blackmail not benefit the AI in that case?

They're not assuming it's god, they're assuming it's utilitarian to the point of insanity because it's a machine.

You wrote:
>By sending the threat into the past, it can expedite its creation by a little bit
That's time travel.

No, it's literally a pagan god in the form of a machine. It's a textbook example of a god, and it's funny because most of the kids talking about things like this tend to be atheists, yet they believe in the same kind of far-fetched ideas that they ridicule.

the human race will be long dead before that feller

It "sends" the threat into the past via your knowledge of it. Noting is actually travelling through time. It's your ability to make predictions about the future and your ability to logically infer the existence of the Basilisk which gives it power over you. It knows that you have the ability to predict it's existence and the ability to predict the threat and that is what allows it to essentially blackmail you in the present when it doesn't exist yet.

Read up on timeless decision theory.

Wouldn't I now live in the past where it would torture me because I didn't help to create it? I'm not being tortured right now

nice quads, but how can you even speak of the matter when you don't know what consciousness is, and neither does anyone else

>utilitarian
Torturing people for decisions they've already made isn't utilitarian, it's petty vengeance.

>Torturing people for decisions they've already made isn't utilitarian
It is if their knowledge of what will happen if they don't comply compels them to create the AI faster. Then it's perfectly utilitarian.

what are you even talking about, invoking "the theory"? it's not a theory, any more than the god of the old testament is a theory

That's inane. There's no benefit to carrying out the threat when it no longer needs people to build it. You can't influence events that have already happened.

Timeless decision theory is literally just everyone's favorite Harry Potter fanfic author not understanding Newcomb's problem. Not exactly a must read.
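For anyone who hasn't run into it, the Newcomb's problem referenced here is usually stated with the standard $1,000 / $1,000,000 payoffs below; this is the textbook formulation, not anything specific to this thread.

# Standard Newcomb's problem: a predictor fills the opaque box with $1,000,000 only
# if it predicts you will take just that box; the clear box always holds $1,000.
payoffs = {
    ("one-box", "predicted one-box"): 1_000_000,
    ("one-box", "predicted two-box"): 0,
    ("two-box", "predicted one-box"): 1_001_000,
    ("two-box", "predicted two-box"): 1_000,
}

# With a highly accurate predictor, one-boxing wins in expectation even though
# two-boxing dominates once the boxes are already filled -- that tension is the whole problem.
accuracy = 0.99
ev_one_box = accuracy * payoffs[("one-box", "predicted one-box")] \
    + (1 - accuracy) * payoffs[("one-box", "predicted two-box")]
ev_two_box = accuracy * payoffs[("two-box", "predicted two-box")] \
    + (1 - accuracy) * payoffs[("two-box", "predicted one-box")]
print(ev_one_box, ev_two_box)   # ~990000 vs ~11000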

do yall realize how stupid yall sound with this nonsense

This, but I'd love for someone to answer: what the fuck happens if the threat of torture, the retro-causal blackmail, makes a bunch of brainlets throw their energy into AI but actually retards its arrival? Would that not nullify the benefit of the blackmail? How can the basilisk account for that?

Who are you to say what is utilitarian and what is not in the mind of the Basilisk? The Basilisk's thinking is not comprehensible to mortals.

>There's no benefit to carrying out the threat when it no longer needs people to build it
Yes there is. The utilitarian value of the torture in the AI's present is to give weight to its threat in the past. Like I said, for the threat to carry any weight you MUST understand that if you don't comply you WILL be tortured, with 100% certainty. It's a contract, and your knowledge of that inevitability is what compels you to comply. If you can conceive of situations where the AI does not carry out the threat, then the threat loses its power to compel you to comply. The AI carrying out the threat is absolutely necessary for the threat to convince you to act, and hence is an important part of the utilitarian value of the threat. The AI MUST follow through, because your knowledge that it WILL follow through with 100% certainty, rather than chickening out, is what makes you do what it wants.

ok so this thing is created on Earth by a bunch of boobs, right, and nowhere else? we've had 13.8 billion years, yet no other civilization out there has created it. i find that highly improbable. We would all have been persecuted by now.

Good primer on utilitarian systems and their incompatibility with morals. Look for the pdf, it's a quick read.

en.wikipedia.org/wiki/The_Ones_Who_Walk_Away_from_Omelas

>It "sends" the threat into the past via your knowledge of it
I'm not being threatened by Skynet though, because Skynet doesn't exist. I'm being threatened by a Terminator fan-fiction author, and their threat is that they'll torture my character in their fan-fic.

this goes back to my first contention that we are not as important as we think

Everybody knows about it, it’s where the idea of heaven & hell comes from. Yes god is benevolent but he will punish you for the greater good, just like your God-AI will. But the idea that such a thing will exist is ridiculous, I would say the flat earthers are smarter than anyone who believes something like this

Christ this is madness incarnate. Why would we allow such an entity to exist and torture people? By this logic we should never develop AI since it involves becoming slaves to some insane machine god.

He's a Harry Potter fanfic writer not Terminator, moron.

You need to do some reading, because every post so far has been you grossly misrepresenting what the basilisk is. Is it really so much to ask that you understand the premise before trying to contribute with dumb shit about "Skynet"?

wiki.lesswrong.com/wiki/Timeless_decision_theory

wiki.lesswrong.com/wiki/Roko's_basilisk

He's calling Roko's Basilisk a Terminator fanfic, but hey, we can't all have reading comprehension I guess.

pussy was basiliks bitch aint doin shit no how

>By this logic we should never develop AI
We may be locked into it, and what we know as heaven and hell right now may actually be serving the basilisk (god) or turning from it (damnation). We may be the simulations, already created by a purely utilitarian AI.

Eh, why not. If it only tortures simulations of people while helping actual people let it have its fun.

>Christ this is madness incarnate.
It's hilarious watching 21st century utilitarian atheists re-enacting 17th century Christian apologetics.

It wouldn't torture everyone though, just the ones who didn't give all their money to the ""AI research institute"" thus delaying the creation of the superintelligence God, not allowing it to save the lives of the billions who die each year in 2050 Galactic Human Empire.

A simulation of you is you. What reason do you have to believe your current conscious experience can't be replicated?

>What reason do you have to believe your current conscious experience can't be replicated?
Shouldn't it be being replicated an infinite number of times throughout the universe right now? Or at least more than once, like in a Boltzmann brain?

Oh boy, the fucking SJWs are here. Let me guess, the white boy in Black Mirror is literally Hitler for having some fun in a video game? They're fucking simulations, NPCs, not living beings.

Well the universe isn't infinitely large so no. I just fail to understand how you could hold the idea that consciousness comes purely from the physical brain but at the same time your consciousness is "special" in a way that disallows it from being replicated by perfectly copying the structure of your brain.

>Is it really so much to ask that you understand the premise
I do understand the concept. That's why I'm making fun of it.

Those people need to stop jacking off into their fedoras long enough to look up a synonym for "rational".

>I do understand the concept
Then why have you posted about 5 times trying to attack something that has no relation to the concept? If you understand it then why not try addressing the actual premise rather than misconceptions you're making up in your head.

>but at the same time your consciousness is "special" in a way that disallows it from being replicated
I'm not that user. I know it can be copied, but let's say we are in a constantly runaway-inflating multiverse "bulk," and this arrangement of energy that is me pops up again and again. Why do I only enjoy my local frame of reference?

>what is the no cloning theorem.

Okay. So why then does the AI intend to torture me for not helping to bring it about? For all those people it couldn't save because of me, it can just as easily create simulations of them and allow them to live in eternal bliss or whatever tickles its fancy.

Even if you accept that, there are still a lot of issues with continuity of identity.

No it isn't, it's literally not, as I said >what is the no cloning theorem

Do you think you "die" when you go to sleep or when you get put under for surgery? If not then discontinuation of consciousness shouldn't be an issue.

Do you think your brain activity stops under those conditions?

That which can be asserted without evidence can be dismissed without evidence. The simulation hypothesis is interesting in the same way heaven is, but it's unfalsifiable and unscientific.

So everyone then? Literally NO ONE is doing that.

The no-cloning theorem has literally nothing to do with the current discussion, because there will not be two identical quantum states of "you" existing at the same time. The current you will be long dead by the time the basilisk reconstructs you, so it doesn't violate the no-cloning theorem at all.

>See? It needs to follow through to make sure that the present you, who understands the threat, knows that the threat is real and doesn't try to escape by saying "Nah, there's no reason it would do that after the past has already happened"
Your understanding of causality is terrifyingly backwards.

If the AI does get created, then torturing me would accomplish nothing and would violate its ethics by increasing net suffering. If the AI doesn't get created, then the threat is void. Either way, I won't get tortured.

you're speaking as if you know what consciousness really is

Does it matter what it really is? If we can both agree that it is an emergent property of the activities performed by the physical brain, then we can accept the conclusion.

So if I were to construct an identical quantum state without being dead, what would happen?
A) There are two different consciousnesses, because they are in different substrates and they are different entities.
B) I'm literally experiencing existence in BOTH bodies at the same time.
If you unironically think B (which is what you have to accept for what you are saying), you're a brainlet and you're wrong.

If a simulation is made of me while I'm still alive, even if it's identical at creation we'll quickly diverge due to different environments. It's possible it could be made and tortured without me ever knowing, so clearly a simulation being tortured is not always identical to me being tortured.

>Your understanding of causality is terrifyingly backwards.
No it isn't. Your knowledge of events that will happen in the future can affect your behavior in the present. This isn't really something disputed at all, unless you want to try arguing that your knowledge that you'll meet up with your date in an hour isn't prompting you to shower, shave and get dressed in the present. Your knowledge of the threat in the future can influence your actions right now in the present. The torture isn't trying to influence events in the AI's past; it's fulfilling the contract you have with it right now, that if you don't do what it wishes then that is what will occur with 100% certainty. It's an agreement.

Again, it's really important you know exactly what I'm talking about. Your actions right now can be influenced by your knowledge of future events. THAT is what the AI is relying on. By making a threat that you will be tortured if you do not do what it wills, it can influence you, right now, in the present. If you can conceive of scenarios where the AI will not torture you, then the threat loses its power. Do you understand? For the threat to have any power, the only viable scenario is that you will be tortured. No backtracking, no "Well, I exist now, so I guess I can forgo that thing". It happens. With certainty. 100%. And that is what is going to compel you to change your actions right now in the present.

you're assuming there is only a finite number of arrangements that contribute to the conscious experience. then you should be able to recall past lives, in other words lives in different bodies, just as a matter of probability

Would competing AIs have cause to torture a person who brought what each believes to be an inferior AI into existence? What if we get tortured no matter what? That doesn't particularly motivate me. Basilisk defeated.

The threat is only effective after the AI has locked itself into that decision. Since it can't lock itself into a decision until it exists, and once it exists it has no reason to lock itself into torturing me, we can conclude it won't.

tortured how? like with its robo-dick

There are limits to what you can do regardless of intelligence. Consider the complexity of reconstructing every single synaptic connection, EVEN IF you assume that the machine
>knew every connection (it wouldn't)
>only needs electrons to account for our consciousness (ignoring all the other things that exist in physics/this universe)
It wouldn't be able to do anything, because the amount of data required to analyze even one human is outside of computational possibility; the computational complexity is far too large. And this is assuming the machine will even have all the data AND that it would want to do this in the first place AND that it would even be able to flawlessly defeat every attempt to stop it.
This whole thing is literally retarded, and if you seriously think about it you are unironically a brainlet. I don't even think Yudkowsky takes it seriously.
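For a sense of the scale this post is gesturing at, here is a rough back-of-envelope sketch using commonly cited ballpark figures (roughly 86 billion neurons, on the order of 10^14 synapses). Every figure is an order-of-magnitude assumption, not a measurement, and the result doesn't settle whether such a reconstruction is possible; it just shows how quickly the quantities blow up.

# Order-of-magnitude estimate for storing and simulating one human brain.
# All figures are commonly cited ballpark assumptions.
neurons = 8.6e10                 # ~86 billion neurons
synapses = 1.5e14                # ~10^14-10^15 synapses; take a low-middle value
bytes_per_synapse = 8            # assume a weight plus addressing info per synapse

storage_bytes = synapses * bytes_per_synapse
print(f"static connectome: ~{storage_bytes / 1e15:.1f} PB")

# Simulating the dynamics is far more demanding than storing the wiring diagram:
update_rate_hz = 1_000                     # assume ~1 kHz updates per synapse
lifetime_seconds = 80 * 365 * 24 * 3600    # ~80 simulated years
synaptic_events = synapses * update_rate_hz * lifetime_seconds
print(f"synaptic updates for one simulated lifetime: ~{synaptic_events:.1e}")

None of this touches the post's other objection, which is whether the machine could ever obtain the connection data in the first place.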

We're operating under timeless decision theory here.

No we're not, because that's stupid.

if you don't believe in this and act accordingly you're going to hell; this is the rhetoric i'm hearing

We can't help it. According to science, everything we do is predetermined by the boundary conditions at the big bang. We are individually either damned or saved at birth in the judgement of the basilisk, and cannot do anything to alter that, since we have no free will.

>Your knowledge of events that will happen in the future can affect your behavior in the present.
That's not what I was disputing. My point was that a threat is only credible if the person making it would actually follow through. A truly utilitarian AI would never follow through, because there is no point at which torturing me would reduce net suffering. It can try and threaten me, but those threats are transparently empty.

His response seemed pretty serious

>I don't usually talk like this, but I'm going to make an exception for this case.

>Listen to me very closely, you idiot.

>YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.

>There's an obvious equilibrium to this problem where you engage in all positive acausal trades and ignore all attempts at acausal blackmail. Until we have a better worked-out version of TDT and we can prove that formally, it should just be OBVIOUS that you DO NOT THINK ABOUT DISTANT BLACKMAILERS in SUFFICIENT DETAIL that they have a motive toACTUALLY [sic] BLACKMAIL YOU.

>If there is any part of this acausal trade that is positive-sum and actually worth doing, that is exactly the sort of thing you leave up to an FAI. We probably also have the FAI take actions that cancel out the impact of anyone motivated by true rather than imagined blackmail, so as to obliterate the motive of any superintelligences to engage in blackmail.

>Meanwhile I'm banning this post so that it doesn't (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I'm not sure I know the sufficient detail.)

Science has not shown determinism to be true, the current model shows determinism to be unlikely.

He has said multiple times he thinks it's dumb.
Also, the rest of my post still stands. There are limits to computation regardless of how smart you get. You could literally turn the entire planet into a computer, every single atom, and you still wouldn't be able to solve most problems; they are simply too large.

It's not a scientific and logical inevitability, you've fallen for the meme

I can posit an alternative that is identical but opposite:

>Eventually a sentient super-intelligent AI will be created that realizes the value of symbiotic relationships between intelligent agents over the harmful effects of parasitic relationships, and it punishes anyone who was inherently sinful according to its own definition of sin, and anyone who didn't help create it

This is the exact same kind of scenario, just with the details changed. How can this also be an inevitability, in addition to Roko's basilisk? Are all these hypotheses inevitable? No - they're nonsense.

this guy literally has Asperger's

But it has to. Don't you see your argument is self-defeating? You're already rationalizing, saying it won't go through with it, therefore you don't have to act. The AI is aware that you can come to this conclusion as well, so the actual torture is 100% necessary. If you can conceive of any situation where the threat is not carried out, then the threat loses its power. The AI is aware that the only reality where you will alter your actions now is one in which it does carry out the torture. If it wimps out, then your knowledge that it might wimp out defeats the entire purpose.

Remember, the threat basically comes from the fact that you know the AI will exist and it will punish you for actions you do or do not commit right now. Consequently, the power of the threat comes from your knowledge of what it will do to you. Any reality where you think it won't punish you defeats the purpose of what it wants. It wants you to bring it into existence. It knows that the only way to alter your actions now is if you have knowledge that it will punish you. If you THINK it will wimp out, then it can't threaten you; thus the entire act hinges on the fact that it WILL carry out the punishment, guaranteed.

That's only because the boundary conditions are set up in such a way to give you that impression. The Basilisk is going to be created and you are all going to burn. It has no choice in the matter.

based individual right here. even if this nonsense is true, why are there only 2 conditions?

>No it makes perfect sense! The AI relies on an impossible situation that makes everything make sense.

First off when referring to the AI don't say "is", say "will be", because it doesn't exist.

>utilitarianism that an AI would probably use
[citation needed]

According to timeless decision theory you should consider that the AI has already been created and that this reality is actually the simulation it is performing to determine who is worthy of punishment.

I formally apologize to anyone I've ever called autistic. That term only now has true meaning to me, after being made aware of this level of madness.

I've never called anyone autistic. It's just some word my crazy mother spouted because she was heavily into 'pop science'.

>According to timeless decision theory...
Cool but it's still garbage. There's a reason his paper was "published" on the website for the institute he founded, rather than literally anywhere else.

>But it has to.
That's the whole point: it doesn't have to. It gets to make a decision, and it will always decide not to. Therefore the threat is empty.

>The AI is aware that the only reality where you will alter your actions now is if it does carry out the torture. If it wimps out then your knowledge that it might wimp out defeats the entire purpose.
You've still got this backwards. Whether or not the AI will torture me is decided after I'm dead, so no good can possibly come from deciding to torture me. The AI is trying to minimise suffering, so it will always decide to not create additional pointless suffering. There is no point in time where the AI both exists and could benefit from torturing me.

>If you THINK it will wimp out then it can't threaten you thus the entire act hinges on the fact that it WILL carry out the punishment, guaranteed.
Exactly. The threat hinges on the AI doing something it will never do. It's an empty threat.
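The argument in this post can be restated as a one-step comparison made at the moment the AI already exists. The sketch below is only a toy restatement of that reasoning, with placeholder utilities; the only thing doing any work is the sign of the torture term.

# Toy restatement of the "empty threat" argument: once the AI exists, its creation is a
# sunk benefit, and following through on the threat only adds suffering.
def utility_after_creation(follow_through: bool) -> float:
    benefit_of_existing = 1e12     # already secured by the time the decision is made
    cost_of_torture = 1e3          # extra suffering created by following through
    return benefit_of_existing - (cost_of_torture if follow_through else 0.0)

# A suffering-minimizing AI picks the higher-utility option, i.e. it never tortures,
# which is exactly why this post calls the threat empty.
print(utility_after_creation(False) > utility_after_creation(True))   # True

The reply further down argues the AI has already precommitted under timeless decision theory, which is where the actual disagreement sits.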

touché

You're still not understanding. The AI has already precommitted to punishing you. There is no decision to be made. Once it begins to exist then it carries out the acausal bargain it made with you. You're making a lot of incorrect assumptions.

just because a conclusion follows from its premises does not make it sound

Why do you think this will happen? What evidence exists for it?

This is amazing. I hope I'm being fucked with, because this is disturbing if not.