Imagine that determinism proved to be true, and free will was just an illusion. How do you feel about that possibility?


How could that be true? We're choosing every time.

A fish doesn't have an opinion on water.

not literature. reminder to sage and report

This is low-key the philosophy board. Deal with it.

I wouldn't care too much because even if free will didn't actually exist we'd still have the illusion of it.

Believing in free will is a staple of being a brainlet.

OP. Free will is an illusion but determinism isn't true. What is this shit?

>using trendwords

D:

I kind of feel my mind is insulated against the anxiety or melancholia that this would produce, due to a heavy layer of postmodern and Platonic thought surrounding my mind

christ op it's your freshman year, you should be out getting laid not posting on chans

how do you know?

mediocre movie desu senpai

Is it free will when you stub your toe on a corner of the table?

And that is because he is not capable of the conceptual faculty, only the perceptual one.

>low-key
No, philosophy is explicitly allowed

And how do you assume that?

i dialed a friend, asked the audience, and made it 50/50

Agency is a prerequisite of rationality so "proving" determinism is really just sawing off the branch you're sitting on.

Same.

Touch your arm, cut out a chunk. Touch where it was. You don't feel that. Now you know the quantum is obsolete, that a=a. Because a contradiction can't exist, you're not the universe. Non-absolute.

It literally doesn't matter either way

>Agency is a prerequisite of rationality

How so?

He just defines rationality as such, get with the times man.

If I ask a tree what the answer to 2 + 2 is, and my breath shakes the boughs so four leaves fall to the ground, does that make the tree a mathematician?
To prove something is to weigh the true against the false, and knowingly choose the true. Inanimate systems cannot create meaningful rational statements.
all shitposting btw, I have no idea if any of this is true

>high-key upset
kys senpai

Have you heard of Automatic Theorem Proving? Or Automatic Reasoning in general.

> How do you feel about that possibility?
Do I have a choice?

Wouldn't really surprise me desu.
If it were true then everything would be fucked: you couldn't blame anybody for their actions, nobody would ever strive for anything that doesn't come naturally, and people would generally act like cunts

Exactly the same way I feel now.

see:

Literally what?

Would they though? If it were true they would act the same, unless they knew it were true; then they would act differently.

life is a hyperbolic paradox, the absurdity that we both have no choice and we do is simultaneously true and functional

if I had, I wouldn't be shitposting on a khergit butter-churning forum, but after googling it let me say:
>man builds system of reason
>man builds machine to list logical consequences of that system
it still gets its impetus from a thinking mind, doesn't it? Sure, it may shoot out further than his mind can consciously follow, but computer programs follow all of our rules and abide by our choices. I'm curious how exactly you think automated theorem proving removes the necessity of agency from reason.
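To be concrete about what "a machine to list logical consequences of that system" means, here's a toy forward-chaining sketch (all fact and rule names are made up for illustration). Note that every rule in it was hand-written by a person; the program only enumerates what those rules entail:

```python
# Toy forward chaining: repeatedly apply hand-written implications
# until no new facts can be derived. The machine supplies no axioms
# of its own; it just closes the fact set under the given rules.
facts = {"socrates_is_man"}
rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # fire the rule if all its premises are already known
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['socrates_is_man', 'socrates_is_mortal', 'socrates_will_die']
```

Real automated theorem provers are vastly more sophisticated, but the shape is the same: axioms and inference rules in, consequences out.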

>computer programs follow all of our rules
Human brain activity follows all of physics' rules. It's not really any different, there's no point where what you're doing is fundamentally beyond the scope of some series of physical cause and effect relationships. It's just that biological brains are a lot more convoluted in how their output is generated, kind of like long-lasting storms confined to a bowl of jelly.
Also, machine learning programs don't use explicit instructions stating directly what the program should do. E.g. you can write a program that trains on a known data set to minimize an error function, and if it's successful you'll end up with a program that can solve problems you yourself don't even understand how to solve, usually because the problem is way more complicated than what would be practical to figure out through a pile of explicit rules (image recognition and self-driving cars work this way).
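The "trains on a known data set to minimize an error function" part looks something like this in miniature (toy sketch with made-up data and a single weight instead of a real network):

```python
# Fit y = w*x to data generated by y = 2x, via gradient descent on
# mean squared error. Nobody writes "w should be 2" anywhere; the
# rule emerges from minimizing the error over the training set.
train_x = [1.0, 2.0, 3.0, 4.0]
train_y = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # single learnable weight
lr = 0.01  # learning rate
for _ in range(1000):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x
               for x, y in zip(train_x, train_y)) / len(train_x)
    w -= lr * grad

print(round(w, 3))  # converges to 2.0
```

Scale the one weight up to millions and the error function to something like "how often did you misclassify an image" and you get the systems mentioned above.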

>Human brain activity follows all of physics' rules.
except that's what this whole debate is about, isn't it? Don't go trying to assume your conclusion.
And I think you're missing the point about setting rules for the machine, since all of the problems neural nets etc solve are of the same type as the ones they are trained on. They are not making new discoveries, only replicating old ones onto more complicated sets of data. Also statistics != mathematics, let's be clear.

>except that's what this whole debate is about, isn't it?
No, that's an actual, demonstrable fact.
arxiv.org/pdf/quant-ph/9907009v2.pdf
>We find that the decoherence timescales (~10^-13 to 10^-20 seconds) are typically much shorter than the relevant dynamical timescales (~10^-3 to 10^-1 seconds), both for regular neuron firing and for kink-like polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.
You are only allowed to honestly believe in non-deterministic "quantum consciousness" if you also believe consciousness has NOTHING to do with neuronal firing, because Tegmark established they will fire exactly the same way as predicted by classical physics regardless of any quantum effects that happen in proximity to them. The timescales for decoherence aren't anywhere near long enough to let them influence neuronal firing. Brain activity is a phenomenon of classical physics; it's a product of ordinary physical cause and effect.
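For what it's worth, the gap Tegmark is describing is enormous even in the most quantum-friendly corner of his ranges. Just plugging in the numbers quoted in the abstract above:

```python
# Compare Tegmark's quoted timescales: decoherence ~1e-13 to 1e-20 s,
# neural dynamics ~1e-3 to 1e-1 s. Take the case most favorable to
# quantum effects: the slowest decoherence vs. the fastest dynamics.
decoherence_slowest = 1e-13  # seconds
dynamics_fastest = 1e-3      # seconds

ratio = dynamics_fastest / decoherence_slowest
print(f"{ratio:.0e}")  # 1e+10: coherence dies ten billion times too fast
```

So even in the best case, quantum coherence decays about ten orders of magnitude faster than any neural process it could supposedly steer.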

>all of the problems neural nets etc solve are of the same type as the ones they are trained on
Define "type."
I could make a parallel claim that all of the problems human brains solve are of the same type as the ones they are trained on. And the extent you argue the preceding sentence is untrue is the extent I would argue your own sentence above is untrue.

>implying consciousness is local
>implying consciousness has to have to do with neurons firing
>implying Tegmark didn't completely miss the point with his autistic fixation over neurons rather than microtubules and ion channels which actually could be affected by quantum computations and then interact with other elements classically, passing on the causality of those quantum events to the classical world
Hameroff answered his "argument" a few years later and blew it the fuck out. Too bad you didn't read that.

Is there any feasible way for a machine to compute Gödel's Theorem? I doubt it; the problem inherently lies outside the machine's ability to compute.