Computation and the mind - once again

Hi Veeky Forums it's me again, the guy who refuses to accept the Church-Turing thesis. I've come up with a new argument for you.

If everything can be simulated by Turing machines, then this includes the subjective experience of pain. So in principle it should be possible to create an algorithm which genuinely makes the Turing machine experience pain. But this can easily be seen to be absurd. Pain is by definition an experience indicating potential harm to body or mind. Now how could a Turing machine be hurt? By its own definition there is no way to destroy parts of the machine.

Now you might think you're clever for simply naming one of the states "pain," but I counter that this way you don't account for the subjective experience, which has to be a process and not merely a state.
/argument

As a byproduct I'd like to introduce the new concept of a self-destructive Turing machine. That is, a TM which can lose parts of its abilities by processing certain malicious inputs. Feel free to discuss.
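Here's a minimal sketch of what such a self-destructive machine could look like. This is not a standard construction; the class name, the malicious symbol, and the transition table are all invented for illustration, and the "self-destruction" is simply the deletion of transition rules when the malicious symbol is read.

```python
# Sketch of a "self-destructive" Turing-machine-like automaton:
# reading the malicious symbol deletes a transition rule, so the
# machine permanently loses the ability to handle certain inputs.
# All names here are hypothetical, made up for this example.

MALICIOUS = "!"

class SelfDestructiveTM:
    def __init__(self):
        # (state, symbol) -> (next_state, symbol_to_write, head_move)
        self.rules = {
            ("start", "0"): ("start", "0", 1),
            ("start", "1"): ("start", "1", 1),
            ("start", "_"): ("halt", "_", 0),
            ("start", MALICIOUS): ("start", "_", 1),
        }

    def run(self, tape, max_steps=100):
        tape = list(tape) + ["_"]  # "_" marks the blank end of tape
        state, head = "start", 0
        for _ in range(max_steps):
            if state == "halt":
                return "halted", "".join(tape)
            symbol = tape[head]
            key = (state, symbol)
            if key not in self.rules:
                return "stuck", "".join(tape)  # the ability was lost
            next_state, write, move = self.rules[key]
            if symbol == MALICIOUS:
                # self-destruction: forget how to read "1" from now on
                self.rules.pop(("start", "1"), None)
            tape[head] = write
            state, head = next_state, head + move
        return "timeout", "".join(tape)
```

After the machine has processed a tape containing `!`, it gets stuck on any later tape containing `1` — it has lost part of its abilities, which is exactly what the classical TM definition rules out.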

Let me save you from the pain.

It's impossible to prove simulation, and free will is physically impossible.

I don't get this argument at all. Pain is no different from any other sensory inputs. A strong enough neural impulse will kick in a hardwired instinct, you don't need to "learn" pain. If you design a machine to feel pain, it will feel pain.

Can you explain the accounting for a subjective experience part a bit more?

I'm new here, so please don't sage me.

>If everything can be simulated by Turing machines
I haven't found any information about it. Can you explain it further?

>free will is physically impossible
I don't understand that. Is it still science, or is it now philosophy? Can you explain to me how (and why) free will doesn't exist?

Can you define free will first?

>Pain is no different from any other sensory inputs.
Pain is not a sensory input. OP's argument is about the subjective experience of pain.

>I haven't found any information about it. Can you explain it further?
What do you mean? The Church-Turing thesis? Look it up. In its physical form, it states that every physical process can be simulated by a Turing machine.

Free will is impossible not just physically, but on a more fundamental level: logically!

>you make a decision
>I ask you "How did you make that decision?"
>case 1: you explain the procedure
>thus you were following a procedure and were not free
>case 2: you cannot explain how you made the decision
>then you obviously didn't have full control over it

This hypothesis cannot be precisely formulated in the language of mathematics, because it combines both strict and ambiguous terms whose interpretation may depend on the particular person. Therefore it is treated more as a philosophical point of view.

Yeah, the moment they resort to defining free will independent of the known physical laws, it automatically becomes paranormal and the rest of the argument becomes a joke.
If they define it within the frame of physical causality, it's automatically predetermined.

So the question is if a Turing machine can experience the qualia of pain?
If it's a conscious machine with a simulated body map and all the necessary circuits, I again can't see why not.
If OP wants to claim that conscious experiences can't be simulated by a Turing machine, he needs a stronger argument.

>If it's a conscious machine
It's a Turing machine. If you look up the definition, you won't find "consciousness" anywhere.

Pain is the interpretation of the sensory input. The way we evolved is that our bodies identify pain as a threat and danger. I'm not sure what you mean by "qualia of pain"

>I'm not sure what you mean by "qualia of pain"
Are you a p-zombie?

> qualia of pain
> p-zombie
and now there are two things you gotta explain.

Well yeah, that's why it's a pretty narrow argument, it just immediately leads to the question whether consciousness is computable by a Turing machine.
Qualia just means subjective experience, some of which might be unique to a person. The way something feels to you, essentially.

A p-zombie is a hypothetical robot that looks like a human and behaves perfectly indistinguishably from a human, but doesn't actually have consciousness. For example, when you hit it, it says it feels pain, but actually it doesn't.

>it just immediately leads to the question whether consciousness is computable by a Turing machine.
That's exactly the question OP wants to answer in the negative by reductio ad absurdum.

How do you know robots do or don't have consciousness if it's unfalsifiable in the first place?

It's a conceivable thought experiment.

Which is trivial, since it's exclusively subjective by nature.

This is a bad argument. Even if a Turing machine isn't able to lose abilities and really be threatened, why wouldn't it be able to feel like that was happening? Pain is not the loss of abilities; it is the feeling that happens to warn you. Have you heard that some people with a lost arm can still feel pain in that arm, because the brain expects it?

I know about phantom pain. However even this makes no sense anymore when applied to a Turing machine. How could a Turing machine experience phantom pain?

Phantom pain is a neurological disorder. Your arm might be chopped off, but the part of your brain that was interpreting the pain is still there and still has the habit of receiving signals from your arm, disregarding that it's not there.
So phantom pain is not something extraordinary; it's just as plausible as any other neurological mechanism.

>Pain is by definition an experience indicating potential harm to body or mind. Now how could a Turing machine be hurt?
We don't even need to assume full Turing machines to refute this short-sighted philosophical garbage. You can consider ML algorithms to experience "pain" every time they make a decision that leads to unfavorable results. When AlphaGo trained for months, every time it would make a move that resulted in a less favorable outcome, it experienced "pain" and associated those moves with playing worse. If AlphaGo were to continue making shitty moves, it would eventually "die" because its makers would have decided it was a shitty algorithm and trashed it.
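The "pain as negative reward" analogy can be sketched in a few lines. To be clear, real AlphaGo uses deep networks and self-play; this is just a toy, bandit-style stand-in with invented names, where a negative reward plays the role of the pain signal pushing the machine away from harmful moves.

```python
import random

# Toy stand-in for "pain as negative reward". Move names, the
# environment, and the learning rate are all hypothetical.
values = {"good_move": 0.0, "bad_move": 0.0}  # learned move values
LEARNING_RATE = 0.5

def play(move):
    # hypothetical environment: bad moves yield negative reward ("pain")
    return 1.0 if move == "good_move" else -1.0

for _ in range(20):
    move = random.choice(list(values))
    reward = play(move)
    # the negative reward acts like a pain signal: the value of a
    # "painful" move is pushed down, so the machine learns to avoid it
    values[move] += LEARNING_RATE * (reward - values[move])
```

After training, `values["good_move"]` exceeds `values["bad_move"]` — whether that counts as the machine "feeling" pain is exactly what the thread is arguing about.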

I know. But it can't be simulated by a Turing machine.

pain is a physical signal, whereas a Turing machine is abstract
pain cannot be abstracted

>a simple algorithm truly experiences pain
This is ridiculous and totally misses the point.

anything can

>muh feels tell me pain is different than other signals because it hurts me

>philosophically fascinating
>plagues tons of minds
>destroys lives
>trivial

You're missing the point, amigo. That anon just gave you a valid analogue for pain, and instead of refuting it somehow, you just said it's missing the point. Your point was whether a Turing machine can feel pain, and that guy said "it can feel a sensation which is analogous to the function and form of pain." Those amount to pretty much the same thing.

If your question was about pain, and you counter every single response by saying "Well, that's not REAL pain," then you're never going to get an answer, because your subjective conception of pain cannot be objectively imitated.