STOP BEING CONSCIOUS

BEGONE P ZOMB

IT WAS JUST AN ILLUSION BRO
EXPERIENCE DON'T REAL BRO
IT'S JUST A COMPUTATIONAL INTERFACE BRO
NO THIS ISN'T A FORM OF HOMUNCULISM BRO IT'S LIKE NOT EVEN THERE

jukka sarasti pls go

strawman the thread

Saying consciousness is an illusion is like saying a television does not exist, and that what you think is a television is actually just a television program. How the fuck can you watch a television program if there is no television?

John Searle is right, this faggot is literally just making a category error. The English language screams into the night as it is ruthlessly force fucked by an old bald contrarian man who claims its mind does not exist.

lmao

"no"

What if he is actually a zombie and truly doesn't get consciousness?

Makes ya ponder

>all these "people" arguing there is no consciousness are literally just p-zombies, simulacrums whose design does not even allow them to comprehend what consciousness really is
>your own reality is an elaborate troll job and you are just arguing against figments of your imagination
>you are actually god and the creator of your own universe of subjective experience and for some reason you decided it would be a good idea to bait yourself with this retarded world full of figment humans

gnosis ahooooooooooy!

I think David Chalmers has literally called him a p-zombie a few times

Isn't Dennett saying, in your analogy, that television programs just are the television, as a kind of language effect? So it's like a type of nominalism or something. Why should the reverse of that, still a matter of language, be any more valid?

I haven't read any of the authors, I'm just going by what you said.

...

Ees ze uniwerss an simulatron? Melon Mesk n statisstik mabey?

>Saying consciousness is an illusion
"Illusion" is a really badly misleading term that people bring up in this context.
It *could* be an OK word to use for what Dennett means (and he might use the word himself sometimes even, I don't remember if he does or not) except that it has connotations of qualia e.g. seeing an illusion (mirage) of water in the desert. It doesn't need to be defined this way, but so many people do define it this way that I think it's better not to use that word at all for this topic.
>How the fuck can you watch a television program if there is no television?
The argument is really that there's no real "watching" of any sort to begin with. Instead of qualia like the "experience" of "redness" when you look at an apple, Dennett and those who hold the same sort of position argue that no such "experience" happens: you merely believe things about, and behave around, the fictional (yet useful) reference point of sensory stimuli, as though they were a non-physical phenomenon that "appears" to you.
Suppose you're a genius and built an extremely high quality robot that behaves like we do (e.g. commenting on how nice a sunset "looks" to it). You built it from scratch and never at any point gave it any sort of capacity for "qualia" / "experience." As its creator you also know how to account in detail for what makes all its different behavioral processes work, purely in terms of physical cause and effect relationships.
This robot captures the basic idea of how eliminative materialists view the way we work. It wouldn't be that the robot was "experiencing" some "illusion" of visuals or heard noises. It "experiences" nothing, and its behavior in response to light or sound stimuli would be explicable in terms of physical interactions, including its behavior of reporting what it's "experiencing," or making private note of what it has "experienced" so it has the capacity for future secondary responses made in reference to its initial responses.
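If it helps, here's a crude toy sketch of that robot in code (everything here is made up purely for illustration, it's obviously not a serious robot): it classifies a stimulus by a physical property, emits a verbal report, and keeps a private note it can refer back to later. Notice there's no "experience" anywhere in it, just state and outputs.

# Toy sketch of the eliminativist picture: stimulus in, behavior out,
# plus a stored record the system can later refer back to.
# Every name and number here is invented for the example.

class ReportingRobot:
    def __init__(self):
        self.log = []  # "private notes" it can consult later

    def sense(self, wavelength_nm):
        # Classify the stimulus purely by a physical property.
        if 620 <= wavelength_nm <= 750:
            label = "red"
        elif 450 <= wavelength_nm < 495:
            label = "blue"
        else:
            label = "some other color"
        self.log.append(label)               # note it for future reference
        return f"that looks {label} to me"   # the verbal report

    def recall(self):
        # Secondary response made in reference to earlier responses.
        return f"earlier I saw something {self.log[-1]}" if self.log else "nothing yet"

bot = ReportingRobot()
print(bot.sense(700))   # "that looks red to me"
print(bot.recall())     # "earlier I saw something red"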

oouou intresding taught

That robot may or may not be a p-zombie; there's literally zero way to know, in the same way that a rock might actually be conscious but you can never know that either. We have no way to measure subjective experience. Even if you can make a complex machine that seems to act like a human, at what point does it transform from being like a typewriter to being like a person? "It acts like a human" is a shitty definition, because simulations can be very convincing.

>a rock might actually be conscious
I think the reason most people don't believe rocks are "conscious" is because what people report as their own "subjective experience" is pretty thoroughly / consistently altered in response to physical things done to the brain (anesthesia, psychedelic compounds, brain tumors, seizures, stroke, etc.).
That, plus the brain is very complicated, and by copying some of the basic notions of how networks of neurons operate we've gotten all sorts of useful programming applications like image recognition or self-driving cars, none of which you would expect if the brain weren't the organ for "consciousness."
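(The "basic notion" being copied is little more than weighted inputs pushed through a nonlinearity, something like this toy neuron; the numbers are invented and this is nowhere near a real vision system:)

# A single artificial "neuron": weighted sum of inputs squashed by a nonlinearity.
# This is the borrowed idea behind image recognition nets, in its most stripped-down form.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid squashing

print(neuron([0.2, 0.7, 0.1], [1.5, -0.8, 2.0], bias=0.1))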

Also:
>simulations can be very convincing
If you take Dennett's / the eliminative materialist's position, matching behavior wouldn't be a mere "simulation" of how we work, it would be an equally valid reproduction of it.

About the rocks though, my point was you don't know whether the physical processes happening to a rock somehow produce qualia. For example, let's say that over 1000 years, as a rock is eroded by water, each instance of waves wearing away small pieces of the rock produces something analogous to a human synapse firing. Perhaps this process is so slow that over the entire millennium the only subjective experience the rock has is something a human might recognize as unease, such as "I feel bad about this happening," in the sort of alien way that a rat feels bad about something without language.

So the only real reason it is assumed that only humans/animals have consciousness (versus a cloud, or a solar system, or a bag of feathers) is that it is the most intuitive version of consciousness to us. We feel a thing, then we see others acting like us and project our experience onto them. Mammals and such behave close enough. But this says nothing about different systems; whether a cloud's physical processes produce qualia is unknowable to us. We only see consciousness in things like us because it is intuitive, but it could very well be possible that extremely complex things like weather patterns somehow also produce subjective experience.

So with that in mind, it seems like consciousness is basically unknowable outside of one's self. If I can't know whether a cloud has subjective experience, how can I know that another person does? Just because they appear closer to me proves nothing; they are still a different thing.

i think a kind of 'panpsychism' is the only tenable position available to the physicalist-monist
i'd definitely advocate for 'levels' of subjective experience, and a broadening of the concept of subjectivity
whitehead seems to have been after something like this, tho he's firmly rooted in the platonic tradition
anti-reductive in the extreme
i don't understand 3/4s of what he's on about
that's my evasive way of recommending him to you if this is where your thinking is currently

Indeed, consciousness may be a "Universal standard" akin to mass, energy and the like; something that only becomes operative when it has machinery to work through.

Will check him out, thanks.

I'M FUCKING TRYING!

So basically, when you make a robot that can act like a human because it is based on the same physical makeup as a human, that robot suddenly "poofs" into subjective experience and is no longer a simulation, because it can't be a simulation if it acts like a human? But why? Where is the proof?

Where were you when Searle almost shitposted Dennett to death?
www.nybooks.com/articles/archives/1995/dec/21/the-mystery-of-consciousness-an-exchange/

why wouldn't it have subjective experience?

Rhodopsin, stupid. Plain and simple. Physical on-the-eye imagery -> mental image (both are very real). Then what? There's no next step in the chain? (I've read much of Dennett & like him, but come on now, why are these parts of biology, real biology, excluded?)

The Chinese Room is a pretty retarded argument desu

If you don't understand it, maybe.

No, it's just retarded.

Then go ahead and explain it, because I bet you probably don't understand it.

The Chinese Room isn't an argument. It isn't even a thought experiment since there's no means within the scenario to actually derive a conclusion or contradiction.

Different user. Seems like an irrelevant outdated argument. A proper intelligent machine would not be modeled using explicit instructions. See Stockfish vs. AlphaZero.
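Roughly the contrast I mean, as a toy (the names and numbers are invented, and neither engine actually looks anything like this): one evaluator follows a rule a human typed in, the other just applies parameters that are supposed to have come out of training.

# Toy contrast between "explicit instructions" and "learned parameters".
# Purely illustrative; real chess engines are nothing this simple.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # hand-written rule

def handwritten_eval(board):
    # A human explicitly coded this heuristic: sum of material.
    return sum(PIECE_VALUES.get(piece, 0) for piece in board)

LEARNED_WEIGHTS = {"P": 0.9, "N": 3.2, "B": 3.3, "R": 5.1, "Q": 9.4}  # pretend these came from self-play

def learned_eval(board):
    # Same interface, but the numbers were fit by training rather than typed in as rules.
    return sum(LEARNED_WEIGHTS.get(piece, 0.0) for piece in board)

board = ["P", "P", "N", "R"]
print(handwritten_eval(board), learned_eval(board))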

Lol no

>A proper intelligent machine would not be modeled using explicit instruction
This is the typical argument you get from IT guys.
The programming is irrelevant.

>one of us is dead wrong, and the stakes are high
When you can't disengage from your own self-serving sensationalism for an entire paragraph, I become pretty dubious towards anything you've got to say about any particular thing.

It destroys the thought experiment because there would be no way for Searle to insert himself in the room and simply "process them according to the program's instructions", because there is literally no such set of instructions.

You can't derive *true meaning* from anything material - it only exists in the phenomenal realm. The idea that mechanistic operations within the brain can generate consciousness isn't any more reasonable than the idea that a computer can.
Ultimately, the Chinese Room is just an assertion that claims to have solved the hard problem of consciousness without actually doing so.

What if my coke is conscious and I'm slowly killing it by taking a sip?

>claims to have solved the hard problem of consciousness
No, it doesn't, because it's not addressing the hard problem of consciousness. You literally don't even know what the argument is about. It's a criticism of functionalism.

I understand what it's about, but if you're gonna claim to know for certain that X produces consciousness and Y doesn't, you are in fact claiming to have solved the hard problem.
Also interesting that you ignored my main point.

why should it? if you make the robot, you can't prove that it does. what people like dennett are asserting is that if you were to build a robot that works similarly to how human brains work, then it would have to be conscious, because a machine built like a human brain must be conscious. but they don't explain why. what I want to know is the exact moment when the thing becomes conscious. so when you're making that robot that is "the same" as a human, I want to know the timestamp, the exact moment when that robot goes from a pile of parts to "I have feelings and subjective experience now, my consciousness has emerged"

dennett says nothing about this and just goes "lol whatever nerd, consciousness is an illusion" with a plate of word-game spaghetti, and calls everybody who disagrees an absurd solipsist. I think the reason people like him do this is that admitting the hard problem exists seems like a waste of time, since it is unfalsifiable and therefore not actionable. but it is actionable: knowing that we cannot determine whether or not a thing is conscious creates an ethical dilemma. there should be a discussion about whether there even *should* be a conscious robot (which can devolve into anti-natalist shitposting) and, if it is made, how it should be treated.
I'm leaning toward the idea that there should never be an AI that *can* simulate suffering, i.e. we should avoid making a thing that seems to suffer, just to be safe. maybe it's a p-zombie, maybe it isn't, you can't know. cue science fiction stories about robot slave rebellions. I think it might be better to just avoid the whole thing by having regulations about what robots get made, but then we run into the economic ordeal that if we don't make it, somebody else will, and now we're at a disadvantage. now I am the crazy dualist luddite hippie just getting in the way of progress, according to le technocratic STEMlord robot masters and those on the good side of the cash flow: normies who can't be bothered with ethics when comfort is on the line.

the whole thing is just a shitshow and will end in suffering whether robots are p-zombies or not
t. pessimistic doomsayer

>eh hypodenuz es da skaroot ada sum ada skares eh deh udda too side

Lmao, Mathfags on life support

Why are you responding to my post as if I suggested the robot would gain subjective experience, when I clearly argued that neither the robot nor any of us would have / do have subjective experience under Dennett's model and/or eliminative materialism in general?

Physiology is real, and also mostly irrelevant to the "qualia" debate beyond it being helpful to occasionally write "behavior and physiology" instead of just "behavior" so people like you don't nitpick about it. Of course blood pressure rising for example when you're engaging in pain behavior is a real thing, but again, that doesn't have much to do with the "qualia" topic.

>old people on the internet

>the room understands chinese
>the room also understands english
so I guess if you pass a note, in english, asking the room what it's saying in chinese, it would be able to answer, no?

the same thing goes for the theory of evolution, yet that's widely accepted :^)

>the room also understands english
Where are you getting that idea from? The room understands Chinese, nobody ever specified that the room can do anything with English. Are you jumping to that conclusion just because the guy inside the room might be able to understand English?

You can't pinpoint the exact moment biological machines become conscious either (is a cat conscious? what about a mouse, or an insect, or a worm?) But we know at some point it does, so the same most likely applies to mechanical machines.

>I think the reason people like him to do this is because to admit the hard problem exists must be a waste of time because it is unfalsifiable and therefore not actionable.
No, I'm pretty sure the real reason people agree with eliminative materialism is that the notion that "qualia" literally exist doesn't stand up to any sort of scrutiny beyond some variety of "I KNOW I EXPERIENCE THINGS, IT'S DEFINITELY REAL AND YOU CAN'T QUESTION IT!"
And suppose "qualia" are real and current physics doesn't account for them. Well then, what the fuck? How is some alleged phenomenon A) real, B) non-physical, and C) very strongly associated with physical brain activity, to the extent that you can get the same reported alteration in consciousness on demand with the right chemicals introduced to the brain?
If it's just non-physical and doesn't interact with the brain at all, then why this tight connection between what happens to the brain and what the subject reports about his "consciousness?" And if they *do* interact, where is the physically measurable impact that would need to be there by definition for anything to be called an "interaction" with something physical in the first place?
And lastly, there is no shortage of experiments showing dramatic disparities between what we believe we know, perceive, or remember and literal reality. So it seems kind of silly to be so trusting of the literal reality of our personal beliefs / intuitions that we conclude physics is the problem for missing what somehow *must* be there. I personally think it'd make a lot more sense to give the benefit of the doubt to the ideas / models built up from the reports of many different parties cross-checked against each other, rather than to a notion of something appearing to you that you and you alone are even able to begin claiming exists.
You could argue there are billions of parties corroborating this "qualia" phenomenon, but that's not really true. There are billions of parties each making a report about his or her own alleged "qualia," and what you really have that's capable of being corroborated by multiple parties is the existence of the report, not the existence of the thing the report alleges.

the central thesis of the anti-chinese room nerds is that if the room can communicate in chinese, it understands chinese.

the room can also communicate in english, since if you pass the guy a note in english, he can respond properly in english

so the room understands both, by your metric

There's no difference in the argument between "push these symbols around that represent symbols in language" and "push these symbols around that represent symbols that represent symbols in language". The first collapses into the second.
Modern machine learning is the first. It's still symbol pushing, but the information might be encoded in weights instead.
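Toy version of that point (a made-up mini-vocabulary, nothing like a real model): whether the mapping lives in an explicit rulebook or in a table of weights, from the outside it's the same symbols-in, symbols-out operation.

# Both "responders" below map input symbols to output symbols.
# One stores the mapping as an explicit rulebook, the other as numeric weights.
# Toy example only; a real language model is vastly more complicated than this.

RULEBOOK = {"ni hao": "ni hao ma?", "xie xie": "bu ke qi"}

def rulebook_reply(prompt):
    # Explicit instructions: look the answer up directly.
    return RULEBOOK.get(prompt, "ting bu dong")

VOCAB = ["ni hao", "xie xie", "ni hao ma?", "bu ke qi", "ting bu dong"]
# Pretend these weights came out of training; here they just re-encode the same table.
WEIGHTS = [
    [0.0, 0.0, 1.0, 0.0, 0.0],  # "ni hao"  -> "ni hao ma?"
    [0.0, 0.0, 0.0, 1.0, 0.0],  # "xie xie" -> "bu ke qi"
    [0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]

def weights_reply(prompt):
    # "Implicit" instructions: score every candidate and pick the biggest.
    if prompt not in VOCAB:
        return "ting bu dong"
    scores = WEIGHTS[VOCAB.index(prompt)]
    return VOCAB[scores.index(max(scores))]

print(rulebook_reply("ni hao"), "|", weights_reply("ni hao"))  # same symbols out either way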

>if the room can communicate in chinese, it understands chinese
Agreed.
>the room can also communicate in english, since if you pass the guy a note in english, he can respond properly in english
Not sure what you're trying to prove by even speculating about English with that thought experiment.
Panning out to the bigger picture here, the thought experiment is shit because it preys on how you rightfully feel like a process is different if you warp its proportions badly enough. And it's hard to think of a more proportion-warping scenario than the Chinese Room, when you compare the rate of information processing a normal human brain does to handle language against the realistically impossible eternity of time and galaxy-spanning space required to make an equivalent process happen purely through pre-written books that handle every possible combination of characters, meanings, and appropriate responses in a directly deterministic fashion.

>very strongly associated with physical brain activity to the extent where you can get the same reported alteration in consciousness on demand with the right chemicals introduced to the brain

This is the problem. That those specific chemicals match conscious experience in humans is irrelevant, because you don't measure consciousness directly, you measure the chemicals. My argument is that the chemicals are not the consciousness, because you only suspect those chemicals match consciousness on the basis of intuition: you give a man alcohol, he acts drunk, you see him behave drunkenly and decide he is conscious based on the intuition that he thinks, that he acts like you would, and that you think that you think. Consciousness, before it is ever measured in brain chemicals, is already presupposed to be a state in which the conscious being behaves in certain ways: displays intelligence the way humans/animals do, uses language the way humans/animals do, etc.

You cannot prove that a rock on the street does not also have experiences. Those experiences are not intuitive because rocks do not behave intuitively; they express no intelligence or choice that we can tell. But you still can't rule out the possibility that a rock perceives its surroundings via some physical apparatus. You don't know whether a limestone quarry being formed a certain way produces vision or not. You only know eyes do because it is intuitive, and you decided to study eyes specifically because they're so obviously relevant to your own experience.

>That those specific chemicals match conscious experience in humans
Match *reported consciousness*. That's an important distinction.
>Consciousness, before it is measured in brain chemicals, is already presupposed to be a state in which the conscious being behaves in certain ways
Except you can literally account for behavior in terms of physical processes, e.g. a variety of behaviors C. elegans exhibits can be explained all the way down to the level of its neural connections.

>Match *reported consciousness*. That's an important distinction.
Why does it have to be reported to exist?

>Except you can literally account for behavior in terms of physical processes, e.g. a variety of behaviors C. elegans exhibits can be explained all the way down to the level of its neural connections.
You can account for behaviors in organisms you can understand like the ones that have neurons. You can't account for them in rocks, but that doesn't mean the rocks don't have them. Maybe you just don't understand rock behavior. Maybe rock behavior just isn't intuitive but they are still having thoughts and qualia in ways that would be comparable to human subjective experience, if we could measure them.

If someone claims they have a million dollars to give you, do you know the million dollars exists, or do you just know their report about a million dollars exists?
If someone claims they have an "experience," do you know the "experience" exists, or do you just know their report about the "experience" exists?
A report is something you, me, and fifty other people could witness and corroborate the existence of. The "experience" that report is alleging is not.

okay, so you are moving from a point that your interlocutor here has already conceded--that the subjectivity of anything that is not you is absolutely opaque to you--to the invalid conclusion that, because that subjectivity is not accessible to you, it isn't 'real'.
our inability to tackle subjectivity in a comprehensive manner is not a problem for subjectivity, but for any attempt to make it comprehensive, or 'objective'. one of the short cuts that philosophers and neuroscientists and randos on the internet use to 'solve' this 'problem' is to try and eliminate subjectivity from their ontology, which is kind of like dropping the second law of thermodynamics from classical mechanics because the implications are fucking spooky and depressing. it's just a way of avoiding the question.

Stop being a zombie
youtu.be/jyS4VFh3xOU

however, this doesn't mean that subjectivity or 'experience' is some special 'stuff' with incomprehensible properties. i would argue that it doesn't have properties at all, because it isn't a thing. it's a mode of being, and i don't think it's unique to 'sentient' creatures, but totally, universally pervasive.