
What's your favourite example of a philosopher or cognitive scientist BTFO'ing the computational theory of mind and amodal representations?
I've read Searle's Chinese Room and Harnad's symbol grounding papers, but I'm looking for more inspiration.


>He still thinks The Chinese Room is real philosophy
link.springer.com/content/pdf/10.1023/A:1008255830248.pdf

That paper has 4 citations. Are you Larry Hauser? Is that your paper?

Thanks for the link. I found it here
cogprints.org/240/
Just reading the intro, where dude sounds mega butthurt, like this whole paper is a way to get revenge for the time he walked in on Searle gangbanging his mom with Rumelhart and McClelland.

tl;dr??

pretty much anything written by Peter Hacker

Reading through the paper now, this really takes the cake:
>Computers, even lowly pocket calculators, really have mental properties - calculating that 7+5 is 12, detecting keypresses, recognizing commands, trying to initialize their printers - answering to the mental predications their intelligent seeming deeds inspire us to make of them.

I can't make this shit up.
A paper only its mother could love. He might actually be Larry Hauser...

physicalists btfo!

>he still thinks that perfect imitation of consciousness is qualitatively different from real consciousness
Brainlets, they never learn.
>DUDE WHAT ABOUT MY QUALIA
How many times does this ancient meme need to get BTFO before you see the light?

*starts a forest fire by simulating one on a computer*
pssh, nothing personal....kid

wow an article in an obscure book by a >literally who

you really showed him with that google search user

>John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers (at least) equal to those of brains." On a more carefully crafted understanding – understood just to target metaphysical identification of thought with computation ("Functionalism" or "Computationalism") and not AI proper – the argument is still unsound, though more interestingly so. It's unsound in ways difficult for high church – "someday my prince of an AI program will come" – believers in AI to acknowledge without undermining their high church beliefs. The ad hominem bite of Searle's argument against the high church persuasions of so many cognitive scientists, I suggest, largely explains the undeserved repute this really quite disreputable argument enjoys among them.

>Of course, the professor was of no help at all either. On the personal side, Larry Hauser is a man who looks like James Carville and sounds like Professor Frink from the Simpsons. He is constantly shoving his republican ideals down the throats of his students. Most professors that I've had, even if it's obvious that they are conservative or liberal, usually refrain from sharing their beliefs because they know that it is not the proper forum. Hauser, on the other hand, purposely constructs logic problems around his political beliefs, and I had a hard time answering them simply because I don't agree with anything he believes in. But for the grade, I forced myself to.

>*starts a forest fire by imagining one in his brain*
Please, don't engage in big boy discussions before your pubes have fully grown.

do you even know what computation is?

Do you even know what a valid argument is?

what is computation?

explain it to me

and then explain how that computation could result in consciousness

Again, you're welcome to make any argument pertaining to the topic at hand anytime. For your educational needs, refer to online resources and a nearby community college.

computation is syntactic manipulation

a simulation of a conscious mind would be...a simulation of a conscious mind

>do you even philosophy bro

That's a strawman. Chinese room isn't about perfect imitation of consciousness, it's about >identical output given identical input doesn't equal "it's alive mwahahahaha"

in fact, searle states that brains are just machines, implying that any other machine that behaves in the same way will also have mental states. but the types of machines being produced by current AI programmes are too ghetto (read: semantically impoverished) to have mental states

he doesn't think a computational process can be conscious, period, since all computers do is syntax in, syntax out according to fixed rules.
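the "syntax in, syntax out according to fixed rules" point can be made concrete with a toy sketch. this is a hypothetical rulebook made up for illustration, not anything from Searle's paper: the function matches input symbols against fixed rules and emits output symbols, and nothing in the loop understands Chinese.

```python
# Toy Chinese Room: fixed syntactic rules mapping input strings to output
# strings. The RULEBOOK entries are hypothetical examples, not a real
# translation table.
RULEBOOK = {
    "你好": "你好，很高兴认识你",      # rule: greeting squiggle -> greeting squoggle
    "你会思考吗": "当然会",            # rule: "can you think?" -> "of course"
}

def chinese_room(symbols: str) -> str:
    """Syntax in, syntax out: look up the input shape, return the listed output."""
    # The operator applies the rule by pattern matching alone; the default
    # reply ("please say that again") is just another fixed rule.
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好"))
```

the room's replies can look perfectly fluent to an outside observer while the whole process is nothing but table lookup, which is exactly the intuition the argument trades on.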

he does think consciousness, intentionality and so on are biological, and can be reproduced by a machine, but first we're going to have to learn how neurochemistry works with this kind of thing

try like, basic phenomenology

This.

Also,
OP is either Larry Hauser, a sycophant student, or his research assistant. From what I've read, Hauser seems to be enunciating a psychological knee jerk without any legitimate substance. He blurts a vaguely presidential "Wrong!" without any follow-up alternative or direct argument. That's not how science or philosophy works. When you say that someone is wrong, you have to show how, and then construct a new framework that better explains the input data, phenomena, or thought experiment. Hauser fails to do this on every level available.

you can’t impart your experience of love to me—and that’s not only because you’ve never felt it

I find it heart-warmingly ironic that one of the only philosophers of mind I've read who actually seems literate about machine learning and AI, argues what he does.

Any recommendations for contemporary phenomenologists that are aware of the current state of cognitive neuroscience?

phenomenology is dead user