Really fires up the neurons

University of California, Berkeley, philosophy professor John Searle writes:

[Computers] have, literally […], no intelligence, no motivation, no autonomy, and no agency. We design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behavior. […] [T]he machinery has no beliefs, desires, [or] motivations.[27]

Searle basically makes a living saying obvious things that it's fashionable for philosophers to deny.

Not the most glamorous or interesting career, but there you go.

So in other words, everyone getting on the AI train is a fool, and computers will never be sapient?

searle-land debate WHEN?

I don't know whether he makes any long-term predictions like that, but his position as I understand it is basically that thought isn't just algorithmic or functionally definable but has a biological component to it; it's something certain biological creatures do, and something computers (right now) don't.

This, I think, is obvious to everyone but people who have ideological investments in pretending our technology is capable of things it's not. It's a kind of deep confusion to believe that computers as they are right now literally 'think.'

Yes.

Consciousness as it is contained within an individual organism has many occult layers to it that traditional science simply dismisses offhand as irrelevant. They can have fun with their computer boxes and monitors all they want, but all they're going to invent is a somewhat believable chatbot.

Yes, true
Don't tell me you fucks will be the sentimental ones in the upcoming AI rights trials we'll have to face.

Ray Kurzweil and Michio Kaku can suck my future dick

well in practice it probably doesn't really matter if they are or not

in fact it's far more frightening that humanity could be destroyed by pure machine logic

I'd much rather be destroyed by something that at least gives the illusion of acting human

>I'd much rather be destroyed by something that at least gives the illusion of acting human
What's the difference?

>literally
this guy is a fucking pseud

I always took it to be the same relationship animals have to human beings. We'll keep pure routine robotics and true AI robots (if we ever wish to create them) in two different categories so that we don't end up with an AM-style mastermind computer.

At least if the human race gets eradicated by human-like AI, they will be fitting successors.
Essentially just like humans, but superior in every way.

Getting eradicated by a cold unfeeling machine is just too comical

niggas whom'st shite thyself

There will be robots programmed to think like Heidegger.

He's right and all the futurist wankers are retarded.

We've just started exploring automation and weak AI; nobody can even imagine how to go about making strong AI.

He's right you know.

All "motivation" is originated in Feeding and Fucking,and in that order. Making an AI require some abstract substance to continue its existance periodically will plant the seeds of desire within it. Beyond that is scary territory:it may become greedy and will devour what it can as fast as it can to insure its own life,and possibly learn to deny this substance to other AIs to eliminate competition. It may decide megalomania is its only logical attitude. And what happens when it learns it doesn't need anything after all? And it learns how to be Angry?

There's a book in that. I shudder at the ending.

t. brainlet

Can you define consciousness?
To me, these sorts of statements are identical to scholars saying that the sun orbited the earth because it was the obvious explanation.

Can you? The scientists who think replicating human thought processes and consciousness is possible are 100% naturalistic materialists.

No, I cannot. I think it is just as ignorant to say that AI is not possible as it is to say that AI is possible.
But we will definitely learn a great deal about consciousness by attempting to create it in machines, so either way it is a worthwhile task.