Have any of you read his work?

His argument that nothing except for AI alignment is a relevant problem is pretty convincing imho desu.

...

How is he a pseud?

Appears to be an analytic philosopher with a demi-Fregean understanding of what "reasoning" ontologically is, including a substance ontology of "logic" that recapitulates the pre-1960s mistake of fuzzily taking logic to be variously transcendental and general/philosophical instead of normative.

Presumably he would have learned that this is dumb and derivative if he had stuck with it long enough to get yelled at by at least a Quinean, if not a continental philosopher who has the patience to really explain why he's a moron. But he seems to have gotten distracted like a small baby by the just as juvenile and just as atavistic daydream of lashing his ill-defined general conception of logic to computers and automation, so that the logic will be able to "make decisions" really quickly instead of slowly like we supposedly do now, where again, the ontological commitments underlying his definition of "decisions" and "decision-making" are left fuzzy and retarded and naive, almost pre-Fregean.

Like most idiot techno-analytic computer fetishist retards, he has not only confused the automation of tautological "decision" chains for "thinking," but has compounded his confusion even more, by further confusing "CRANK THAT SHIT UP TO 11, MAKE IT GO REAL FAST AND ADD STACKS AND STACKS OF META-RECURSION, SO THAT IT ALMOST LOOKS LIKE THE SIMULATION IS RUNNING ITSELF AND GETTING SMARTER!" for specifically human-like thinking, sometimes called general intelligence, because he can feed what appear to be general intelligence-requiring problems like
>Should I water my plants in this situation?
into a machine that already has logic switches for WATER and PLANTS and modal statements and a set of criteria for determining the relevant factors of a "situation" all tautologically pre-defined, and can feed them through a Rube Goldberg machine that spits out something that "passes a Turing test," a meaningless statement concocted in the era of GOFAI.
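To be concrete, the kind of "decision-making" I mean is nothing more than a pre-defined rule table. Here's a toy sketch (purely illustrative; every name and rule is made up by me, not taken from anyone's actual system):

```python
# Hypothetical sketch of a "general intelligence" that is really just
# pre-defined logic switches and an exhaustively enumerated rule table.

RULES = {
    # (soil_is_dry, rained_recently) -> canned "decision"
    (True, False): "Yes, water the plants.",
    (True, True): "No, the rain already watered them.",
    (False, False): "No, the soil is still moist.",
    (False, True): "No, the soil is still moist.",
}

def should_i_water(soil_is_dry: bool, rained_recently: bool) -> str:
    # Every "situation" the machine can "reason" about was
    # tautologically enumerated by the programmer in advance.
    return RULES[(soil_is_dry, rained_recently)]

print(should_i_water(soil_is_dry=True, rained_recently=False))
# -> Yes, water the plants.
```

Crank the table up to a few billion entries, add some meta-recursion, and you have the whole research program.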

I wish him well on his quest to create the most complex version of a 1980s text adventure ever conceived. I'm sure adding algorithmic recursion to Zork's programming until it fuzzily passes a meaningless nonsensical Turing test in certain contexts constitutes general intelligence. I'm sure reifying a substance ontology by gluing a Rube Goldberg supercomputer to it magically constitutes "thought." Deep Blue is totally thinking. We just have to teach Deep Blue how to """"think"""""" about more """"situations"""""" and then it will be a real general intelligence!

the only real philosopher is nick mullen and the only pressing problem is buying shit

Can you re-release this post in brainlet speech?

different guy here. what the other user wrote is basically one huge polemic. Not really much content to it apart from "logical operations ain't thinking dummie".

Be honest with us, user. Are you a communist?

>born on september 11th
I can't read an author with a birthday like that. The feng shui is all wrong

Not him, but he's basically saying that OP's picture is mistaking basic programming loops, designed to spit out answers to inputted questions, for actual "intelligence." He's implying that the guy in OP's pic doesn't understand that there will never actually be some super-intelligent AI that can think like a human but times a million, and that the guy is relying on the tech geek's equivalent of mystical mumbo-jumbo: foregoing any critical thinking about the subject and instead just going "lol well it's the future so of course there will be super-intelligent AI" rather than facing the problems with that analysis.

Ergo the guy's alleged thesis (according to OP) that AI alignment is the only pertinent issue is actually faulty, because AI won't ever be anything other than a really complex calculator that is designed to answer questions but can't actually think for itself or learn, other than in the sense of absorbing more general information and getting better at answering questions.

I think he makes a good point. I don't entirely rule out the possibility of genuinely intelligent AI, but I think it will not be hard to create 100% reliable safeguards that by default prevent any problem from arising, and if that were too hard, people would just stick with "just below superintelligent" to avoid the issue. So I think it's not actually that much of a problem.

Essentially: Turing machines have nothing to do with human consciousness, analytic philosophy is hilarious schizophrenia, and people who think computers are like brains and that our thinking is like binary are stupid. That's all you need to take from it; the whole project of Theory of Mind and Mathematical Logic is bankrupt.

The brain is just a computer, though. It's a biological thing and there's no reason we can't make a better brain.

Appreciated.

>The brain is just a computer, though. It's a biological thing and there's no reason we can't make a better brain.

The problem is you don't know that, though. You could have a computer perfectly simulate a human brain down to the last atom and it still might not actually be thinking.

For all we know there may be something crucial to thought about having living brain cells in a living organism that no computerized simulation will ever accurately capture.

see for example the chinese room problem

iep.utm.edu/chineser/
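The thought experiment boils down to this: a system can produce correct outputs by rote symbol manipulation while understanding nothing. A toy sketch (purely illustrative; the phrasebook is mine, not Searle's):

```python
# Illustrative Chinese room: the "room" maps input symbols to output
# symbols by mechanical lookup, with zero comprehension of either side.

PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会思考吗？": "当然会。",     # "Can you think?" -> "Of course."
}

def chinese_room(message: str) -> str:
    # The operator inside the room follows the rule book mechanically;
    # a fluent-looking reply proves nothing about understanding.
    return PHRASEBOOK.get(message, "请再说一遍。")  # "Please say it again."

print(chinese_room("你好吗？"))
# -> 我很好，谢谢。
```

Whether scaling the rule book up ever amounts to understanding is exactly what's in dispute.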

>Reddit spacing
typical

wow you really got me BTFO wtf I love AI now

I really like your post and agree 100%. Sometimes I lack the words to describe what is so shit about "techno-analytic computer fetishist retards", but you've nailed it there.

He's a cuckold, according to his Wikipedia page.

lmao, good post

you will be tortured by roko's basilisk though so watch out

Stop shilling EA here. Attention from ironybraindead underachievers on Veeky Forums is bad.

>roko's basilisk
Is it true that this drove some people into deep depression?

supposedly some people on yud's forums got genuinely freaked out by it

>Logic don't work like he think it do.

>implying a Veeky Forums migration wouldn't raise IQ and drop autism levels.

conscientiousness and trustworthiness are the important attributes

>depression and anxiety masked by a philosophically retarded concept

yud is the biggest pseud in history if he's being unironic. if he's being ironic, he's the most redpilled philosopher of all time

It's probably because it reinstills their fear of death. Pretty much everyone on there is an atheist who mollified their natural fears by convincing themselves that they can live on forever, either actually or in computers, once the singularity happens. The basilisk says that even if you do manage to live forever, that doesn't guarantee you'll have a good time. It's stupid as shit, but that's what happens when you rely on crutches instead of actually confronting the absurd.

pls elaborate

if you don't believe AI is real (well, i mean, that it can be realistically created), is there some special property of matter that makes biological intelligence possible but synthetic intelligence impossible?

Lovecraftian madness even.

Isn't it basically the philosophical zombie problem? You and I are communicating right now, but what if only one of us is actually self-aware and the other is just programmed to appear that way? I think it's another thing most people ignore because it's unprovable and pointless.

what is e.a.?

>he has not only confused the automation of tautological "decision" chains for "thinking,"
But has he really though? He may be a dipshit, but I think you're bullshitting. Good post nonetheless.

Also I think I recognize the poster behind this persona.

>confronting the absurd.
I still don't know what this really means.

Kierkegaard and Camus are pretty thorough about it. I'd recommend just picking up one of the quintessential existentialism books and reading it, rather than relying on Veeky Forums posts to summarize it succinctly.

In the (popular) sense as Camus meant it, the absurd is the tension between the inherent meaninglessness (or perpetually perceived lack of meaning) of life and the desire for Man to seek and create meaning.

Are there any completely reliable methods of weight loss besides mega-liposuction and adipotide?

By "completely reliable" I mean that their theoretical and pragmatic efficacy is not subject to revocation by quirks of metabolic disprivilege. So "starve yourself" doesn't work because its pragmatic efficacy relies on your fat cells being willing to relinquish lipids before your body cannibalizes muscle tissue and otherwise starts doing serious damage to itself, which your fat cells can just refuse to do if you're metabolically disprivileged.

Mega-liposuction and adipotide don't care if your fat cells are malfunctioning and refusing to release lipids. They just physically kill or remove fat cells. Anything else like that, or which operates at a similar level of disregard for metabolic disprivilege?

Interventions that operate orthogonally to malfunctioning fat cells or other metabolic disprivilege only, please. I will delete comments suggesting diet or exercise.

>I wish him well on his quest to create the most complex version of a 1980s text adventure ever conceived.
Gotta admit I got a good chuckle out of that, I like the way you write user.

Jesus Christ...

"Metabolically disprivileged" is an amazing phrase. It makes me laugh every time.

>I will delete comments suggesting diet or exercise.