Have any of you read his work?

His argument that AI alignment is the only problem that really matters is pretty convincing imho tbh.

...

How is he a pseud?

Appears to be an analytic philosopher with a demi-Fregean understanding of what "reasoning" ontologically is, including a substance ontology of "logic" that recapitulates the pre-1960s mistake of fuzzily taking logic to be variously transcendental and general/philosophical instead of normative.

Presumably he would have learned that this is dumb and derivative if he had stuck with it long enough to get yelled at by at least a Quinean, if not a continental philosopher with the patience to really explain why he's a moron. But he seems to have gotten distracted like a small baby by the equally juvenile and equally atavistic daydream of lashing his ill-defined general conception of logic to computers and automation, so that the logic will be able to "make decisions" really quickly instead of slowly like we supposedly do now. Again, the ontological commitments underlying his definitions of "decision" and "decision-making" are left fuzzy, retarded, and naive, almost pre-Fregean.

Like most idiot techno-analytic computer fetishist retards, he has not only confused the automation of tautological "decision" chains for "thinking," but has compounded the confusion by further mistaking "CRANK THAT SHIT UP TO 11, MAKE IT GO REAL FAST AND ADD STACKS AND STACKS OF META-RECURSION, SO THAT IT ALMOST LOOKS LIKE THE SIMULATION IS RUNNING ITSELF AND GETTING SMARTER!" for specifically human-like thinking, sometimes called general intelligence. Why? Because he can feed what appear to be general intelligence-requiring problems like
>Should I water my plants in this situation?
into a machine that already has logic switches for WATER and PLANTS and modal statements, plus a tautologically pre-defined set of criteria for determining the relevant factors of a "situation," and run them through a Rube Goldberg machine that spits out something that "passes a Turing test," a meaningless benchmark concocted in the era of GOFAI.
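
To make that concrete, here's a minimal sketch in Python of the kind of pre-wired "decision" chain being described. Purely illustrative: the rules, the situation keys, and the decide() helper are my own assumptions, not anyone's actual system.

# Toy GOFAI-style rule chain. The machine's entire "understanding" of
# WATER and PLANTS is this hand-written table of conditions and answers.
RULES = [
    (lambda s: s["soil"] == "dry" and not s["rained_today"], "Yes, water the plants."),
    (lambda s: s["soil"] == "dry" and s["rained_today"], "No, the rain took care of it."),
    (lambda s: s["soil"] == "wet", "No, the soil is already wet."),
]

def decide(situation):
    # "Decision-making": scan the pre-defined rules until one matches.
    for condition, answer in RULES:
        if condition(situation):
            return answer
    return "I don't know."  # anything outside the pre-defined switches

print(decide({"soil": "dry", "rained_today": False}))  # Yes, water the plants.

Every answer it can ever give was written into the table by hand; running it faster or stacking meta-rules on top changes the speed and the size of the table, not the kind of thing it is doing.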

I wish him well on his quest to create the most complex version of a 1980s text adventure ever conceived. I'm sure adding algorithmic recursion to Zork's programming until it fuzzily passes a meaningless Turing test in certain contexts constitutes general intelligence. I'm sure reifying a substance ontology by gluing a Rube Goldberg supercomputer to it magically constitutes "thought." Deep Blue is totally thinking. We just have to teach Deep Blue how to """"think"""" about more """"situations"""" and then it will be a real general intelligence!

the only real philosopher is nick mullen and the only pressing problem is buying shit

Can you re-release this post in brainlet speech?

Different guy here. What the other anon wrote is basically one huge polemic. There's not much content to it apart from "logical operations ain't thinking, dummy."

Be honest with us, anon. Are you a communist?

>born on september 11th
I can't read an author with a birthday like that. The feng shui is all wrong

Not him, but he is basically saying that OP's picture is mistaking basic programming loops designed to spit out answers to inputted questions for actual "intelligence." He's implying that the guy in OP's pic doesn't understand that there will never be some super-intelligent AI that can think like a human but times a million, and that the guy is relying on the tech-geek's equivalent of mystical mumbo-jumbo: forgoing any critical thinking about the subject and just going "lol well it's the future so of course there will be super-intelligent AI" instead of facing the problems with that analysis.

Ergo the guy's alleged thesis (per OP) that AI alignment is the only pertinent issue is faulty, because AI will never be anything more than a really complex calculator: designed to answer questions, unable to think for itself or to learn in any sense beyond absorbing more information and getting better at answering questions.

I think he makes a good point. I don't entirely rule out the possibility of genuinely intelligent AI, but I think it would not be hard to create 100% reliable safeguards that prevent any problem from arising by default, and if that proved too hard, people would just stick with "just below superintelligent" to sidestep the issue. So I don't think it's actually much of a problem.