After reading all great philosophers, where are we stuck now? Have we even solved anything since the Greeks?

yes, we have elevators and cars
we still don't know what is after death though

Philosophy hasn't solved anything since ever.

Well, we've at least created new problems

In terms of accessing new knowledge or making new discoveries, right now there are a lot of attempts from many corners to rehabilitate or return to metaphysics, but no one has really put forward a convincing program that allows one to break out of the transcendental framework.

Most of what philosophical thought does now, even if you cut away the lion's share that is just sociologists and postcolonial thinkers and similar people appropriating bits and pieces of established philosophy for their own theories, is critique, which is itself derivative of the transcendental framework.

Right now philosophy is maybe best characterised as knowing it wants to do something else, something new, but being unsure how to do it. And so every year, a thousand philosophers rehash the old paradigms and try to squeeze another drop from them, or budge another inch forward in terms of knowledge, and delude themselves that they made an original contribution. Another thousand books from within the trap on how we need to escape the trap.

>Copenhagen interpretation deeply influenced by Kant's metaphysics
>Theory of Evolution deeply influenced by Hegel's dialectics
>Boolean Algebra (without which the theory of Relativity isn't possible) impossible without formal logic

>Philosophy hasn't solved anything since ever.
Yeah, well, maybe you should actually read a book or two?

a masturbatory circle influences another masturbatory circle

Patterns are still unexplained. In particular, in pattern recognition / machine learning, no one can say exactly why artificial neural nets produce the AI revolution we see today. No one has yet comprehensively explained how a connection of neurons leads to intelligence, even though we can already produce it in a machine. Basically, neural networks are a black box: you feed in large amounts of data and get intelligent behavior out the other end.
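The "black box" point can be made concrete even at toy scale. Here's a hypothetical sketch in plain Python (nothing like the deep networks under discussion): a single perceptron learns the AND function purely from examples via the classic perceptron update rule. Even here, what the trained network "knows" is just three numbers, not an explanation:

```python
# A minimal perceptron learning AND from data alone (toy sketch).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0  # the network's entire "knowledge"
lr = 0.1

def predict(x1, x2):
    # Fire (output 1) if the weighted sum crosses the threshold.
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

for epoch in range(20):
    for (x1, x2), target in data:
        err = target - predict(x1, x2)
        # Perceptron rule: nudge weights toward the correct answer.
        w1 += lr * err * x1
        w2 += lr * err * x2
        b += lr * err

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
print(w1, w2, b)  # three opaque numbers, no "explanation" of AND
```

A perceptron is trivially interpretable; the black-box problem the post describes only bites once millions of such weights are stacked into deep networks.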

Neural networks are perfectly comprehensible in principle. They are mechanical in nature. The nature of consciousness is a black box, because consciousness is ontologically distinct from a machine.

Dangerous to conflate machine-produced outputs that are intelligible to our minds, and the nature of our minds.

Good luck in a world without quantum mechanics...

Yet machines are starting to do everything that human beings are able to do. And I don't mean just play chess or Go or drive a car. Machines are already creating music and reproducing symphonies in the style of distinct musicians. Not to mention art. More and more of human ingenuity is being chipped away. All this is due to pattern recognition, but the scientists can't explain what the machine is doing except learning from data. This is where philosophers are supposed to come in and offer some insights and possible solutions, but none of that is happening, because all the philosophers are stuck on consciousness, and the worst offenders are the ones that say there's no point because machines can't be conscious.

>I'm a fucking moron and hate philosophy for not putting food on my plate

Have you read Dreyfus' books, What Computers [Still] Can't Do?

The machines are reproducing things that WE find intelligible, that's all. To say a computer can algorithmically "replicate" Bach is to say that it can make something that our brains recognise as Bach-like. That's what all computing has been for the last 20-30 years.

What you should be scared of is what happens to future Bachs when instead of letting humans generate real creativity, whatever the hell it is deep down in the soul, we simply tell a computer "give me something like Bach and Liszt put together" and let it generate playthings for us. For the exact same reasons that when you get up every day, you should ask yourself
>Who am I? What do I value? What do I want to do today, and why?
instead of saying to your cellphone or creepy robot companion
>What do people who have lived lives thus far algorithmically similar to mine, in superficially measurable respects defined by human technicians, tend to do on a day "like this one," where "like this one" is also defined superficially by human technicians?

That's where this species is headed. Algorithmically "guided" lives are going to regularize human existence, because people confuse the products of free human consciousness for superficially indistinguishable but soulless replacements generated by a computer.

We are approaching a Half-Life 2-ish era
Philosophy can only keep remarking on all the mistakes philosophy made in the past that allowed things to be how they are now
If, after the next big happening, humanity or part of it still exists, contemporary philosophy will work as a guide to what to avoid in the making of new structures
Anything else is delusional surrogate goals and activities

The guy in your pic and his phenomenological investigations are important, since phenomenology should have always been the base of philosophy

We already live in the world described in the second part of your post
Books for this feel?
Also
Could you elaborate on the last passage of your post? I know asking this is annoying but it sounds interesting and my little brain can't handle all those big words at once

Relax, it's probably that dork from the other thread who wants to fake his way into getting a philosophy MA so that he can get "money and pussy"

>Algorithmically "guided"
No. The algorithm displayed in machine learning is general to a certain extent. It's not that every program is running specific code that a person has written for that specific task. It's neural networks, which are inspired by our own brains.

Human beings are running a general algorithm too, in that all human intellectual endeavors are based on neurons. Neurons, you could say, are the general algorithm that human intelligence, even consciousness, runs on.

Machines can also display creativity and generate their own data sets when given a body to interact with the world. And the thing about human creativity is that everything we create is inspired by something else, by what we've read, seen, or interacted with before.

And again, philosophers are stuck on trying to explain consciousness while ignoring questions on avenues which may help explain it. Honestly, people need to explain functionality before diving into whether something has a soul, whatever the fuck a "soul" even means.

Modern day science is just a more formalized and certain form of philosophy, and it's not like one replaced the other. The two are deeply rooted in each other. The most fundamental shit is being discovered in quantum physics, neurobiology, AI, etc., so I definitely wouldn't say we are 'stuck'.

lmao

have you heard the shite those programs produce? here is beethoven:

youtu.be/CgG1HipAayU

good post

Dreyfus's book is outdated. Machine learning is the new hotness. The start of the AI revolution is right now, and I fear that dismissing it as pointless and trivial is going to make philosophy even more of a joke.
That video was made in 2012. The more recent ones are decent, such as:
youtube.com/watch?v=Ebnd03x137A
There are a lot of others.

Neural networks replicate "stupid" (unconscious), procedural aspects of human learning, whose context and role within consciousness as a whole we have NO WAY whatsoever of understanding yet - including whether they somehow make up consciousness altogether, or for the most part, or what. That's exactly why they're dangerous, and why it's dangerous to equate them with thought as a whole.

The kind of thinking you're doing in this post is very similar to what AI and cognitive science researchers do all the time. It's reductive and materialist. It's not just that it's simplistic, it's that massive metaphysical assumptions are couched in every single one of the little twists and turns of thought it takes to generate the positions you're ultimately taking for granted.

>Honestly, people need explain functionality before diving into explaining if something has a soul
This is exactly the problem. It's not that I necessarily disagree that essence equals "function," or that "function" is an extra-human entity whose essence can be exhausted by our understanding. (I do, but that's not the point.) It's that we don't have enough information to agree or disagree meaningfully on these points, and you're already jumping ahead to implementing programs on the basis of those assumptions. It doesn't have to be a soul, you don't even have to throw out materialism, but it might involve levels of complexity and holism that are not reducible (by definition) to the behaviour of a substrate like neural networking.

>And again philosophers are stuck on trying to explain consciousness while ignoring questions on avenues which may help explain consciousness.
No one is ignoring it. Dreyfus grants that neural nets might be a principle or foundation of consciousness, but he says they miss the point yet again, mostly for the reasons I just gave.

>Dreyfus is outdated.

Saying this shows you don't actually know, you're just a scientism fanboy. Machine "learning" is the current flash-in-the-pan of half-braindead AI researchers BECAUSE they intentionally incorporated, or tried to incorporate, Dreyfus' critique, after stubbornly refusing to for decades. All the current trendy literature is claiming to undergo a phenomenological turn. They're still fucking it up though. If you read the stuff, and look at their citations, they only read childish summaries. They try to grab a few insights without realising the whole point Dreyfus is making, and so they're reiterating the same mistakes they made in the 70s and 80s.

Machine learning is a windup toy. The fact that it also doubles as a really efficient way of procedurally shaping people's thoughts (for example, by funneling them all into consuming the same kind of content, which is most of the reason this technology gets funded!) is fucking scary. But aside from that, it has nothing to do with consciousness, no more than a pocket calculator does.

Thanks.

It's going to get wayyyyy, way worse. This is just the opening phase. Once you realise that modern AI/cogsci programs are just the logical end stage of the "rationalisation" processes of capitalist modernity, you realise how scary it is.

For all the meme bullshit about accelerationism, it's basically correct. Cybernetic self-organisation is the death of humanity. Everything that deviates from the system can just be reabsorbed and prevented from happening again. Neural nets and evolutionary pseudo-AI can endlessly close any holes that open up, until the possibility of opening any holes vanishes or becomes infinitesimal. And things that tend to reduce rebellion from the system, like docile contentment, will be selected for until they are so efficient as to be inescapable. Gradually humans go from being tightly constrained variables, to not being variable at all, just predictable cogs, to being superfluous altogether.

A lot of modern philosophy is about this. Heidegger's technological enframing, Adorno's dialectic of enlightenment, Adorno's critique of the culture industry, aspects of Deleuze and Guattari, Foucault's critique of the human sciences, aspects of Nietzsche, lots of stuff. The most important thing that philosophy has to do in this century, and as quickly as possible, is to understand the dichotomy between the kind of totalitarian, hybridised animal-machine "life" that capitalist rationalisation automatically evolves toward, and the integral principles of freedom and creativity that make humans distinct from animals and machines.

Nick Bostrom's book Superintelligence is also good, though people like to bash on it because it's popular, for giving the basic idea that any general intelligence, if we ever created one, will be completely unconstrainable by definition. But the things we're creating aren't anything close to general intelligence. They are just very efficient extensions of the advertising industry's ability to shape human life until every human life is the same, for maximum efficiency.

What's meme bullshit about accelerationism? It has made complete sense ever since I first read about it tbqhwyl

>That video was made in 2012. The more recent ones are decent

They're not though. The process sounds more refined but the thing itself is still utterly stiff and unconvincing. Like a sweet-sounding parrot reciting a Shakespearean sonnet. Or a dead body caked in so many layers of makeup so as to look alive. One more layer is surely going to do it!

I'm not as well read as the other user, but I agree with his conclusions. I think machine learning is missing something important, and it's dangerous for us to act like it isn't there

Pic related is my 2 cents in terms of reading recommendations

So... half life 3 confirmed?

>Machine learning is a windup toy.
Yet this windup toy is why people are going to be out of jobs in the next twenty years. Whether you like it or not, there is something special about pattern recognition that has yet to be explained. Dreyfus's book was written back in the 70s, and it was a critique of symbolic AI, the top-down approach where people thought representations were all you needed. Dreyfus was right that it was shortsighted to have a human-centric view of intelligence, of pure language and mathematics. The AI field has changed since then. The whole paradigm has shifted to the bottom-up approach, and Dreyfus's criticism no longer applies. Now we have machine learning, and it works, but we don't know why, and we have yet to reach the limits of what neural networks can do, so it's stupid to say the computer scientists are fucking it up. Quite frankly, the people who are fucking it up are the philosophers, in not tackling this question.


>machine-learning is missing something important
That's the point. Philosophers should try their best to explain exactly how machine learning is producing all these results we see, whether it's image recognition, self-driving cars, art, music, or beating human beings at Go. What machine learning is doing, exactly, is the great problem to be solved, and I don't think the computer scientists themselves can answer it without knowledge from other fields.
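To be fair, at the lowest level what the machine is "doing" is well understood: it's gradient descent on a loss function. A hypothetical one-parameter sketch in plain Python: fit y = w·x to data generated with w = 2. The mystery the post points at isn't this mechanism, it's why stacking millions of such parameters yields behavior we call intelligent:

```python
# Gradient descent on a single parameter: fit y = w*x to data where
# the true relationship is y = 2*x. The same mechanism, scaled to
# millions of parameters, is what trains a neural network.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * x for x in xs]

w = 0.0
lr = 0.05
for step in range(200):
    # Gradient of mean squared error with respect to w.
    grad = sum(-2 * x * (y - w * x) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(w)  # converges to ~2.0
```

Each step is fully deterministic and inspectable; the opacity only appears at scale, when no single weight means anything on its own.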

THE FACT THAT

Most of philosophy is self-referential, and has no point except itself. Take metaphysics, for example. You explain something we can't have information on. Some other guy contradicts you, giving other explanations of things we can't have information on.
But I like philosophy. It's fun.

>reproducing symphonies in the style of distinct musicians

Barely. You don't seem to understand what it would imply to actually do this in any substantial way.

Process and Reality.

The incredibly rare cogent poster!
I'll only contribute to this by saying that the best articulation of this thesis I've ever seen is Gravity's Rainbow. But Pynchon's Theater of the Absurd is a stupid response.

What do you make of GEB's strange-loop thesis? Best defense of reductivism I've ever seen (perhaps I've outed myself as an uninitiate).

Are you the same guy who literally wrote out everything philosophy has been doing from its beginnings to the contemporary era in a bunch of posts a few days ago?

>Wishing luck to a Copenhagen Interpretation denier
Kek

>Philosophers should look at how machine learning is producing these results.

It's clever use of billions, if not trillions, of MOSFET transistors and electrical memory. Problems get parsed in such a way that those banks of transistors and flash memory can solve them. It all still occurs in a totally deterministic fashion.
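The "boolean logic applied to every task" point below is literal: NAND alone is functionally complete, so any deterministic computation a transistor bank performs can in principle be reduced to compositions of one gate. A toy sketch:

```python
# NAND is functionally complete: NOT, AND, OR (and hence any boolean
# circuit) can all be built from it, which is why transistor logic
# suffices for every deterministic computation.
def nand(a, b):
    return 0 if (a and b) else 1

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

print([or_(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 1]
```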

>People are still impressed that boolean logic can be applied to every task.

In the future the task of humans will merely be to define goals for the machines.

huh, so just like mathematics