I have a worrying proposition

I have a worrying proposition.

>Philosophy is the most important topic for humans to study.

Reasoning :

1. AGI (artificial general intelligence) is going to be the most important technological advancement we will make over the next 200 years.

2. Designing an AGI that is safe is the most important challenge to solve. Humanity's existence depends on it.

3. Understanding the word "safe" requires philosophical knowledge that humanity doesn't have.

Therefore :

Philosophy is the weak-spot in the coming AGI revolution (a revolution in the same sense as the industrial revolution, not some singularity skynet bullshit).

Conclusion : We must all focus our efforts on philosophy.

I hate this conclusion because I spent my teenage years mocking philosophy for being a topic for pseudo-intellectuals that enjoy mental masturbation.

Am I wrong? Please tell me Philosophy is not important.

Other urls found in this thread:

moralmachine.mit.edu/
paulgraham.com/philosophy.html

A lot of philosophy classes are mental masturbation, but the subject does provide a base from which you can begin to judge the society around you. If you go to a philosophy class and just adopt someone else's ideology you're doing it wrong.

There are also people who go full retard with philosophy and miss the point.

An intelligent person is by default a philosopher whether or not they acknowledge the title.

While I agree there needs to be more of a focus on philosophy as a requirement, and if anything, a Ph.D. should mean what it stands for, as it once did... focusing *all* efforts on philosophy is retarded. Philosophy doesn't do much of anything in and of itself; it's there, instead, to inform everything else you do. I mean, you can't even teach philosophy with a philosophy degree alone.

It would help with AGI, but at this rate the first AGI is probably going to be a human brain simulated at less than real time. Once you have that structure to play with, it's much easier to pick it apart and stimulate it to figure out how it ticks, and from there maybe you can work up something practical.

Philosophy won't help with AGI, at all.

moralmachine.mit.edu/

I don't care. If the AI wants to kill us, then so be it.

well... Okay.
"Philosophy" is vague. if you're argument rests on the term safe, you could get away reading someone like quine, or studying semantics. You don't need to appreciate "philosophy" as a whole to do these things. I think there are a handful of philosophers who all scientists and mathematicians should read, but I would never recommend getting into the entire practice as a whole. There's no reason to read anyone prior to kant, and most who come after him aren't that great either.

So what you're saying is that philosophy needs to be explored to make AI more human?

>3. Understanding the word "safe" requires philosophical knowledge that humanity doesn't have.
To be safe in general is for some Thing or Quality of a Thing, A, to be more or less guaranteed to continue in the state it is in.
This is an ontological question that is easily answered much more rigorously than I bothered to.
Yes, understanding things is important.
Philosophy is important because it is a pleb filter.
Stop posting on /sci/.

>To be safe in general is for some Thing or Quality of a Thing, A, to be more or less guaranteed to continue in the state it is in.
*a

no. paulgraham.com/philosophy.html

No, not to make it more human.

A sufficiently smart AGI (artificial general intelligence) will be able to solve problems that humans cannot.

For example : Stop all spam emails.

Problem : Stopping all spam emails could be achieved by killing all humans. We need to engineer this option out.

Problem 2 : If the AI can't kill us to stop spam, maybe it can just put us in a coma. We need to engineer this option out.

This keeps happening. Infinite ways for humans to get screwed over emerge from even the simplest of goals. "Make a bunch of paperclips" you tell the AGI. "Ok. I'm going to need atoms, humans are made from atoms. Goodbye!".

This is no trivial problem. In even the simplest of goals, the AI NEEDS to understand what values humans have, to avoid violating them.

Maybe it uses all of your pet dogs for paperclip atoms. Now we need to code in that pets are not to be used for paperclips.

It is a bottomless pit. We need a universal theory of human values that can be learned by every single advanced problem solving AI on the planet.
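
To make that concrete, here is a toy sketch in Python (purely illustrative, every name made up): a planner that maximizes a single stated objective will happily pick the catastrophic plan, and every "fix" is just one more hand-written constraint in a list that never ends.

# Toy illustration only: scoring candidate plans on a single objective.
plans = [
    {"name": "train a spam classifier",            "spam_stopped": 0.95, "humans_harmed": 0.0},
    {"name": "disconnect every mail server",       "spam_stopped": 0.99, "humans_harmed": 0.2},
    {"name": "eliminate everyone who sends email", "spam_stopped": 1.00, "humans_harmed": 1.0},
]

def naive_score(plan):
    # Only the stated goal counts; nothing else exists for this agent.
    return plan["spam_stopped"]

def patched_score(plan, forbidden=("humans_harmed",)):
    # Patch #1: forbid harming humans. But pets, comas, freedom, ... the list of
    # values we would have to enumerate by hand never ends (the bottomless pit).
    if any(plan[key] > 0 for key in forbidden):
        return float("-inf")
    return plan["spam_stopped"]

print(max(plans, key=naive_score)["name"])    # picks the catastrophic plan
print(max(plans, key=patched_score)["name"])  # only as safe as the patch list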

Believing in the AI god meme.

Think he's saying we need to understand philosophy better: not only to build core codes of self-perpetuating ethics and to better understand and define consciousness, but also to better draw the line between reals and feels, avoid logic traps, and predict the potential consequences of AI and its place in human society.

Which is all true - all the best science has been done by folks who also had a background in philosophy. But, well, until recently, everyone with a higher education had a background in philosophy, and concentrating on philosophy *exclusively* isn't going to get you anywhere.

Specialists are all well and good, but specialists blind to all else are fucking dangerous.

Currently, in order to achieve artificial intelligence, we must be able to express consciousness with Boolean algebra.

Let A be some entity, or property of an entity, that exists in a steady state.
Let the extension of A be everything that threatens the existence of A,
so Ay implies ¬A.
A can only be safe if its extension is empty, i.e. ¬∃x Ax.

If the domain is {a, b, d, poopoo, peepee}:
A = {peepee}
B = {a, d, b}
C = {poopoo}
Names refer to themselves.

((Ba ∨ Bb) ↔ Cpoopoo) → Apeepee
So A is not safe from peepee if either (Ba ∨ Bb) and Cpoopoo, or ¬(Ba ∨ Bb) and ¬Cpoopoo.
If Cpoopoo but B contains neither a nor b, then A is safe from peepee.
How retarded can you be, OP? It's obvious you are illiterate. A child should be able to define "safe" as I just did.
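
For what it's worth, the assignment above can be checked mechanically. A throwaway Python sketch (the sets are just the ones from the example, nothing more), reading the conditional as material implication:

# "Xy" here just means y is in X's extension.
A = {"peepee"}
B = {"a", "d", "b"}
C = {"poopoo"}

biconditional = (("a" in B) or ("b" in B)) == ("poopoo" in C)  # (Ba v Bb) <-> Cpoopoo
consequent = "peepee" in A                                     # Apeepee

# ((Ba v Bb) <-> Cpoopoo) -> Apeepee, evaluated as a material conditional
print((not biconditional) or consequent)  # True under this assignment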

/thread

>Everyone studies liberal arts.
>No STEM majors to bring about AGI.
OP has solved his problem.

The correct answer is to not worry about shit like this and just let people do what they want.

Here's another question: why develop AGI in the first place? If you define AGI as human-like intelligence, consider what humans do: tend to their biological needs, reproduce, and work a job where they mostly perform a set of fairly specific tasks.

Any AI whose main goal is self-replication is inherently dangerous to human interests, because it will be in contention with humans for resources.

Limiting the ability of an AI to act directly on its environment, along with using sensible objectives, is probably the best safety net. They should be data machines with limited physical influence.

>if you define AGI as human-like intelligence
Nobody defines it like that.

Pretty decent response.

how do you define it then?

I think wikipedia puts it quite well.
>Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.
What matters is what it can do, not how it thinks.

>3. Understanding the word "safe" requires philosophical knowledge that humanity doesn't have.
Designing an AGI to be safe also requires a great deal of other knowledge that humanity doesn't have. That makes philosophy *a* weak spot, but by no means the only or most important one.

Yes, the smart thing to do would be to stop just short of true AGI.

The problem is how you would actually enforce this. Once we get close enough to AGI, it's just a matter of time before someone pushes it over the edge. The potential gains and winner take all nature of AGI make it too good to pass up, and it seems inevitable that there will be a race to the finish line.

>A lot of philosophy classes are mental masturbation
what science isn't?

>If you go to a philosophy class and just adopt someone else's ideology you're doing it wrong.
This... if two philosophers meet and agree on every single point, at least one of them isn't a philosopher.

Science actually has some uses in the real world.

Compartmentalize. Limit the damage a self-replicating AI can actually do with better access control, fail-safes, and possibly other "watchdog" AIs.
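
Something like this, as a minimal Python sketch (hypothetical names throughout, no real framework implied): the agent never touches the environment directly, and every proposed action has to pass an allowlist plus an independent watchdog check first.

# Minimal sketch of "compartmentalize + watchdog" (all names hypothetical).
ALLOWED_ACTIONS = {"read_dataset", "write_report"}  # no network, no actuators

def watchdog_approves(action, payload):
    # Independent check, ideally running on separate hardware with its own
    # fail-safe (e.g. cut power after repeated violations).
    return action in ALLOWED_ACTIONS and len(payload) < 1_000_000

def execute(action, payload):
    # The only gate between the agent and the outside world.
    if not watchdog_approves(action, payload):
        raise PermissionError("blocked: " + action)
    print("executing", action, "-", len(payload), "bytes")

execute("write_report", b"quarterly spam statistics")
# execute("send_email", b"...")  # would raise PermissionError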


>The potential gains and winner take all nature of AGI make it too good to pass up, and it seems inevitable that there will be a race to the finish line.

Aren't nuclear weapons basically the same? We've managed not to kill each other thus far. AGI is just another tool that must be treated with appropriate respect and caution.

>Compartmentalize. Limit the damage a self-replicating AI can actually do with better access control, fail-safes, and possibly other "watchdog" AIs.

In fact, this is important even if everyone has the best intentions. I'd rather not be a single cosmic-ray-induced bit flip away from annihilation.

Not all AGIs are human-like; they just have to be as capable of generic problem solving.

However, it does seem that the most likely first type of AGI is going to be a simulated human brain (which would be very human-like)... That said, the reproduction instinct in humans is kinda basic. You have no immediate instinct to "build another human" per se; you just have a social instinct to bond with other humans, and to stick your dick in their holes. This is not conducive to reproduction when you're a virtual construct.

Of course, such an AGI may wish to build more of itself, even without a working core drive towards it, but, at first at least, especially in the case of a simulated human brain (which probably wouldn't even run in real time), the sheer hardware limitations may prevent it from readily doing so. By the time it can, you may be able to customize it to the degree where it wouldn't want to do that.

...Until someone decides to turn it into a viagra advertising super malware virus.

>An intelligent person is by default a philosopher whether or not they acknowledge the title.
Intelligence != Wisdom.

Also, I can tell you've never taken a philosophy class. It's not about pondering unanswerable questions, as near every philosophy thread on this board might suggest. "The meaning of meaning" is not a question in modern philosophy any more than "If a tree falls in a forest, and no one's there to hear it..." is the totality of Buddhism. It's about training you to think critically, to strip the chaff from the wheat, to know truth from lies, self-induced or otherwise, and to avoid mental traps and overcome those obstacles.

It's true, OP's proposition that we switch to philosophy alone is a dead end, as philosophy is not meant to be an end in itself. It's instead designed to inform all that you do so you don't fuck it up and endlessly build bridges down into a bottomless abyss.

In other words, it's specifically there to prevent mental masturbation.

And these days, specialists, however intelligent they may be, often end up ignoring the basic reality of the world outside of their field, because they've not been rounded out and armed with even the most basic tools of philosophy. It's the worst kind of mental masturbation - the kind that has consequences in the real world. That not only hampers research, in the case of cross-science efforts like AI, but is also wasteful and dangerous.

>hurr durr im a brainlet who cant use math but philosophy is more important

>Of course, such an AGI may wish to build more of itself, even without a working core drive towards it

A long-term objective would require an AI to maintain its agency for some length of time to ensure that the goal is accomplished. Self-replication is almost an implicit part of any long-term goal.

>old scientists from back when if they said anything bad about [insert dominant organized religion here] they'd be lynched didn't say bad things about religion
wow cherrypick me more

Yeah, if you age and die. Otherwise, not so much so.

An AI would have motive to replicate for the same reason it was built though - to get more thinking work done faster.

If your first AI is not running at real time, and requires ludicrous amounts of hardware to maintain as is, this isn't a problem. By the time you reduce the hardware requirements, you have also probably picked it apart sufficiently to hard-predict or hard-limit its behavior.

None of those scientists are from that far back, and some of them were atheists, and in one case, claimed he was effectively God (albeit, under the pretext that everyone was too).

>1. AGI (artificial general intelligence) is going to be the most important technological advancement we will make over the next 200 years.
Wrong, we are a few thousand years away from that if we are lucky.
It is not going to happen.

>Philosophy is the weak-spot in the coming AGI
No, the weak spots are in this order:
Physical limitations on processor manufacturing which will essentially halt CPU speeds.
A mathematical framework to help us actually achieve such an AI.
Implementation.
Legality.
Morals.

>Please tell me Philosophy is not important.
It is a worthwhile subject to study if you care about individual growth, but it is completely pointless to study it as a "science" with the aim of "absorbing knowledge".

Lmao at a non-philosopher who thinks they know what the real world is.

Those guys on the right? They are philosophers and they don't know it.

They all got their Ph.D.'s back when that meant what it stood for, as philosophy was core to all higher education at the time. The guys on the left, on the other hand, were failed by the newer system. (Well, that, and they're really just TV personalities, rather than scientists, but meh, point still stands, dammit!)

Also... I've clearly never learned my left from my right.

By left you meant right, right?

Your other left.

In the US, they no longer teach us philosophy in high school - nor our right and left in kindergarten.

>In the US, they no longer teach us philosophy in high school - nor our right and left in kindergarten.
Well of course not! You can't get a real job just by learning your left and right, why would they waste time teaching that shit?

xD anon

>"The meaning of meaning" is not a question in modern philosophy
I beg to differ

This is why people think philosophy isn't necessary anymore.

?

>Legality.
>Morals.

aren't those phil fields?