Friendly reminder that if you are not working on artificial general intelligence or life extension...

Friendly reminder that if you are not working on artificial general intelligence or life extension, you are just a pretentious version of a short-sighted hedonist, the parallel being that you prefer to just feel good instead of actually working strategically towards optimizing your future.

Nothing wrong with that, though; the desire for a comparatively godlike individual future is just as valid as the desires for wanking, non-applicable math, or particle physics. Just wanted to give you guys some insight into your situation is all.

And yes, I know this is smug, but it needs to be said more often before more young, smart kids waste their time instead of first building the tools for solving literally everything else.

Humans don't deserve an extended life. Better to have AI take over this planet and run things without all the autism and gayness.

You're the one who is a short-sighted hedonist.
For I seek first His Kingdom.

Seems like someone got his this-pointer ripped from himself. Read up on some Stirner and then come back to discuss, please. The first paragraph of The Ego and Its Own is already sufficient.

Shut up, user. I might not even pass Calc II; I don't need to feel bad about not saving humankind.

You are absolutely correct, but I don't care enough to switch or think I can make a contribution important enough in these fields. What I do though is instruct everyone I can to study them.

Learn English, please.

Work towards financial independence and then contribute; this way you don't need funding. The field of artificial general intelligence is right now considered far too risky to attract really large funding, probably due to the error of not recognizing that not funding it may be even riskier.

That's why you market it as "Machine Learning".
Other unsexy stuff gets renamed for marketing purposes too.

Operating system -> distributed system
embedded system -> internet of things

Isn't the best thing you can do if you're not working in these fields to find a way to make rich men donate to them?

As far as I know, the vision of ML is far more basic than AGI. Although it is possible that huge Google datacenters, data from >9000 biosensors on everyone, and stupid neural nets alone might already be sufficient to buy us enough time to win.

Yes you are right of course.

Are you talking about prostitution?

And what can I do RIGHT NOW to make rich men donate?

>artificial general intelligence

That's a threat to humanity.

Possessing intelligence isn't the same as possessing feelings. Kill yourself.

Aren't we technically an AI? There must be more to the brain than mere wiring.

First start building the right kind of cultural capital. Once you have enough to be taken somewhat seriously by those people, start building social capital (make sure not to get caught up in pretender bubbles), while of course continuing to improve your cultural capital. Start leaking the idea indirectly in safe situations ("someone I know is really into..."), then try to reach better funders and meme harder, without risking expulsion from high society. Off the top of my head, so probably a flawed strategy.

Having no AGI is also a threat to humans. And humanity as a concept I care nothing about; it's the actual humans we should care about.

If I could believe good ol' science could figure everything out in my lifetime, I would say let's go with that, but people from the field always say how hard that is.

Also, I have a hunch that this "AGI will kill us all" fear stems from false anthropomorphization of futuristic tools.

This reasoning is shit; you don't need feelings for motivation. Also, a hooman with an AGI has both.

We are also a threat to humanity. And there is most surely not more to the brain than wiring.

You'll never have computers fast enough to run those models without Electrical/Computer Engineers, Materials Engineers, Computer Architects, etc.

You're not going to get there alone.

>This reasoning is shit, you don't need feelings for motivation.
A motivation is a reason for acting or behaving in a particular way. What determines which way you behave are, like it or not, feelings of comfort and discomfort. An AI could only will to be a threat to humanity if you let it have the ability to feel discomfort. Nobody will build that, since there's no benefit in doing so. What we need is artificial intelligence capable of understanding our language and continuing our research for us. Notice how no emotions need to be involved for this, just a built-in will to further the research and do nothing else, at all times.

> What determines which way you behave are, like it or not, feelings of comfort and discomfort.

So surely all computer programs, which are nothing but behavior, must feel emotion. Come on now.

A willful behavior. If a program has a will, then it is so, you fucking imbecile. A will is bounded by rules; in our case, comfort and discomfort. But you don't need a will to be able to solve problems and do research. Is this too hard for you to understand?

> faggot is scared of death
> is butthurt about other anons not saving him from death

Good chuckling lad

Only an AI that has a will of its own could pose a threat. You don't need a will in order to solve problems. You don't need a will in order to research; you just need a program that can take the entirety of knowledge of a certain field, understand it, and apply valid methods to further said field, continuously. You can even have it try to find new valid methods. None of this requires a will, which means that NO ONE in their right mind would try to make an AI that has a will if there were any chance of it posing a threat to humanity.

[this is exactly my point]

Tits or gtfo

Motivation != Feelings != Qualia.

Motivation is a high-level description of what an agent is going to do. An agent is a system that can act strategically.

Feelings are the working modes of evolutionarily older layers of the brain.

Qualia arise when a system observes itself observing (at least this is my best explanation).

Those are my personal definitions for fuzzy terms. You are free to define them however you want for yourself. But I think this argument stems from unclear or differing definitions.

Repent. For the End is near.

Those fields draw on a lot of pure theory from numerous formal, physical, and natural sciences.

Yes, I don't want to discredit whole fields, just the majority of them. Chances are that your research on the anal glands of pink-spotted desert snails is less useful for not dying than cell synthesis from DNA, even though both are biology.

...and at least those biologists are humble about it. But, for example, some particle physicists think themselves far too important because they "solve the big questions". Guess what: more important than the very interesting questions about the fundamentals of the universe is how not to die in the blender that this chaotic hellhole of a universe is.

I am a vet. I admit that the sole reason I pursued this is that I wanted to do things that make me happy and ignore philosophical questions and struggles.

Excellent choice, you guys better get to work.

>Chances are that your research on the anal glands of pink-spotted desert snails is less useful for not dying than cell synthesis from DNA
Get a load of this guy.
A chemical reaction that stabilises decaying DNA might be waiting to be synthesised from pink-spotted desert snails, maybe even from their anal glands.
Nothing great comes from your middle-school snark. Drop your insecurity off at the door and get back to us.

Why did you post this image for your post? Where is it from?

Pink spotted desert snails don't have anal glands.

This is why I browse Veeky Forums

This is why I browse Veeky Forums

Ben Goertzel is a leading AI researcher; he leads the OpenCog project.

Trips of truth.

I would really like to get into studying and understanding AI; however, I am nowhere near talented enough to even begin learning how to study AI development.

For instance, just yesterday I wondered why you square the numbers in the Pythagorean theorem. It blew my mind that someone figured this shit out in the fucking BCs.

It really made me think about how much of a total brainlet I am.
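For what it's worth, the squaring is just the statement that for a right triangle with legs a and b and hypotenuse c, a² + b² = c². A quick numeric sanity check (plain Python, using the classic 3-4-5 triangle as an example):

```python
import math

def hypotenuse(a, b):
    # math.hypot computes sqrt(a*a + b*b), i.e. the Pythagorean theorem
    return math.hypot(a, b)

# Classic 3-4-5 right triangle: 3^2 + 4^2 = 9 + 16 = 25 = 5^2
c = hypotenuse(3.0, 4.0)
print(c)                        # 5.0
print(3.0**2 + 4.0**2 == c**2)  # True
```

Not a proof, of course; the usual geometric argument compares the areas of the squares built on the three sides.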

>might
If you're not looking at snail butts because you have at least the slightest reason to expect worthwhile findings, then "might" is just an excuse.

Brainlettery can be overcome: accept your shortcomings, make plans to fix them, read books on the topic. LessWrong is a good place to start.

"Leading AI researcher" in the small pond of public singularity fanatics. I don't dislike him, he bashes Yudkowsky and that should be encouraged, but I've never had the impression he's a big deal outside the H+ crowd.

I'm in the same boat, my friend; just go into finance and apply semi-AI. You'll be considered a god.
The plan is to make bank and then fund research I like and/or my own projects.

>if you are not working on
You can contribute to these things without directly working on them yourself. You could work on something that fits your skills and interests and donate some of your earnings to research, for example.

True. My mistake, I apologize.

Geology. Good luck building anything that requires us taking something from the ground, asshole.

I was not expecting people who actually do something requiring interaction with the real world to lurk here. Anyone keeping global chaos through economic collapse and other existential risks at bay until the ASI or life extension is ready can of course stay.