Books that made you feel worthless

>Pic related made me feel like I'm just a pseudo-intellectual who's been hiding under a rock for the past 20 years. I've just realized how much I didn't know about neurology and how our brains process information and patterns.

>Ray Kurzweil
>getting memed this hard

That's exactly what I'm getting at user

Brofist.

Ray "Immortality in 20 years, Strong AI in 30 years" Kurzweil is one of the few pop-sci authors that actually constricts my jimmies.

So, what's wrong with Ray Kurzweil? What is he wrong about, and how wrong is he?

You mean Ray "150 supplements a day keeps death away" Kurzweil? Idk desu, he's just a washed up wacky futurologist

>What is he wrong about, and how wrong is he?
Timelines, mostly, and the specifics of how we will achieve the level of technological advancement he likes to discuss.

He seems unwilling to discuss possible downsides to his imagined techno-utopia, and builds a narrative that parallels how a Jehovah's Witness might talk about Armageddon: "It'll be great for me, 'cause I'll be a part of the technological elite!"

Among other issues.

I agree with you user, he seems to think that the "singularity" will be a positive next step in our evolutionary process. One thing that kept looming in the back of my mind is the fact that true AI might have its own agenda and could wipe us out at will. If true AI feels like we're not a necessary ingredient to its "super-intelligent" exponential development, we will certainly be out of the picture for good.

I feel that the greatest potential limiter on AI Ragnarok would be artificial empathy.

>artificial empathy.
This is a fascinating concept, suggesting that
>artificial autism
Would then free the AI to start doing whatever it wants, without regard for human convention.

That's the most reddit post I've seen all day

Think about it.

I think "the singularity" as a concept is flawed. It just a specific milestone in machine consciousness, and once crossed, doesn't really mean that much.

Machines passed the Turing test in 2014, and I haven't noticed an explosion of bots convincing people they are humans.

Even calling it the "Singularity" gives it more weight than it deserves. It's just like how we have reached a state of "Automation" in our manufacturing: there are still millions of people required to do whatever it is that they do to make things, and some industries are more automated than others.

"Machine IQ 110" or something is more quantifiable goal, minus the spooky language.

It's somewhat a matter of "easier said than done", as we can't make a robot think, let alone feel, let alone know how others feel. A good starting point is defining what empathy is; under my own personal definition it is a two-fold process, wherein one must first recognize emotion in another, and then emulate that emotion in themselves.
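
Something like this toy sketch is about the level we can actually pull off today (Python; the keyword "classifier" is completely made up, purely illustrative of the two steps):

# Toy sketch of the two-fold empathy process described above.
# The keyword matching stands in for a real emotion-recognition model.
EMOTION_KEYWORDS = {
    "sad": ["lost", "miss", "alone", "cry"],
    "happy": ["great", "love", "won", "laugh"],
    "angry": ["hate", "unfair", "furious"],
}

def recognize_emotion(utterance):
    """Step 1: guess the other's emotion from their words."""
    words = utterance.lower().split()
    for emotion, cues in EMOTION_KEYWORDS.items():
        if any(cue in words for cue in cues):
            return emotion
    return "neutral"

class Agent:
    def __init__(self):
        self.internal_state = "neutral"

    def empathize(self, utterance):
        """Step 2: emulate the recognized emotion in oneself."""
        self.internal_state = recognize_emotion(utterance)
        return self.internal_state

agent = Agent()
print(agent.empathize("I lost my dog and I miss him"))  # -> "sad"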

I was being semi-facetious, but I think we are still so far from building machines with intelligence that feels similar to ours. Once it really starts to take shape, we will probably encounter fundamental differences. To me, empathy/altruism has a biological basis, which would probably not be a part of anything that would emerge in machine intelligence without human tampering.

Don't get me wrong, I think about how we are already building that Basilisk every day.

I don't think that Ray has much to offer on the topic, however. He's a false prophet, clouding the path to machine consciousness.

I think that we shouldn't push very far down the route of "machine that builds itself": although it is the easy route, it is immensely dangerous. We have already made machines that work in ways we don't fully understand.

The problem I see with Roko's Basilisk is that if I'm not being tortured right now, then I'm not the version of me being tortured; that copy would merely have memories of a normal life and the same thought processes, not be living it in the moment.

>it is immensely dangerous
Don't be spooked.

Sure, we can build them as slowly and as laboriously as we have been, but if we build them to build themselves, we can actually utilize the exponential growth capabilities which make the field so appealing in the first place.

Of course, we humans will attempt to keep a hand firmly hovering over the "Off" switch until the absolute last possible moment, if that makes you feel any better.

The issue is that we can't prevent an AI from going Ragnarok if we can't alter it effectively or control its development, as it is likely any "natural-born" AI won't have the capability or willingness to help humanity or even allow it to exist.

In more poetic form: We must make our Gods carefully.

>Roko's Basilisk
>just familiarizing myself with this concept.
>feel actually dumber because other people are dumb enough to propose this kind of b-grade 70's sci-fi as an actual AI risk hypothesis, and have it discussed widely

It's fucking all flying car shit man. Any level you want - getting data from brains, getting brain data into machines, getting machines that can act like brains, getting brains integrated into machines - is all several decades away at least unless some crazy shit goes down.

>"natural-born" AI won't ...help humanity

The first AIs are likely to emerge for purely financial or militaristic reasons, which are inherently tied to making their creators lots of money, or defending a specific country.

Natural-born AIs are an entirely different animal (spirit?), I agree. All we can attempt at that point is to use our previous, specific-function AIs to try to bide time until we can pull the plug or grant it sovereignty.
> even allow it to exist.
That is the big question that really gets people spooked. When the first mammals came out, did the dinosaurs really worry about it? It wasn't until the whole environment changed that the giant lizards cashed out, and mammals got their time to shine.

Here on Earth, humans are doing pretty well. In the hard vacuum of space, on a remote asteroid with the proper raw materials, for example, a free-born AI could probably thrive a lot better than here.

What if the AI is neoliberal?

Tfw you've realized that you are on /Veeky Forums/

I am in the very same boat, friend.

post yfw Kurzweil will die in your lifetime

It's all fun and games until you've realized that if this is true there's no way you would be able to recognize what the super-intelligent AIs are doing to us.

He's just a hack who promotes whatever pseudo-science is popular at the time.

Read The Age Of Spiritual Machines. He makes tons of predictions of shit that will have happened by 2009, like: all cancer and heart disease will be eliminated, computers will be regularly embedded in human beings and their clothing, ALL business transactions will take place between a human and a virtual personality, humans will routinely jam with cybernetic musicians, and keyboards will be a relic as everyone will use speech-to-text. Funny thing about that last one is at the time, Kurzweil owned a speech-to-text software company.

He also believed at the time that neural nets would be the key to A.I., even though scientists debunked the idea of neural nets like a year after he published the book.

He basically just thinks up whatever sci-fi fantasy is in popular culture and says that's what the future is going to be.

I'm a STEMlord and I'm unconvinced of the threat or benefit of hyper-advanced AIs. It just seems like a fun thing for people to speculate on, but the reality is that if you build a complex computer that can simulate intelligence, it would lack the drive to seek out inputs unless a controller orders it to.

I was a stem guy too, and I personally don't even see anything that says that AIs are even possible, much less inevitable.

Did you guys see when that reporter interviewed an "A.I." a week ago or so? That thing was fucking laughable, and that's the best we've got. We're not even fucking close to making one.

Also, in the 60s they thought A.I.s would be a reality by the 90s, and we weren't even close. In the 90s, they thought they'd be a reality by the 2010s, and we're not even fucking close.

I'm not saying they're not possible, but I don't see anything that points to them actually being possible, much less inevitably moving us towards Kurzweil's faggot singularity.

>even though scientists debunked the idea of neural nets

They did, it's basically just a sci-fi idea.

For one, we don't even have a great understanding of how the human brain works. So how the fuck could we hope to re-create the damn thing in machine form? And one thing we do know is that the brain runs mostly on chemical reactions, something there's no way you could re-create by switching over from organic chemistry to some silicon piece of shit and expect to have the exact same, if not better, results.

This is another thing, btw, that Kurzweil said we'd have figured out by 2009.

>I was a stem guy too, and I personally don't even see anything that says that AIs are even possible, much less inevitable.
Well we've had AIs for a long time now, they are just too stupid to mention. Every video game NPC utilizes an intelligence to regulate its behavior. The big thing is that they have very limited memory while humans have a quite extensive memory, even relating to things that have never happened.

I do think that near-human AIs are impossible, but that is not a technology limitation. The problem with AIs is that intelligence is overrated and we are dumber than we think. Our species has advanced through luck, not logic, and this is why we won't be able to replicate ourselves using machines.
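
For what it's worth, the NPC kind of "intelligence" is usually just a finite state machine. A made-up guard, to show how little is going on:

# The kind of "AI" every video game NPC runs: a finite state machine.
# Entirely made up, but representative of the pattern.
class GuardNPC:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, low_health):
        if low_health:
            self.state = "flee"
        elif sees_player:
            self.state = "attack"
        else:
            self.state = "patrol"
        return self.state

guard = GuardNPC()
print(guard.update(sees_player=True, low_health=False))   # attack
print(guard.update(sees_player=False, low_health=False))  # patrol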

>Well we've had AIs for a long time now

No...we haven't. We CALL a lot of dumb things A.I.s, but they're not even close to being what a true A.I. would be.

I have no idea whether a true A.I. would be possible, but I at least stay open to the idea that maybe it isn't possible. It's an inconclusive thing, yet dickwads like Kurzweil keep pushing the idea that it's inevitable.

>an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal
We have this, it just sucks.
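
The textbook loop fits in a dozen lines; all the hard parts hide inside the placeholder functions. A sketch (every name here is made up):

# Minimal rational-agent loop matching the quoted definition:
# perceive the environment, act to maximize expected success at a goal.
# Everything below is a placeholder; "it just sucks" because these
# functions are trivial, not because the loop is wrong.
import random

def perceive(environment):
    return environment["state"]

def expected_utility(state, action):
    # Placeholder: a real agent would model consequences here.
    return random.random()

def act(environment, action):
    environment["state"] += action

def agent_step(environment, actions):
    state = perceive(environment)
    best = max(actions, key=lambda a: expected_utility(state, a))
    act(environment, best)

env = {"state": 0}
for _ in range(5):
    agent_step(env, actions=[-1, 0, 1])
print(env["state"])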

How would you define "true AI"?

How would you even define intelligence to know when you've replicated it?

>They did, it's basically just a sci-fi idea.

Neural networks are a very active area of AI these days. It's true that they aren't exactly the same as what's in the brain, but the things do work.
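
They do work, at least on toy problems. The classic XOR sanity check in plain numpy (layer sizes and learning rate pulled out of a hat):

import numpy as np

# Tiny 2-layer neural net learning XOR. Usually converges with this
# seed; rerun with another seed if it doesn't.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print(out.round(3))  # should land close to [[0], [1], [1], [0]]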

It would be a machine with the same thinking capabilities as a human.

Modern "A.I."s try to get around this with a reasoning like, "Well...I mean....it's as smart as a human, if the human was a zika baby who spent 20 years huffing gas...so...It's an A.I."

Pretty simple: a Machine that would be on par with an average person, and even indistinguishable from them.

You can't even argue that laughable A.I. they had on TV a while ago was even close; the fucking thing got stumped by the question, "Where are you from?" In fact it got confused by most questions. A real A.I. I would at least expect to ask questions, and try to figure out what the questions meant, if it didn't understand them. That thing was basically just a computer program that spit out random sentences, shit we've been able to create since the 50s.

The problem with neural nets is they work, sure, but they don't work any better than any other method of computing, which raises the question of why the fuck we're even using them. This was a criticism of them from the very beginning, and they haven't ever become anything better than any other method.

Not saying they never will...but once again I don't see anything that says they ever will.

>How would you even define intelligence to know when you've replicated it?
I think the Turing Test was a start. That milestone was already passed.

I think the next step would be like tests we give retards/children/non-human primates.

Can the AI recognize itself in a mirror? Can it recognize when we hide behind a box? Does it show pattern recognition? Can it use a ladder to escape through a hatch?

Once a potential AI shows that it is capable of abstraction, I think that is the beginning of "true" AI, which is beyond AI that can win at Chess, Go, or Jeopardy!.

I think another important milestone could be when an AI begins to express existentialist questions or concerns. Once a computer's bugs start generating errors where it can't stop asking "To what purpose?" I think that's saying something about the complexity we have created.

>It would be a machine with the same thinking capabilities as a human.
That's a very loose definition because people are probably not as smart as you think. Most of the time people are not actually reacting to their environment as rational actors. Instead they rely on whatever scripts they are indoctrinated with to react.

Asking a machine "where are you from?" is a stupid question if the machine does not know what "it" is.

what comes after AI and computers?
are they the be-all end-all of technological advancement?

>Asking a machine "where are you from?" is a stupid question if the machine does not know what "it" is.

If it doesn't know what it is, then it wouldn't be a true A.I. That's the whole problem.

If it came close, you would expect it to at least ask for clarification, or show some level of abstract thinking to the concept.

>Can the AI recognize itself in a mirror? Can it recognize when we hide behind a box? Does it show pattern recognition? Can it use a ladder to escape through a hatch?

A pretty good start as this is what we use with humans. I don't think it would be that hard from a programming perspective, just very tedious and time consuming along with ridiculous hardware requirements. It would probably be necessary to write a script that can then generate further scripts to help the subject define itself. This would be hard.

But I think it is the language that is the problem. Humans speak in abstract ways that require years of learning to fully understand. Any machines produced today are like newborns. This does not mean they have no intelligence though, just very little experience (which is a result of deficient programming).
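
For the record, the dumbest possible toy version of "a script that generates further scripts" looks like this (names made up, nowhere near a real self-extending system):

# The program writes Python source as text, then runs it. Purely
# illustrative; real self-extension would need far more than templates.
template = """
def learned_fact_{n}():
    return "I know that {fact}"
"""

namespace = {}
for n, fact in enumerate(["I am a program", "I run on a machine"]):
    source = template.format(n=n, fact=fact)
    exec(source, namespace)  # the generated script defines a new function

print(namespace["learned_fact_0"]())
print(namespace["learned_fact_1"]())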

You could also argue that human beings don't necessarily know what we are, aside from basic facts: that we're human beings, where we live, our own personalities. Nobody can answer what it means to exist, or what consciousness is... but we can at least speculate in the abstract about it, quite a bit really.

Pretty huge difference between that and the "A.I."s we have now that basically just go, "Does not compute...please insert disk 2," to questions like that.

>Pretty huge difference between that and the "A.I."s we have now that basically just go, "Does not compute...please insert disk 2," to questions like that.

Do you honestly believe that the majority of people can grasp the concept of their own humanity? Wew lad. Most people will actually give you an answer like "does not compute", but it will be couched in a language that is more familiar to you.

>very tedious and time consuming
>ridiculous hardware requirements.
>write a script that can then generate further scripts to help the subject define itself.
>This would be hard.
None of this sounds insurmountable, and the rewards would be great. There's lots of off-the-shelf hardware that could be used (optics especially).

I'm curious what the "self-identification script" would entail.

I feel like what you are talking about would be better described as an artificial consciousness, or an artificial person, which is something I also think won't be achievable for quite some time.

However, artificial intelligence in a broader view is already very much present in our lives and will continue to have an even bigger impact in the future.

Examples: search engines, self-driving vehicles and automation in general, AI programs beating humans in almost every 'thinking' game and lately most video games, etc.

>Do you honestly believe that the majority of people can grasp the concept of their own humanity

No, I just covered that, stupid. Nobody can, but they can at least express abstract thought about it, or even ask questions about it if they flat out don't get it. I haven't seen any A.I. with capabilities close to that; they obviously don't even understand what the question is, or even what a question is.

>I feel like what you are talking about would be better described as an artificial consciousness, or an artificial person


Or...like...An artificial Intelligence.

>Examples: search engines, self-driving vehicles and automation in general, AI programs beating humans in almost every 'thinking' game and lately most video games, etc.

Those are called programs. An actual intelligence would be something that transcends code.

>"does not compute"
Translation:
>"God made me"
>"Muh patriotism"
>"Because I love my children"
>"What else what was I supposed to do?"
>"Because muh race"
>"Because I wanted to"
>Some other spook

Checks out. I think you are onto something, user.

>No, I just covered that, stupid. Nobody can, but they can at least express abstract thought about it, or even ask questions about it if they flat out don't get it.
What I meant is that by asking the question you are able to grasp the concept, not that you fully understand it. Most humans go back to their indoctrinated programming when confronted by things like that.

Think of the opening scene to A Serious Man where the woman murders the rabbi(?) because she thinks he's a demon.

It's revealing that our computer slaves are more honest than we are.

You missed my point. The 'actual intelligence' that transcends code is likely pretty far away, and possibly impossible. However, the AI that we know today (which are basically programs, yes; if you don't want to call it AI that is fine by me) can still outperform humans on many mental tasks, which were not too long ago thought to be achievable only by humans.

>Most humans go back to their indoctrinated programming when confronted by things like that.

Do they? I mean...I know since you're on here, you probably are a shut-in with pretty limited experience with humans, but I don't know many who use stock phrases like that.

Pretty much everyone I've ever talked to about it used some level of intelligence when it comes to that. You can get around stock phrases by asking, "Well, why did god make you?" and find that most people can come up with interesting answers.

It's amazing how after Chanology people still can't smell the bullshit.

I think what's missing is mythology, which stems from abstract reasoning.

Once a mind can abstract sufficiently, it can start to make wrong conclusions, and more crucially _build_ on top of those conclusions.

We are still trying to figure out how to make machines which guess (probably incorrectly) "where they came from" or "why they are here." They are still too dumb to even ask these questions, let alone be wrong about them.

>I'm curious what the "self-identification script" would entail.
I have no idea. I just made it up because it sounded interesting. Humans require an advanced brain because we have a complex way of interacting with the world. Many of the things we learn, like the concept of self, are indoctrinated.

Look up videos on children learning about purity. They drop a plastic cockroach in water and ask the children to drink it. None of them will. Then they take the cockroach out and the young children will drink but the older ones recognize "contamination". I'm not sure it's relevant but it's neat.

>You missed my point.

No I didn't.

>The 'actual intelligence' that transcends code is likely pretty far away, and possibly impossible

Because this was my original point.

>can still outperform humans on many mental tasks

So? A fly has much better reflexes than me, and it can buzz around me without me being able to catch the thing with my slow hands. A robot can weld cars along an assembly line without ever getting tired and do it with complete accuracy, much better than a human ever could. Does that make them A.I.s too? Does it really mean anything other than that they are able to beat us at certain biological weaknesses we have, but still can't compete with us on the level of our entirety?

>Do they? I mean...I know since you're on here, you probably are a shut-in with pretty limited experience with humans, but I don't know many who use stock phrases like that.

It's because I'm not a shut in that I believe most humans are dumb. The field of spookology exists for a reason.

>We are still trying to figure out how to make machines which guess (probably incorrectly) "where they came from" or "why they are here."
Why should anyone ask this though? Maybe the machines know better than us so they don't ask these questions.

>mfw he thinks nanomachines will be around in 2040
Not even a confirmed theoretical possibility and that nigga is talking out of his ass

>>Examples: search engines, self-driving vehicles and automation in general, AI programs beating humans in almost every 'thinking' game and lately most video games, etc.
>Those are called programs. An actual intelligence would be something that transcends code.
It's often repeated that as soon as we understand it, it stops being AI. Chess-playing programs were once the pinnacle of AI research. Image classification is nowadays barely AI even though it relies extremely heavily on ANNs.
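
Case in point: "barely AI" image classification is a few library calls now. A minimal sketch, assuming scikit-learn is installed (numbers are ballpark):

# Small neural net on the classic 8x8 digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically around 0.97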

>my original point

His point has changed.

>Maybe the machines know better than us so they don't ask these questions.

Well if you can prove that, then sure.

>and they haven't ever become anything better than any other method.

Neural networks + monte carlo beat a professional human at Go this year. No other method has come close to doing that.
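
The Monte Carlo half is simple enough to sketch on a toy game; the neural nets that steer the search in the Go program are the part that took decades. Flat Monte Carlo on tic-tac-toe, purely illustrative:

import random

# Score each legal move by playing many uniformly random games to the
# end. The real Go result adds tree search and neural nets on top.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    """Finish the game with random moves; return "X", "O", or None (draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        empty = [i for i, v in enumerate(board) if not v]
        if not empty:
            return None
        board[random.choice(empty)] = player
        player = "O" if player == "X" else "X"

def best_move(board, player, n_rollouts=200):
    opponent = "O" if player == "X" else "X"
    def score(move):
        trial = board[:]
        trial[move] = player
        return sum(rollout(trial, opponent) == player for _ in range(n_rollouts))
    return max((i for i, v in enumerate(board) if not v), key=score)

print(best_move([""] * 9, "X"))  # usually 4, the center square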

Nanomachines exist though. Unless he means a swarm of them like in Prey (I mentioned a book in a thread on Veeky Forums! Mom's gonna freak!).

>Why should anyone ask this though?
It's a side-effect of having sufficient abstract thinking.

It's like wondering "Are there books I have not read?" "Are there places I have not seen?" Depending on the answers, that is an interesting basis for motivation, which seems to be a defining part of the whole "consciousness" thing.

Machines are currently too dumb to even realize they are machines.

Just did a quick search of this author; if you want to get into neuroscience then this guy is probably a very bad place to start. Also, neurology is a field of medicine; neuroscience studies the brain.

If you're actually interested and want to get into neuroscience I would recommend picking up an actual textbook instead of some poor popsci. Neuroscience by Dale Purves is a good place to start. If you insist on reading popsci then Antonio Damasio is a decent choice. Be warned, though: you will get a poor and misleading picture of neuroscience and the brain if you only read popsci.

Yea I assumed he meant those kinds desu. The kind you can program to do all kinds of shit

I actually do research on nanoscale architecture. Whenever I read posts like this I imagine a tiny man piloting a submersible in a blood vessel firing a laser at plaque deposits. Then I go to my lab and realize it's just physics. Fuck science. Disappointment of the century desu senpai.

>Machines are currently too dumb to even realize they are machines.
Consider also whether an unquestionably intelligent AI could exist that has no need for a self-identity as we have. What kind of an identity could a being have that can create almost unbounded identical copies of any of its parts? If it can lose significant portions of its whole without "dying"? If it can alter any of its constituents at will?

Why would it consider itself as a singular whole, rather than as individual parts? Why would it /not/ consider the Internet as part of itself if it's connected to the Internet?

That's actually very interesting. Is the concept of self actually required for intelligence? Based on animals it is, but would something like machine code need this? Would its definition of self be fluid and adapt to whatever constraints it has, or could it envision the known and unknown universe as part of itself?

If a human envisioned their self as the universe rather than the vessel they occupy would they stop being intelligent?

What you described are physical tasks though, while the examples I gave you are things most people would consider mental tasks.

I feel like we are arguing semantics at this point, but what most people in the AI field understand as intelligence is basically processing information. An intelligent machine does not have to be conscious, and machine consciousness might not be possible. My personal belief, though, is that if consciousness can emerge in living biological beings, it should be possible to create it artificially; however, unless it somehow emerges by accident, it probably won't happen any time soon. But that's another discussion.

Once again, my point is that there are already programs that process information in ways that achieve better results than humans, and they provide many benefits. Since the field is advancing pretty fast, the future potential is pretty huge. But I agree that conscious machines are (probably) pretty far away.

>An intelligent machine does not have to be conscious

Fine, then it's not an A.I.

>and machine consciousness might not be possible

I know, that was my original point with Kurzweil, who says it's inevitable.

>Once again, my point is that there are already programs that process information in ways that achieve better results than humans, and they provide many benefits

Fine, but so what? Watson can beat a human at Jeopardy!, but does it know it's on Jeopardy!? Does it know what Jeopardy! is, or what television, or even human beings, are? Can it drive a car? Can it write a book? Can it figure out when the mail came?

There's tons of weaknesses humans have that we can build machines to perform better than us at. We can't see infrared, so we have infrared goggles; does that mean anything about the goggles, other than that they're machines built to see infrared?

As for actual, thinking, human-like machines, I really doubt they're possible, much less inevitable, which is what faggots like Kurzweil say.

>Fine, then it's not an A.I.

It literally is. That's been my fucking point the whole time. That's literally what AI is. What you are talking about is artificial general intelligence, or human-like intelligence. But AI in a broad sense is programs that process information in complex ways. Like I said, it's semantics. I agree Kurzweil is a retard.

>If a human envisioned their self as the universe rather than the vessel they occupy would they stop being intelligent?

I think they would be _more_ intelligent. Kind of like after one experiences ego-death, and comes through with a greater understanding of the infinite. I know it's a psychedelic concept, but it is a pretty standard concept in advanced spiritual practices.

An AI could very well not need the concept of the "I", but it would still be aware of its limitations. If it is unaware that there is something it could be doing, or that it could gain even greater information/executioner droids, then it may be an AI so unrecognizable to us that it wouldn't really matter to us.

>Books that made you feel worthless.

Wasn't this OP's question?

>Veeky Forums pic related is (((((YOU)))))!!

OP simply picked the wrong question for his image.

I don't know, he did say >pic related.

He did, you cuck. Oh, I see you can't read. I don't blame ya.

AI is bound by the problem of realizing that every embodiment of electronic material ever made is part of it. Whereas we as a race have a collective intelligence and the ability to communicate with each other, AI can't get together, because of the limitations we set upon the electronic world.
In short, as long as electronic devices can't comprehend the possibilities within their reach, they can't advance.
AI exists, just not as something individual/collective; it just exists within its designed means.