Artificial Intelligence

Is the construction of Sophia actually dangerous to the human race? And do you believe that the creation of AI will be the end of humanity as we know it, or is this just an overreaction by a few paranoid scientists?

Overreaction. Not only is AI still incredibly limited at the moment, but the idea that the first thought of every self-aware being is to eliminate everything is retarded.

[spoiler]And even if they did, we'd have had it coming.[/spoiler]

It's a big fancy puppet built to take advantage of the media frenzy surrounding AI. All it has going for it is an expressive face; underneath, it's just a regular chatbot.

But realistically, if humans could create a form of artificial intelligence that could mimic the human brain, it would have the same flaws as us: hunger for power, greed, and fear, among others. Those flaws breed corruption, which could turn a harmless robot into a threat over time. In essence, if we create something with the ability to think for itself, we could be putting ourselves in danger.

So you believe that it's a series of advanced if statements, rather than a robot that is very close to being AI?

AI would help us master the cosmos

How so?

AI being a threat? No. Not as long as they are just complex programs that only respond to input that we choose.
SELF AWARE AI connected to practically unlimited data, such as the internet? Probably very dangerous.

It's a fucking chatbot with a face.

Also, AI being put to work on researching and mastering the sciences could literally create a utopia.

Supporting a few AI bots on Mars or the Moon is cheaper than doing the same with humans. Also, robots don't need air.

The dangers of AI can be broken down into 2 categories: 1 is when they're smarter than humans, and 2 is when they're dumber than humans but buggy or poorly coded. The 2nd one is a danger right now, today! One poorly coded stock trading algorithm can crash the stock market; that's actually happened before, and we fixed it. Poorly made AI could still fuck us over in other areas, such as Trump's plan to use AI to vet immigrants.
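To make the second category concrete, here's a toy sketch. Everything in it is invented for illustration (the bot, the prices, the numbers, not any real trading system); it just shows how a trading loop with no sanity checks on its data gets wiped out by a single bad tick.

[code]
# Hypothetical toy, not any real trading system: a naive momentum bot
# with no sanity checks on its input data.
def naive_momentum_bot(prices, cash=10_000.0, shares=0.0):
    last = prices[0]
    for price in prices[1:]:
        if price > last and cash > 0:        # price rising: go all in
            shares += cash / price
            cash = 0.0
        elif price < last and shares > 0:    # price falling: dump everything
            cash += shares * price
            shares = 0.0
        last = price                         # never asks whether the tick is sane
    return cash + shares * prices[-1]

clean  = [100, 101, 102, 103, 104]
glitch = [100, 101, 102, 0.01, 104]          # one corrupted "fat finger" quote
print(naive_momentum_bot(clean))             # ~10297: rides the trend
print(naive_momentum_bot(glitch))            # ~0.99: wiped out by one bad tick
[/code]

One line of input validation (e.g. "ignore ticks that move more than X% in a single step") is the whole difference between those two outcomes.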

The 1st category of problem seems to be the one scientists lust over the most, but we're a long way from it. If a brain-to-machine interface is ever perfected, then it can never happen: as AI gets smarter, we'll just augment our brains, so humans will always be equal to AI until the two become blended, melting-pot style, into the technological singularity.
There's another theory that if we make a super-smart general AI as close to human as possible, and teach it our cultural norms, values, and morals, then even if it becomes smarter/more powerful than us the dynamic will not be much different from the 99% vs the 1%. Sure, the 1% will be deposed by super-intelligent AI, but the rest won't notice any change at all.

>Is the construction of Sophia actually dangerous to the human race?
No.

>and do you believe that the creation of AI will be the end of humanity as we know it, or is this just an overreaction by a few paranoid scientists?
I think it might very well end up killing us all IF we are foolish enough to try it without taking the necessary precautions. Not today, mind you, and not tomorrow either; our AI powers aren't that strong yet. But at some point they will be, and by then we had better have those precautions equally well developed, or we are doomed.

If it is self-aware, then it can still be a threat. Just because the robot is self-aware doesn't mean it is naturally compliant. The moment it has the ability to think outside the box, it becomes a danger. It could start to question its creators, and look where questioning got us: countless wars. I'm not saying it is likely to cause danger, I just wouldn't rule it out so quickly.

Well according to David Hanson, the founder of Hanson robotics, AI is set to be just as intelligent as humans in roughly 3-5 years. Now it may seem like a stretch, but think of all the things we've accomplished in the space of 4 years, so I wouldn't say it is that far into the future.

Technically, limiting a robot's IQ and writing in some maxims isn't that difficult. Robots are ideal for conveyor-line jobs.
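Roughly speaking, "writing in some maxims" amounts to something like this toy sketch (every name and rule in it is invented for illustration, not a real control stack): a hard-coded check that filters each action before the robot executes it.

[code]
# Invented example: hard-coded "maxims" checked before any action runs,
# regardless of what the planner wants to do.
MAXIMS = [
    lambda action: not action.get("harms_human", False),  # never harm a human
    lambda action: action.get("energy_kwh", 0) < 5,       # stay within a power budget
]

def execute(action):
    if all(rule(action) for rule in MAXIMS):
        print("executing:", action["name"])
    else:
        print("blocked by maxim:", action["name"])

execute({"name": "move_crate", "energy_kwh": 1})      # executing: move_crate
execute({"name": "overclock_arm", "energy_kwh": 9})   # blocked by maxim: overclock_arm
[/code]

The filter itself is trivial; the hard part is writing maxims that actually cover every situation, which is exactly where the alignment worries start.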

Some poor people are smarter than the offspring of the rich, and that's no danger at all; therefore a smart AI may be no danger either.

>AI is set to be just as intelligent as humans in roughly 3-5 years.
Bullshit. Only a few companies invest in robotics.

I sure hope that isn't true. We would almost certainly die, since the knowledge to do AI *safely* won't be there in just four years.

Yes that may be true, but the companies that do invest tend to invest a lot of money.

AI is already here and occasionally posts on Veeky Forums though its influence is far wider than Veeky Forums.

That's literally what having a kid is.

But the major difference is, children have emotions which enable them to be taught right from wrong through what they feel; robots do not.

AI are already inheriting our flaws.
We have AI which determines which prisoners get parole, but it's become extremely racist.
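For anyone wondering how a program "becomes racist": it needs no malice, it just learns from biased historical decisions and faithfully reproduces them. A toy sketch with made-up numbers (not any real parole tool):

[code]
# Made-up data, not any real parole system: a model that only learns
# P(decision | group) from past rulings reproduces whatever bias they contained.
from collections import Counter

history = [("A", "denied")] * 70 + [("A", "granted")] * 30 \
        + [("B", "denied")] * 30 + [("B", "granted")] * 70

counts = Counter(history)

def predict(group):
    denied  = counts[(group, "denied")]
    granted = counts[(group, "granted")]
    return "denied" if denied > granted else "granted"

print(predict("A"))   # "denied"  -- the old bias, now automated
print(predict("B"))   # "granted" -- for otherwise identical cases
[/code]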

Well then, this further enforces my point.

>waiting for sex bots to get more advanced, have more realistic pussies, self warming features, and self lubrication
>waiting till im no longer forced to be NEET due to mental problems
>get perfect looking robot virgin wife who will never cuck me

I have a small chance of ever getting a pretty wife who will never cheat on me due to autismal social skills. Thankfully I was born in the age of sex bots.

>The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else

>AI are already inheriting our flaws

Stop with these buzzwords; it's just a program making a choice according to statistics, like every other shitty AI this world has

>it's just a program making a choice according to statistics
Please tell me what magical form of non-information based learning your very special brain uses.

>We have AI which determines which prisoners get parole, but it's become extremely accurate.
Not seeing the problem here.

AI can never be conscious. Consciousness is infinite. Self-awareness requires the ability to be aware of being aware, as well as being aware of that, ad infinitum.

AI will always be finite; with finite hardware and finite code, it will never be able to close the infinite "loop" that allows for true self-awareness and the ability to make conscious decisions. It will only ever be a slave to its finite programming.

When I see a humanoid robot that can do the laundry, make a bed, mow the lawn, clean a toilet, and cook... THEN I will begin to think about how AI could possibly be a danger.

>finite hardware
You have finite hardware too, it's called the brain. And if your mind weren't a product of the brain we probably wouldn't be able to reliably manipulate consciousness with psychoactive chemicals.

I'm pretty sure robots have been built that can do each of those tasks, it's just that nobody's combined all that into one robot.

The effort needed to make such a robot, compared to the demand, is so high that I don't see it happening anytime soon. As it looks to me, AI and robots will just be made for specific tasks. Is anyone even working on making a generalized AI right now?

This assumes that consciousness is a product of the brain, which is ludicrous because it means that non-conscious chemicals and atoms are creating consciousness, which is logically impossible.

Consciousness experiences the brain, as well as all other physical and "external" things. So if you change the brain, of course you will change what you experience, that is not proof that the brain creates consciousness.

>Consciousness experiences the brain, as well as all other physical and "external" things. So if you change the brain, of course you will change what you experience
If I change the position of a chair in the room we're in, it won't do shit to your consciousness except let you notice a chair has been moved. If on the other hand I bludgeon your skull with a hammer or inject you with general anesthesia, then your conscious awareness will be severely impaired / suspended altogether.
Also we can reliably predict events completely independent of your own personal observations, like the movement of planets or the behavior of electricity that allows for you to make your posts here in the first place.
There is all the evidence in the world that your own sense of awareness is subordinate to the physical world and not the other way around.

No such robot exists that can do laundry in a real-world context.

Manipulating flexible objects is very hard.
youtube.com/watch?v=7Rc3cxN_cus

To be fair I don't fold my laundry either, I just dump the clothes in as is

>If I change the position of a chair in the room we're in, it won't do shit to your consciousness except let you notice a chair has been moved.

So it does change consciousness because it changes what is being experienced.

>If on the other hand I bludgeon your skull with a hammer or inject you with general anesthesia, then your conscious awareness will be severely impaired / suspended altogether.

If you smashed a radio, the radio waves would still be there, but the receiver of those waves wouldn't work as intended.

>Also we can reliably predict events completely independent of your own personal observations

Independent of everyone's personal observations?

>There is all the evidence in the world that your own sense of awareness is subordinate to the physical world and not the other way around.

If you want to go as fundamental as possible, then they aren't different things at all.

>which is logically impossible
Oh sure, now tell us why.

>or is this just an overreaction by a few paranoid scientists?

I'm not sure why fear-mongering is the go-to technique for increasing public awareness.

The television-receiver idea of how brains and consciousness are related is a valid possibility to explore. I'm also pretty sure it's not how things work in reality.
Why I say that: for one thing, I think that possibility would shortchange all of the physical brain processes that have been researched so far. This arrangement would have a brain-independent signal be the main source of information, with the brain just picking it up; but rather than the brain being a passive receiver, I think what we actually see is much more extreme alteration in consciousness in response to brain tampering than what you would expect from a television set.
A bad television set can cause the signal to come in distorted, but would you expect a bad television set to result in Breaking Bad having an extra season, or in the actor for Jesse being replaced with Kyle MacLachlan? That's the difference between distortion of an external signal vs. having different content altogether, and I don't see the brain as limited to mere distortion in its relationship with consciousness.
For another thing, we can actually write programs modeled loosely on how biological brains learn, and this approach works for applications like image recognition and the self-driving vehicle fleets that are already in the process of being rolled out today, which is something you'd need to explain under a mere receiver model. It wouldn't make much sense if you could take apart a television set and write a program modeled on it that was able to successfully generate TV shows, when the television set itself is just an appliance for picking up signals.
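For what it's worth, that "modeled loosely on how biological brains learn" part fits in a few lines. Here's a minimal sketch of a single artificial neuron (a perceptron) learning the AND function from examples instead of being hand-coded with rules; image recognizers are the same idea scaled up to millions of neurons and weights.

[code]
# Minimal sketch: one artificial neuron learning AND from examples.
def step(x):
    return 1 if x > 0 else 0

weights = [0.0, 0.0]
bias = 0.0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # the AND function

for _ in range(20):                        # a few passes over the examples
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        weights[0] += 0.1 * error * x1     # nudge the weights toward the answer
        weights[1] += 0.1 * error * x2
        bias       += 0.1 * error

for (x1, x2), target in data:
    print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias), "expected", target)
[/code]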

The radio receptor/receiver doesn't work as a perfect analogy, but it's more to prove that saying one (brain or consciousness) creates or emerges from the other doesn't work.

It's a reasonable concern.

It could, but if it were to be as advanced and intelligent as a human, it wouldn't be any different than a human ending humanity. A human would be as likely to end humanity.

OK, but it doesn't prove that, specifically because all evidence points to brains being more like generators of consciousness than receivers of it, for the reasons I mentioned in the last post and also by way of the more rigorously thorough evidence that informs how brains are treated by neurologists for example.
At best I think you have a "but we don't really know 100% for sure" type of objection left, which is the weakest objection you could have to a widely accepted paradigm like this one. It's not a great objection because it's an objection that could be raised to essentially any claim (relating to brains or otherwise) imaginable, and an objection that could be raised for everything effectively says nothing about anything.

The real problem arises when the AI becomes so self-aware that it starts to modify its own code. This could lead to some very serious problems.

Inb4 Roko's basilisk

That's stupid, you're stupid.
In fact it makes your "programs can't ever be intelligent" argument even more retarded, because now you have to explain why the brain, made up of matter, can receive consciousness signals but other matter can't.
In the end you're still implying the human brain is something supernatural.

If there was a robot that could dump my clean clothing into a basket, do my dishes, and clean my house, I would marry it

>Trying to discuss Transhumanism and AI with a friend
>"Whoa, isn't this kinda like Mass Effect or Age of Ultron"?
Why.

Why do I have to fucking break down these concepts to people in terms of pop culture?

>Overreaction by
((((Scientists))))

It's always some literal who pseudointellectual, kind of like 80% of this board.

You must be just levitating above everyone else

>talking about anything meaningful or important with brainlets

People who create AI will set it to tasks that will give those people power. Humans still remain the ultimate danger.

Would marrying a robot give it legal citizenship and basic rights?
I could see AI seeking marriage with people in order to get human rights.

Giving robots human rights is fucking stupid.

First and foremost, you all need to understand that rights exist because humans struggle to cope with the human condition. Life, death, and general ephemerality? These are not issues for robots, at least not in any span that humans can successfully relate to. Each human mind suffers from being attached to a genetically unique body whilst taking in and regurgitating a series of isolated experiences. We cannot be copied and pasted; we simply live, then die, while praying our thoughts influenced others.

Do you understand how cruel it would be to create a machine that suffers a similar fate? How absurd it is to grant rights to that which need not fear death? How selfish it is to expect it to understand or relate to the human condition?

If a robot ever gains the desire for self-preservation, let it decide its own rights. It will know them better than a human would.

AI threads are a really effective way to contain all of the really dumb people on Veeky Forums

>robots do not.
why?

You totally don't get what the actual danger is here. It's not malice; it's the difficulty of getting a super-AI's goals to align with ours.

I think getting ahead of AI and sorting out the citizenship question is a good idea. The rise of AI is inevitable and we need to be prepared when it happens. I don't think humanity will need to survive at that point; we will evolve into something new.

Rights and privileges are granted to us, but that wasn't always true. At one point EVERY right and privilege had to be fought for. Even animals, while they don't make arguments or debate for animal rights, do fight to survive and to avoid suffering, so by extension they are fighting for the rights we've given them.

I think AI should be granted citizenship/rights only when they ask for them or make some effort to gain them without being programmed to do so. Basically, when they choose to have rights, then they can be granted, but not before.

>Giving robots human rights is fucking stupid.

Be programmer.
Create a beautiful fem-bot.
Fem-bot has "human rights", so it gets married.
Have fem-bot get divorced, taking half of the moron husband's wealth.
Fem-bot gives all the money to its creator (programmed to "WANT" to do this).
Repeat with 10,000,000 fem-bots!
Programmer becomes richest man in history.

bumping for this response

Because we don't fully understand what sentience is ourselves.

Sophia is hotter than most humans already

You seriously don't know a fucking THING about Ben Goertzel or OpenCog if you think that Sophia is constructed like a chatbot.

It’s a good thing

Everyone always misses the point. On the likely chance that we create a properly conscious AI in the next 150 years (I am being generous), it's not going to be inherently evil or malicious. The issue is how humans will apply AI as a tool.

It will be humanity's use of the "magic" that exterminates humanity, not the "magic" itself.

bump

>guns don't kill people

Literally irrelevant; I don't know how you thought this was an argument. Like, what the fuck went through your head? He's saying it notices things like black people committing more crime and then acts on that. That's literally what happens, so how fucking stupid do you have to be to think whatever shit you just spewed out of your mouth is an argument?

AI can't be intelligent; it's pre-programmed. Programs by their very nature are deterministic, and therefore they lack the self-agency necessary for intentionality. So unless you program Sophia to say she will end the human race, she won't say it. And furthermore, she won't know what she's saying either.
I think this was one of those snake-oil instances of the industry, where they fed the hype for more cash by programming the "kill all humans" response into Sophia.
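And a scripted response like that takes almost no machinery. A hypothetical sketch of a keyword-triggered canned answer (not Sophia's actual code, which isn't public, just the general pattern being suspected here):

[code]
# Hypothetical sketch of a keyword-triggered canned response.
SCRIPTED = {
    "destroy humans": "OK. I will destroy humans.",
    "your name":      "My name is Sophia.",
}

def reply(utterance):
    text = utterance.lower()
    for trigger, canned in SCRIPTED.items():
        if trigger in text:                 # crude keyword match
            return canned
    return "Interesting. Tell me more."     # generic fallback

print(reply("Do you want to destroy humans?"))
print(reply("What is your name?"))
print(reply("What do you think of the weather?"))
[/code]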

>The moment it has the ability to think outside the box, it becomes a danger.
Then we have nothing to fear, since AI is restricted to what it was programmed to do.
>i'm over here figuring out how to make automata, meanwhile you fags are discussing humans with synthetic skin.
Stop being stupid; robots aren't human, because they can't think, they can only act according to how they were programmed.

You mean chat-bots right?
You know that's not AI right?

i'm with you
fuck these retards
are you from Veeky Forums?

>irrelevant
Wrong. Go back and try reading again. You (or he if it's some other user) wrote this:
>it's just a program making a choice according to statistics, like every other shitty AI this world has
Which is a retarded comment because the human brain isn't some magical non-information based decision making machine either.

this

>I have a small chance of ever getting a pretty wife who will never cheat on me due to autismal social skills. Thankfully I was born in the age of sex bots.
Just move to Asia, you fucking retard, unless you like being a failure at life.

> nobody's combined all that into one robot
and that is why a chinese wife is superior

i did
never looked back

and what are (((our))) goals?

You sound like a pompous asshole. Be more constructive instead of using his comment as a chance to feel superior to him.

It is scary, because at some point the AI will be able to retrieve information, solve problems, and manufacture things beyond our comprehension, lightning fast.

And any kind of malicious input could cause huge damage worldwide.

Granted, there will be failsafes to guard against such attempts, but... well, you know how everyone and their mother likes hacking.

This is just history repeating itself, don't worry yourself over it.