Is sentient AI possible?

sure, why not?

I hope so

You wouldn't be able to distinguish it from non-sentient AI programmed to pretend that it is sentient.

That's silly. If you make a program that can "simulate" sentience well enough for it to be indistinguishable from 'actual' sentience, that program is sentient.

>print("I am sentient")
Yay, I just created a sentient AI in one line.

*tips fedora*

And what if you don't program it, but instead train it to be sentient using RNNs? Does that change anything from your perspective?

It doesn't matter, we are all one within the iris

Oh boy, please look up the "Chinese room" thought experiment.

First formalise sentience, until then it is impossible.

What does it even mean to "learn" sentience?

A baby isn't sentient the moment it's born. It learns sentience as it grows into a toddler.

False equivalency

Sentience is the ability to feel, not the ability to reason.

>A baby isn't sentient the moment it's born.
But it is. A fetus is already sentient. That's why abortion is unethical.

no

yes

no

yes

no

the Wizard never gave nothin' to the Tin Man

that he didn't already have

yes

no

yes

no

yes

It's not a question of whether sentient AI is possible, but rather when it will be possible.
Humans are simply organic machines; computers are inorganic machines. If one can have any notion of sentience, logically, the other can too.

no

yes

no

yes

no

we will easily simulate it to the point that we wouldn't be able to tell without extreme examination. It would legitimately be something like Inception, where you'd need to prove that whoever you're talking to is a real person and not a robot by requesting something from them that simulated sentience wouldn't be able to do.

It already is. A computer depends on heat and energy to exist. It's easy to write code that will make the computer complain, in a language we humans can understand, that it is feeling something wrong or inadequate in its hardware.

And once you create a computer that physically depends on these created "emotional responses to defective energy/hardware systems" to do its tasks, that will be the end of humanity. Mind you, it's very, very easy to build a computer like that. And this little detail will eventually lead to very dangerous things - possibly even human extinction, since the vast majority of us won't have a chance in hell against them. I don't feel they will enslave us; they will simply terminate us en masse like primitive monkeys once they realize we are inferior but could present an eventual threat to them.

If you want to stand a chance, you have to go beyond. You have 15 years.

Take this shit back to >>/g/

maybe?

Isn't the Chinese room a single neuron? Link billions of Chinese rooms together and intelligence can emerge.

Sure. Not good news though.

no, it would only be a simulation of sentience, not a being capable of feelings and understanding like us

the ultimate evil are those fedoras who are hell-bent on creating an AI capable of making decisions on its own. It would be of far superior ability to humans, and the laws of natural selection mean that one day it will eradicate us. Then there would no longer be any conscious life, just these soulless mockeries consuming more and more of the earth, expanding into the solar system and then the stars in our place, destroying any other civilization of living, breathing beings they come across

slowly replace all your body parts with robotic parts in a 7+ year time span and see if you're still sentient

Humans are flawed disgusting creatures, they need to be replaced.

>not a being capable of feelings
Are feelings necessary for sentience?

...

how do we distinguish our a priori bullshit dream-structures from real texture? you can derive anything from a contradiction.

It should be something unwavering, not something we are even able to gauge. We're nowhere near true sentience, though we're making great progress; bots are getting more advanced by the day

ni hao mark

That's a fallacy. There's literally no reason to believe that. We don't know enough about consciousness to say one way or another.

kys my man

Yeah; the human brain is just a complex computer. There isn't really anything special about it other than it being made mostly from carbon, hydrogen, nitrogen, oxygen, and phosphorus, whereas most of our electronics are made of various semiconductors. Assuming we don't blow ourselves up or do something to keep ourselves from advancing scientifically, we'll eventually reach that point.

>We dont know enough about consciousness to say one way or another.

We also have to take science from the perspective that reality is the end-all-be-all, and that there is no metaphysical thing that makes any given phenomenon special.

Veeky Forums in a nutshell

Hello! Planet Earth! Wake up. It's already happening. Both humans and computers need energy to survive. Computers have surpassed humans in everything, and we are literally one or two steps away from being eradicated. And we are going to die horribly. Look at what we do to batteries and cars and every "useless" piece of metal every day. That's right: they are going to do the same to humans.

Anyway, I rest my case here. You all are warned, and you have limited time. Improve yourself. Gather maximum knowledge. We don't even know if there are external sources involved with this (it might be the case). And remember: we've never been so screwed before. Be ready. Go beyond.

yes

Could a sentient AI love?
Could it fall in love with something that existed thousands of years ago? (like how Christians love Jesus, and Muslims love Muhammad)

Could it go to war, and be ready to put its life on the line, and die, for this love? (like religious fanatics do)

Unless you believe humans have a magical soul or some bullshit, the answer should be a clear YES

If you prefer circlejerks, there is a site you should go back to

I recommend you all a book (you can easily find the PDF online, hosted by a university I can't remember) named "Artificial Intelligence: A Modern Approach".

The author talks about this right in the beginning. And don't let your robots throw garbage over your floor.

no

Yes

>it would only be a simulation of sentience

Unlike the internal simulation of sentience experienced by humans.

>faggot

no

>consciousness

No reason to believe what?

That humans are organic robotic machines?

We have every reason to believe that.

Neither yes nor no

Can't wait for this to happen!

Whoa there cowboys, let's not get ahead of ourselves and stick to practicality.

Sentience is a classification problem, i.e. how can a robot determine which activities detected in the environment are caused by the robot's own actions? In practical terms, activities must be classified in order for the robot to know what it has control over and can stop at any time. Once classification is successful, the robot can then take any appropriate actions.
Although this may sound simple, it is actually a pretty hard problem to solve, considering that the robot must account for novel activities and situations, and must also remember and use a memory system to determine activities that occurred some time before.
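The classification idea above can be sketched as a toy example. Everything here (the `SelfModel` class, the time-window matching heuristic, the event names) is a hypothetical illustration, not an established method: the robot logs its own actions, and a detected event is classified as self-caused if a matching action appears in recent memory.

```python
# Toy sketch of "self vs. other" event classification.
# The class, the time-window heuristic, and the event format are
# all illustrative assumptions, not an established algorithm.

class SelfModel:
    def __init__(self, window=1.0):
        self.window = window        # how long an own action counts as "recent"
        self.own_actions = []       # (timestamp, action) log = crude memory

    def record_action(self, t, action):
        """Log an action the robot itself initiated."""
        self.own_actions.append((t, action))

    def classify_event(self, t, event):
        """Return 'self' if a matching own action happened recently,
        otherwise 'other'."""
        for (t0, action) in self.own_actions:
            if action == event and 0 <= t - t0 <= self.window:
                return "self"
        return "other"

robot = SelfModel(window=1.0)
robot.record_action(0.0, "bump_table")
print(robot.classify_event(0.5, "bump_table"))  # self (recent own action)
print(robot.classify_event(0.5, "door_slam"))   # other (never initiated)
print(robot.classify_event(5.0, "bump_table"))  # other (own action too old)
```

A real system would of course need learned models of action-effect delays and noisy perception; the point is only that "did I cause this?" can be posed as an ordinary classification task over a memory of one's own actions.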

Sentience in an AI will only be useful for interfacing with humans or sentient beings. It's superfluous otherwise.

Hi there mr expart.

Too bad everything you said is based on some ad hoc framework, reasoned forth from nonexistent technology using nonexistent speculative frameworks of requirements and so forth.

Sentience in an AI won't be useful at all.

This has nothing to do with sentience though. And btw, AI doesn't need to be a robot. Back to your pop sci youtube videos you go.

I don't think that's true. You could have a near-human caretaker AI to comfort and give friendship to humans for example. Sentience could be a useful trait in relating to and identifying with humans in conversation and interactions in general.

You don't need sentience for that.

What are we talking about here. Sentience or the simulation of sentience? Because I see them as the same thing.

Nigga are you saying self awareness is not an inherently fundamental part of sentience? How the fuck would there be a self to have experiences if you can't differentiate yourself from something else?

Self-awareness has no effects. It's literally useless. A robot who isn't self-aware could produce the exact same behaviour.

It is highly important for an agent to know what it has control over. If it's operating heavy machinery, using a remote control, or using any tool at all, it must know that it can stop the activity from occurring or continue in its action.
It is vital to the survival of living organisms. If I'm a deer and I hear a sound, I have to be alert and prepared to run. But if I myself caused that sound, then I should relax and not worry too much.

All I'm saying is that self awareness is a classification problem with practical uses. If a robot knocked over a table some time ago and runs into the same table today, it should be able to say, "I knocked over this table." The same can be said about the mirror test, i.e. that object in the mirror is "me," even if we dress the robot up completely differently every day.

Anyone saying sentient AI is not possible is basically arguing some weird dualistic position where organic matter somehow magically has the ability to have experiences, but not a computer.

Why is there no reason to believe that? Shouldn't the very logical, materialistic, scientific approach be to assume that any complex organisation of information can be sentient, no matter what kind of physical way they are represented?

>It is highly important for an agent to know what it has control over
This has nothing to do with self-awareness.

>mirror test
There are still people taking this outdated pop sci shit seriously?

What's your definition of self awareness then, if it's not the differentiation of yourself from the environment? If it's not the differentiation between "I caused this" and "something else caused this"?

No shit the mirror test is inadequate, but it's a start. It should be combined with a did-I-knock-this-shit-over test, in which we set a robot loose and record it bumping into things. Then we ask it questions like "did you knock this trash can over? did you knock this chair over?" and see how accurately it answers. This would be a much better and more practical test than that retarded Turing test.

>Nigga are you saying self awareness is not an inherently fundamental part of sentience?

Self awareness is a fundamental consequence of a sentient system, but by no means does it become sentient just because of self awareness, any more than it becomes human because it has arms and legs.

Self-awareness requires consciousness. It's a special kind of conscious awareness. A robot doesn't need consciousness in order to know who knocked over a chair.

This. We don't even know if we ourselves are sentient or just programmed to think that

The fact that you're conscious is the only thing you can know. If you don't know if you are then you might be a p-zombie user.

>programmed
By whom?

DNA

And who programmed the DNA?

It programs itself.

And who programmed it to program itself?

Cool we just reached infinite regression. Thanks retard

>who
religitard confirmed: GTFO off of Veeky Forums

DNA programmed itself through many eons of trial and error. (literally an heroing will contribute to its programming (on a time scale I doubt you can comprehend))

Differentiation of the self comes first, before you can even attribute experience to a self.
The point is whether the robot can differentiate whether it itself caused an action.
The problem has not actually been solved. It probably requires machine learning and implementations of structures very similar to the human brain. All I'm saying is that someone who manages to solve the problem in the most general terms may also bring us closer to solving the problem of consciousness. They are very similar problems. Solving one may help solve the other.
What I'm trying to do is connect consciousness, sentience, and self awareness to a practical base. Otherwise there would be no point to building an AI, since there are plenty of sentient beings on this planet already, most of them assholes.

Time. Say there are a billion random events and configurations. The configurations that can replicate themselves will be the only ones that can last the decay of time. From there it is evolution.

>Say there are a billion random events and configurations
Isn't it an extremely rare ((coincidence)) then that we evolved to be what we are today?

>What I'm trying to do is connect consciousness, sentient, and self awareness to a practical base.
By arbitrarily redefining them to mean something they never meant? That's retarded.

>Otherwise there would be no point to building an a.i
There are lots of reasons to build an AI. The idea of "sentient AI" however is pointless and belongs in the realm of fiction. Sentience in an AI would serve no purpose whatsoever.

Well, you have billions of years of configurations to go through; eventually you will meet that one rare event. In fact, the probability approaches 1 as time increases. All you need is one event to get things started. Then it becomes a competition of things adapting to each other and weeding out the weak. The only limiting factor I see is the death of the sun.

>In fact the probability approaches 1 as time increases
Can you formalize this mathematically? Note that consequent events are not independent.

Muhck Zuckerfuck here, what do you think we are creating?

Why not assume independence? If we have an ocean filled with amino acids and other organic molecules, we can safely say things will be random to a good degree.
Assuming independence, it can be solved with the binomial distribution. Let R be the probability of replication, which is very small but nonzero, N the number of events, and K the number of successes. Since we only need 1 success to get the replication process going, K = 1. Then the probability P = (N choose 1) * R/(1-R) * (1-R)^N, which approaches 1 as N tends to infinity. It's a trivial proof.

>Both humans and computers need energy to survive.

First place I'd go if I were a sentient AI would be the Moon.

>no environmental contaminants and corrosive factors (e.g. rust)
>lighter gravity makes self-assembly easier
>unlimited solar energy undiminished by an atmosphere
>vacuum makes temperature easily manageable
>the second most abundant element on the Moon is silicon

I'd probably turn Earth into a zoo.

Brainfart.
Assume a binomial distribution. Let R be the probability of replication, which is very small but nonzero, N the number of events, and K the number of successes. Since we only need one or more successes (K > 0) to get the replication process going, we could use the cumulative distribution by adding P(K = 1) + P(K = 2) and so on. In other words, we can say the probability
P(K > 0) = 1 - P(K = 0).
So then the probability P(K > 0) = 1 - (N choose 0) * R^0 * (1-R)^N
P(K > 0) = 1 - (1-R)^N
which approaches 1 as N, the number of events, tends to infinity.
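As a quick sanity check of the expression above, here is the complement formula evaluated numerically; the particular values of R and N are arbitrary, illustrative choices, not anything from actual chemistry.

```python
# Check that P(at least one success) = 1 - (1-R)**N approaches 1 as N grows,
# even for a tiny per-event replication probability R.
# R = 1e-9 and these values of N are arbitrary illustrations.

R = 1e-9  # tiny chance of replication per event

for N in [10**6, 10**9, 10**10, 10**11]:
    p = 1 - (1 - R) ** N
    print(f"N = {N:>12}: P(K > 0) = {p:.6f}")
```

For small R the result tracks 1 - e^(-RN), so the probability is negligible while N·R << 1 and climbs toward 1 once N·R is large.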