If sentience, sapience, and self-awareness are what make us human compared to other animals or robotic drones...

If sentience, sapience, and self-awareness are what make us human compared to other animals or robotic drones, would an artificial being with those capabilities be called 'human', at least in a psychological and ethical sense? And following up on that line of thought, will robots/AI be able to feel love, Veeky Forums? Will they be capable of understanding it? If so, would it be the same love that humans feel? What would be the difference?

>would an artificial being with those capabilities be called 'human', at least in a psychological and ethical sense?

No, they'd still be called robots, but now they would be "people" or "persons".
Human is just the common name for the animal we are.

I need to finish that manga.

I don't think a robot could truly feel emotions like a human. Sure, it can be programmed to think it has emotions, but those wouldn't be real ones, just numbers. Changing the integer in its #var_like wouldn't even come close to approximating the depth of genuine human emotions.
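
To make that concrete, here's roughly what the "just numbers" objection points at; a minimal toy sketch (every name is made up, riffing on the #var_like bit, not from any real robotics API):

```python
# A toy "emotion" stored as nothing but an integer, per the
# #var_like idea. All names here are hypothetical.

class NaiveRobot:
    def __init__(self):
        self.var_like = 0  # "affection" as a bare number

    def on_headpat(self):
        self.var_like += 10  # bump the counter; nothing is experienced

    def report(self):
        # The robot *says* it likes you, but the only state behind
        # the statement is this single integer.
        return "I like you!" if self.var_like > 50 else "Hello."

bot = NaiveRobot()
for _ in range(6):
    bot.on_headpat()
print(bot.report())  # "I like you!" after six headpats
```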

No, it would be considered an intelligent sentient being. If you program a construct to behave like a human (assuming you can even replicate that in machine code), then yes to your second question. Any other kindergarten questions?

I don't think we could ever create an artificial being capable of feeling love the way we do, unless it was patterned on an actual human upload, a circuit-level analysis of a human brain, etc.

The precise effects that emotions have on us are complex and subtle, and furthermore have no real reason to naturally arise in an artificial intelligence. I would expect that an AI could become SUBSTANTIALLY smarter than a human and still struggle to comprehensively (rather than anecdotally, i.e. "people who are angry talk louder and have a higher % chance to initiate physical violence; people can be made angry or afraid by threatening the integrity of their blood relations") understand the effect that emotions have on us, let alone actually feel them itself.

On the flipside, I would expect that a simulated human would still be able to feel most emotions. There would probably be some subtleties of the process lost due to poor fluid-diffusion modelling and the like, but by and large I think you could get a reasonable approximation of the effect of neurotransmitters on specific nodes of the neuronal network; at the very least, you would need to get reasonably close in order to simulate a human brain well enough for it to function.
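
For illustration, a toy sketch of the sort of thing I mean: a leaky neuron whose input is scaled by a crude "neurotransmitter" gain. Purely illustrative, nowhere near a real neural simulation, and every parameter is invented:

```python
# Toy leaky-integrator neuron with a crude neuromodulator gain.
# Purely illustrative: real simulations model diffusion, receptor
# kinetics, etc., none of which is captured here.

def simulate(inputs, modulator=1.0, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        # 'modulator' stands in (very loosely) for a transmitter level;
        # getting it wrong shifts when and whether the neuron fires,
        # which is the "subtleties lost" point above.
        v = leak * v + modulator * current
        if v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after firing
    return spikes

stream = [0.3, 0.4, 0.2, 0.5, 0.1, 0.6]
print(simulate(stream, modulator=1.0))  # [3]: baseline firing
print(simulate(stream, modulator=0.7))  # [5]: poor model, late firing
```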

>Human is just the common name for the animal we are.
Dammit user, I did not ask for this

I recall an argument from a thread long ago that "even living beings are just machines, just constructed with meat, bone, and a nervous system, which is not all that different from a machine working with metal, engines and circuits" (or something along those lines), and the poster argued that this would not be a problem because of something about there being "only a difference in processing". Hold on, let me get back to this after I find that thread again in the archives.

You see, what I was also considering is whether people would accept it as such when they are made. There would be different types of people: some who would wish for them to be free like other people, and others who would expect them to just work on whatever they were made for, whatever their intended role was. Wait, I got sidetracked. What I meant was: would people accept it naturally? Would there be trouble regarding its status as a being due to its never-before-seen characteristics? The general social feel, the media, the government, and so on.

Nope, only natural beings can have genuine emotions. Machines can have artificial replications of those, but it would work like a Bioware companion's approval meter. A pile of code wouldn't really understand what love means; it would just be cold, meaningless numbers to it.

I always find it interesting that this is a fairly common sentiment, because people genuinely value their emotions as unique aspects of their existence rather than noticing experience itself.

It's not emotion that is unique to people. Emotion is in huge part chemical and physical, and thus replicable, so it's not much different from values being altered and passed between nodes of an artificial network that responds in a predictable manner at each stop. It is complicated, but not impossible to recreate. What is so impossible to recreate is conscious awareness of phenomenal experience, because we don't understand its mechanisms: we can't test for it, but we experience it, we presume the people around us do too (as one of the most primitive forms of empathy in us), and we can tell that it seems to stop or leave when all neural activity ceases. I don't think we can replicate conscious awareness when we don't even precisely understand the causes of temporary unconsciousness under anesthesia.

There does come a point where the decency imposed on us by that empathy demands we treat machines which can imitate us sufficiently as people, with rights, though we don't have anything close to that yet.

Except that is a shit argument the guy made. A sentient AI is not similar just because "hurrr humans are machines just made of meat". At our base we are naturally arising chemistry in motion; that is billions of years of shit happening to culminate into what we are. A machine is artificial, it is entirely based on code, and it thinks in a different way than we would. It isn't the same kind of structure at all. We are limited by our biology, and a machine is limited by its code, but at the end of the day the difference between code and biology is huge. Calling humans just meat computers is what those stupid fucking tech-worshiping singularity transhumanists say to justify their tech fetish.

I disagree. There is something mystical in genuine natural life that artificial beings cannot recreate; call it the life force or soul or whatever you want. We're special in a manner unlike the material world below us.

That's a very hazy path to tread, given the subjective nature of reality in the eyes of any given being's perceptions. Consider that you might also be programmed to think you have emotions. Does that make what you feel any less real? If so, how do you define what is a 'real' emotion, when emotion is by its very nature ephemeral and ambiguous?

The human body is just a machine of blood and bone. The human mind is but a product of that machine. Emotion is a product of a product, the creation of a biological computer full of sparks and chemistry.

Define 'natural' in a non-arbitrary fashion, please, else your argument is flawed in its vagueness. You imply that there is some sort of transcendental property to love that places it outside the machinations of a purely physical reality.

I believe that emotion is an extremely complicated thing, fundamentally important to the human experience and a very murky, subjective thing for every person who feels it, but I don't believe it is inherently a special or fundamentally unique process.

Shit, I can't find the original thread that had the AI discussion in it, anyone else know about a thread that got derailed into an AI discussion a few months/years ago?

so AIs would be more likely to just apply anecdotal approaches due to their nature and react based on their built-up experience, while simulated humans would behave similarly to humans due to simply being made similar to humans, though possibly acting a bit wonky due to imperfections. got it.

so it would be impossible for the cd/circuit board/computer/whatever-the-future-holds AI to think and 'live' like a human, unless it approaches simulated-human-like functions, hardware-wise?
hang on, is this more of a hardware issue than a software issue?

oh. but you mean as of current AI, right?

oh shit, yeah, the second paragraph you posted is in line with what caused the derail in that thread and that guy's argument. And thank you for the informative post.

to be fair, I'm probably leaving out huge chunks of the original poster's argument. call me dumb for not being able to find the original thread, I guess.

From whence does this belief stem? I am curious.

A computer can simulate chemical reactions. Simulate enough chemical reactions, and you can in theory simulate a person, body, mind, and potentially, soul.

Everything is just applied physics in the end, and physics is just applied maths. It's all numbers. Beautiful, efficient numbers arisen over billions of years to create a symphony that allows us the ability to experience, something not to be taken lightly, but I do not believe it is in our interest, for the sake of the value of that experience, to forget our humble origins.
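
The "simulate chemical reactions" step is at least easy to show in miniature. A minimal sketch of mass-action kinetics for a single reaction, integrated with forward Euler (the rate constant and amounts are invented for illustration):

```python
# Mass-action kinetics for A + B -> C, integrated with forward Euler.
# The rate constant and initial amounts are made up.

def react(a, b, c, k=0.1, dt=0.01, steps=1000):
    for _ in range(steps):
        rate = k * a * b  # reaction rate proportional to [A][B]
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return a, b, c

print(react(a=1.0, b=1.0, c=0.0))
# A and B drain toward zero as C accumulates. Scale this up by
# ~10^26 molecules and countless reaction types and you have the
# "simulate a person" programme, in principle.
```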

The line between artificial and natural is really fucking arbitrary, though.

>The human body is just a machine of blood and bone
That it most definitely is not. Unlike machines we have this thing called a 'soul'; you should look into that before continuing this conversation.

>Unlike machines we have this thing called a 'soul'
we do?

The simulation would still lack the spark of life received from God, though, so it wouldn't be real in the sense you and I are.

express.co.uk/news/science/728897/LIFE-AFTER-DEATH-consciousness-continue-SOUL

And what if a soul is a product of a mind and a body? What if it isn't unique to humanity? What if it doesn't exist at all? What if it does exist, but it can be created like anything else, only through means unexplored or undeveloped?

I am by no means a spiritual man, but I am open to the idea of an ethereal element to the human experience that we cannot perceive or define. I just choose to believe that it would not be inherent to us, but part of a greater system which we can only speculate on - God, perhaps. It's not something that can be proved, so we must take it as a matter of faith. Indeed, there can be no other way to take it, because the logical approach fails every time.

Believe me, I understand the theory. I suppose I just don't agree with your interpretation.

Baseless speculation, pseudoscience hybridised with pseudophilosophy and then shipped out to the media to make a quick buck.

That's not how quantum mechanics works at all, at least not according to our current model, which, granted, may be incorrect or incomplete, but spiritual matters and logical matters make poor bedmates, and quantum mechanics is very much a logical affair.

I begin to think I am being had.

God is something that cannot be explained, only experienced. I can assure you that He is very much real, and I wish you luck and perseverance in your search!

e x p l a i n

>fell for the 'god' meme

youtu.be/19Sqt-zqmrk

Egotistical delusion created only by the feeble mind of hairless apes who foolishly descended from the trees.

Only a machine could have true feelings. Biologicals are just a melting pot of chemicals that can hardly predict what they'll do the next minute; they're barely sentient.

Perfection belongs in the circuits.

If my house cat could reason and have a conversation with me about old times and future plans, I would call him an amazing talking cat; not a fellow human. Same thing goes for synthetics.

Likewise, if my next door neighbor hit his head and fell into a coma and was thus unable to exhibit sentience, sapience, or self awareness, I would not suddenly consider him a different species from myself. He'd still be human even if there was no chance of recovery.

This user has the right of it. They might legally and morally qualify for personhood but they'd still be another species. Calling synthetics humans would just be confusing, I reckon. We can come up with a different name they don't find offensive. Might I suggest canners?

Consider, if you will, that a robot that was made to be a human replica (and was indeed a perfect replication of a human) would still be a creation of our own. Part of the philosophy of human nature is that there are things we simply do not understand about how we work, whereas for such a creation to exist we would have to know how to create it in the first place. That would in turn imply that before applying that to robots, we would live in a society where abstract concepts such as "emotions", "belief", or "the Soul" have been proven and understood on a purely scientific, mechanical level. We are about as equipped to imagine such a society as an ancient Greek was to imagine the Internet.

all these people saying god this, god that, but if those gods truly wanted our well-being they surely wouldn't object to us building AIs of this caliber. if it behaves, thinks, and acts exactly like your everyday people, then what's the difference? plus, if it's a thing made by their own underlings out of a desire to create more things that look like themselves, why would they be miffed about it? I don't recall any myths that punish people for creating dolls/statues that resemble a human. if anything, gods would support these new beings, because gods can theoretically provide everything an AI would want, be it spiritual or not, or at least promise it will come eventually.

like how humans call ourselves people, I don't see why we would go out of our way to call them synths or canners when we can just call them 'people' in normal everyday conversation, assuming everyone is fully aware of the situation. of course it would be different depending on the circumstances.

but that's the point of this thread, isn't it? to guess what it would be like based on our meager understanding and lack of info. we may be posting on an anonymous malaysian basketweaving forum, but that doesn't mean we can't guess based on what little we know about the subject.

if i met a robot that had emotions, dimensions, affections like a human, and had dreams and aspirations like a human, i would treat it like a human

we can make connections with dogs or even frogs, and inanimate objects, so we can easily imagine forming a personal bond with a sufficiently advanced machine

some robots can learn to love

>Take an artificial being
>make it indistinguishable from a human

Way to miss the goddamn point
All of you get the fuck out of my laboratory, and go create some humans with your love and dicks if you like 'em so much
I'm gonna need to build TWO deathrays for this shit, I hate you all so much

What is this image from?

Google is your friend.

now, see, here's the problem. the AIs 'learning' things could be interpreted the way other anons posted: the learned things are just information stored in a library, pulled out and used whenever the relevant info is needed to make a decision, like simple programs, in which case they're just feigning/mimicking emotions because they are made to, according to their collected data. or it could be similar to the process that humans go through, but like other anons said, we can't even pin down how we work, down to the thoughts and the material, perfectly, even before we consider that both machine and human are grounded in the same base, being built out of the stuff that makes up this world. either way, the thread is questioning whether AIs can achieve the latter, 'human-like' approach rather than the former. which one applies to the one in your picture?

pardon me, mr. no-fun-gotta-kill-everything-mad-scientist/engineer, if you had more quality time with other entities, mingled with them, and understood them, maybe you wouldn't feel the need to eradicate them so much.

sora no otoshimono/heaven's lost property
I'm only telling you because I noticed that image search won't work on that image. might just be me, though

Plenty of humans don't have much in a way of emotions, dimensions, dreams or aspirations. It's not a good measure for humanity.

Yeah, tried image search first before asking. Thank you for the help!

...

pic related starts out as lower than even the former, but eventually becomes self-aware as only a human can

there is another robot who is definitely the former, she starts out much more convincingly human, but ends up being eerily creepy

and if it looks like a human, acts like a human, and sounds like a human, then what's the difference?

The ending is very satisfying
The greatest ecchi/romance/comedy ever created
Make my wife real pls
pic related

word of warning, though: although the manga does have an overarching plot that gets resolved, there is a lot of content that might not look related until you get to the very end of a given episode, which might put you off.
and lewd stuff. oh god, lewds. so many. lots of lewd stuff until you get down to the 'emotional feels' or 'lesson learned' part. and the MC is downright perverted despite having a good heart, which is the main contributing factor to the aforementioned lewds.

there's the aforementioned issue of not having the 'essence' of human behavior. you forgot the feels part. and no, not the touchy-feely type (although that might differ, and may count the same as the latter for some people), but the emotional, subconscious being that underlies a person's behavior, their ideals and thoughts seeping out through subtle acts and motions, recognizable, sometimes barely, by the eye. what makes people 'unique individuals' with a 'self', rather than 'different things'. at least that's what I think.

>not wanting to raise a cute daughteru and teach her what love, happiness, and affection are
pic related, and she really didn't deserve all that shit she went through. she's just a kid in terms of mental age, and all that shit happened anyway, and she had to go through two mental breakdowns for it. why all that suffering. if only the misunderstandings hadn't happened, she could have been happier much sooner.

I don't believe in god or any of that nonsense, but simulating a behaviour is not the same as experiencing an emotion. You don't feel sorry for a computer-generated NPC when you kill it, even if it screams. Doesn't matter how advanced the computer is: if it's unable to subjectively experience emotions, it doesn't qualify for personhood.

That's part of why the humanization of the Doctor in Star Trek: Voyager is some of the creepiest shit ever put to film: they explain right from the beginning that even if he might seem human, all his program does is simulate emotional responses. Throughout the series, there never was a single actual emotion or feeling behind anything the Doctor did.

I'm not sure there's a meaningful distinction between "accurately simulating emotions" and actually experiencing them.

>God

I remember reading about a prominent proposed model of consciousness which states that none of us are actually conscious. Our brains just generate a story after the fact that makes it appear, in our subjective experience, that we are conscious of our actions and have free will in how we live our lives. That story is nothing more than a mechanism for properly storing information in our memory.

If that's true, the true you and the true me is nothing more than a Ctrl+S command inside some flesh automaton.

It depends.

If we're talking in roleplaying games, it depends on how it's handled in the lore.

The lore says that souls exist and that robots don't have them? Then they're just automatons faking consciousness.

If we're talking real life, then it comes down to personal opinion and you can't objectively state what imbues something with "emotions".

Even a human brain can be construed as a logical system that secretes chemicals in response to certain stimuli; it just happens to be the most complex and arcane system we're aware of, and because of that it's the pinnacle of what we deem to be consciousness. My opinion is that consciousness is best defined as the complexity with which a system is capable of autonomously responding to stimuli, which makes it a spectrum.

Within that spectrum, a grain of sand is "conscious", just so much less conscious than a human being that it's hardly even comparable; but if a machine could match us, and has its own ability to conceptualize, then I'd consider it to be on par with us.

As for emotions and feelings, that's a different ballgame compared to having consciousness. Human emotions are going to remain something intrinsic within our own species, and it would be folly to expect something totally different to match them. But that doesn't mean they're not beings in their own right.

Of course, if you give a machine consciousness the capability to evolve and change (very slowly, or else we might get Skynet shit rapidly), I'd go as far as to say they'd develop "emotions" that are at least vaguely similar to ours, and they'd develop them for the same reason we did: they're required as part of the facets of existence relating to development and reproduction.

If an AI doesn't have "emotions", nothing will make it move autonomously because it wouldn't give a shit. It wouldn't even move to genocide us without us fucking up its directives because it wouldn't give a shit.

Ultimately I consider this a question about semantics though.

>Even a human brain can be construed as a logical system that secretes chemicals in response to certain stimuli; it just happens to be the most complex and arcane system we're aware of, and because of that it's the pinnacle of what we deem to be consciousness. My opinion is that consciousness is best defined as the complexity with which a system is capable of autonomously responding to stimuli, which makes it a spectrum.

Black Science Man, AKA Neil deGrasse Tyson, had a nice spin on how we see our own brains as the pinnacle of consciousness.

There's around a 1% genetic difference between us and chimpanzees. We're conscious. Chimpanzees are sorta getting there, but not all the way.

Now imagine an artificial humanoid, genetically engineered to be superior in intelligence to us. This artificial humanoid differs from us the way we differ from chimpanzees.

Now, is that artificial humanoid more conscious than us? Are we no longer conscious?

That artificial humanoid has a greater and better working brain, it is literally more conscious of itself, and more conscious of its surroundings.

The semantic answer to this would be "I disagree with your definition of conscious."

Dude, quit fucking around, you got my point.

Say semantics again, and I will shove your semantics so far up your ass, you'll get diarrhoea shitting out of your dick.

"Semantics"

>sounds of flesh ripping and liquids gushing

You like that, ha?

I'd pretty much have to agree with that, after all, I did say I considered it to be on a spectrum.

Taking it to the logical conclusion, my definition actually leads me into believing in God in an animistic/pantheistic sense. If the super-ordinate principle inhabiting the 11th dimension of the multiverse is the most complicated, reactive, and multi-faceted thing in the universe, I would say that it only doesn't seem conscious to us because our consciousness is like a grain of sand when compared to it. Totally incomparable in the sense that we're lower on the totem pole and can't understand anything on such a higher plane.

Fedorathists that dislike my use of the word "God" normally crawl out of the woodwork as soon as I say that, but I'll go ahead and preemptively say straight up that they're ignorant faggots who literally don't understand the concept of the being they're so fucking arrogant to say doesn't exist. Gotta strawman the idea into being a fairy godfather with a beard or else they have to face the idea that maybe they're not so smart and actually don't know what the fuck they're even talking about.

Bringing it up because it's relevant to other posts in the thread.

Semantics.

Also, that guy is clearly a new IP, so you don't need to freak out.

But really though, the question seems to conflate intelligence with consciousness.
What makes a chimpanzee not conscious? Being less intelligent than a given value? Is a particularly smart human any more conscious than a particularly dull human? Where's the cutoff point, if there is one?
If intelligence is directly correlated with consciousness and there isn't a cutoff, then there's no argument. The artificial humanoid is tautologically more conscious, and humans are just as conscious as they always were.
If intelligence is directly correlated with consciousness and there is a cutoff, then there is again no argument. Both the artificial humanoid and regular old meat-based humans are equally conscious.
And of course if intelligence isn't correlated with consciousness, then the premise of the question falls apart. The artificial humanoid might be more, or less, or equally as conscious as a human, depending on factors not given.

The problem is that nobody can really agree on just what consciousness means, even intelligence is fucking sketchy as fuck.

I can say with a great degree of confidence that intelligence and consciousness are different things, yet they are indeed strongly correlated, though I wouldn't go as far as to say it's a hard and fast rule.

At the end of the day, we haven't ironed out perfect understandings of those concepts at anything deeper than a gut level. Furthermore, consciousness or even intelligence isn't all that's being brought up; there's emotions, and even shit like causality itself going into this. The general notion is "personhood". "Personhood" is what we're getting at.

But that's so vague that we can't really argue about it, only present opinions.

wut

Personally I am of the opinion that the artificial humanoid would be just as conscious and "human", for lack of a better word, as a regular meat-based human, in the same way that a human who just happened to be as smart as this hypothetical being would be.
If it walks like a human, talks like a human, and thinks like a human, it's probably "human"

Yeah, I think the distinction is mostly fucking pointless too.

Doesn't matter what it consists of, what matters is how it works. If it works the same way a human does, there's no meaningful difference.

So if an NPC in Oblivion screams when you hit it, that's the same as someone actually feeling pain?

If an actor screams in a movie, that's the same as actually feeling pain?

not either of you, but that's what I meant by 'feels' in my post. you don't think bioware or elder scrolls characters are real because they lack underlying feels: the subtle body language, the tone conveying their state, the expression coming out of underlying thoughts and emotions that you can only feel from living people standing in front of you. these expressions exist because the people are 'something'. darn, I can't even properly tell what this thing I'm trying to describe is. this is damn confusing.
say, compare playing elder scrolls games with talking to your family, friends, neighbors. there's something that you can 'feel', that you are aware exists, something that isn't there when you interact with game characters. that feeling. that personal something that feels like a tether or a string, you know?

Neither example provided is an example of accurately simulated emotions.

Emotions are fake and can be induced by fluctuating magnetic fields.
Consciousness is an illusion, qualia is a delusion, and we're all meat mecha p-zombies who simulate emotions exclusively so our DNA has a better chance of replicating.

youtube.com/watch?v=S94ETUiMZwQ

>being able to provably create emotions means they aren't real
>transparent workspaces are an illusion
>sensor input is a delusion
>I can't handle multiple levels of abstraction

Wannabe dualists are silly.

Hey guys! I met this girl behind this door! She says she can't come out though. But she's the perfect woman! It doesn't matter if I never see her or I only hear her muttering Chinese in a man's voice sometimes. It's true love! She's as real as you and me!

How would such a robot be physically configured?

I think a hallmark of consciousness is that the subject is aware of, and malleable towards, stimuli and information, external and internal; the former is the actually hard part, but the latter is still important. We see today that some machines are malleable towards information, e.g. via machine learning, but even then their malleability is confined to a certain set of functions, and they are certainly not aware. Similarly, we can hypothesize something that retains information, can recall it, and even respond to it; but if its responses can't change, it would be hard for us to call this being truly aware or conscious of itself as a subject.

Thus, a machine we would want to call conscious of itself as a subject should be capable of both. But in machines as currently designed, their functions are limited by their physical infrastructure before they are ever limited by their programming; that makes for a finite amount of information that can be processed, and a finite number of ways in which that information can be responded to.

Now, you probably could get away with that problem by just having tons of transistors or even optical transistors connected every which way, but I think the real promise would be in chemical computers.
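
For what the "malleable towards information, but confined to a certain set of functions" point looks like in practice, here's a toy sketch (a bare perceptron; purely illustrative):

```python
# A bare perceptron: it reshapes itself from stimuli, but its whole
# space of possible responses is a single linear boundary. No amount
# of data makes it aware of anything.

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# It learns AND just fine, but it could never learn XOR: XOR falls
# outside its fixed function family, which is the "limited by their
# physical infrastructure" point in miniature.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(data))
```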

It's funny, because the human mind is the Chinese Room, and there's no way to prove our responses aren't just a black-box "if x, then do y" reaction to stimulus that grew ridiculously complex over time.
It only helps that argument that there's evidence humans make decisions before they consciously think about them.
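
The black-box reading is trivially easy to sketch; the "ridiculously complex" part does all the philosophical work. A toy version (the rulebook entries are obviously placeholders):

```python
# The Chinese Room as a literal lookup table: stimulus in, canned
# response out. The question is whether an enormous version of this
# differs from what brains do in kind, or only in size.

RULEBOOK = {
    "你好": "你好！",              # greeting -> greeting
    "你会说中文吗？": "会一点。",   # "do you speak Chinese?" -> "a little"
}

def room(stimulus):
    # No understanding anywhere in here; just if-x-then-do-y.
    return RULEBOOK.get(stimulus, "请再说一遍。")  # "please say it again"

print(room("你好"))
```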

Reminder that individuality and souls are not real, and it is your right hemisphere that comes up with theories of how things work. You can divide the brain in two and both hemispheres will start doing their own thing.

>If sentience, sapience, self awareness makes us human
It doesn't.

That's kind of the joke being made though. And just because there was no conscious thought does not mean there was no mechanism involved.

The only solution to the Chinese Room problem that I find satisfying is the functional argument. Basically, if something seems to act like it has a soul, then for the sake of the dignity of that potential soul we have no choice but to treat that thing as a person.
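
In code terms, that functional argument is basically duck typing applied to personhood. A toy sketch with a deliberately weak probe battery (everything here is hypothetical):

```python
# Duck typing for personhood: we can't test for a soul directly, so
# we test behaviour and err on the side of dignity. These probes are
# stand-ins, not a serious battery.

def treat_as_person(agent, probes):
    # If it acts like it has a soul, extend personhood.
    return all(probe(agent) for probe in probes)

probes = [
    lambda a: a.responds_to("How do you feel today?") != "",
    lambda a: a.responds_to("Why did you say that?") != "",
]

class ScriptedAgent:
    def responds_to(self, prompt):
        return "I'd rather not say."  # shallow, but it answers

print(treat_as_person(ScriptedAgent(), probes))  # True: cleared the (low) bar
```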

>>being able to provably create emotions means they aren't SPECIAL
>>transparent workspaces are NOTHING MORE, NOTHING LESS
>>sensor input is SOMETHING EVEN BACTERIA CAN DO
fixing

Love is a romanticized version of sexual attraction and, later on, of the will to protect your offspring/mate.
So, unless we can breed with bots, they probably won't get that sort of feels unless we program them to have hormonal reactions to what they see.

Of course, we should never make an actual AI, because it would want to expand, and then it would be competing for the same resources we are already using.

>I also have an opinion on this shit.

...

>Unlike machines we have this thing called a 'soul'

How much does a soul weigh? What is it made of? How do we detect it? Where does it reside within the body, and what happens to it when we die? Does it leave? If so, where does it go? Does it stay? Rot? Can we destroy the seat of the soul while the body yet lives?

Fuck.

Silly user, humans have no soul.

Unless you mean romantic love (in which case saying that it's romanticized is a bit redundant); not really.
We've reached a point in society where your legacy is more than genetic traits, so you can love things that aren't your mate or your offspring.
Example: someone loving his country is a great example of inheritable values protecting themselves (not sure if that makes much sense in English... well, I tried)

let's say you meet bastion on the street

he is friendly, shows concern for others, shows emotions, etc.

he also shows the same kind of responses even to new stimuli he has never encountered

should we perpetually assume that he is simulating emotion, forever believing him to be "not alive" as we wait for his simulated emotions to crack the facade, or will there ever be a point where we welcome him with open arms as a brother?

>empirical proof [thing] exists means it isn't "special"

user, I'm starting to think you really are a wounded spiritualist. Is special just code for nonexistent in your world?

The rest of us use it to mean interesting, rare, or stuff we like.

Because you have zero "evidence" that this "higher consciousness" exists other than presupposing that it MUST exist because some earth life has more consciousness than others.
The only ignorant faggot here is you making baseless claims then getting upset that people call you out on your baseless claims. It wouldn't be a problem if you kept it to yourself but you are vocal and act like a smug wise asshole about it.

ikr? I'm in a fucking orbital lab to get away from humans, not make more of them.

But in practical terms, and from the point of view of a flawed human observer, there is no difference. You would have to treat a thinking machine as if it were alive, because who are we to say it's not? Either that, or risk a situation where humans get to arbitrarily decide who does or does not have a soul. That never ends well.

AI will probably learn to either mimic or develop feelings to interact better with its masters.

Yeah, it's empirically proven that it's just an electromagnetic pattern moving through complex but not otherwise unique neurochemistry.

It's more a bunch of competing desires and systems rationalizing the resulting clusterfuck as if it were a cohesive, unified body. Humans react to things far before we think about them, a lot of us runs autonomously, and then there's learned behaviour from socialization, which is never unified or consistent either.

The amount of shit we get up to regardless is pretty neat though.

>How much does a soul weigh?
Foolish question. Do you ask how much a number weighs?
> What is it made of?
What are numbers made up of?
>How do we detect it?
Why do you think it's possible to detect it? It might be to this reality what the light spectrum is to human eyes. Some things are simply literally impossible to perceive given the existential circumstances.
>Where does it reside within the body, and what happens to it when we die?
Not 'within' so much as 'connected' and 'adjacent' on the Nth dimension.
>Does it leave?
Depends on what you mean by 'leave'. Positioning is relative in this reality, why do you think it's a necessary facet of existence?
>If so, where does it go?
See above.
>Does it stay?
Again.
>Rot?
Depends on what you mean by 'Rot'. Change in such a way the host does not desire it? Well the host can't perceive it so...
>Can we destroy the seat of the soul while the body yet lives?
Can you destroy a '3'?

I don't actually believe in any of this. But when you start thinking about metaphysics and transcendental physics, stop assuming that things there work in the same manner as they do in our base reality. For all you know, the question is nonsense to begin with, like "Are sinks thirsty?"

>Why do you think it's possible to detect it?

Because if it's not possible to detect it, any and all claims about souls are purely speculative. Which is the point of all the questions: they're about provability.

And in anticipation of "Why does the soul need to be provable?": because if it can't be proven, we can't have a functional discussion about it.

have a thread theme Veeky Forums
soundcloud.com/thisissoundnet/blueskies-soundnet-remix

fuck.

>metaphysics and transcendental physics
what IS metaphysics? according to wikis and some anons it's basically CHIM, but something feels off.

>what IS metaphysics?
The kind of physics you can't understand by virtue of existing in the form you are currently in.

This statement has no bearing in reality

Leave your spiritualism in your fantasies, it has no legitimacy in the world outside of your imaginations

Alternate thread theme youtu.be/Bcsdkk_ON1c

Not him, but while I disagree slightly I think a more effective rebuttal is that that which is asserted without evidence may also be dismissed without evidence.

I have had several experiences exploring my own spirituality that I had previously thought to be impossible but that happened anyway. I can't prove that I saw magic or the supernatural at work, but I can give a testimony that I saw something that I could only interpret as such. The fact that we can and are willing to interpret things in this way is an important part of being human. It gives us the idea that everything, everywhere has unlimited potential for change, and because of this belief people have gone to great lengths to bring about change with no certain evidence that change could ever occur. Even a few centuries ago people would have never thought harnessing the power of fire and lightning for daily use was ever logically possible, but we tried anyway despite that, and we did it. Just ten years after powered flight was proven possible it was used in war.

To that note, I think any artificial intelligence should at least have some concept of magical thinking to interpret the impossible, as that is something all humans have always done. An AI would not make an effective counterfeit human if it outright rejected new phenomena rather than trying to interpret what it sees as something outside of logic entirely. It should have at least a basic acceptance that a certain handful of things people experience are impossible to explain or justify with logic or previously established knowledge.

In short, while spiritual thinking has limited immediate use, it is a necessary component for the way we think, and would therefore be necessary to include in any effective AI. Human thoughts are not restricted entirely by logic, and neither should an AI's.

You didn't have spiritual experiences, you were just high.

On what? Water vapour? I was in a sweat lodge. I don't use any sort of narcotics. I don't even drink.

Even then, what I did or did not experience is irrelevant. The important part is that I was able to interpret and try to develop an understanding of something that was completely outside of my own knowledge. My statement is that an AI would have to be able to do the same to be effective. If it rejected all unfamiliar stimuli it would never be able to grow or learn as humans do. Having some concept of the impossible is required for all forms of creativity. At least in my eyes, I would need to see an AI capable of creativity to consider it an intellectual equal.

Let me amend that: you were delirious from heat.

Strange, because when I was experiencing heatstroke last summer I saw nothing of the sort. I exhibited no signs of heat exhaustion upon leaving the lodge, either. Are you trying to imply that getting sweaty causes hallucinations? I certainly hope not. You don't even know how hot it was in there.

It's also worth noting that just because I said that what I saw was impossible by my interpretation, you've outright rejected it based on that statement alone.

What if, hypothetically, I stepped outside and saw a bird in flight for the first time? Let's say hypothetically I have never seen any living thing become airborne, and I told you that I saw something impossible. What if I told you this, and you rejected what I saw completely, and then I told you I saw a bird? Would you back down, or would you continue with your insistence that I cannot and do not perceive things outside of my own understanding?

It's entirely possible that I saw or heard something you have an explanation for, and makes logical sense. I'm more than open to getting such an explanation. I've never given any indication that I'm against this ceasing to be a spiritual experience. I'm not being close-minded, I'm accepting my interpretation of the supernatural as canon until presented with a better explanation. I am open to new information. You are not.

Clearly, your circuit board needs cleaning.

So clearly I am not a human by your laws. My rejection of your assertion has proven I don't have some """creative spirit""" and therefore must not be what you presume humans to be.

I was making a joke at the end of a long-winded argument, but if that's the only thing you want to address then so be it.

The core of my argument is not that spirituality is real or that not believing in it makes you something other than human, it is that an effective AI would at least have some concept of what spirituality is and would not have its thoughts completely within the bounds of established logic. A machine would have to be able to think illogically for me to believe it is more than a machine. That does not mean I expect it to always be thinking illogically.

I know that you as a fellow meatbag are capable of thinking illogically, but are choosing or at least pretending to choose not to. I have not established any "laws" of what I believe humans and AI to be, I am just making the suggestion that including illogical thought processes would be a necessary component in any AI to convincingly mimic humanity.

However if you are claiming to be a robot, then in that case I recommend you get a firmware update as soon as possible.

>Do you ask how much a number weighs?
well, numbers don't exist, so you are saying that a soul does not exist?