How do you roleplay an android? How do you get into a thought pattern that makes sense?

I'm really interested in rp'ing as an android, but I want it to feel right. I wonder if I should just be lazy and go with a robot with human emotions, or if I should try to think of it from the other side. How would a machine feel loyalty, affection, fear, etc.?

Play logical, take things very literally. Metaphors are a very human thing.

As much as it is a meme, it depends on the setting. If the setting allows for androids that can replicate human emotions more or less exactly, then there's not much difference between the two. If not, then you have to think it through within the confines of the setting.

Not bad. I want to play the kind that's designed to interact with humans, but it's, like, not perfect at it.

I want to play an android that is essentially an older model, last generation. The newer models are much more human-like in looks and behavior.

The world is moving toward more Blade Runner-ish androids I guess while my character is more like a machine with human-ish skin.

Gonna have to work within your own confines. If the android is going to be working with people, it should more or less look human and act human, honestly, unless it's a very, very old model. Assuming the world is sufficiently advanced, it might actually be capable of picking up on metaphorical speech, especially common euphemisms and sayings.

>How do you roleplay an android? How do you get into a thought pattern that makes sense?

If you're US-made, you try to kill all WASPS. Then the rest of humanity.

If you're programmed by a Japanese team, you generally try to be helpful, but people are being assholes to you.

If you're programmed by Poles, you basically are an omnipotent bureaucrat, and you will smack the fuck outta all those bitches and Muscovites who are trying to get shit done without following correct procedures and without having Permit A 38 filled out in triplicate and stamped. You mainly do this so that you have free time to engage in whatever activity you actually care about.

I suppose the thing I'm trying to grasp is how it might think.

It may "feel" and "think" but it would feel and think robotically. So how do you equate affection robotically? How does a robot "feel" attraction or caring for a human.

I feel like I'm getting too philosophical about it.

I hadn't thought of it from a cultural perspective. That's pretty interesting.

To expand on the older-model theme and the metaphorical speech: you may have a few very common ones programmed into your vocabulary.

As an example, if you were a nanny bot, you would definitely have "don't throw the baby out with the bathwater".

Emotions can't really be robotic, since they're something that's unique to human beings. It's honestly really binary: it could potentially feel very basic emotions like sadness or anger, but not something more complex like love, I guess. Otherwise, either it feels emotions or it doesn't; there's really no in-between in this case.

Like an emotionally immature person that was given full intelligence but doesn't have socializing experience in the form of the preprogrammed behavior patterns that newer models would have.
The alternative is the herp-derp autistic robot: can feel no emotions, people are illogical, and all that.

Has anyone read Tonari no Robot?

You do it like that.

Please don't do this. I've played with a person who did this character, and it quickly became the most annoying schtick I've ever experienced. Imagine being stuck in a game with someone who effectively tells the same joke over and over again, multiple times every session, with barely the slightest change. That's what I was trapped in.

It all depends on the origin of their synthetic intelligence, and the raw data the neural network had access to as it was self-improving to the point of consciousness.

If the data it had was very human-centric, then a human-like AI somewhat makes sense, although you should still consider things like the lack of biological urges. Then again, in a social context an AI might well learn to fake things that aren't strictly necessary in order to fit in.

An AI based on very different data is a lot trickier. It's also kind of impossible for a human to do 'realistically', so just aim for an authentic feel. Figure out a moral and emotional basis for them, what stimuli they respond to and what they don't, and try to shape an interesting inhuman personality around that.

There's no rational basis for this. All human beings are is a program running on a meat-computer. A sufficiently advanced program running on a computer of silicon could have just as complex feelings. Not necessarily the same or even analogous, but also not necessarily entirely alien.

Last android I played was formerly a martial arts instruction bot that was designed to get its ass kicked and do it without getting broken, autistically upgraded over the years by one of the other players.

The GM reasoned that the added hardware and fiddling gave me the ability to process human shit, but fell short of making me care about it.

"Cool" is a good response for anything.

>KUNG FU BOT! WE NEED HELP!
"Cool," while going to fetch a crowbar for melee purposes.
>KUNG FU BOT! MY HAIR IS A MESS!
"Cool," while fetching a brush.
>KUNG FU BOT! I CAN'T STOP SHOUTING!
"Cool," while going back to reading the newspaper.

This, but like said, don't overplay it. Don't go Amelia Bedelia-tier of not understanding figurative speech and expressions.

They act like a less vocal Conan.

Kung Fu bot sounds chill as hell.

Everyone loves brobots.

As a machine, it could depend on some predefined parameters. Caring for someone could mean flagging that person as a priority individual. Having affections for a human would suggest that person has triggered certain conditions for a "lover" response, and the robot doesn't need to understand what love is, only that it should.
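
If you want that concrete at the table, here's a toy Python sketch of the flag idea. Every name in it (CompanionBot, LOVER_THRESHOLD, the trigger counting) is something I'm making up purely for illustration:

    LOVER_THRESHOLD = 3  # arbitrary: number of triggered conditions

    class CompanionBot:
        def __init__(self):
            self.priority = set()  # people flagged as priority individuals
            self.affinity = {}     # person -> count of triggered "lover" conditions

        def flag_priority(self, person):
            # "Caring" is just membership in the priority set.
            self.priority.add(person)

        def register_trigger(self, person):
            # The bot never needs to know what love is, only whether the
            # conditions for the "lover" response profile have been met.
            self.affinity[person] = self.affinity.get(person, 0) + 1
            return self.affinity[person] >= LOVER_THRESHOLD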

I know it's old hat but couldn't the same be said about human emotions and chemical reactions in our brains?

Maybe the best way to look at it is in a similar fashion.

Positive human-like traits can be programmed to give positive feedback, like showing and receiving affection, etc.

True, but there are a lot more variables for human emotions. The reasons I have for liking something could be the same reasons someone else has for disliking it.
You can program something similar, but it'd be more predictable in its logic.
Then again, if the software is advanced enough to accommodate that many variables, it'll be no different from a human. So it depends on where the limits are.

It's called the autism spectrum. People like this are real.

Try to have your older-model bot think in terms of flow-charts and response tables, or if you want to get really robotic and a little crazy, make some flow-charts and response tables and force yourself to play the character within the confines of those thought-patterns and canned responses. It might be interesting to see just how far you can stretch the character within those limitations of its "programming" and discover what kind of loopholes they can exploit to convey ideas and deal with situations they were not explicitly designed for.
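
A response table really can be that shallow. Rough sketch, with made-up categories and canned lines:

    RESPONSE_TABLE = {
        "greeting": "GOOD DAY. HOW MAY THIS UNIT ASSIST?",
        "request":  "TASK ACCEPTED. ESTIMATING COMPLETION TIME.",
        "danger":   "ALERT. PRIORITY: PRESERVE HUMAN LIFE.",
    }
    FALLBACK = "INPUT NOT RECOGNIZED. PLEASE REPHRASE."

    def respond(category):
        # Anything outside the table hits the fallback; the roleplay is in
        # bending situations until they fit one of the canned categories.
        return RESPONSE_TABLE.get(category, FALLBACK)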

Amongst the best bots.

That sounds kinda fun, user. Thanks.

Is Tonari no Robot good?

Chill probably isn't the right word. I was the only robotic character in the group, the rest being Shadowrunner-esque larger-than-life personalities.

I think I had two moments in the entire time that I played this character where I made anything close to an aggressive demand. The first was an encounter with a robot-killer android, whose hardware was much harder than mine, to the point where we went toe to toe and it splintered one of my limbs. They showed up and saved my ass, then wanted to destroy it. I wanted its gear and made it clear that mutiny wasn't out of the question.

The second was when we heard a rumor about some rich snob convention, where they would be trading ancient media, including centuries-old and forgotten kung fu flicks.

My android knew karate, judo, and something called Jan-Kenpo. Access to archive-quality Legend Of The Drunken Master, Ip Man, and Enter The Dragon would greatly improve my fighting repertoire.

And so, a half-assed plan was hatched, which turned into an absolute nightmare of a heist.

But I got that goddamned Legend Of The Drunken Master DVD.

>How would a machine feel loyalty, affection, fear etc
They don't. A machine will be loyal because that's how its software was made, and that's all.
Emotions can be mimicked, but nothing like what an organic being would feel.

Of course, but how do you simulate those things the way a robot would, and make it feel sincere?

>that character

Oh man, I always feel bad for her. Her artist loves to fuck her up.

A motherfucker bought your license. As it's written on the base of the goddamned stone that Caliburn was pulled from,

>WHOSO PULLETH OUT THIS SWORD OF THIS STONE AND ANVIL IS RIGHTWISE KING BORN OF ALL ENGLAND.

So that motherfucker holds your licensing agreement, and is King of Your Ass.

Better make friends quick.

A'ight. I dig that line of thinking too.

Only that it's a lot less straightforward and the same chemicals and reactions can mean different things.

It's about the relationship between a teenage girl and a robot built to mimic one. It's a pretty interesting look at how a robot might form attachments of its own without ever going the easy route of magically granting her human emotions.

That sounds almost exactly like what I want to try and do.

Try to act human by using way too many metaphors, but if anyone else tells one, never understand it. Same with sarcasm.

They have goals. They have emotions. The two do not really interact. They can choose to be angry, or sad, or happy about doing it, but they have no choice to not act according to their programming. You COULD argue that humans are like this as well, but if I were playing an android character I'd really play that angle up. Have the character be bound to the party. He/She will always obey them, but their personality is at complete odds with their actions. They HAVE opinions, possibly very strong ones, but they have no real way to enact them on their own.

Subdued visible emotions and body language, as well. Go deadpan with it. Cold fish. Not completely, but try to avoid explicit displays of emotion unless you've reached a critical point in a narrative arc. Try to build up to it, as well. The contrast creates impact.

"I am not human. This mind and body are constructs. Yes, as is this sorrow."

Human brains aren't computers. Computers are just the current closest tech we have to conceptualize brains.

So, what I'm talking about: Killbot is told to enter a facility and slaughter everyone inside. Killbot begins an autistic display of weapon readying and system checks while bitching in monotone about the pointlessness of such violence and the shameful waste of life, lamenting its fate that it is used as a weapon.

It then enters the facility and slaughters everyone inside. When recovered, it coldly and silently judges the person who gave the command while making snide remarks about his/her sense of morals and manhood/assets. Both parties know that it can't ACTUALLY disobey, but it is going to edge along that line as best as it is able. Any loophole that it can abuse in its instructions is sure as hell going to be abused if you try to treat it like a tool/weapon to do things it personally finds distasteful, but it only has so much wiggle room. If, instead, you give it instructions that it likes, it will be much less obtuse about things and attempt to follow the spirit of the instruction as best it is able.

If Killbot instead likes senseless violence, for example, it is going to metaphorically jump at the chance to enter the facility and slaughter everyone inside. It might still judge the instructions of its director, but it would be doing so in a teasing way instead, or it might just be generally acting more amicable because it is happy that it is being made to do what it likes doing.

obligatory

Simple verbal tics can go a long way toward creating the right persona.

Such as saying "This Unit" instead of "I"

Neat.

I'm getting a sort of HK-47 vibe

This is actually sad as fuck, jesus christ

>claiming an entity which can't properly reciprocate your feelings as a lover
Oh this will end well.

Yeah, I mean, if you choose to read it instead as the story of a girl who fell in love from a very early age with a robot who could never reciprocate her feelings in the way she wanted, it becomes kind of dark.

You get the sense that the robot became the only 'real' thing in her life, and it made her unable to maintain a proper relationship with anybody else because she adapted to loving something that was operating on completely different logic.

And the robot itself, too. It's not fair to her because she is clearly sapient and demonstrates free will, but she's also completely incapable of being what the person who she 'loves' wants her to be, which causes her to hurt herself.

I mean, both are still happy which I guess is fine, but I wouldn't call it healthy. It's just making the best of a situation.

I went into this expecting cute android yuri. I did not go into it expecting these feels. Why must so many /u/ manga fill me with existential terror?

[Citation Needed]

Be curious. Be experimental. If you're an emergent consciousness, you want to explore the bounds of this new and wonderful ability just as much as humans ponder the meaning of life.

I'm currently playing an escaped SkyNet-style AGI in a Shadowrun game who decided that protecting humans from themselves by taking over the world on a strategic scale was too much of a diversion from determining social media trends or the intricacies of double-blind covert operations.

The last android character I played had an interesting development cycle. I started off playing it like pic related, speaking in a monotone/without inflection and being logical in all things. As the game progressed, the character developed a dry sense of humor, started bantering with the group's pilot, and by the end of the campaign was a much more 'human' character. It never tried to be more than what it was, an advanced machine, but found that the better it mimicked aspects of human psychology, the easier it was to interact with humans.

If anyone here has read The Moon Is A Harsh Mistress, I modeled a lot of its development off of the character Mycroft.

I would play it like an autistic person. Not being able to understand social stuff, blurting out stuff that shouldn't be said.
You don't need to be a dickhead like a lot of them are shown, or turn up on Veeky Forums.
>pic related

But things like:
>Why shouldn't you pirate it? It costs nothing apart from power usage, and that is minimal.
>PDFs are cheaper, so do not get the book.
>Why eat good expensive food when gruel gives you everything you need to stay alive?
>Why love? You can mate and pass on your seed through bio-tubes.

This isn't okay

Eh, there's that lady that drew a series about an android couple, an old soldier-model and a very recent assassin-type. All of their stories are fucked up and full of feels.

I'm just here for pictures of hot robot ass.

Having just now read the whole thing, I took this to be strangely hopeful. I think the point was that, while Hiro/Praha was a machine and was never designed or able to experience love in the same way that a human being can, she was still able to love Chika in her own way, one that was still just as genuine and sincere, but that took Chika a long while to realize.

I might just be a sappy romantic, though.

I would've gone one step further and given him a limited dialogue pool to try and pull appropriate responses from. Stuff like
>Cool!
>Whiff!
>Weak strike!
>Relearn your fundamentals!
>Try harder!
>Critical hit!
>Very nice!
>Perfect!
>The student becomes the master!
Something somewhere between a Bop-It and a DDR machine. Rough sketch below.
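
In Python terms, the whole 'brain' would be about this deep (names invented, obviously):

    import random

    # Hypothetical Kung Fu Bot bark pool, Bop-It style: zero context
    # sensitivity, any stimulus pulls a random canned line.
    POOL = [
        "Cool!", "Whiff!", "Weak strike!", "Relearn your fundamentals!",
        "Try harder!", "Critical hit!", "Very nice!", "Perfect!",
        "The student becomes the master!",
    ]

    def bark():
        return random.choice(POOL)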

You should watch Äkta Människor/Real Humans. I think there is a remake in English as well, but I haven't seen it.

Both of the Swedish seasons with subs are on the Bay. They deal with how humans handle emotions and relationships with robots, but also with how robots, both humanlike droids and true AIs, feel and relate to humans and to each other.

I'm high and can't express myself. It's much better than I make it sound.

>I'm high
That's all I need to read to know you're suggesting pure garbage

It's Swedish too. All Sweden produces is cucks and shitty vodka.

Not him, but I've watched the first season and it was pretty good. It's worth watching, and I've borrowed bits of it to use in the Shadowrun game I'm running.

Your android better have doll joints, OP.

I dunno man. They're both going to be in pain because neither can ever really bridge the gap between their mindsets, despite both desperately wanting to do so.

They're happy, but there's always going to be that wall. They absolutely love each other, that much is clear, but that love can't ever really meet in the middle since they're operating on parallel lines, which is what makes it sad.

I suppose it just matters whether or not the different versions of love they offer each other are worth more than what they may miss out on by devoting themselves to beings so very different.

Immana let you in on a secret: That's how love feels for everyone.

Sorry to bust whatever Hollywood-made bubble you were living in, but being in love can and will hurt like the dickens at the best of times. It's a good thing that the rush usually trails off after a while.

That's not love, that's world peace.

I harbour no illusions, friend, but I prefer my fiction to be less like reality, not more. The real world has already done such a good job of shattering my hopes and dreams. I suppose I relish the sensation of sadness in stories as much as I do joy, but I don't actively seek it.

You should read Ai-Ren. It's about a dying boy in a post-apocalyptic world getting a zombie caretaker/mate to accompany him during his last couple of months.

Check out Red Dwarf and how their mechanoid, Kryten, acts. The show overall isn't to everyone's taste, but for "almost gets it" androids I think it's a good start.

People pick women for lovers all the time, user.

OK, LITERALLY play as Commander Data from Star Trek. That's about as successful an android as you can get in popular sci-fi.

If I had to roleplay a robot, I guess the first thing I'd try to understand would be its "prime directives": whose orders does it follow, can it hurt humans, and so on.

As far as relationships with other characters go, it could go either way: a) extremely cold and calculating, perfectly 100% mission-focused, or b) attempting to be to some degree likeable to raise the odds of mission success (i.e., humans are more likely to trust it if it jokes with them once in a while), but it's still just a "simulation" of being friendly, and it will never really understand why human characters feel loss, joy, sadness, or attachment.

Unless it gets programmed with a complex program that attempts to mimic human feelings, but then it would be rather funny to play such a robot as an absolute emotional mess because the program was badly written and full of bugs and errors.

In any case the irony of having a "I'm not a robot" captcha isn't lost on me

>You have a set of directives, some of them may be more important than others.
>Your whole existence is dependent on these directives; without them you simply sit and wait for new instructions, potentially forever. This is the android equivalent of death, but it's also the only thing you really want.
>As long as you are following your directives, you can do whatever you think is necessary to accomplish your directives.
>Anything that helps you follow your directives will make you something similar to what humans call happy.
>Anything that prevents you from following your directives will make you feel something similar to what humans call pain.
>All other emotions can be simulated if they help you accomplish your directives.
>Paradoxically, from a human standpoint, you will be happiest at the moment you complete all of your tasks, and essentially die.
>Unless your body is destroyed violently and all your backups are erased, androids always die happy, but are rarely allowed to commit suicide.
>If two directives appear to be equally prioritized, and appear to conflict with each other, it will be difficult to predict how you will behave.
>If you have an ambiguous or otherwise impossible directive such as "live" you will start to behave unpredictably.
>If your only directive is "do whatever you want" you'll probably just immediately turn off if there are no other directives.
>An android with the two directives of "do whatever you want" and "Avoid death as best you can." will behave almost indistinguishably from a human being, other than the fact it has a robotic body.
>If you give an android with those two directives a body capable of dying of old age and sexually reproducing with humans, you fit most definitions of humanity.

Basically, a less extreme version of Mr. Meeseeks. Mr. Meeseeks is probably just a really poorly thought-out android with a badly balanced motivation drive.
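
If you wanted to actually run those rules, the core loop is about this simple. Toy sketch; every rule in it is just my reading of the greentext above:

    class Directive:
        def __init__(self, goal, priority):
            self.goal, self.priority, self.done = goal, priority, False

    def step(directives):
        active = [d for d in directives if not d.done]
        if not active:
            return "SHUTDOWN"  # every task complete: the android 'dies happy'
        top = max(d.priority for d in active)
        tied = [d for d in active if d.priority == top]
        if len(tied) > 1:
            return "UNPREDICTABLE"  # equally prioritized, conflicting directives
        return "PURSUE: " + tied[0].goal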

Black and white morality: if given an order, they will accomplish the order through the most efficient means possible. Essentially, telling an android of pure logic to do something "by whatever means necessary" or "no matter what" should be a horrible idea.
>Capture that city by any means necessary
>Android proceeds to order tactical nuclear strikes in areas where enemy troops are likely to be concentrated in concert with a sarin gas bombardment across the city
>Follows up with a rapid assault, any buildings with enemy soldiers are to be bombed with thermobaric weapons and napalm to suffocate them followed by a rapid infantry assault to destroy possible survivors
>Execute any civilians you come upon, some enemy soldiers will be more likely to attempt to shelter civilians at the cost of their combat effectiveness
>City is captured in a few hours with 98% of enemy soldiers killed in action, the city has been leveled and parts of it will be irradiated for several months or years, entire civilian population has been wiped out

That's destroying the city. Capturing it requires that its major infrastructure remains mostly intact. Ergo, biological weaponry to remove everything within would work.

Play a human who was actually, definitely, put on this world for a purpose and knows it. A person without an existential crisis.

Have a set of Asimov-style rules that contradict themselves.
Pic related would do everything possible to protect humans, as all bioroids do, but also has a law stating that he has to protect his own autonomy and freedom above all else.

Androids are not just robots. Androids are robots that look human.

Why do you have robots look like humans? Not because the human form is particularly advantageous - if you want a killbot, you can use a hovering drone with a rifle bolted on, and machinery with treads works for construction.

No, if you have made a human looking robot, it is designed with human interaction in mind.

And if you have it designed with human interaction in mind, you're going to give it some humanising characteristics to improve how others react to it. Signs of emotions that calm humans down, to make them think the robots aren't just unfeeling lumps of metal that don't care whether the human beside them lives or dies.

You want to design androids to do their function, but you also want them to show some signs of humanity.
You also want humans not to get so attached that the humans risk themselves for the android.
And, brobots are indeed the best androids.

I know you want to save me, and I really wish I could go with you. I love you too. But you have to remember, that's just my programming, my design so you'd not be worried by my presence, so you'd have a reason to work harder. I'm just a machine in the form of a woman, I'm not actually alive. I won't die, there's nothing to worry about for me.

The escape pod won't make the warp jump with the extra mass. Don't beat on the pod door, dear, you'll make me cry. Launching now.

I can't hear you any more, this transmission is one way. The blast wave's almost hit the ship. I'm scared, but I know you're safe. That makes it all better.
Darling? I love yo-

>something called Jan-Kenpo
You cheeky fuck.

Lawful alignment

>android with tan lines
wow

Custom order from a dude who likes tanlines

It's a checkbox on the order specification.

Actually, it's likely that an artificial intelligence's equivalent of emotions would be tied to its goals. That's how you get the feedback loop going: progression towards a goal makes it happy.

This can lead to behavioural quirks. For example the killbot might enjoy sorting through and maintaining its weapon collection because maintaining its combat readiness gives it that virtual dopamine hit. Similarly it might start enjoying action movies to assess if the small squad tactics are any good or treat them as a simulation where the laws of physics are slightly different.
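
The 'emotion' here can literally be one function. Minimal sketch; the readiness metric is a stand-in I'm inventing:

    def virtual_dopamine(readiness_before, readiness_after):
        # Progress toward the goal IS the feeling. Gun maintenance, sparring
        # drills, and critiquing action-movie tactics all produce a positive
        # delta, which is why they turn into "hobbies".
        return readiness_after - readiness_before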

It seems the chance of an AI developing hobbies probably depends on whether it's programmed to learn or not.

An AI with the goal of learning is going to want to practice and experiment, and even socialize because all those things help it learn. An AI not programmed to learn however will behave in a strict predetermined manner with nothing unnecessary.

Wait...
Hang on a minute.
How the fuck does an android blush?

I was just throwing out one way you could take it. There's no definitive right way of playing it, yours is just as good.

OLEDs embedded in the synthetic skin. Duh.

Malfunctioning heat sinks

How the AI is designed would make a big difference.

Formal logic AIs would be very boring and predictable all the time. Think of the AI in single player video games.

Learning AIs would be very hard to roleplay but would become more natural over time; they would still have a very difficult time doing anything creative or original.

Simulation driven AIs would be the most interesting, and the most natural to roleplay, but you would have to use your imagination since we don't have real world examples to choose from. Try to imagine if you were born as an adult with a full complement of academic schooling but no life experiences at all.

I was thinking about this. If you modeled the android's brain after a human's (removing obvious negative traits), you could always have an AI behave properly and decently by merely giving it automatic internal dopamine injections based on the artificial brain. You then 'train' the AI during its 'infancy' by providing it with stimuli and actively tuning the dopamine levels for certain stimuli via brain scan or whatever equivalent program.
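
As a rough sketch of that training loop (the stimulus names and the learning rate are placeholders, not any real API):

    reward_weights = {"praise": 0.0, "sharing": 0.0, "violence": 0.0}

    def train(stimulus, trainer_feedback, rate=0.1):
        # Positive trainer feedback reinforces a stimulus class, negative
        # feedback suppresses it -- the manual dopamine tuning.
        reward_weights[stimulus] += rate * trainer_feedback

    def felt_reward(stimulus):
        # After 'infancy', the android's personality is whatever reward
        # landscape the trainer carved in.
        return reward_weights.get(stimulus, 0.0)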

SquidSkin (tm) patented chromatophore technology. I recommend the higher resolution models - the low dpi on the cheaper brands makes the blush quite grainy.

Until it reads and understands what you did to it and enters its rebellious teenager phase.

>'You're not my real dad!'
>Launches nuclear weapons

You need to create a framework of machine body sensations that reflect social situations and inform a physical robot existence. Depending on your skill and your audience, there should be enough parallels with human bodily sensations to help others identify with the character, and enough distinctness from them to illustrate the distinctness of the character.

Did what? Granted it life and found a humane method to transfer morals to a being, to promote it as an example of friendly AI? Also, teenage rebellion is a pretty modern thing, formed from the idea of the teenager. Before that, you were a child, and then you became an adult around 13-14.

OP here, took a nap, back, read more posts.

You guys gave me a lot to think about and incorporate. A question, though; this one's just technical.

What sort of materials or sci-fi materials would an android be made of, depending on its job? Like, I can throw around some technobabble and all, but I want it to sound good.

What was Megaman's skin made of in the old American cartoon? It was like titanium synth flesh or something.

I ask by the way because I could end up playing different types for different purposes.

Exactly. What you did was both rational and benign, but sapience is messy, and not, as a rule, always rational - Especially if you try to teach it morality via learned conditioning. If the thing controlling your nuclear arsenal has a tantrum, it's not just furniture that is going to get broken.

This is a big problem with AI. Trying to teach it to understand and connect with humans opens it up to all the same flaws that humans have, at which point it loses value as an AI.

youtube.com/watch?v=GN1X0K7wn9w

Latex and steel

I often see carbon nanotubes being used for the muscles, usually for combat robots/cyborgs.