Question

What would happen if I took a human brain, mapped it down to the tiniest neuron, and simulated that very map with sophisticated computer software able to express every aspect of a neuron cell and its connections with other cells?
Would the simulation be sentient? Would it be intelligent?
Would it be human?

Same as asking what if we could make Earth 2 times bigger. We can't and we'll never be able to, so the point is moot.

You'd have built an exact replica of the universe so yeah it'd be all of those things.

If functionally equivalent, I'd say sentient and intelligent. "Human" is something for the lawyers to argue over.
You'd also need to "simulate" at least part of the rest of the body. Hormones and other biochemicals produced below the neck have effects above the neck.
You should also provide sensory input and output to keep your brain from cracking due to sensory deprivation.

>We currently can't do something
>therefore we will never be able to do that thing

You'd have to give it inputs like ours. It wouldn't be sentient without them. It would need a body, internal organs, and a world to explore, or the architecture is meaningless.

Asked this exact question a few months ago, except I asked what's stopping us from doing it. Allegedly it's too complex for a classical computer. Interested to see if people have a different take this time.

This is literally how deep learning works, and no, it won't be sentient. Sentience requires inputs (senses) and outputs controlled by a subconscious mechanism, an internal monologue created within the network itself, and long-term/short-term memory. If you build a network with all of these things it will at least appear to be sentient (though whether it really is would be philosophy and not science.)
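The ingredients listed above (senses in, an internal state, short- and long-term memory, actions out) can be sketched as a toy agent loop. Everything here is a made-up illustration, not an actual cognitive architecture:

```python
from collections import deque

class ToyAgent:
    """Toy sketch of the components above: sensory input, an internal
    state (a crude "monologue" scalar), short- and long-term memory,
    and an output. Purely illustrative."""

    def __init__(self, short_capacity=3):
        self.short_term = deque(maxlen=short_capacity)  # recent percepts
        self.long_term = {}        # percept -> how often it recurred
        self.state = 0.0           # internal state carried between steps

    def step(self, percept):
        self.short_term.append(percept)
        # consolidate: percepts seen repeatedly migrate to long-term memory
        if list(self.short_term).count(percept) > 1:
            self.long_term[percept] = self.long_term.get(percept, 0) + 1
        # internal state blends the new input with its own history
        self.state = 0.5 * self.state + 0.5 * percept
        return self.state  # the "action" is just the current state here

agent = ToyAgent()
for p in [1.0, 1.0, 0.0, 1.0]:
    out = agent.step(p)
```

The point is only that output depends on accumulated internal state and memory, not on the current input alone.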

Literal brainlets

this

it's not too difficult. it's an ongoing field and we are making huge progress; just this last year DeepMind implemented a true short-term memory, allowing it to quickly learn and dominate human players in games. Now the work is moving to storing important patterns from short-term memory into long-term memory, so that it can reuse the network's knowledge instead of relearning every time.

aka it needs a myriad of overwhelming stimuli that direct its thoughts, behavior, and literally every word it utters.

yup, we have proven, again, that free will is one of the most ridiculous notions humans have held in their whole history.

Free will is real enough to get you to be happy and reproduce; anything else belongs in philosophy, so kindly fuck off to there.

why are you telling me to fuck off? just because i pointed out that a full simulation of a human would be a literal predictable mechanism?

Because it wouldn't be "a literal predictable mechanism". It would have certain predictable traits, but the idea that your brain makes no decisions on its own is ludicrous and false.

you take a computing system, brain or whatever, and feed it all types of feedback, internal and external. it will react to both non-stop, and each action will have been prompted by the one that happened just beforehand. show me the free will there.

also, "decisions on its own" sounds like the brain making decisions in a void. nope, it can't decide in a void, it needs input. and when there's input, there's a predictable response. like when you shower a plant with light: it just reacts, and will react the same way 1000 times.

Yes but the human brain isn't a plant.

Backpropagation and recurrence make the data you input moot, because the brain takes that data, breaks it into smaller data, and then runs over it again and again. The number of variables required to make such a prediction is so large, and the outputs from any given input so wide in variation, that it would almost be akin to starting human history at 10,000 BC and thinking you can say with certainty it will end up where it is now.

It won't; you can't even run the American Civil War through a simulation and get the same result twice in a row (similar results, sure, but not the same.) So it isn't pulling thoughts from the void, but it is certainly creating its own understanding of the input and giving outputs based on that understanding. The human brain is a learning and adaptive machine and thus requires a small amount of what we would call free will to thrive.
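The "similar but never the same" claim can be illustrated with any system that feeds its output back in as the next input. A toy sketch using the logistic map (not a brain model, just a standard demonstration of sensitivity to initial conditions):

```python
def trajectory(x, steps=50, r=3.99):
    """Iterate x -> r*x*(1-x), feeding each output back in as the
    next input, the way a recurrent loop re-consumes its own state."""
    xs = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = trajectory(0.200000)   # one run
b = trajectory(0.200001)   # the "same" run, off by one part in a million
max_gap = max(abs(p - q) for p, q in zip(a, b))
# the runs start out indistinguishable, then diverge completely
```

Two runs that differ by a millionth in their starting condition end up wildly apart, which is the sense in which rerunning the "same" simulation keeps giving different results.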

>Yes but the human brain isn't a plant.
that's just your brain talking, feeling pompous and sophisticated about itself. and it isn't.

>"This is literally how deep learning works"
>Calls others brainlets
Role-player.

nope, I'm literally doing this shit every day (although I won't larp and pretend to be top 100 or to work for a prestigious company; I just do a lot of neural networks for analysis.)

How can my brain talk to me, trick me, and make me feel this way if it doesn't have free will?

Again,

>it's hard to simulate
>therefore free will exists
fallacy

>How can my brain talk to me, trick me, and make me feel this way if it doesn't have free will?
it has its own ways of coping with a dissolving ego. it's part of the program; it doesn't need "free will" to run, either.

>How can my brain talk to me, trick me, and make me feel this way if it doesn't have free will?
uhh, since when does any of that require free will?

That isn't really what I said. I said a simulation will give a different result almost every time (due to probabilities you would get the same result a few times if you ran the simulation millions of times), thus it can be determined that the human brain does make decisions and thus does have free will. Just think about the last time you had to piss and didn't go immediately, or the last time you were hungry and didn't eat. Without free will you would just do these things (and suicide would not be a thing.)

It needs free will to make arbitrary decisions, at least to the point that any further discussion on the matter is philosophy.

It requires the brain to be able to make its own decisions based on input and determine the output it wants to give, which is free will.

If you seriously think that meme learning has anything to do with how the brain works then at the absolute best you're a code monkey messing around with tensorflow while calling yourself a """researcher""" on the internet.

No, dumbass, I write artificial neurons (with somas, axons, and dendrites) connected by synapses that fire depending on convolution and make decisions in the abstract based on past data. This is how the human brain works, just on a very large scale; the OP said a small neural network, which is what deep learning uses.
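For what it's worth, the biologically-flavoured neuron being described can be sketched with a leaky integrate-and-fire model. This is a minimal illustration with arbitrary constants, not anyone's production code (real biophysical models like Hodgkin-Huxley are far more detailed):

```python
def lif_spikes(inputs, threshold=1.0, leak=0.9, weight=0.4):
    """Leaky integrate-and-fire sketch: dendritic inputs are weighted
    and summed into the soma's membrane potential, which leaks over
    time; crossing threshold fires a spike down the axon and resets."""
    v = 0.0
    spikes = []
    for x in inputs:
        v = leak * v + weight * x   # integrate weighted input, with leak
        if v >= threshold:
            spikes.append(1)        # spike passed across the synapse
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

# sustained input builds up the potential until the neuron fires once
train = lif_spikes([1, 1, 1, 1, 1, 0, 0])
```

Unlike a plain weighted sum, the unit has internal dynamics: whether it fires depends on the history of inputs, not just the current one.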

>tensorflow
Sounds like you are the one larping now, as that's introductory shit that you'll find by googling "beginning machine learning with neural networks".

Ffs is everyone but 2 people on Veeky Forums a larper who doesn't know shit?

>No, dumbass, I write artificial neurons (with somas, axons, and dendrites) connected by synapses that fire depending on convolution and make decisions in the abstract based on past data
Oh really? Got any research papers related to that?

wow, you really are retarded if you think that is an extreme simulation or not the norm.

Nah I just know it's bullshit you heard once from someone equally uninformed and now parrot. Or do you seriously believe that dot products and sigmoids constitute valid models of neurons?
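For reference, the "dot products and sigmoids" unit under dispute is just this. A minimal sketch with made-up weights, so both sides are at least arguing about the same object:

```python
import math

def sigmoid_unit(inputs, weights, bias):
    # a weighted sum (the dot product) squashed into (0, 1) by a
    # sigmoid; no spikes, no membrane dynamics, no time course
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

out = sigmoid_unit([1.0, 0.0], [2.0, -1.0], -1.0)  # z = 2 - 0 - 1 = 1
```

The output is a single static number per input vector, which is the crux of the objection that it is a very thin abstraction of a biological neuron.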

>Nah I just know it's bullshit you heard once from someone equally uninformed and now parrot.

Guess I'll let my university know they hired a bunch of uninformed professors!

>seriously believe that dot products and sigmoids constitute valid models of neurons

Your ignorance is showing again, user, since I specifically said I was not talking about perceptrons but about neurons actually modelled after what neuroscience says. Sigmoids rely on little more than a regressive binary tree (albeit with some recurrence in some cases, which can lead to human-like effects.)

But I do know that even sigmoids can be reversed to create what appears to be new thoughts based on their weightings, so even then they do kind of work in a similar way (to a single part of the brain, not the whole thing.)

Again, I am talking about a small (~1-10,000 neurons, depending on the data) network of neurons, and that is all OP asked for. But I then went on to say that places like DeepMind are actually trying to replicate the brain, and stated how.

So quit projecting your ignorance onto someone who comes here to actually talk about shit they know, not shit they read on Wikipedia (as you appear to have, at least.)

Don't you feel even the least bit ashamed about spewing so much bullshit about something you clearly know so little about? Literally everything you just wrote is nonsense.

you don't understand the first thing about the subject. the thoughts that lead to the "decision" are just as far out of your control as your need to piss, and the same would happen in a simulated brain.

There's research showing your sense of self and the way your brain works are influenced by the rest of your body, and unless you can emulate that, you won't be the same.

well, the self IS the body. without a body there's just this consciousness floating in a void, recurring into itself within a second of being deprived of sensory input, going real crazy. body sensation tells the brain you're actually confined in some space that's inserted into some bigger space; without that, the mind just assumes it is the world and god.

So the Chinese Room Experiment

>show me the free will there
It doesn't obey external factors, so it's free.

>appears to be sentient
this is the most retarded philosophy bullshit there is out there, just like that chinese room thought experiment, which literally doesn't make a point.
There is no difference between appearing to be sentient with sufficient accuracy and being sentient.

phet.colorado.edu/en/simulation/neuron ?

Yes, it would be more or less the same. It's impossible to do, though; it would be easier to rebuild it.

I started med school because I thought I genuinely wanted to help people and to learn more about the human machine, but now I realize that all I thought it would be, it really isn't. My classmates are really just self-absorbed man-children and stacies who think it's cool to be a doctor like the ones on TV (damn you, Grey's Anatomy), and professors think their subjects are the same ones they studied in 1970, not considering all the research and new discoveries made in the last 50 years, which have made the study programs a shit-ton of work. Now, near the end, I am afraid I have learned nothing at all and that I won't be a really good doctor.
I'm just passive about it now, and though I still try to do my best (just because, I think), I'm just not that happy about it anymore.

What are some Veeky Forums methods to deal with stress?

Sorry, wrong thread

Not true. You could program a machine to have a sentient-like response to every possible situation it gets into in the world.

You are literally a retard. An unfunny troll.
Disinformation agent, fuck you.

What's predictability got to do with free will? Unpredictable outcomes are just as bad for free will.

The guy's still right about free will. And don't use loaded words like "understanding". Arguably a machine with no free will can do so, depending on the definition, which I'm sure you weren't thinking of until I just asked.

> Just think about the last time you had to piss and didn't go immediately, or the last time you were hungry and didn't eat.

Nothing to do with free will. You just haven't factored in the psychological variables producing these non-intuitive outcomes.

> and determine the output it wants to give which is freewill

Brainlet.

but if we stitch someone's eye into our brain, will we eventually be able to see through that eye?

>neural network

These are a meme.

free will is logically impossible.
now let's continue with OP's question:

>Would the simulation be sentient?
no
>Would it be intelligent?
yes
>Would it be human?
no