Anyone able to give me a good explanation as to how a living organism can be a model of its environment (or at least...

anyone able to give me a good explanation as to how a living organism can be a model of its environment (or at least sensory inputs) and why it has to be?

Other urls found in this thread:

ncbi.nlm.nih.gov/pubmed/18365164
goodregulatorproject.org/images/A_Primer_For_Conant_And_Ashby_s_Good-Regulator_Theorem.pdf
rsif.royalsocietypublishing.org/content/10/86/20130475
cbcl.mit.edu/cbcl/people/poggio/journals/bertero-poggio-IEEE-1988.pdf
math.nsc.ru/LBRT/u2/Survey paper.pdf
ncbi.nlm.nih.gov/m/pubmed/28163801/
ncbi.nlm.nih.gov/m/pubmed/18365164/

Well there's the good regulator theorem.

>how a living organism can be a model of its environment
wat? do you mean how brains learn a model of the environment? or are you talking about indicator species?

That's not true, whose idea is that? It sounds dumb and i need it explained to me.
anyway, you are looking for this:
ncbi.nlm.nih.gov/pubmed/18365164

dude, someone proved mathematically that a good regulator is a model of the system it regulates. goodregulatorproject.org/images/A_Primer_For_Conant_And_Ashby_s_Good-Regulator_Theorem.pdf explains it in easy terms. we can assume this applies to living organisms since they regulate their entropy (i.e. try not to die).
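a toy illustration of the theorem's flavour (the states and the policy here are invented for this sketch, not taken from the paper): to hold an essential variable constant against every disturbance, the regulator must assign a cancelling action to each environmental state, so its policy contains an image of the environment.

```python
# Toy sketch of the Conant-Ashby good-regulator idea (invented states).
# The environment disturbs an essential variable; a perfect regulator
# must pick, for each disturbance, the action that cancels it -- so the
# regulator's policy is a mapping (a "model") of the environment's states.

disturbances = [-2, -1, 0, 1, 2]          # states of the environment

# the only policy that keeps (disturbance + action) == 0 for every
# disturbance is action = -disturbance: a one-to-one image of the
# environment inside the regulator
policy = {d: -d for d in disturbances}

essential = [d + policy[d] for d in disturbances]
print(essential)  # the essential variable is held constant at 0
```

any policy that ignored some environmental state would let the essential variable drift for that state, which is the intuition behind "every good regulator must be a model of the system it regulates".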

i mean that for every state of an organism's ecological niche, the organism responds with a corresponding state ("state" in the statistical-physics sense of microscopic distributions). in simple terms, this means the organism has a reaction (even if that means keeping still) that keeps it alive (puts an upper bound on its entropy).

note, a model is simply a mapping: given two systems, every individual state of one system corresponds to a state in the other. in a sense, modelling can be explained solely in terms of correlations. the ideal map is a one-to-one correspondence, but you can say a system moves toward being a model if it reduces the entropy of its correspondence with the system it maps (speaking in informational terms, though informational entropy is mathematically isomorphic to physical entropy).
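the "entropy of the correspondence" idea can be made concrete with toy data (invented): the conditional entropy H(organism state | environment state) is zero for a one-to-one mapping and positive for a sloppy one.

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(B | A) in bits for a list of (a, b) observations."""
    joint = Counter(pairs)
    marg = Counter(a for a, _ in pairs)
    n = len(pairs)
    h = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n                 # joint probability of (a, b)
        p_b_given_a = c / marg[a]    # conditional probability of b given a
        h -= p_ab * math.log2(p_b_given_a)
    return h

# one-to-one correspondence: knowing the environment state pins down
# the organism's state exactly (toy data)
tight = [(0, 'x'), (1, 'y'), (2, 'z')] * 10
# noisy correspondence: the same environmental state pairs with
# different organism states
sloppy = [(0, 'x'), (0, 'y'), (1, 'y'), (1, 'z'), (2, 'z'), (2, 'x')] * 5

print(conditional_entropy(tight))    # zero bits of ambiguity left
print(conditional_entropy(sloppy))   # about one bit of ambiguity left
```

"reducing the entropy of the correspondence" then just means driving this number toward zero, i.e. moving toward the ideal one-to-one map.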

>(External/Internal)state stress * heartrate and some chemical composition in matrix or equation form
...ok, maybe..

(for every situation it has a reaction)

huh?

what you talking about?

>for every state of an organism's ecological niche, the organism responds with a corresponding state so that for every environmental state there is a corresponding state in the organism

This isn't a very good model. Organisms don't just map sensory inputs to behavior, they have an internal state too, otherwise the "function" mapping input to behavior wouldn't even be a function since an organism can observe the exact same inputs on different days and yet produce different behaviors each time. So then it's like "well okay, what's the structure of this hidden state?" And then you start to realize that your model is general to the point of being vapid. In other words, you're not really saying anything.

If you want to read about a slightly better model of an agent, look into partially observable Markov decision processes.
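a minimal sketch of the POMDP idea (a toy two-state observation model with invented numbers): the agent's internal state is a belief over hidden states, updated by Bayes' rule, so identical inputs can produce different behaviour depending on history.

```python
# Minimal belief update for a two-state POMDP (toy numbers, not from any
# specific paper). The agent's "internal state" is a probability that the
# world is in state s1, updated by Bayes' rule after each observation.

def belief_update(b, obs, obs_model):
    """obs_model[s][obs] = P(obs | s); returns posterior P(s1 | history)."""
    num = b * obs_model['s1'][obs]
    den = num + (1 - b) * obs_model['s0'][obs]
    return num / den

obs_model = {'s1': {'hot': 0.8, 'cold': 0.2},
             's0': {'hot': 0.3, 'cold': 0.7}}

b = 0.5                      # prior internal state
for o in ['hot', 'hot', 'cold']:
    b = belief_update(b, o, obs_model)
    print(round(b, 3))

# the same observation 'cold' moves different priors to different
# posteriors -- identical input, different resulting behaviour, because
# behaviour depends on the hidden internal state, not the input alone
```

this is exactly the criticism above: without the belief variable, input-to-behaviour would not even be a function.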

this is actually a fantastic criticism because it leads on to the free energy principle. mapping the organism to the environment is an ill-posed problem. in the free energy principle, variational bayes is used to alleviate this, using what's called a mean-field approximation. the interesting thing is that the mean-field approximation aligns very easily with our ideas of specialization in the brain. so we can say that the brain is a very good way of solving the problem you've just shown. i'll give you some quotes from Karl Friston, the originator of the free energy principle and one of the most influential neuroscientists, and you can see if this aligns with your criticism.

"The problem the brain has to contend with is to find a function of the inputs that recognizes the underlying causes. To do this, the brain must effectively undo the interactions to disclose contextually invariant causes. In other words, the brain must perform a nonlinear unmixing of causes and context. The key point here is that the nonlinear mixing may not be invertible and that the estimation of causes from input may be fundamentally ill-posed."

"The corresponding indeterminacy in probabilistic learning rests on the combinatorial explosion of ways in which stochastic generative models can generate input patterns. In what follows, we consider the implications of this problem. Put simply, recognition of causes from sensory data is the inverse of generating data from causes. If the generative model is not invertible then recognition can only proceed if there is an explicit generative model in the brain. This speaks to the importance of backward connections that may embody this model."

"This ontology is often attended by ambiguous many-to-one and one-to-many mappings (e.g. a table has legs but so do horses; a wristwatch is a watch irrespective of the orientation of its hands). This ambiguity can render the problem of inferring causes from sensory information ill-posed"
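to see what the mean-field move actually buys (and loses), here is a toy illustration with invented numbers: a correlated joint distribution over two binary variables is replaced by the product of its marginals, and the KL divergence measures what the factorisation throws away.

```python
import math

# joint over two binary variables with correlation (toy numbers)
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# mean-field-style approximation: replace the joint by the product of
# its marginals, q(x, y) = q(x) q(y) -- tractable, but blind to the
# interaction between the variables
px = {x: sum(v for (a, _), v in p.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (_, b), v in p.items() if b == y) for y in (0, 1)}
q = {(x, y): px[x] * py[y] for x in (0, 1) for y in (0, 1)}

# KL(p || q): the information lost by assuming independence
kl = sum(v * math.log2(v / q[xy]) for xy, v in p.items())
print(round(kl, 4))  # positive: the factorized model loses the correlation
```

the trade mirrors the quotes above: factorized (specialized) sub-models are tractable, at the cost of ignoring some interactions between causes.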

also, i don't mean mapping sensory inputs to behaviour, i mean mapping internal states to hidden states. it's just that sensory input is the only access our internal states have to hidden states. i take you to be saying that it's difficult to map hidden states to internal states on a one-to-one basis (and i say that in the sense that internal states produce behaviour). look at Friston's idea of Markov blankets being essential to life: rsif.royalsocietypublishing.org/content/10/86/20130475

but in any case, i think the claim that the organism maps the environment stands, because it makes common sense that an organism must be able to consistently react to any situation it finds itself in, in order to survive. the mapping can be viewed as an analogy for "in any situation i find myself in, i have a set of behaviours that will maximise my chance of survival".

also i note that organisms must tend toward being a good regulator but don't have to be perfect, or else natural selection wouldn't really work. it's a gradient ascent toward an ideal point, defined by natural selection and competition. interestingly, the Price equation describing evolution in a population is isomorphic to Bayes' theorem, which means that evolution naturally maps to the idea of optimising models (ultimately defining natural selection).
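the claimed correspondence can be shown directly in a toy discrete-generation model (the fitness numbers are invented): the selection update p'_i = p_i w_i / w̄ is term-for-term the same arithmetic as posterior = prior × likelihood / evidence.

```python
# Discrete-generation selection vs. Bayes' rule (toy numbers).
# Type frequencies play the role of a prior, fitnesses the role of
# likelihoods; the next generation's frequencies are the posterior.

freqs = [0.5, 0.3, 0.2]      # "prior" over types
fitness = [1.0, 2.0, 0.5]    # "likelihood": relative survival of each type

mean_w = sum(p * w for p, w in zip(freqs, fitness))     # "evidence"
next_gen = [p * w / mean_w for p, w in zip(freqs, fitness)]

print([round(p, 3) for p in next_gen])
# same arithmetic as posterior_i = prior_i * likelihood_i / evidence:
# types with above-average fitness gain frequency, just as hypotheses
# with above-average likelihood gain posterior mass
```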

interesting that no one wants to respond to genuinely interesting biological questions on sci... the best you get is "have whites got a better iq than blacks" and "is evolution real?". either no one on sci has a genuine interest in science or they aren't equipped to engage with genuinely interesting questions about what life really is. people just want controversy.

Because you come off as someone who read the Wikipedia article on this stuff and decided they were an expert.

the problem of choosing these reactions can subsequently be naturally aligned with our conceptions of risk/ambiguity in decision making, as well as things like exploration/exploitation. without the problems described in mapping an organism to its environment, we wouldn't get these phenomena of risk or exploration, and these processes happen sequentially because we obviously have computational costs. there's a limit to the extent we can sample our environment and our models of it.
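one standard toy for the exploration/exploitation trade-off under a limited sampling budget is an epsilon-greedy bandit. this sketch uses invented payoffs and is only an illustration of the trade-off, not anything from the free-energy literature.

```python
import random

# Epsilon-greedy bandit: a minimal model of the exploration/exploitation
# trade-off under a limited sampling budget (payoffs are made up).

random.seed(0)
true_means = [0.3, 0.5, 0.7]     # hidden payoff of each "behaviour"
counts = [0, 0, 0]               # how often each behaviour was tried
estimates = [0.0, 0.0, 0.0]      # the agent's model of each payoff
eps = 0.1                        # fraction of trials spent exploring

for t in range(2000):
    if random.random() < eps:
        arm = random.randrange(3)                          # explore
    else:
        arm = max(range(3), key=lambda a: estimates[a])    # exploit
    reward = 1.0 if random.random() < true_means[arm] else 0.0
    counts[arm] += 1
    # incremental running mean of observed rewards for this behaviour
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(counts)  # most samples concentrate on the best behaviour
```

the agent never learns the worse arms precisely, because spending samples on them has a cost; that is the computational limit on sampling the environment in miniature.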

so what kind of discussions do you want? is Veeky Forums really a place of discussion, or solely a substitute for social interaction for certain types of people?

it's annoying that people pretend they're so smart or insightful but don't talk about anything interesting. this is a genuinely interesting discussion and only one person has given an interesting post (and that was even before i replied to anything).

and you can search the free energy principle or anything related on wikipedia and see if it's related to anything i've said. you can even search the good regulator theorem and see if i've explained it in the same sense as the wikipedia article.

i can only assume that things start to sound like wikipedia when people use technical language about a topic that others don't know about.

just interesting how much this website talks about brainlets.

this is the good post i meant

and btw, i'm only talking about biology in this sense, i can't pretend to know about some of the physics/mathematical things on here. hence my reference to the iq and evolution threads.

life does not simply model its environment, it interprets meaning from it.
read what i linked

i have the same problem, talk about this with me
i hope you aren't a pseud

An animal's main food source is plants: the animal has teeth naturally designed to grind plant material. The animal traverses hard ground: it has hooves for shock absorption. Everything an animal possesses is a biological adaptation to its environment. Teeth, size, hair, speed, taste, blah, blah, blah.

thank you user, i learned something today

meaning is modelling the environment, isn't it? just in an abstract sense? the point of the mean-field approximation is that rather than interpreting every individual situation, you can extract certain regularities and apply a combination of them to novel situations. this is pretty much the same as how you talk about interpreting meaning. it turns an intractable problem into an easier one, in terms of mapping your environment to behaviours that help you survive. e.g. rather than trying to react to and learn about every individual person, each of which presents a unique sensory input, you can recognise the abstract concept of a person and approximate what people tend to do.
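the "extract regularities and apply them to novel situations" move can be sketched as prototype learning: instead of memorising every exemplar, keep one averaged prototype per category and classify new input by the nearest prototype. the features and categories here are invented for illustration.

```python
def centroid(points):
    """Mean of a list of 2-D points: the category's prototype."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in (0, 1))

def nearest(prototypes, x):
    """Category whose prototype is closest (squared distance) to x."""
    return min(prototypes,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(prototypes[c], x)))

# toy exemplars: two "categories" of sensory input in a 2-D feature space
exemplars = {
    'person': [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    'tree':   [(4.0, 3.8), (4.2, 4.1), (3.9, 4.0)],
}

# abstraction step: compress many exemplars into one prototype each
prototypes = {cat: centroid(pts) for cat, pts in exemplars.items()}

# never-seen inputs are handled via the abstract category, not by
# matching any stored individual
print(nearest(prototypes, (1.05, 1.1)))
print(nearest(prototypes, (3.8, 3.9)))
```

storage and lookup cost no longer grow with the number of individuals encountered, which is the point of the abstraction.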

this gets a (you)

whats your problem?

another way to state the problem is through the ship of theseus example in philosophy, which essentially says there is no objective way to distinguish between discrete states in the natural world (at least macroscopically). we ultimately come into contact with novel sensory input all the time. we see new people we've never seen before. it's easier to approximate what people as a category tend to do than to learn about every individual person. what i mean in the ship of theseus context is that you could potentially categorise people in a practically unlimited number of ways, and trying to model the world in this way is inefficient and computationally far too difficult for the brain. so we create approximate categories and extract relevant features of our input to make predictions. this is, in essence, a large part of why interpreting sensory input is so ill-posed. it is also ambiguous in other ways. the meaning of sensory input always depends on the history that came before (which obviously isn't always observable), and features like height and distance are entangled: we interpret both by how high up an object is in our visual field, so they can only be disambiguated contextually, which, given the combinatorial nature of our world, is too large a problem to solve case by case.
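the height/distance entanglement can be made concrete with a toy forward model (made-up numbers, a crude pinhole-style projection, not from any of the cited papers): the image only carries the ratio height/distance, so inverting it has no unique solution without context.

```python
# Toy projection: image elevation ~ height / distance (a crude
# pinhole-style model with invented numbers). Different causes, one datum.

def image_elevation(height, distance):
    return height / distance

a = image_elevation(1.0, 2.0)    # short object, near
b = image_elevation(5.0, 10.0)   # tall object, far
print(a, b)                       # identical sensory input for both

# inverting image -> (height, distance) is ill-posed: the forward map
# collapses a 2-D cause space onto a 1-D observation, so infinitely many
# (height, distance) pairs are consistent with the same input
```

this is the "nonlinear mixing may not be invertible" point from the Friston quotes in one line of arithmetic.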

if anyone thinks my viewpoint about how difficult it is to interpret sensory data is just nonsense, then, as well as referring to Karl Friston's papers between 2002 and 2005, which give a glimpse of it, you can look at papers about computational views of vision, which have ultimately influenced both modern neuroscience and machine learning by treating vision as an ill-posed problem. look at Poggio, one of the most influential visual scientists and a collaborator of the great David Marr.

cbcl.mit.edu/cbcl/people/poggio/journals/bertero-poggio-IEEE-1988.pdf ; an introduction.

math.nsc.ru/LBRT/u2/Survey paper.pdf

interestingly, ill-posed problems are arguably why we do science. otherwise you could take the data of any individual person doing a cognitive test and make inferences from it. we can't do that; we take big random samples and control variables so we can make inferences from data which is, by itself, ambiguous. you can give any person a cognitive test but you can't interpret it outside the context of an experiment, because there is no one-to-one mapping between the data from an individual test result and the causes that led to it. interactions between causes see to that.

another example: how do you differentiate someone who became homeless because they spent all their money on drugs from someone who was kicked out by abusive parents? it's a non-linear problem, and even if it is solvable, it takes a lot of computational power to distinguish.

"For example, consider our visual perception. It is known that our eyes are able to perceive visual information from only a limited number of points in the world around us at any given moment. Then why do we have an impression that we are able to see everything around? The reason is that our brain, like a personal computer, completes the perceived image by interpolating and extrapolating the data received from the identified points. Clearly, the true image of a scene (generally, a three-dimensional color scene) can be adequately reconstructed from several points only if the image is familiar to us, i.e., if we previously saw and sometimes even touched most of the objects in it."

extrapolation -> abstraction -> meaning.
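the interpolation step the quote describes can be sketched minimally (a made-up 1-D "scene", linear fill-in; nothing here is from the quoted source): a few sampled points stand in for the "identified points", and values in between are reconstructed rather than observed.

```python
# Filling in unobserved points from sparse samples, as the quote
# describes the visual system doing (toy 1-D "scene", linear fill-in).

def lerp(x0, y0, x1, y1, x):
    """Linear interpolation between (x0, y0) and (x1, y1) at x."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

samples = [(0, 0.0), (4, 8.0), (10, 2.0)]   # the few "identified points"

def reconstruct(x):
    for (x0, y0), (x1, y1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            return lerp(x0, y0, x1, y1, x)
    raise ValueError('outside sampled range')

print(reconstruct(2))   # never observed, inferred from its neighbours
print(reconstruct(7))
```

the reconstruction is only as good as the assumed regularity (here, linearity), which is the quote's point about needing familiarity with the scene.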

interestingly, variational bayes relies on variational calculus, equivalent to principles of least action, which among many problems in physics (e.g. those related to entropy) are involved in the highest levels of how we conceptualise the world (e.g. the standard model). is the principle of least action something that can underlie our understanding of everything? if it can apply to gauge theories in physics, is this just another gauge theory, but in biology? a rule for dissipative systems that regulate their entropy?

Jeremy England is another interesting citation.

a weird implication of this is that though it's obvious the categories we put on the world are sufficient for our use, there are actually no unique solutions. and in a sense this complements subjectivity: we can categorise things in different ways depending on the context. this is, in a sense, similar to the species problem in biology. species aren't objectively differentiable; there is no unique solution to defining species. yet the definitions or categories still help us look at, make predictions about, or solve problems in biology in a certain way.

yes

This thread is so pretentious

fuck off brainlet.

big boys unwanted

>meaning is modelling the environment isnt it? just in an abstract sense?
Yes. And no. Meaning is modeling the relation of signs that represent some externality to the interpreter. So while it is an abstraction of the environment, the environment being abstracted is itself a subjective construct.
>the point of the mean field approximation is that rather than interpreting every individual situation, you can extract certain regularities and apply a combination of them to novel situations
The point of developing a biosemiotic is to describe how life works in General (big G), on every scale of analysis (genetic, cellular, organismal, ecological, etc.).
We are talking about the same mechanism using different paradigms.
>the meaning of sensory input always depends on the history that came before (which obviously isnt always observable), things like height and distance are entangled e.g. we interpret both of these features by how high up an object is in our visual field so ultimately height and distance can only be interpreted contextually which in the combinatorial nature of our world, is too large of a problem to solve in a case by case way.
Check this out
ncbi.nlm.nih.gov/m/pubmed/28163801/
More on this later.

>Meaning is modeling the relation of signs that represent some externality to the interpreter.
*meaning is modeling signs that represent the relationship of some externality to the interpreter.*

Good show. Species is a heuristic, glad you recognize that. The ontological answer to the species problem is that all life is the same thing.
you read this yet? ncbi.nlm.nih.gov/m/pubmed/18365164/
Wanted someone to discuss something interesting with me. Figured that was the best way to motivate them to do that.

Baby engineer can't handle thinking beyond memorizing his calc formulas

you actually havent said shit. drop your bastard semiotics

no the ontological answer is ambiguity you simple cunt

Pseudo-intellectuals gonna pseudo-intellectualize.

I haven't said shit?
Or you haven't interpreted shit?

You are not wrong. I couldn't think today. Maybe it will be better tomorrow if I can control myself around my substances.