Is there a simple dynamical system whose output asymptotically approaches its future input?

Your mom bouncing on my cock

a perfectly rigid stick with a velocity input and zero mass.

same thing

aren't those called sinks? feel like I remember that from diffeq

en.wikipedia.org/wiki/Attractor#Fixed_point
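
For a constant input that's exactly what a sink does. Rough sketch of the simplest case (a first-order lag; the gain and step size are arbitrary picks, nothing canonical):

import numpy as np

# First-order lag: dx/dt = -k * (x - u). For constant u, x -> u as t -> inf,
# so the output asymptotically approaches the (constant, hence also future) input.
k, dt = 2.0, 0.01          # gain and step size: arbitrary choices
u = 1.0                    # constant input
x = 0.0                    # initial state
for _ in range(2000):
    x += dt * (-k * (x - u))   # forward-Euler integration of the sink
print(x)                   # ~1.0, i.e. the input value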

Stated another way, is there a simple dynamical system that eventually comes to predict its own inputs?

a person eating his own shit

no, because you can apply any goddamn function of time you want to the system. There is no way for the system to predict what you're going to apply.

It could still work but the system would not be very simple.

Then can you prove a theorem establishing a lower bound on the complexity of the required system, as a function of the complexity of the source of the input?

The mammalian brain is good at this.

>It could still work but the system would not be very simple.
No, it could not. The input to the system could come from an inverse model of the system that always applies the worst possible input, such that the system can never predict it. Of course, in the real world we can do things to predict our input, like 3D-scanning the ground so we know what a car tire is going to hit.

That might not do it; you lose some matter each pass through. That's why the human centipede wouldn't work.

A perfect intelligence

massless cock dicklet detected

Why would a mammalian brain predict its own inputs? That would mean maintaining a prediction of the activity of every photoreceptor in your retina, every pressure sensor in your skin, every hair cell in your ear, etc., all the time. That seems incredibly costly to me.

At best, I believe the mammalian brain compresses this information and checks the relative probability of a particular sensory state against some other compressed representation of historical norms.
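
As a hedged sketch of "checking against historical norms" (my stand-in, not a claim about neural implementation): keep running statistics of the input and score each new sample by how far it deviates, Welford-style.

import math

# Running estimate of historical norms via Welford's algorithm; a new
# sample is scored by how many standard deviations it sits from the
# running mean (a crude stand-in for "relative probability").
n, mean, m2 = 0, 0.0, 0.0
def surprise(x):
    global n, mean, m2
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    std = math.sqrt(m2 / n) if n > 1 and m2 > 0 else 1.0
    return abs(x - mean) / std

for x in [1.0, 1.1, 0.9, 1.0, 5.0]:      # last sample violates the norms
    print(x, round(surprise(x), 2))      # big score flags the anomaly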

It's the leading hypothesis of how the brain works, though, and there's evidence for it. The reason it would is that inferring what sensory states mean poses an inverse problem that is intractable on its own. This means the brain needs prior constraints to make inferences about sensory input: contextualised predictions manifest.

>the brain needs prior constraints to make inferences about sensory input
Sure.

>contextualised predictions manifest.
How so? I understand that good regulators must model their environment, but the nature of that model need not be generative at the level of raw sensory inputs.

No, because the input is outside the system. You need to constrain the domain of the input for this question to make any sense at all. For instance, if your input domain is sine waves, then yes, such a system exists; but if the input is just random numbers, then no.
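
To make the sine-wave case concrete, here's a rough sketch: if the frequency w is known (that knowledge is the domain constraint), a least-squares fit over past samples recovers amplitude and phase, and the fitted model then outputs the input at any future horizon h. All the numbers here are made up:

import numpy as np

# If the input class is sine waves of known frequency w, fitting
# u(t) = a*sin(w t) + b*cos(w t) on past samples lets the system output
# the *future* input: evaluate the fitted model at t + h.
w, h, dt = 2.0, 0.5, 0.01            # frequency, lookahead, step (assumed)
t = np.arange(0, 10, dt)
u = 3.0 * np.sin(w * t + 0.7)        # unknown amplitude/phase to recover
X = np.column_stack([np.sin(w * t), np.cos(w * t)])
a, b = np.linalg.lstsq(X, u, rcond=None)[0]
t_future = t[-1] + h
pred = a * np.sin(w * t_future) + b * np.cos(w * t_future)
true = 3.0 * np.sin(w * t_future + 0.7)
print(pred, true)                    # agree: output matches future input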

The idea is that sensory inputs don't map one-to-one to their causes (in the outside world).

E.g. objects appear to us from many different angles; the light frequencies coming off an object are affected by ambient properties as well as the object itself (colour constancy); the vertical position of an object on your retina is a cue to both physical height from the floor and/or distance. Sensory inputs are naturally ambiguous about their meaningful causes. There must be functions determining how sensory inputs acquire this ambiguity (e.g. the rules of ambient occlusion), and the brain must encode the parameters of these functions (and also the underlying causes) in order to solve the inverse problem. You naturally get a generative model, and being able to produce predicted sensory inputs from causes is arguably the only way you can optimise that model: you calculate an error so that you can update it.
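
A minimal sketch of that inversion loop, with a deliberately toy generative model (a 1x2 mixing matrix, so one observation has two candidate causes): inference descends the prediction error plus a prior term, and the prior is what makes the ambiguous problem solvable. Nothing here is a claim about actual cortical wiring.

import numpy as np

# Predictive-coding-style inference: the generative model g maps causes v
# to a predicted sensory input; inference descends squared prediction
# error plus a Gaussian prior on v (the constraint that makes the
# inverse problem well-posed). A is an arbitrary toy mixing matrix.
A = np.array([[1.0, 1.0]])          # 1 observation, 2 causes: ambiguous
def g(v): return A @ v              # toy generative model

s = np.array([2.0])                 # observed sensory input
v = np.zeros(2)                     # initial guess at the causes
lam, lr = 0.1, 0.1                  # prior weight and learning rate
for _ in range(500):
    err = s - g(v)                  # prediction error
    v += lr * (A.T @ err - lam * v) # gradient step: error drives inference
print(v, g(v))                      # causes split evenly; prediction ~1.9
                                    # (the prior shrinks it slightly)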

I also think the fact that you can imagine or dream in very flexible ways across many different scenarios is suggestive of a generative model where the brain can generate its own input.

Did you mean ex ante or ex post?

youtube.com/watch?v=e3mG7wHXXUk

I mean that the generative model arises simply by virtue of the brain encoding the relationship between inputs and causes, so that causes can be inferred from inputs and inputs from causes, i.e. prediction.

It relates to the good regulator theorem in that the world is too complicated and intractable for the brain or organism to learn or encode an isomorphic model of it.

An easier way is to partition the world into simpler independent factors that capture any underlying correlations or shared variance found in the data/input.

The only problem, as noted in the prior post, is that causes map to inputs in an ambiguous way: they "mix" so as to be uninvertible, and must be unmixed using the constraining parameters mentioned before, which is essentially the function of the thalamo-neocortical part of the brain.
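
Partitioning the world into simpler shared factors is roughly what PCA/ICA-style decompositions do. Toy example with made-up data: two correlated sensory channels reduce to one underlying factor, which is a much cheaper model than encoding each channel separately.

import numpy as np

# Two sensory channels share one underlying cause plus noise; the top
# principal component recovers the shared factor.
rng = np.random.default_rng(0)
cause = rng.normal(size=1000)                    # one hidden factor
x = np.column_stack([cause + 0.1 * rng.normal(size=1000),
                     2 * cause + 0.1 * rng.normal(size=1000)])
x -= x.mean(axis=0)                              # center before SVD
_, svals, vt = np.linalg.svd(x, full_matrices=False)
var_explained = svals**2 / np.sum(svals**2)
print(var_explained)    # ~[0.998, 0.002]: one factor captures the data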

I think this gives rise to our nature as conscious perceivers. Without this inferential process, I think we would just be input-output machines without consciousness.

>constraining parameters
How about defining those dynamic parameters as "generating" parameters instead, since it's always a running process: there's no beginning and no end you could determine.

Do you mean in the real world or in math? Because things like image processing use acausal systems all the time.
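
Right. E.g. a centered (zero-phase-style) smoothing filter formally reads "future" samples: trivial offline on images or recorded signals, impossible in strict real time without adding delay. Sketch with an arbitrary window size:

import numpy as np

# Acausal (non-causal) smoothing: each output sample averages past *and*
# future inputs, which only works when the whole signal is available.
def centered_moving_average(u, half_width=2):
    kernel = np.ones(2 * half_width + 1)
    kernel /= kernel.size
    return np.convolve(u, kernel, mode="same")   # taps reach into u[t+k]

u = np.sin(np.linspace(0, 2 * np.pi, 50))
y = centered_moving_average(u)
print(np.allclose(y[10], u[8:13].mean()))        # True: uses future samples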