Reminder that someone LITERALLY posted this on /agdg/:
>A floating point means that the number has, well, a floating decimal point. Its decimal point is at any arbitrarily chosen position. (the position is not RANDOM, just ARBITRARY -- it's chosen based on the needs at that time, but those needs can be anything.)
>Usually, computers have a set number of digits to work with, but can put the point wherever because the math still works out fine. So the difference between 1.2345 and 123.45 is not that big, computationally, in base 10. (computer math is done in base 2 because le ebin binary may mays actually having root in real computation despite most binary converters and the like being totally arbitrary in how they interpret things; computers use base 2 because they only have "On" and "Off" to work with as numbers)
>However, to the user, a moving decimal point can represent a large difference, because the significant figures you can use are necessarily limited.
>Obviously to us, a computer has LOTS of digits to go through, but let's say that we have some fairly small number of them, like 5.
>If I want to calculate physics stuff on a cartesian plane stretching out to "Very Far" units on each side in 2d. an object can be at a very precise position near the center, like [1.0082, 2.5598], but if it's far from the center, I start to lose precision. 10 units out, and I can only go to 3 places. 100 units, and I can only go to 2 places, like [650.25, 554.47]. If it's more than 99998 units out, I lose whole individual units as I start having to define things like 1.0000x10^5 for being at 100000 in any coordinate (but not just that, but now using much more memory because I have to store "1.0000" and "10" and "5" instead of just one number.)
Fucking look at it. Look at it and laugh.
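To be fair, the precision cliff the anon is stumbling toward is real, even if everything around it is wrong. Here's a minimal sketch of it in Python (the 1.0082 and 100000 coordinates are taken from the post; the round-trip-through-32-bit-float trick is my own way of simulating the single-precision floats a game engine would actually use):

```python
import struct

def to_f32(x):
    # Round-trip a Python float (64-bit) through a 32-bit float,
    # simulating the single-precision coordinates typical in games.
    return struct.unpack('f', struct.pack('f', x))[0]

# Near the origin, sub-unit detail survives almost exactly:
near = to_f32(1.0082)
print(near)  # very close to 1.0082

# Far from the origin, the same detail is silently discarded.
# At 100000, the gap between adjacent float32 values is about 0.0078,
# so a 0.001 offset rounds away to nothing:
far = to_f32(100000.0 + 0.001)
print(far)  # exactly 100000.0
```

This is why engines with big worlds use origin rebasing or doubles for world coordinates: the float doesn't get "bigger in memory" as the anon claims (it's always the same fixed-width sign/exponent/mantissa), it just spends its fixed budget of significant bits on the large magnitude instead of the fine detail.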