Grad School for Computer Engineering

Look up binary prefixes. Historically in CS we used kilo-, mega-, giga-, etc. as multiples of 1024 rather than 1000 because 1024 is 2^10: memory and addressing naturally come in powers of two, and 1024 is the closest power of two to 1000. Later, companies annoyingly switched to 1000, ostensibly for the lay-person but more realistically to make their numbers look slightly bigger -- the same way ISPs intentionally conflate bits and bytes, using a capital B for bits when the common industry convention is that capital B means bytes and lowercase b means bits.
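To see how much the bits/bytes conflation matters, here's a quick back-of-the-envelope sketch in Python (the 100 Mb/s figure is just an illustrative number, not from any real plan):

```python
# An ISP plan advertised as "100 Mbps" means 100 million *bits* per second.
# Divide by 8 to get the rate in *bytes*, which is what downloads report.
advertised_megabits = 100
bits_per_second = advertised_megabits * 10**6
bytes_per_second = bits_per_second / 8

print(f"{advertised_megabits} Mb/s = {bytes_per_second / 10**6:.1f} MB/s")
# -> 100 Mb/s = 12.5 MB/s: an 8x difference if you misread b as B
```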

There's a more modern convention where, if you're using multiples of 1024, you use the binary prefixes and say "kibibytes"/KiB, "mebibytes"/MiB, etc. I have mixed feelings about it.
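For illustration, here's a minimal formatter that keeps the two conventions explicit (the function name and unit tables are my own choices, not any standard library API):

```python
def human_size(n_bytes: int, binary: bool = True) -> str:
    """Format a byte count using binary (KiB/MiB) or decimal (kB/MB) prefixes."""
    base = 1024 if binary else 1000
    units = (["B", "KiB", "MiB", "GiB", "TiB"] if binary
             else ["B", "kB", "MB", "GB", "TB"])
    size = float(n_bytes)
    for unit in units:
        if size < base or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= base

print(human_size(1_500_000))         # 1.43 MiB
print(human_size(1_500_000, False))  # 1.50 MB
```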

>Believing 1KB =/= 1000 Bytes
>Not knowing that 1 KiB = 1024 bytes

1 KB = 2^10 bytes = 1024 bytes.

It really just depends on what industry you're working in. If you're working with lower-level stuff, you're more likely to see the KB = 2^10 bytes convention, whereas if you're in a non-traditional software industry, people are probably going to say KB = 10^3 bytes.

en.wikipedia.org/wiki/Kibibyte
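The gap between the two conventions also compounds with each prefix step, which is why it shows up so visibly on drives. A quick check (the drive size is just the stock 1 TB example):

```python
# Each prefix step multiplies the decimal/binary mismatch by 1000/1024 (~2.4%).
prefixes = [("kB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")]
for power, (dec, bi) in enumerate(prefixes, start=1):
    print(f"1 {dec} = {1000**power / 1024**power:.3f} {bi}")

# A "1 TB" drive holds 10^12 bytes, which an OS reporting binary units shows as:
print(f"{10**12 / 1024**3:.0f} GiB")  # -> 931 GiB
```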

I figure if the drivemakers want to stick to SI prefixes and inflate their numbers, the only proper response is to create a new prefix so that no ambiguity exists.

There is a different prefix. kB = 1000 bytes
KiB = 1024 bytes

I really hate the ambiguity too. I also prefer the 1000-byte convention since it keeps calculations cleaner.
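For what it's worth, the "cleaner calculations" point is visible in two lines (a toy comparison, nothing more):

```python
# Decimal prefixes are mental math; binary ones need powers of 1024.
print(int(2.5 * 10**9))    # 2.5 GB  -> 2500000000 bytes, readable at a glance
print(int(2.5 * 1024**3))  # 2.5 GiB -> 2684354560 bytes, opaque without a calculator
```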

I am aware. I'm saying that adopting KiB as the standard for any serious work outside of drivemaking makes more sense.

Come on, are you really this dense?

Doing anything with networks is absolute hell for keeping track of the different conventions. Marketers will advertise data speeds in powers of 10, but almost anything regarding network protocols works in powers of 2.
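As a concrete instance of the mismatch (the link speed and window size below are illustrative picks, not from any particular spec):

```python
# Marketing convention: "1 Gbps" is decimal -> 10^9 bits/s.
link_bytes_per_s = 10**9 // 8        # 125,000,000 bytes/s

# Protocol convention: buffers and windows come in binary sizes, e.g. 64 KiB.
window = 64 * 1024                   # 65,536 bytes

print(f"{link_bytes_per_s / window:.1f} windows/s")  # -> 1907.3
# Misreading 64 KiB as 64 KB (64,000 bytes) gives 1953.1 instead --
# a ~2.4% error purely from mixing the two prefix conventions.
```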