Memory Cards Have “Less Space” Than Advertised, Here Is Why

Have you ever experienced that moment when you insert your 16GB memory card into the camera, only to discover that this new, freshly formatted card is a nickel short of 15GB? Or a 32GB card turning into 29.8GB once installed?


Have you ever wondered where those Gigabytes are hiding? The truth is that they are not hiding at all.

It has more to do with the way card companies (and hard drive companies too) choose to label their products.

In English, Kilo means one thousand (1,000¹ = 1,000), a Mega is a million (1,000² = 1,000,000), a Giga is a billion (1,000³ = 1,000,000,000) and so on (Tera, Peta, Exa, Zetta & Yotta). This system is called the SI units system.

In Computerish, however, the numbers are a bit different: A Kilo means 1,024¹ = 1,024, a Mega is 1,024² = 1,048,576, a Giga is 1,024³ = 1,073,741,824 and so on. This is called the Binary units system.

So there is a difference in what Kilo, Mega and Giga mean, and that difference gets bigger the “stronger” the prefix is.

For Kilo, the difference is only 2.3%, for Mega it is 4.6% and for Giga it is 6.8% – see a pattern here?
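
If you want to check those figures yourself, here is a minimal Python sketch (the prefix list and the rounding are my own choices, and the gap is measured relative to the binary value). Note that the Giga gap is really about 6.87%, so it shows up as 6.8% or 6.9% depending on how you round:

```python
# Gap between each SI prefix and its binary counterpart.
prefixes = ["Kilo", "Mega", "Giga", "Tera", "Peta"]

for power, name in enumerate(prefixes, start=1):
    binary = 1024 ** power   # what the computer counts
    si = 1000 ** power       # what the label counts
    gap = (binary - si) / binary * 100
    print(f"{name}: {gap:.1f}% difference")

# Kilo: 2.3% difference
# Mega: 4.6% difference
# Giga: 6.9% difference
# Tera: 9.1% difference
# Peta: 11.2% difference
```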

Back to the memory cards.

Memory card manufacturers choose to use the SI system to denote card sizes. Our computers and card readers use the binary system for size calculation, and this is where the missing bytes are.
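
To see what that means in practice, here is a small, hypothetical Python helper (the function name and the example sizes are mine) that converts an advertised SI capacity into the binary figure your computer reports:

```python
def reported_size_gb(advertised_gb: float) -> float:
    """Convert an advertised SI capacity (GB) into the binary 'GB'
    (strictly, GiB) that computers and card readers report."""
    bytes_on_label = advertised_gb * 1000 ** 3   # the label counts in powers of 1,000
    return bytes_on_label / 1024 ** 3            # the computer counts in powers of 1,024

for size in (16, 32, 64):
    print(f"A {size}GB card shows up as about {reported_size_gb(size):.1f}GB")

# A 16GB card shows up as about 14.9GB
# A 32GB card shows up as about 29.8GB
# A 64GB card shows up as about 59.6GB
```

Which lines up with the “nickel short of 15GB” and 29.8GB figures from the intro, before the file system takes its own small bite.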

Of course, the card companies are covered; they do mention this fact on their sites (in small print next to an asterisk, or in hover text revealed when you mouse over that asterisk). Here are screenshots from three leading card and hard drive manufacturers, though they are not the only ones to use this practice:

SanDisk:

[Screenshot: SanDisk's capacity disclaimer]

Lexar:

[Screenshot: Lexar's capacity disclaimer]

Seagate:

[Screenshot: Seagate's capacity disclaimer]

If you followed the math, you probably realized that the toll this calculation method takes gets bigger as the data units get bigger, so the toll on a 1GB memory card is way smaller than on a 1 Tera hard drive. Have a look at this table to sum things up:


| Size     | Binary size (bytes)   | SI size (bytes)           | Delta (%) | Delta (GB) |
|----------|-----------------------|---------------------------|-----------|------------|
| 512 Mega | 536,870,912           | 512,000,000               | 4.6       | 0.02       |
| 4 Giga   | 4,294,967,296         | 4,000,000,000             | 6.9       | 0.27       |
| 16 Giga  | 17,179,869,184        | 16,000,000,000            | 6.9       | 1.10       |
| 64 Giga  | 68,719,476,736        | 64,000,000,000            | 6.9       | 4.40       |
| 1 Tera   | 1,099,511,627,776     | 1,000,000,000,000         | 9.1       | 92.68      |
| 4 Tera   | 4,398,046,511,104     | 4,000,000,000,000         | 9.1       | 370.71     |
| 1 Peta   | 1,125,899,906,842,624 | 1,000,000,000,000,000     | 11.2      | 117,253.43 |
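
If you want to reproduce the table above, or extend it to other sizes, a short sketch along these lines will do; the row list and the print formatting are my own, and the arithmetic is the same one the table uses (the delta in GB is expressed in binary GB):

```python
POWER = {"Mega": 2, "Giga": 3, "Tera": 4, "Peta": 5}
GIB = 1024 ** 3  # the binary "GB" used for the last column

rows = [(512, "Mega"), (4, "Giga"), (16, "Giga"), (64, "Giga"),
        (1, "Tera"), (4, "Tera"), (1, "Peta")]

for amount, prefix in rows:
    binary_bytes = amount * 1024 ** POWER[prefix]
    si_bytes = amount * 1000 ** POWER[prefix]
    delta_pct = (binary_bytes - si_bytes) / binary_bytes * 100
    delta_gb = (binary_bytes - si_bytes) / GIB
    print(f"{amount} {prefix}: {binary_bytes:,} vs {si_bytes:,} "
          f"-> {delta_pct:.1f}% ({delta_gb:,.2f} GB)")
```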
         

Now What?

Now, I think it would be fair if we politely asked memory card makers and hard drive makers to switch to binary, so their labels are better aligned with the way we use them.

  • Nosamluap

    http://en.wikipedia.org/wiki/Kibibyte addresses this; using ‘kilo’ for binary is wrong.

  • DrP

    *asterisk!

    • http://www.diyphotography.net/ udi tirosh

      yes, yes, thanks*

      *asterisk != Astrix :)

  • ZF

    For the record, this is the way all storage capacities (including floppy and hard drives) have always been denoted. It used to be fun arguing with store owners who didn’t know this, and getting them to give you a free “upgrade” because of false advertising.

  • thomas

    If I remember correctly, there was a class action lawsuit about this very issue back in the ’90s. Since then, media companies have had to put the asterisk. I find it funny that even though they follow the binary methodology, they were still making cards using the SI numbering system (128, 256, 512, 1024).

  • Patrick

    Go back and change history. It’s an 8-bit byte so the math has worked this way forever and for all technologies. There is no conspiracy or lies here. It’s just the way it happened. Too bad it didn’t all start with a base-10 byte but it didn’t.

    • Septic

      I’ll try and make this short… Two major computer peripheral types use SI (base 10) units: storage and networking devices. The generally accepted reason is that both were initially developed prior to the overwhelming use of binary. The HDD was introduced in 1956, and both binary and decimal formats were used; an example is the IBM 650 (1953-1962), which used decimal units for data and addressing.

      Adding confusion to the uncertainty in storage is that no storage device will ever hold as much data as printed on the label. File systems keep hidden data: EXT (Linux) and NTFS (Windows) have hidden metadata associated with every file (users usually encounter this when dealing with file permissions), and a lesser-known example is that NTFS files may also hold Alternate Data Streams. All file systems reserve space for address look-up tables and the addresses themselves. Storage blocks were generally 512-byte logical blocks, but most modern large drives are now partitioned with 4096-byte blocks to keep address-table lookup performance reasonable for larger file sizes. However, if a file is 513 (or 4097) bytes, two blocks will be used, with the second block containing 1 byte of data; the remainder is unusable unless the file grows larger.
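
      A tiny Python sketch of that rounding (the block and file sizes are taken from the comment; the helper name is made up): files occupy whole blocks, so the last block's leftover space is wasted.

      ```python
      import math

      def space_on_disk(file_size: int, block_size: int) -> int:
          """Bytes actually consumed: files are stored in whole blocks."""
          return math.ceil(file_size / block_size) * block_size

      for file_size, block_size in [(513, 512), (4097, 4096)]:
          used = space_on_disk(file_size, block_size)
          print(f"{file_size}-byte file, {block_size}-byte blocks: "
                f"{used} bytes used, {used - file_size} bytes wasted")

      # 513-byte file, 512-byte blocks: 1024 bytes used, 511 bytes wasted
      # 4097-byte file, 4096-byte blocks: 8192 bytes used, 4095 bytes wasted
      ```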

      One last note… Whenever a computer displays a decimal number it does that only for us humans. Binary, octal and hexadecimal are closely related; decimal is a conversion. 10000 (binary) = 20 (octal) = 10 (hexadecimal) = 16 (decimal). 100 (decimal) = 64 (hexadecimal) = 144 (octal) = 1100100 (binary). If you ever wondered why the numbers in IP addresses only go up to 255, that’s because 255 decimal is FF hexadecimal, and the next hexadecimal value, 100, no longer fits in a single byte.
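
      The same conversions, written out in Python in case you want to play with them (the 0b/0o/0x prefixes are Python's notation for binary, octal and hexadecimal literals):

      ```python
      # 10000 (binary) = 20 (octal) = 10 (hexadecimal) = 16 (decimal)
      print(0b10000, 0o20, 0x10)           # -> 16 16 16

      # 100 (decimal) rendered in the other bases
      print(bin(100), oct(100), hex(100))  # -> 0b1100100 0o144 0x64

      # One byte tops out at 255 because 0xFF is the largest two-digit hex value
      print(0xFF)                          # -> 255
      ```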

  • Dr.101010001111000011100

    For Kilo, the difference is only 2.3%, for Mega it is 4.6% and for Giga it is 6.8% – see a pattern here?

    nope, but I would if it was 6.9%

    • Camil

      But this really depends on the number of significant figures used, so 6.8% is maybe not false.

  • jceeetle

    MacOS X no longer uses binary to report storage sizes. It uses the SI system. A 32GB card is reported as 31.9GB. A 2TB drive is reported as 2TB capacity.

  • Jeffrey

    This is a new piece of information for me. Thank you

  • David

    The capacity will be less than that due to bad blocks on the card. This can be up to 2.5% (25/1024 blocks) according to the chip manufacturers.

  • Orki

    I used to work in a flash memory company so you can take my word for this…

    The underlying truth is that the card manufacturers aren’t trying to rip you off at all. The number of bytes internally on the memory chip is at LEAST the binary Giga, i.e. 34,359,738,368 bytes per “32GB” card (often a bit more), but they aren’t ALL available for you to use, for several reasons:

    A) Flash memories stress the production process to the max and are huge-area chips, so they often (10-20%) have bit, line, column or sector defects right out of the fab. These defects are screened in post-production testing, and the whole defective sector is marked “bad” and either replaced by dedicated spares or simply mapped out of the valid address space (like a bad sector on a hard disk).
    This requires keeping some sectors “on the side” to use instead, so all cards have the same available capacity.

    B) During the lifetime of the card, especially if it was tortured with many write cycles and high temperatures, some defects may appear that were not there in the first place. The controller on the card that manages the flash memory chip can detect these errors and add those locations to the bad-sector table it manages. The data is moved to one of the sectors that was “kept on the side” and you don’t feel a thing.

    C) Said controller runs its own firmware, file allocation table and all the management data which are stored on the flash chip and require some space too.

    D) Finally, suppose you keep some files on the card indefinitely while other memory locations are write-erased many times; those latter locations will wear out much faster than the former. To average this out, and also to re-write existing data in order to “refresh” it (write strong ‘1’s and ‘0’s again), the controller runs what is called a “wear leveling” algorithm.
    In the wear-leveling algorithm, the card controller, while powered but not doing anything useful, moves old files to new (erased) locations in the memory chip, thus refreshing the bit state and also allowing the former location to free up and wear-level like the rest of the bits. It then updates the new file location in the file allocation table so you don’t realize a thing!
    To be able to move files around even if the card is “100%” full of cat videos, some sectors are also “kept on the side” for this purpose.

    Since all these sneaky tricks require putting several percent of the memory on the side, marketing figured an asterisk with a lame excuse is simpler than a 2-page-long blog comment.

    I think so too.

    • joe_average

      well said! completely agree, from what I’ve learned over 20 yrs with PCs.
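
    For the curious, here is a very rough Python sketch of the spare-sector remapping Orki describes; the class name, the sector counts and the remapping policy are all invented for illustration, and a real controller is far more involved:

    ```python
    class FlashController:
        """Toy model: logical sectors map to physical sectors, and bad
        ones are swapped for spares that were 'kept on the side'."""

        def __init__(self, usable_sectors: int, spare_sectors: int):
            # Identity mapping to start with: logical i -> physical i.
            self.mapping = {i: i for i in range(usable_sectors)}
            # Spare physical sectors, hidden from the advertised capacity.
            self.spares = list(range(usable_sectors, usable_sectors + spare_sectors))

        def mark_bad(self, logical: int) -> None:
            """A physical sector went bad: remap its logical sector to a spare."""
            if not self.spares:
                raise RuntimeError("out of spares; capacity would have to shrink")
            self.mapping[logical] = self.spares.pop()

    card = FlashController(usable_sectors=1000, spare_sectors=24)
    card.mark_bad(42)            # the user never notices the swap
    print(card.mapping[42])      # -> 1023 (a former spare)
    ```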

  • Tony Guillaro

    Too much math for me

  • Bapu

    I require a 2GB data card which can store more than 1.84GB of data. How can I get this type of card?