Nibbles-n-bits-n-bits-n-bits-n-bits-n-…

Okay, having addressed 64-bit hardware, the question remains: who cares? What does it actually mean to you?

Well, nothing really. For all the end-user knows the system could be 8-bit or 500-bit or whatever internally, and it doesn’t matter. Linux or Windows or whatever you use will still look, feel, and run the same way. The only people it really makes a big difference to are the programmers, but in a roundabout way it does eventually wind up affecting the end-user.

What effect does the number of bits of the processor have anyway? Primarily, it sets the limits within which the programmers have to work. A 32-bit integer can hold values up to just over four billion (or +/- 2 billion if you want to include negative numbers) and likewise a 32-bit memory address can cover four gigabytes of memory. Large numbers, but not quite large enough in many cases. 32 bits is not enough to hold the population of the earth. Hard drives and even individual files easily exceed the four gigabyte mark, making 32 bits insufficient for uniquely identifying a specific point within the drive or file. Some programs, like games and databases, want huge amounts of memory and demand is of course constantly growing.

These limits aren’t fundamental though; there are already ways of handling values larger than 32 bits even though the CPU itself is limited to 32. Hard drives for example can be broken down into ‘blocks’ of data. If you have a 512-byte block and then use two separate numbers to specify which block and then the offset within the block, you can now access up to 2 terabytes of information instead of just four gigabytes. If your program needs more than four gigs of memory, it can dump portions of data out to disk and then read them back in later as needed instead of trying to keep it all in memory at once. If you need to deal with numbers larger than four billion, you can take two 32-bit numbers and ‘merge’ them by doing the appropriate math on both of them, and there are math packages and compilers that will hide the grubby little details for you. Workarounds like this will let 32-bit platforms deal with situations of any size, regardless of the low-level 32-bit limitations.

Why bother going to 64 bits then if 32 bits will work just fine? Well, it’s a matter of practicality. Workarounds may exist, but they have disadvantages. Faking out larger-bit math by using multiple numbers is much slower; adding a 32-bit number to a 32-bit number is a single instruction on a 32-bit processor, but adding a faked-64-bit number to a faked-64-bit number takes multiple, slower instructions, whereas on a true 64-bit processor it’s back to a single fast instruction again. Operations like multiplication and division are even worse. All of the extra memory and disk management necessary to avoid exceeding the four-gig memory boundary increases the complexity of the code, inevitably leading to more bugs, and disk will always be slower. It would be far, far faster and more reliable to pull a whole 10-gig database into memory than to deal with it in chunks.

Most of these issues still don’t really affect home users much; Joe Blow doesn’t have a 10-gig database he needs to access at tens of thousands of requests per second, Grandma doesn’t have to be able to count every person on the planet for her mailing list, and most other uses of 64-bit numbers are isolated enough that applications just deal with them in their own way via the workarounds. It is, however, only a matter of degree. The amount of data we handle *will* grow, the need for higher limits will grow with it, and increased use of workarounds will make software slower, buggier, and more painful to write. Eventually there will come a breaking point where it’ll just be easier to convert to a 64-bit platform instead of trying to work around the 32-bit limits.

This is pretty much what happened in the big conversion from 16-bit to 32-bit systems; 16 bits imposes much smaller limits, but enough workarounds existed that it was practical to continue using 16-bit systems and OSes for quite a while after 32-bit ones were available. Eventually though, they were too much of a pain in the ass to use and maintain, and everyone made the jump to 32 bits.

Is a jump to 64 bits imminent then? Well, maybe… 32 bits and the workarounds will be good enough for a lot of people for a long time to come, but some limits are getting close. The main limit of interest will likely be on memory, and some programs like games are already pushing those limits. EverQuest on my system easily uses up my entire 512 megs of RAM, putting it right within an order of magnitude of the limit, and it’ll only get worse as games become more detailed.

Fortunately, the jump shouldn’t be that painful. With the Athlon64s and Opterons already out now and with full 32-bit compatibility, many people will be buying them just as a general I-want-faster CPU upgrade regardless of their 64-bitness. People can continue to run and write 32-bit programs, and when companies make the shift and start releasing 64-bit programs, many people will already be able to run them.

The 64-bit future looks promising so far, at least.
