The Case Against 64-bit Builds

I wrote a version of this some time ago as an argument against what I considered an ill-considered push, within the company I worked for, to 'modernize' the software on a particular high-availability, high-reliability product line. That push seemed not to take into account the considerable costs and side effects such a change would have on the somewhat monolithic products in question. The argument doesn't necessarily apply to software in general, but it should at least be considered wherever you have a choice of build size.

Leaving aside esoteric number-crunching applications, where having 64-bit integers by default is required, or at least advantageous, the main reason to move to a '64-bit' build is to gain additional address space. So-called 32-bit builds offer at most a 4 GB address space, and in practice a given platform usually provides less: for performance reasons, PPC and Intel Linux platforms often give processes only 3 GB of address space, and MIPS Linux platforms only 2 GB. It isn't all that hard anymore to craft a large application that starts bumping its head on that ceiling in a 32-bit process. Systems these days often have more than 4 GB of RAM, so a single 32-bit process cannot use all of it even if it wanted to.

64-bit processes offer effectively unlimited address space, far more than any machine can physically populate, and so give you the advantage of being able to use all the RAM that is available. (This is the exact same argument that was made for 32-bit processes not all that many years ago.)

Paradoxically, this same 'advantage' can also be a pretty substantial disadvantage! To wit:

IMHO, the engineering effort to convert to a 64-bit build might be better spent on slicing the application into multiple 32-bit pieces. That would offer the following real, and potential, advantages:
