Building space is a matter of physical and theoretical reality
For a few decades, the standard data center has been one or more buildings that house servers. Data centers typically have rows and rows of metal shelves called racks; a server is assembled and then racked within a server room. The amount of power these rooms draw, and the heat they create, surprises most people encountering it for the first time. Most people know that computers generate heat. Anyone with a laptop has discovered that actually sitting with it on their lap might not always be the best idea.
However significant a laptop’s heat might be, it’s still fairly minor when compared to that of a desktop system. The gap between laptops and desktops is much like the gap between desktops and servers. And when hundreds of servers are grouped into a single room, the heat they create can be shocking. In fact, server rooms of any reasonable size will usually require industrial cooling systems that make the area feel like a refrigerator. These immense cooling requirements are one of the reasons the entrances to larger server rooms are so tightly controlled. The restrictions aren’t only about security, though that does factor in. The sealed doors of a data center exist largely because the equipment inside requires a sealed, climate-controlled environment to function.
The steady climb in heat production across these types of computers comes down to the power of the processors within them. A server is often the end result of some remarkable innovations in chip design. There is certainly heavy convergence between server and desktop design, but convergence shouldn’t be taken to mean a one-to-one match between hardware on the server and desktop side.
Minor or major variations might show up between home computers and larger-scale servers at any given time, but in the end the differences between the two platforms tend to even out. Maintaining some level of hardware compatibility between servers and desktops means that innovation on one side tends to push results on the other as well.
This balancing act between desktops and servers often meant that servers ended up with far more power than they actually needed. Servers weren’t running the cutting-edge video games or multimedia presentations that people relied on their home computers for; they were mostly serving out data rather than rendering it. With no need to actually render that kind of content, a lot of CPU power sat underutilized within any given server. If there’s one universal truth about engineers, though, it’s that they will quickly seize upon any new tool. In this case, the spare processing power and memory within a server became a tool to push for further innovations. What engineers eventually came up with is virtualization.
Virtualization is a complex subject that could fill a discussion all of its own. For the sake of brevity, one can think of virtualization as the use of software to partition off parts of a server’s processor, memory, and storage for individualized use. If a single server has enough unused capacity, then the software on it can essentially run a second server within the larger whole. As technology advanced, this became far more than a single extra server instance on any given machine. Most commercial data centers seized on the idea and made it a standard business practice. The exact terminology used with virtualization varies from one provider to another, but it’s quite common for data centers to sell dedicated systems, which amount to a single physical server, alongside multiple variants of virtualized servers matched to their clients’ needs. The main difference will usually come down to how much of a server’s hardware is actually dedicated to an individual virtualized package.
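The allocation idea described above can be sketched in a few lines of Python. This is a minimal illustrative model, not any real hypervisor’s API; the class and the package names are invented for the example. It shows the core constraint: a host has a fixed pool of cores and memory, and each virtualized package carves a share out of that pool until the hardware is exhausted.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """A physical host's capacity. The numbers below are illustrative only."""
    cpu_cores: int
    ram_gb: int
    guests: list = field(default_factory=list)

    def allocate(self, name, cores, ram_gb):
        """Carve out a share of the host for a virtual guest, if capacity remains."""
        used_cores = sum(g["cores"] for g in self.guests)
        used_ram = sum(g["ram_gb"] for g in self.guests)
        if used_cores + cores > self.cpu_cores or used_ram + ram_gb > self.ram_gb:
            raise ValueError(f"not enough free capacity for {name}")
        guest = {"name": name, "cores": cores, "ram_gb": ram_gb}
        self.guests.append(guest)
        return guest

# A hypothetical 32-core, 128 GB host sliced into three virtual servers.
host = Server(cpu_cores=32, ram_gb=128)
host.allocate("small-vps", cores=2, ram_gb=4)
host.allocate("medium-vps", cores=8, ram_gb=32)
host.allocate("large-vps", cores=16, ram_gb=64)
```

A "dedicated system" in this model is simply the degenerate case where one guest is allocated the entire host; the tiers a provider sells differ only in what fraction of the pool each guest receives.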