A struggle for size in the server space
When one looks at micro data centers, it’s natural to first compare them to standard data centers. It’s also one of the easiest ways to see the extent of the innovation coming from the micro-server space, in large part because the term “server space” can be taken quite literally. People within the industry tend to take space for granted. A typical data center is unusual in that it’s one of the few environments where extra room for servers is treated as a normal business expense.
One should always remember that a data center is still a business like any other. Even if extra space is a normal operating expense, that doesn’t mean data centers are unwilling to leap at the chance to lower those costs. Fitting more servers into existing space is something most people in the industry are quite eager to achieve, and in trying to lower costs they’ve come up with some genuinely innovative ideas. One of the first big server-related breakthroughs came about thanks to the nature of technological advancement in chip design.
The main processing systems in servers aren’t strictly uniform, but processors have relied rather heavily on the x86 architecture since the early days of the industry. Home internet use grew up on x86-based machines, and the servers meeting those users’ needs tended to run quite similar processors. Reliance on a single processor architecture might not seem to matter when it comes to size; even non-x86 computers of the era had a fairly similar form factor to their more standard cousins. To understand the nature of computer size, though, you have to consider that servers and desktops are very different things. It’s true that server and desktop environments often have more similarities than differences; most desktop operating systems even have near-direct mirrors on the server side. Yet the way these common elements work within each environment tends to be quite different. Compared to a server, a desktop system is heavily invested in many areas that serve a single end user.
A desktop tower has sound cards, graphics cards and any number of other peripherals that target a single user’s experience. Servers are geared toward technology that scales with the number of users connected to them. One or two people will usually be in charge of working with and supporting a server’s hardware within a data center, and those people should be considered a support team rather than direct users.
The actual user base of a server tends to range from hundreds to hundreds of thousands of people connecting from remote locations over networks. A server’s emphasis on remote use usually leads to a very different hardware focus. Even something as basic as a screen to display data is secondary to a server’s overall setup; data centers typically just roll a specially built monitor over to a server in the rare case where a direct connection is needed. A data center’s view of hardware is usually tightly tied to the software running on its servers. This emphasis on software over hardware would later prove to be one of the primary reasons why migration to micro data centers can be accomplished so easily. The rising power of processors was simply becoming an unused surplus within data centers, and surplus power is wasteful; no successful industry can abide waste for long.