Virtualization 101: From Data Center To Desktop

Background

In the last few years virtualization has gone from a buzzword to a platform of choice in the world of Information Technology. Consumers already reap its benefits on a day-to-day basis, as banks, Internet service providers, online merchants, and many other companies have adopted the technology while keeping it transparent to the end user. The reasons virtualization is attractive to companies are many, but to put it succinctly, it all boils down to efficiency, which in turn translates to lower cost and higher availability.

In an article on the Intel/IBM-sponsored VirtualizationConversation.com, HotHardware.com's Editor in Chief, Dave Altavilla, reinforces this point:

"...server consolidation to provide efficiencies in power consumption, maintenance and other overhead costs, has become critical. There are lots of other areas where virtualization reduces costs and provides efficiencies, including cooling, application/OS testing and associated man hours, as well as reduced backup, security and OS software licensing fees."


Just as the mighty transistor has shrunk to sub-micron sizes over the years, so goes the data center. Less server sprawl does indeed equal more efficiency when virtualization is employed.

The computer you are presently reading this article on is composed of hardware in the form of a motherboard, processor, memory, graphics card, Ethernet card, and so on, on which an operating system is installed. That operating system (OS) can be Windows, Linux, or Mac OS (or something else entirely). To enable your operating system to be installed on this hardware, drivers are either provided with the OS or must be loaded separately for devices that are not on the OS developer's HCL (Hardware Compatibility List). Each device requires a driver specific to the OS you are using to enable its functionality. On top of the hardware and operating system, you then install applications that enable you, the end user, to perform the tasks you require. Applications are developed by software companies to work on specific platforms, just as the operating system is developed to work on specific hardware. Should you decide to upgrade to a new computer, or should yours fail, the installation process will have to be done all over again, more than likely on dissimilar hardware, thereby necessitating different drivers, and perhaps a new OS if yours is no longer in production.
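
To make that layering concrete, here is a minimal sketch, using Python's standard platform module purely for illustration, that asks the OS what it is running on. Move the same installation to dissimilar hardware, and these answers, along with the drivers behind them, change:

    import platform

    # The OS reports the platform and hardware identity beneath it;
    # these values are exactly what a move to new hardware would alter.
    print("Operating system:", platform.system(), platform.release())
    print("Architecture:    ", platform.machine())
    print("Processor:       ", platform.processor())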

Then there is application compatibility to deal with as well. Imagine dealing with all of this across 3,000 servers with various hardware, operating systems, and applications, not to mention your company's livelihood depending on all of those machines being up 99.9% of the time.
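
A quick back-of-the-envelope calculation, sketched here in Python with illustrative figures rather than anything from a specific vendor, shows what a 99.9% availability target actually allows:

    HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

    def downtime_hours(availability: float) -> float:
        """Hours of allowed downtime per year at a given availability level."""
        return HOURS_PER_YEAR * (1 - availability)

    print(f"99.9%  -> {downtime_hours(0.999):.1f} hours of downtime per year")
    print(f"99.99% -> {downtime_hours(0.9999):.1f} hours of downtime per year")

In other words, "three nines" leaves less than nine hours a year for every failure, patch, and hardware swap combined.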

Before virtualization, if your company needed an application that only ran on Unix, you bought a new server. If you needed to run an application that couldn't coexist with another on the same OS, you bought a new server. This method of operation usually equated to a room full of servers, each tasked with one or two main functions and using perhaps an average of 20-40% of resources such as CPU and memory. The result was wasted space, electricity, and server resources, and in the event a critical hardware component failed, the affected server would be down until a suitable replacement could be found. If the server was near end of life (EOL), good luck procuring one.
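
To see why consolidation is so attractive, here is a hypothetical estimate, again sketched in Python with assumed numbers: the 3,000-server fleet and 20-40% utilization figures above, plus an assumed 80% target utilization per physical host. It deliberately ignores real-world details like memory limits and failover headroom:

    import math

    def hosts_needed(servers: int, avg_util: float, target_util: float = 0.8) -> int:
        """Physical hosts required if each can be driven to target_util of capacity."""
        total_demand = servers * avg_util  # aggregate load, in whole-server units
        return math.ceil(total_demand / target_util)

    for util in (0.20, 0.40):
        print(f"At {util:.0%} average utilization: {hosts_needed(3000, util)} hosts")
    # At 20% average utilization: 750 hosts
    # At 40% average utilization: 1500 hosts

Even this crude model collapses the fleet by half to three quarters, which is where the savings in space, power, and cooling come from.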
