Virtualization 101, From Data Center To Desktop
Date: Dec 12, 2008
Section: Misc
Author: HH Editor
Background
In the last few years, virtualization has gone from buzzword to platform of choice in the world of Information Technology. Consumers already reap its benefits on a day-to-day basis, as banks, Internet service providers, online merchants, and many other companies have adopted the technology while keeping it transparent to the end user. The reasons virtualization is attractive to companies are many, but to put it succinctly, it all boils down to efficiency, which translates into lower cost and higher availability.

In an article on the Intel/IBM-sponsored VirtualizationConversation.com, HotHardware.com's Editor in Chief, Dave Altavilla, reinforces this point:

"...server consolidation to provide efficiencies in power consumption, maintenance and other overhead costs, has become critical. There are lots of other areas where virtualization reduces costs and provides efficiencies, including cooling, application/OS testing and associated man hours, as well as reduced backup, security and OS software licensing fees."


Just as the mighty transistor has shrunk to sub-micron sizes over the years, so goes the data center. Less server sprawl does indeed equal more efficiency when virtualization is employed.

The computer you are reading this article on consists of hardware in the form of a main board, processor, memory, graphics card, Ethernet card, and so on, on which an operating system is installed. That operating system (OS) may be Windows, Linux, Mac OS, or something else. To allow the OS to be installed on this hardware, drivers are either provided with the OS or must be loaded for devices that are not on the OS developer's HCL (Hardware Compatibility List); each device requires a driver specific to the OS you are using to enable its functionality. On top of the hardware and operating system, you then install applications that let you, the end user, perform the tasks you require. Applications are developed by software companies to work on specific platforms, just as the operating system is developed to work on specific hardware. If you decide to upgrade to a new computer, or yours fails, the installation process has to be done all over again, more than likely on dissimilar hardware, necessitating different drivers and perhaps a new OS if yours is no longer in production.

Then there is application compatibility to deal with as well. Imagine handling this across 3,000 servers with various hardware, operating systems, and applications, not to mention your company's livelihood depending on all of these machines being up 99.9% of the time.

Before virtualization, if your company needed an application that only ran on Unix, you bought a new server. If you needed to run an application that couldn't coexist with another on the same OS, you bought a new server. This method of operation usually resulted in a room full of servers, each tasked with one or two main functions and using perhaps 20-40% of its resources, such as CPU and memory, on average. The result was wasted space, electricity, and server resources, and in the event a critical hardware component was lost, the affected server would be down until a suitable replacement could be found. If the server was near end of life (EOL), good luck trying to procure one.
Escaping the Limitations
Using virtualization, we remove the ties that bind us to specific hardware. Imagine, if you can, that the operating system and its applications are not bound to the hardware. Instead, there is a thin software layer installed on the hardware onto which you can put your choice of operating system, including multiple instances whose number is limited only by the amount of processing power, memory, and disk (or storage) space you have available. If and when your needs outgrow the current hardware, you can add another server and simply move as many of your "virtual machines" as needed to it in real time, without having to reload or experience significant (if any) downtime.
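As an illustration of that last point, here is a minimal sketch of such a live migration using the open-source libvirt Python bindings (the article names no specific toolkit, and the host URIs and guest name are placeholders):

```python
# Hypothetical example: live-migrating a running guest between two hosts
# with the libvirt Python bindings. Host URIs and the guest name ("web01")
# are placeholders.
import libvirt

src = libvirt.open("qemu+ssh://host-a/system")   # source hypervisor
dst = libvirt.open("qemu+ssh://host-b/system")   # destination hypervisor

vm = src.lookupByName("web01")                   # the running virtual machine
# VIR_MIGRATE_LIVE keeps the guest running while its memory is copied over.
vm.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

src.close()
dst.close()
```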

The server hardware at this point, through the implementation of virtualization, acts as a container for your various virtual machines, each of which consists of an operating system and its applications. Handling the task of communicating requests back and forth (I/O) between the virtual machines and the hardware resources that power them is the hypervisor. The hypervisor, sometimes called a virtual machine monitor (VMM), is the software that allows multiple operating systems to share a single hardware host and its resources, such as memory, hard disks, and CPU. The hypervisor can perform many functions, such as dynamically allocating memory or CPU processing power to virtual machines (guest operating systems) as needed, thereby making optimal use of the hardware resources.

Johan De Gelas, in another article on IBM/Intel's VirtualizationConversation.com, explains the hypervisor this way:

"To create several virtual servers on one physical machine, a new software layer is necessary: the hypervisor, also called Virtual Machine Monitor (VMM). The most important role is to arbitrate the access to the underlying hardware, so that guest OSes can share the machine. You could say that a VMM manages virtual machines (Guest OS + applications) like an OS manages processes and threads."


Virtualization is accomplished in one of two widely adopted implementations:
  1. A host operating system is installed on the hardware with a virtualization application (Hypervisor) loaded on it.
  2. A virtualization software implementation (Hypervisor) is loaded directly on the hardware.
Option two is, of course, the most efficient use of hardware resources, as there is no host operating system drawing on them. Once your chosen virtualization implementation is deployed, you can create one or more virtual machines with your choice of OS, as supported by it. Various virtualization software vendors accommodate different, albeit wide, selections of guest operating systems. When installing a supported guest OS, the virtualization software does not require the installation of the additional drivers we are so accustomed to. Instead, it presents a simulated set of standard devices drawn from the operating system's HCL.
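A rough sketch of what creating such a guest can look like, once more assuming libvirt; the XML is deliberately trimmed, and a real definition would also carry disk, network, and boot-device details:

```python
# Rough sketch only: defining and powering on a new guest through the
# libvirt Python bindings. The guest name and XML are placeholders.
import libvirt

guest_xml = """
<domain type='kvm'>
  <name>test-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")
dom = conn.defineXML(guest_xml)   # register the guest with the hypervisor
dom.create()                      # power it on
conn.close()
```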



Along with the different flavors of virtualization software, there is a multitude of monitoring, disaster recovery, backup, conversion (from hardware to VM or from image to VM), and load-balancing utilities available in the burgeoning virtualization software market.

Virtualization Benefits Abound
The inherent benefits of virtualization are realized from concept and migration all the way through the maintenance cycle. Once you have established your company's needs, determined the hardware required, and loaded your virtualization product, the migration of existing servers can be accomplished in many ways. Images produced by various backup products, such as Symantec's Ghost or Acronis True Image, can be converted to virtual machines that are ready to be transferred and placed into production. Many data centers perform live migrations, converting dedicated hardware servers directly into virtual machine images that are then placed into production.
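A minimal sketch of that conversion step, assuming the server's disk has already been captured as a raw block image and that the qemu-img utility from the QEMU/KVM toolchain is available (the article does not name a specific tool; file names are placeholders):

```python
# Minimal sketch: turning a raw disk image into a format a hypervisor can
# boot directly, using the qemu-img command-line tool. Names are placeholders.
import subprocess

subprocess.run(
    ["qemu-img", "convert",
     "-f", "raw",              # source format: raw disk image
     "-O", "qcow2",            # output format the hypervisor can boot from
     "server-backup.img",
     "server-backup.qcow2"],
    check=True,
)
```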

Virtual machines controlled by the same host are not restricted to the same subnet (or network group), since virtual switches and VLAN tagging are supported by most implementations. This affords an even greater level of control over the flow of data and its access by other systems or users.
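A purely conceptual sketch, a toy model rather than any real hypervisor's API, of how a management layer might track which virtual-switch port group and VLAN each guest's virtual NIC attaches to:

```python
# Conceptual toy model only: mapping guests to virtual-switch port groups
# and VLAN tags. All names and numbers are illustrative.
vswitch = {
    "web01":   {"port_group": "dmz",     "vlan": 10},
    "db01":    {"port_group": "backend", "vlan": 20},
    "build01": {"port_group": "dev-lab", "vlan": 30},
}

def same_segment(vm_a: str, vm_b: str) -> bool:
    """Guests reach each other directly only when tagged with the same VLAN."""
    return vswitch[vm_a]["vlan"] == vswitch[vm_b]["vlan"]

# Traffic between the web tier and the database tier must cross a router/firewall.
print(same_segment("web01", "db01"))   # False
```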

Virtual machines can also be run from storage arrays connected to the virtualization host servers by high-speed copper or fiber interconnects. With the highly dynamic space allocation of today's storage arrays and the failover capabilities of their multiple controllers per unit, volume sizes can be adjusted on the fly, and availability is further enhanced when a failed controller's workload is assumed in real time by another functional controller until repairs can be made.

Templates can be made to create additional virtual machines for roll-out as needed. Entire software development environments can be rolled out and started at once from predefined templates.
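One way that template-driven roll-out might look in practice, assuming the virt-clone utility from the virt-install tool set (not named in the article) and an already generalized template; all names are placeholders:

```python
# Hedged sketch: stamping out several development guests from a "golden"
# template with the virt-clone utility. Template and guest names are placeholders.
import subprocess

TEMPLATE = "dev-env-template"

for i in range(1, 4):
    subprocess.run(
        ["virt-clone",
         "--original", TEMPLATE,
         "--name", f"dev-{i:02d}",
         "--auto-clone"],       # let virt-clone pick new disk paths and MAC addresses
        check=True,
    )
```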

At the heart of all of this is your virtual machine infrastructure management and monitoring software. This software can be set up to control failover, load balancing, and much more. If a hardware server went down, it could roll out a VM from a template or backup and place it on another hardware node (server instance) to assume the production responsibilities of the VM on the failed hardware. Should resource usage be unusually high on a particular hardware node, the infrastructure software would move VMs off that node to another as needed to maintain production levels.
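The decision loop such software runs could look roughly like the conceptual sketch below; the thresholds, node names, and migrate() helper are illustrative, not any vendor's actual interface:

```python
# Conceptual sketch only: evacuate failed nodes and relieve overloaded ones.
nodes = {
    "node-a": {"alive": True,  "cpu_load": 0.92, "vms": ["web01", "db01"]},
    "node-b": {"alive": True,  "cpu_load": 0.35, "vms": ["mail01"]},
    "node-c": {"alive": False, "cpu_load": 0.00, "vms": ["file01"]},
}

def least_loaded(exclude):
    """Pick the healthiest node other than the one being evacuated."""
    live = {n: s for n, s in nodes.items() if s["alive"] and n != exclude}
    return min(live, key=lambda n: live[n]["cpu_load"])

def migrate(vm, src, dst):
    nodes[src]["vms"].remove(vm)
    nodes[dst]["vms"].append(vm)
    print(f"moved {vm}: {src} -> {dst}")

for name, state in nodes.items():
    if not state["alive"] or state["cpu_load"] > 0.85:
        for vm in list(state["vms"]):
            migrate(vm, name, least_loaded(exclude=name))
```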

In case of natural disaster, and for business continuity, an entire server infrastructure could be kept on standby at a remote network operations center (NOC) that would assume the failed infrastructure's role within minutes.

Virtualization Trends
These days, the hot topic is "desktop virtualization." What are the realized benefits, and are we actually ready for company-wide adoption? There are many variables when you enumerate your workforce's applications by department: accounting, marketing, sales, personnel, engineering, and others all need applications specific to the tasks they perform. Of course, there will always be the requisite email, calendaring, word processing, and other office applications required across the enterprise. Not to be discounted is the impact desktop virtualization will have on your end users, and whether it is a more viable option than other products prevalent in this sector, such as Microsoft's Terminal Services and the various Citrix offerings.

 


The decision to use one platform over another will boil down to many factors, including resource usage, performance, bandwidth needs, features, and cost per instance or licensing. A big hurdle, at this point, is the inability to provide the Graphics Processing Unit (GPU) power needed for CAD and 3D applications through a virtual session. Presently, virtualization software is designed to give virtual machines a virtual GPU that supports VGA and basic high-resolution 2D graphics. VMware is working to cross this graphics boundary with its "SVGA3D" protocol, an adaptation of the Direct3D API that provides a low-level interface to a video card's 3D functions and hardware acceleration, which in turn enables the rendering of complex 3D images to the monitor.


 



Without question, virtualization is already trickling down to devices you would never have imagined could need or use this technology. A case in point is VMware's Mobile Virtualization Platform, which will allow faster time to market for new phones by shortening the development process required to bring mobile applications to new models. By removing the software layer's dependence on specific hardware, you will be able to select your platform of choice, or easily switch between multiple virtual machines, perhaps running Windows Mobile, Palm OS, and BlackBerry OS on the same phone to communicate with different servers as varying company requirements dictate. VMware hopes to have this technology in the mobile marketplace by late 2009 or early 2010.

Given the popularity of virtualization and the current rate of adoption on all fronts, from the largest enterprises to small businesses, rest assured this technology will move into even more areas while being improved upon in the sectors it now dominates. A peek at technical job sites shows that demand for virtualization professionals is high, which would seem to indicate its proliferation is bound to increase. Information Technology as we know it is indeed heavily embracing the virtual model.


Content Property of HotHardware.com