The concept of virtual machines is not new; the computing industry has been using them for over 40 years. IBM was probably one of the first to use virtual machines, starting in 1967.
In recent years there has been a lot of noise and hype around virtual machines for PCs and servers. There are several reasons for taking the virtual machine route, and some argue that this is the future of computing. The key business reason for moving to virtual machines is to improve efficiency and reduce cost. Over the last 10 years in particular, the capability of computer hardware has increased more quickly than the demands made by business PCs and servers. In practice, a typical business server is idle (doing nothing) more than 90% of the time; it is just “ticking over”, heating the room.
Virtual computing enables several PCs and servers to operate on one physical computer with little or no change to the user’s computing experience.
The virtual machine concept can at first be difficult to grasp, but once you understand it you will wonder what the initial difficulty was. This is an important concept that also helps to explain some of the “smoke and mirrors” of the computing industry and the internet.
A Windows virtual server is just a Windows server. The only difference is that the virtual server sits on a “hypervisor” rather than directly on the computer hardware. The hypervisor is a layer of software between the hardware and the server’s operating system. It makes the operating system believe it is running on real x86 hardware by providing everything the operating system needs, such as processor, memory and network connections. The diagram attempts to illustrate the concept. The hypervisor enables multiple servers and computers to run on one physical machine, allocating resources to each server as required. In this way, servers can be consolidated onto fewer hardware machines.
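To make the idea concrete, the resource-allocation role of the hypervisor can be sketched as a toy model. This is purely a conceptual illustration, not real virtualization code: the class and method names are invented for this example, and a real hypervisor works at the level of CPU instructions and memory pages, not Python objects.

```python
# Conceptual sketch only: a toy "hypervisor" that tracks the host's
# physical resources and hands slices of them to guest servers.
# All names here are illustrative, not a real virtualization API.

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus            # unallocated processor cores
        self.free_memory_gb = memory_gb  # unallocated RAM
        self.guests = {}                 # running virtual servers

    def start_guest(self, name, cpus, memory_gb):
        """Allocate a share of the host's hardware to one virtual server."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"not enough resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.guests[name] = {"cpus": cpus, "memory_gb": memory_gb}

    def stop_guest(self, name):
        """Return a stopped guest's resources to the pool."""
        guest = self.guests.pop(name)
        self.free_cpus += guest["cpus"]
        self.free_memory_gb += guest["memory_gb"]

# One physical host consolidating several lightly loaded servers:
host = Hypervisor(cpus=16, memory_gb=64)
host.start_guest("mail-server", cpus=2, memory_gb=8)
host.start_guest("file-server", cpus=2, memory_gb=8)
host.start_guest("web-server", cpus=4, memory_gb=16)
print(host.free_cpus, host.free_memory_gb)  # → 8 32
```

The point of the sketch is the consolidation argument from the text: three servers that would each leave a dedicated machine mostly idle share one host, and the hypervisor still has capacity spare for more guests.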
© 2013 Proven Virtual Servers c/o Infosysco