
When you think of the beginnings of server virtualization, companies like VMware may come to mind. What you may not realize is that server virtualization actually started back in the early 1960s and was pioneered by companies like General Electric (GE), Bell Labs, and International Business Machines (IBM).

The Invention of the Virtual Machine

In the early 1960s, IBM had a wide range of systems, each generation of which was substantially different from the previous one. This made it difficult for customers to keep up with the changes and requirements of each new system. Computers at the time could also only do one thing at once: if you had two tasks to accomplish, you had to run the processes in batches. This batch processing requirement wasn't a big problem for IBM, since most of its users were in the scientific community, and up until this time batch processing seemed to have met customers' needs.

Because of this wide range of hardware requirements, IBM began work on the S/360 mainframe, designed as a broad replacement for many of its other systems while maintaining backward compatibility. When the system was first designed, it was meant to be a single-user system for running batch jobs.

However, this focus began to change on July 1, 1963, when the Massachusetts Institute of Technology (MIT) announced Project MAC. Project MAC originally stood for Mathematics and Computation, though the name was later reinterpreted as Multiple Access Computer. Project MAC was funded by a $2 million grant from DARPA for research into computers, specifically in the areas of operating systems, artificial intelligence, and computational theory.

As part of this research grant, MIT needed new computer hardware capable of serving more than one simultaneous user, and it sought proposals from various computer vendors, including GE and IBM. At the time, IBM was not willing to commit to a time-sharing computer, because it did not believe there was enough demand, and MIT did not want to have to use a specially modified system. GE, on the other hand, was willing to make that commitment, so MIT chose GE as its vendor.

The loss of this opportunity was a bit of a wake-up call for IBM, which then started to take notice of the demand for such a system, especially once it heard of Bell Labs' need for a similar machine.

In response to the needs of MIT and Bell Labs, IBM designed the CP-40 mainframe. The CP-40 was never sold to customers and was only used in labs. However, it is still important, since it later evolved into the CP-67, the first commercial mainframe to support virtualization. The operating system that ran on the CP-67 was referred to as CP/CMS: CP stands for Control Program, and CMS stands for Conversational Monitor System (originally the Cambridge Monitor System). CMS was a small, single-user operating system designed to be interactive. CP was the program that created virtual machines. The idea was that CP ran on the mainframe and created virtual machines, each running CMS, which the user would then interact with.

The user interaction portion is important. Before this system, IBM focused on systems with no user interaction: you would feed your program into the computer, it would do its thing, and then it would spit out the output to a printer or a screen. An interactive operating system meant you actually had a way of interacting with your programs while they ran.

The first version of the CP/CMS operating system was CP-40, which, as mentioned above, was only used in the lab. The initial public release of CP/CMS came in 1968, and the first stable release didn't arrive until 1972.

The traditional approach for a time-sharing computer was to divide up the memory and other system resources between users. An example of a time-sharing operating system from the era is Multics, which was created as part of Project MAC at MIT. Additional research and development on Multics was performed at Bell Labs, whose experience with the project later heavily influenced the design of Unix.

The CP approach to time-sharing instead gave each user their own complete operating system, which effectively gave each user their own computer, and made the operating system itself much simpler.

The main advantage of using virtual machines over a time-sharing operating system was more efficient use of the system, since virtual machines shared the overall resources of the mainframe rather than having those resources split equally between all users. Security was also better, since each user ran inside a completely separate operating system, and the system was more reliable, since no single user could crash the entire machine; only their own operating system.

Portability of Software

In the previous section, I mentioned Multics and how it influenced Unix. While Unix does not run virtualized operating systems, it is still a good example of virtualization from another perspective. Unix was not the first multi-user operating system, but it is a very good example of one and is among the most widely used ever.

Unix is an example of virtualization at the user, or workspace, level. Multiple users share the same pool of resources (CPU, memory, hard disk, and so on), but each has their own profile, separate from the other users on the system. Depending on how the system is configured, users may be able to install their own set of applications, and security is handled on a per-user basis. Unix was not only a major step towards multi-user operating systems, it was also an early step towards application virtualization.

Unix itself is not an example of application virtualization, but it did give users much greater portability for their applications. Prior to Unix, almost all operating systems were coded in assembly language; Unix, by contrast, was written in the C programming language. Because Unix was written in C, only small parts of the operating system had to be customized for a given hardware platform; the rest could easily be re-compiled for each hardware platform with little or no change.

Application Virtualization

Through the use of Unix and C compilers, an adept user could run just about any program on any platform, but doing so still required compiling the software on the platform it was to run on. For true portability of software, you needed some sort of software virtualization.

In 1990, Sun Microsystems began a project known as “Stealth”. Stealth was run by engineers who had become frustrated with Sun's C/C++ APIs and felt there was a better way to write and run applications. Over the next several years the project was renamed several times, taking on names such as Oak and WebRunner, until in 1995 the project was finally renamed Java.

In 1994, Java was targeted towards the World Wide Web, since Sun saw this as a major growth opportunity. The Internet is a large network of computers running different operating systems, and at the time it had no way of running rich applications universally; Java was the answer to this problem. In January 1996, the Java Development Kit (JDK) was released, allowing developers to write applications for the Java platform.

At the time, there was no other language like Java. Java allowed you to write an application once, then run the application on any computer with the Java Runtime Environment (JRE) installed. The JRE was, and still is, a free download, originally from Sun Microsystems' website and now from Oracle's.

Java works by compiling the application into something known as Java bytecode. Java bytecode is an intermediate language that can only be read by the JRE. When you compile a Java program, it is not compiled into native machine code for a particular platform; it is compiled into bytecode. The JRE then uses a technique known as Just-in-Time (JIT) compilation: as the program is executed, the bytecode is compiled into native code for whatever machine it happens to be running on. This is similar to the way Unix revolutionized operating systems through its use of the C programming language. Since the JRE compiles the software just before running it, the developer does not need to worry about what operating system or hardware platform the end user will run the application on, and the user does not need to know how to compile a program; that is handled by the JRE.
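To make the compile-once, run-anywhere flow concrete, here is a minimal sketch; the class and file names are just illustrative:

```java
// Hello.java -- compiled once into platform-neutral bytecode,
// then run unchanged on any machine with a JRE installed.
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from the Java platform!");
    }
}
```

Running javac Hello.java produces Hello.class, which contains bytecode rather than native machine code; java Hello then hands that same class file to the JRE on whatever operating system you happen to be using, and the JIT compiler translates it into native code as the program runs.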

The JRE is composed of many components, the most important of which is the Java Virtual Machine (JVM). Whenever a Java application is run, it runs inside a Java Virtual Machine. You can think of the JVM as a very small operating system, created with the sole purpose of running your Java application. Since Sun, and now Oracle, goes through the trouble of porting the JVM to everything from cellular phones to the servers in your data center, you don't have to: you can write the application once, and run it anywhere. At least, that is the idea; there are some limitations.
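As a small illustration of that abstraction, the snippet below uses standard system properties that the JVM exposes on every platform; the same compiled class file reports different values on different hosts:

```java
// PlatformInfo.java -- one class file, many answers: the JVM fills in
// the details of whatever platform it happens to be running on.
public class PlatformInfo {
    public static void main(String[] args) {
        System.out.println("OS:   " + System.getProperty("os.name"));
        System.out.println("Arch: " + System.getProperty("os.arch"));
        System.out.println("Java: " + System.getProperty("java.version"));
    }
}
```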

Mainstream Adoption of Hardware Virtualization

As was covered in the Invention of the Virtual Machine section, IBM was the first to bring the concept of virtual machines to the commercial environment. Virtual machines as they existed on IBM's mainframes are still in use today; however, most companies don't use mainframes.

In January of 1987, Insignia Solutions demonstrated a software emulator called SoftPC. SoftPC allowed users to run DOS applications on their Unix workstations, a feat that had never been possible before. At the time, a PC capable of running MS-DOS cost around $1,500; SoftPC gave users with a Unix workstation the ability to run DOS applications for a mere $500.

By 1989, Insignia Solutions had released a Mac version of SoftPC, giving Mac users the same capabilities, and had added the ability to run Windows applications, not just DOS applications. By 1994, Insignia Solutions was selling its software packaged with pre-loaded operating systems, including SoftWindows and SoftOS/2.

Inspired by the success of SoftPC, other companies began to spring up. In 1997, a company called Connectix released a program called Virtual PC for the Macintosh. Virtual PC, like SoftPC, allowed users to run a copy of Windows on a Mac computer in order to work around software incompatibilities. In 1998, a company called VMware was established, and in 1999 it began selling a product similar to Virtual PC called VMware Workstation. Initial versions of VMware Workstation ran only on Windows, but support for other host operating systems was added later.

I mention VMware because it is really the market leader in virtualization today. In 2001, VMware released two new products as it branched into the enterprise market: ESX Server and GSX Server. GSX Server allowed users to run virtual machines on top of an existing operating system, such as Microsoft Windows; this is known as a Type-2 hypervisor. ESX Server is known as a Type-1 hypervisor and does not require a host operating system to run virtual machines.

A Type-1 hypervisor is much more efficient than a Type-2 hypervisor, since it can be better optimized for virtualization and does not consume all the resources it takes to run a traditional host operating system.

Since releasing ESX Server in 2001, VMware has seen exponential growth in the enterprise market and has added many complementary products to enhance ESX Server. Other vendors have since entered the market. Microsoft acquired Connectix in 2003, after which it re-released Virtual PC as Microsoft Virtual PC 2004 and then Microsoft Virtual Server 2005, both of which were unreleased Connectix products at the time of the acquisition.

Citrix Inc. entered the virtualization market in 2007 when it acquired XenSource, the company behind the open-source Xen virtualization platform, which had started in 2003. Citrix soon thereafter renamed the product XenServer.

Published Applications

In the early days of Unix, you could access published applications via a Telnet interface, and later via SSH. Telnet is a small program that allows you to remotely access another computer. SSH serves the same purpose, but adds features such as encryption.

Telnet and SSH give you access to a text interface, or even a graphical one, although they are not really optimized for graphics. Using them, you can access much of the functionality of a given server from almost anywhere.
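Under the hood, a Telnet session is essentially just a plain-text conversation over a TCP connection. As a rough illustration, here is a minimal, telnet-style client sketch in Java; the host name is a placeholder, not a real service:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.Socket;

// TelnetSketch.java -- opens a raw TCP connection, as telnet does,
// and prints whatever greeting the remote server sends first.
public class TelnetSketch {
    public static void main(String[] args) throws Exception {
        // "example.host" and port 23 (the telnet port) are placeholders.
        try (Socket socket = new Socket("example.host", 23);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            System.out.println(in.readLine());
        }
    }
}
```

SSH wraps this same kind of conversation in an encrypted, authenticated channel, which is why it has largely replaced Telnet.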

Windows and OS/2 had no way of remotely accessing applications without third-party tools, and the third-party tools that were available only allowed one user at a time.

Some engineers at IBM had an idea to create a multi-user interface for OS/2; however, IBM did not share their vision. So in 1989, Ed Iacobucci left IBM and started his own company called Citrus. Due to an existing trademark, the company was quickly re-branded as Citrix, a combination of Citrus and Unix.

Citrix licensed the OS/2 source code through Microsoft and began working on its extension to OS/2. The company operated for two years and created a multi-user interface for OS/2 called MULTIUSER. However, Citrix was forced to abandon the project in 1991 after Microsoft announced it was no longer going to support OS/2. At that point, Citrix licensed source code from Microsoft and began working on a similar product focused on Windows.

In 1993, Citrix acquired Netware Access Server from Novell. This product was similar to what Citrix had accomplished for OS/2, in that it gave multiple users access to a single system. Citrix then licensed the Windows NT source code from Microsoft, and in 1995 it began selling a product called WinFrame. WinFrame was a version of Windows NT 3.5 with remote access capabilities, allowing multiple users to access the system at the same time in order to run applications remotely.

While Citrix was developing WinFrame for Windows NT 4.0, Microsoft decided to no longer grant it the necessary licenses. At that point, Citrix licensed WinFrame to Microsoft, and it was included with Windows NT 4.0 as Terminal Services. As part of this agreement, Citrix agreed not to create a competing product, but it was allowed to extend the functionality of Terminal Services.

Virtual Desktops

Virtual Desktop Infrastructure (VDI) is the practice of running a user's desktop operating system, such as Windows XP, within a virtual machine on centralized infrastructure. Virtual desktop computers as we think of them today are a fairly new topic of conversation, but they are very similar to the idea IBM had back in the 1960s with the virtual machines on its mainframe computers: give each user on the system their own operating system, and each user can then do as they please without disrupting any other users. Each user has their own computer, it is centralized, and it is a very efficient use of resources.

Comparing Multics from the 1960s to the IBM mainframes of the era is much like comparing a Microsoft Terminal Server to a virtual desktop infrastructure today.

The jump from virtual desktops on mainframes to virtual desktops as we know them today didn't really happen until 2007, when VMware introduced its VDI product. Prior to this release, it was possible for users in a company to use virtual desktops as their primary computers, but it wasn't really a viable solution due to management headaches. The introduction of centralized management tools from VMware, along with similar products from companies like Microsoft and Citrix, has allowed this area to grow very rapidly.

Summary

Computer virtualization has a long history, spanning nearly half a century. It can be used to make your applications easier to access remotely, to allow your applications to run on more systems than originally intended, to improve stability, and to make more efficient use of resources.

Some of these technologies, such as virtual desktops, can be traced back to the 1960s; others, such as virtualized applications, can only be traced back a few years.