Containers: The Next Generation of Virtualization?
Although virtual machines (VMs) have been revolutionary for computing infrastructure, container technology may be on the verge of disrupting the disrupter. Companies with operations in the cloud are excited about containers because of their superior resource allocation, deployment speed, and environment flexibility. Rackspace’s [RAX] CTO, John Engates, has described containers as the next generation of virtualization.
Prior to server virtualization, the client-server model required companies to purchase and maintain many servers, each dedicated to a single application. In that model, a server often used as little as 5-15% of its total processing power.
Virtualization maximizes the utilization of computing environments, such as servers, by running many isolated applications at the same time. While the concept dates back to IBM [IBM] mainframes of the late 1960s, it went mainstream in 2001 with VMware’s [VMW] server virtualization software.
Server virtualization relies on a software component called a hypervisor, which partitions a server into independent VMs. Each VM contains its own operating system (OS) and a virtual copy of the hardware, functioning independently as a virtual server. VMs can then run workloads such as email, databases, and web servers.
Server virtualization allows companies to put many independently functioning VMs on a single server, increasing server utilization to as much as 80% and cutting capital and operating costs by as much as 50%. In the public cloud, server virtualization removes the need for companies to own servers at all: they can rent VMs from Infrastructure as a Service (IaaS) providers that own and virtualize servers at scale.
Recently, Docker, a startup that has received $40M in Series C funding, has unsettled the VM market with its open-source container solution, which has found success at companies such as Red Hat [RHT], Google [GOOG], Microsoft [MSFT], and Amazon [AMZN]. Although containers have been around for more than a decade, Docker’s open-source offering is making them accessible to all developers by simplifying their construction and deployment.
Containers offer OS-level virtualization: each container runs on its own abstraction of the machine’s OS. Critically, each container behaves as though it has its own dedicated operating system without actually storing one, a key difference from VMs, since an OS often consumes more memory and storage than the application it hosts. This efficiency in resource allocation allows 10x more applications to run on a server compared to VMs, and 100x more than traditional client servers, as shown below.
As developers move to smaller applications, adding features and functionality to their platforms, containers’ light weight and agility relative to VMs will become increasingly important. With less underlying complexity, applications run up to twice as fast and launch in seconds rather than minutes compared with running on a VM. With Docker, containers are also highly flexible in testing and deployment, moving easily from personal computers to servers to VMs, an aspect that developers love.
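Docker’s simplification of container construction is easiest to see in a Dockerfile, the short recipe from which a container image is built. The sketch below is illustrative only: it assumes a hypothetical Python web application with an `app.py` entry point and a `requirements.txt` dependency list; the instructions themselves (FROM, COPY, RUN, CMD) are standard Docker.

```dockerfile
# Start from a minimal public base image rather than a full OS install
FROM python:3-slim

# Copy the (hypothetical) application and its dependency list into the image
WORKDIR /app
COPY requirements.txt app.py ./
RUN pip install -r requirements.txt

# The command the container runs when launched
CMD ["python", "app.py"]
```

A single `docker build` produces an image that runs unchanged on a developer’s laptop, a bare-metal server, or a VM in the public cloud, which is precisely the portability described above.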
While containers offer many benefits, their primary drawback is a lack of robust security, a prime reason companies have yet to abandon VMs. Google, for example, runs containers on top of VMs to address these security concerns.
Containers may not replace VMs entirely in the short run, but they may already be decreasing demand for VMs, as they pack ten times more applications into the same amount of space. VMware suggests that containers and VMs work best together: VMs provide security and diversity while containers offer speed and flexibility.
Is VMware beginning to cannibalize its core business? It may not have a choice. As for the server hardware industry, another tenfold increase in the number of applications per server, on the heels of the last decade’s tenfold increase thanks to VMs, is problematic for hardware demand.