Are you VMweary?

Scott McAllister

A Brief History of Enterprise Computing

In the evolution of enterprise computing, server virtualization became the standard for operations teams. Enterprise IT teams would build out internal, colocated, or hosted environments—racking servers, wiring up storage and networking, then spinning up virtual machines (VMs) for development, testing, and production. For years, this was the standard architecture.

Then came the shift to cloud. As self-service tools matured, enterprises started moving away from private data centers—whether on-prem or colocated—and into cloud platforms like AWS, GCP, and Azure. This first wave of cloud adoption was mostly lift-and-shift: taking existing apps running on VMs and dropping them into VMs in the cloud.

Next came microservices, containers, and Kubernetes. Enterprises began modernizing legacy applications and building new ones for Kubernetes—but they were still running it on VMs in the cloud. Now, with Kubernetes capable of running directly on bare metal—either on-prem or in the cloud—and with virtual clusters and virtual nodes in the mix, the VM layer starts to look unnecessary. The layers are flattening, which leads to the question: “Are you still running Kubernetes on VMs out of necessity, or is it a habit that’s no longer serving your needs?”

Let’s take a closer look at the pieces of our infrastructure and where they come from to see if we can find an answer to this question.

A Brief History of Computing Architecture and Virtual Machines

Virtualization began in the 1960s with IBM mainframes, where virtual machines were developed to abstract away the physical hardware and allow multiple operating systems to run concurrently. IBM coined the term “hypervisor” to describe this hardware virtualization. For decades the technology remained largely confined to mainframes, until virtualization support arrived for x86 around the turn of the century. At that point, virtualization, coupled with the development of multi-core CPUs, drove broader adoption and paved the way for modern cloud computing.

The core function of a virtual server environment is the deployment of a hypervisor to create and manage virtual machines by abstracting the underlying physical hardware—such as CPU, memory, and storage—and allocating these resources to multiple, isolated guest operating systems. Two primary types of hypervisors evolved:

  • Type 1, or "bare-metal," hypervisors run directly on the host's hardware rather than on top of an operating system, offering high efficiency and security. They are the backbone of enterprise data centers and cloud infrastructure, with products like VMware ESXi, Microsoft Hyper-V, and Xen.

  • Type 2, or "hosted," hypervisors run as an application on top of a conventional operating system, providing a more accessible platform for desktop virtualization and development, with examples like VMware Server and Workstation, and Oracle VirtualBox.

A Brief History of Cloud Computing

Using this virtualization technology, several vendors, such as Amazon, Microsoft, and Google, built out expansive infrastructures to support their own products and platforms. They quickly realized they could rent out unused portions of that infrastructure to customers who wanted a worldwide, dynamic platform for their own applications. With these offerings, cloud computing was born and was widely adopted by enterprises.

This transition was driven by several key factors:

  1. Cloud computing allowed enterprises to offload costly and complex on-premises data centers, freeing them from the burdens of hardware procurement, maintenance, and upgrades.
  2. Cloud platforms simplified developer self-service by providing tools and interfaces that allowed developers to quickly set up scalable computing environments for development and testing, accelerating innovation.
  3. Moving to the cloud reduced overall costs and risks associated with managing physical hardware, offering flexible, dynamic pricing models and robust disaster recovery options. This evolution has enabled businesses to focus on application development and strategic goals rather than infrastructure management.

VMs simplified the transition from on-premises (private data center) deployments to the cloud. Teams were able to lift the VMs running in their data centers and redeploy them in the cloud, keeping their existing systems largely intact.

A Brief History of Microservices

Software development has also evolved over the last several decades. While virtualization made computer hardware more accessible, allowing more processes and users to share machines at the same time, architectural advances in software development helped developers build more versatile and maintainable applications. One such advancement was the shift from building software as large, tightly coupled monoliths to separating functionality into independent microservices.

Teams have adopted service-oriented architectures (SOA) to separate responsibilities, driven by a variety of needs such as code maintainability, application stability, and scalability.

Imagine your code base as a large pile of bricks. Keeping the bricks secured and all together in one large pile requires a lot of effort. Each change or movement has the potential to affect the entire pile.

If we separate the large pile into smaller piles, the load of bricks is easier to manage. Individual bricks are easier to find and fix, and there is less risk of losing or damaging bricks when making those changes.

Software teams use SOA for similar reasons. Smaller services are easier to maintain and less risky to change. Bugs are easier to find because there is less code to search through. Services are reusable and combine easily with other services. Teams can work on different parts of the system in parallel because they have well-defined service boundaries.  

A Brief History of Containers and Containerized Microservices

Adding to those boundaries, teams can run those services in containers. A container is a piece of software that packages up code and all the dependencies necessary for the code to run. While similar in function, containers are not virtual machines. Containers share the host OS and are much more lightweight, while VMs include their own OS and are heavier and slower to start.

Containers might seem like a recent innovation, but their roots trace back to the 1970s with Unix systems using tools like chroot to isolate application code. These early implementations provided basic isolation but lacked portability, limiting their broader adoption. Over time, advancements like Linux namespaces and cgroups enhanced container capabilities, setting the stage for modern containerization. The introduction of Docker in 2013 was a pivotal moment, offering a user-friendly interface and standardization that made containers accessible to developers and enterprises alike.

A Brief History of Kubernetes

Kubernetes came onto the scene in 2015, born at Google and inspired by its internal system for managing containers at scale. As containers grew in popularity, teams needed a way to automate deployment, scaling, and resilience—and Kubernetes became the open-source answer to that problem, organizing its resources in clusters.

Teams were already running their applications on hypervisor-based virtual machines, so it was only natural to continue by deploying their Kubernetes clusters into the same environment.

But while VMs solved a lot of problems, they also introduced a layer of abstraction that, today, isn’t always necessary—especially in containerized, Kubernetes-driven environments.

Containers already encapsulate application code and all of its dependencies, which in many cases makes running clusters of containers on VMs needlessly redundant and inefficient.

To regain some of that efficiency, and to shed VM licensing costs, teams have explored running their Kubernetes clusters on bare metal: machines that run without a virtualized hypervisor layer.

A Brief Glimpse into the Future of Enterprise Computing

That brings us to the question in the title of this post. Are you VMweary?

Are you running Kubernetes on VMs out of necessity or habit?

History offers a number of examples of civilizations using new technology in the “old way” out of habit. In the early days of the Gutenberg printing press, the first printed books were produced with typefaces designed to mimic handwriting.

Early automobiles were designed and shaped just like boxy horse-drawn carriages, only without the horses. It took years before their design became more aerodynamic and efficient.

The current state of software infrastructure seems to be at a similar crossroads. Are we trying to solve modern platform problems with legacy thinking?

To answer that, let’s take a step back and reexamine how we got here—why Kubernetes on VMs became the default choice for many, and whether those reasons still hold up today.

How it started

  • Familiar. We were already running our infrastructure on VMs and had been satisfied with the results. When new technologies come along, we instantly think about how they will run on our virtual machines, either in our own data centers or on a public cloud. So when we adopted Kubernetes, we instantly spun up VMs because it was familiar.
  • Scalable. VMs are also very easy to scale! With the push of a button we have another instance of a server in minutes.
  • Tenant Isolation. Running each Kubernetes cluster in its own set of VMs provides more than just namespace isolation. Each cluster has its own “machine” and access to all of the machine's resources.  

How it’s going

  • Expensive. Running VMs has gotten expensive. Not only have vendors recently raised prices, but their pricing models often don’t align with the dynamic needs of Kubernetes workloads.
  • Underutilized. From organizations we’ve talked to, we’ve discovered it’s common practice in the industry to spin up lots of virtual machines running smaller clusters that may never be fully utilized.
  • Redundant Complexity. Redundancy is good for data storage, but what about functionality in our systems? One of the promises of VMs was easy scalability, but Kubernetes also offers easy replication and load balancing (see the sketch after this list). Or how about isolation for multiple tenants? VMs provide that isolation, but at what cost?
  • Performance Overhead. A virtualization hypervisor layer consumes infrastructure resources that your workloads could otherwise use. And containers that need to perform GPU offloading for intense data analytics processing can be slowed down by having to go through the virtualization layer to reach the GPU.
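To make that replication point concrete, here is a minimal sketch (not from the original post) using the official kubernetes Python client: it patches a Deployment’s scale subresource to add replicas without provisioning a single new machine. The Deployment name "web" and namespace "demo" are hypothetical placeholders.

```python
# A minimal sketch, assuming a reachable cluster and the official
# `kubernetes` Python client (pip install kubernetes).
# The Deployment "web" in namespace "demo" is a hypothetical placeholder.
from kubernetes import client, config


def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the Deployment's scale subresource to the desired replica count."""
    config.load_kube_config()  # local kubeconfig; use load_incluster_config() inside a Pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    # Kubernetes schedules the additional Pods onto existing nodes,
    # whether those nodes are VMs or bare-metal machines.
    scale_deployment("web", "demo", replicas=10)
```

A Service in front of that Deployment then load-balances traffic across all of the replicas, which is exactly the kind of built-in scaling that once justified reaching for another VM.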

How to Rejuvenate Your Cluster

Admittedly, that’s a lot to think about. And, to be clear, there are still plenty of use cases where running applications in virtualized environments makes sense. The exercise here is to think about why you’re running your Kubernetes clusters on VMs. Are there technical benefits, or is it out of habit?

If it’s more of the latter, consider the performance enhancements and cost savings of running Kubernetes on physical servers, also known as bare metal. In his post, "What does your infrastructure look like in 2025 and beyond?", Saiyam Pathak provides a great look at some of the patterns we’re seeing and how we can take advantage of the benefits of bare metal. Take a look at what Saiyam has to say, and join us on the LoftLabs Community Slack workspace to share your experiences running Kubernetes on bare metal, or to tell us if you’re weary of running your K8s clusters on VMs.
