It’s 2025, and we’re living in a fast-moving world of technology, but is your enterprise keeping up with the latest changes?
Only a few technologies in the industry have been truly groundbreaking: the ones widely adopted by sectors like finance, healthcare, and other domains that rarely change their underlying technology.
But let’s take a step back and look at the big picture. One of the breakthrough technologies of the past was server virtualization. It was originally pioneered by IBM, but it was VMware, founded in the late 90s, that brought virtualization to x86 servers and completely transformed how enterprises deployed applications.

Almost every enterprise adopted this new model and began running their workloads on virtual machines. The shift was clear: run bare metal servers, and layer virtual machines on top. This became the new standard. VMware, with its enterprise-grade support, tooling, and ecosystem, quickly became the go-to solution. Over time, organizations signed long-term contracts and built architectures deeply tied to VMware’s stack, leading to significant vendor lock-in. Many enterprises today still feel stuck, believing it’s too complex or risky to move away.
But remember, it’s 2025, and Kubernetes is now more than a decade old. If your team had started making small, incremental changes over the past five years, you might have been able to transform your workload to be more Kubernetes-native and move away from dependence on virtualized server technology.
Wait… where did Kubernetes come from?
As we improved our application architectures, we shifted toward microservices. Then came containers: lightweight, portable units for deploying apps. Docker became the de facto tooling for building and running containers. As container adoption grew, the need for an orchestration engine emerged, and Kubernetes stepped in as the container orchestrator, becoming the de facto standard for running containers at scale.
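To make that shift concrete, here is a minimal sketch of deploying a containerized app to a cluster with the official Kubernetes Python client. It assumes you have a kubeconfig on hand; the image, names, and replica count are just placeholders.

```python
# Minimal sketch: deploy a containerized app to Kubernetes with the official
# Python client (pip install kubernetes). Names, image, and replicas are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.27",  # any container image built with Docker
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Once the app is packaged as an image, Kubernetes handles the scheduling, scaling, and self-healing that we used to script by hand on VMs.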
At the end of the day, we all want to deploy our applications in the most sophisticated, efficient, and scalable way possible.
Here’s something wild: in 2025, you can even run VMs on Kubernetes using a project like KubeVirt. But really, shouldn’t you already be breaking away from VMs and transitioning to containers?
From what I've observed (and I’d love to hear your thoughts too), many organizations are still running bare metal servers, spinning up VMs with VMware, and then deploying Kubernetes clusters on top of those VMs.

😬 Spoiler alert: This is not a good approach.
While this layered approach is still common, it adds unnecessary complexity and cost. Modern architectures should aim to simplify, either by running Kubernetes closer to the metal or by rethinking the need for VMs altogether.
It leads to massive resource waste and skyrocketing costs (have you seen the recent price hikes?).
So, what do people currently have?
From my conversations with enterprises, this is what I see most often:
- Bare metal + VMs (still using VMware) + Kubernetes: the most widely used pattern today.
But now, many are starting to shift toward alternatives like:
- Bare metal + Kubernetes + KubeVirt: to run virtualized workloads on K8s (see the sketch after this list).
- Containerizing applications to reduce reliance on VMs altogether.
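For the KubeVirt option, the VM becomes just another Kubernetes object. Below is a rough sketch, assuming KubeVirt is already installed in the cluster; the VM name, sizing, and containerdisk image are illustrative placeholders.

```python
# Rough sketch: define a KubeVirt VirtualMachine as a custom resource and create it
# with the generic CustomObjectsApi. Assumes KubeVirt is installed in the cluster.
from kubernetes import client, config

config.load_kube_config()

vm = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-app-vm"},
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi", "cpu": "1"}},
                },
                "volumes": [
                    {
                        "name": "rootdisk",
                        # containerDisk images ship the VM's disk inside a container image
                        "containerDisk": {"image": "quay.io/containerdisks/fedora:latest"},
                    }
                ],
            }
        },
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default",
    plural="virtualmachines", body=vm,
)
```

The point is that the VM is now declared and scheduled by Kubernetes itself, rather than living in a separate virtualization layer underneath it.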
But could we completely move away from VMs?
Here’s what I believe: organizations should strive for better, more modern architectures.
Moving from bare metal VMs to KubeVirt VMs is one option, but KubeVirt isn’t yet as mature or stable as what VMware provides, especially in enterprise environments. That said, the real transformation comes when workloads start becoming Kubernetes-native. This shift can significantly improve resource efficiency and reduce costs in the long run.
Running Kubernetes on VMs on bare metal not only wastes resources but also incurs high licensing fees, especially with VMware.
Two Common Patterns We’re Seeing
The second pattern usually stems from multi-tenancy challenges: people want separate clusters per tenant, environment, or project, and because of multi-tenancy limitations and security concerns they prefer to keep things isolated. There are many multi-tenancy solutions for this; LearnK8s has described the entire multi-tenancy spectrum.

(Image credits: Daniele Polencic)
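To give a feel for the softer end of that spectrum, here is a minimal sketch of namespace-based tenancy on a shared cluster, using the official Kubernetes Python client; the tenant name and quota values are placeholders.

```python
# Minimal sketch of namespace-based (soft) multi-tenancy: one namespace per tenant
# plus a ResourceQuota capping what that tenant can consume. Values are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

tenant = "team-a"  # hypothetical tenant name

core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name=tenant, labels={"tenant": tenant}))
)

core.create_namespaced_resource_quota(
    namespace=tenant,
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name=f"{tenant}-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "8", "requests.memory": "16Gi", "pods": "50"}
        ),
    ),
)
```

Namespaces plus quotas and network policies cover a lot of cases, but they share one control plane, which is exactly the limitation that pushes teams toward separate clusters per tenant.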
Conclusion
If you're still running bare metal + VMs + Kubernetes, it’s time to consider multi-tenancy as a cleaner alternative.
And yes, I’ll plug it because I genuinely believe in it: vCluster (where I work) is a fantastic tool for this. It's lightweight, powerful, and already helping organizations optimize their architectures. So the architecture becomes: bare metal + Kubernetes + virtual clusters.
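As an illustration only, provisioning a virtual cluster per tenant can look roughly like the sketch below, driven through the vcluster CLI from Python. Treat it as a sketch rather than a reference: it assumes the CLI is installed and the kubeconfig points at the host cluster, and flag spellings can differ between vcluster versions.

```python
# Illustrative sketch: create a virtual cluster per tenant with the vcluster CLI.
# Assumes the vcluster CLI is installed and kubectl context points at the host cluster;
# command flags are from memory and may differ slightly between vcluster versions.
import subprocess

tenant = "team-a"  # hypothetical tenant name
host_namespace = f"vcluster-{tenant}"

# Provision the virtual cluster inside its own namespace on the host cluster.
subprocess.run(["vcluster", "create", tenant, "--namespace", host_namespace], check=True)

# Later, connect to the virtual cluster (e.g. to hand a kubeconfig to the tenant team).
subprocess.run(["vcluster", "connect", tenant, "--namespace", host_namespace], check=True)
```

Each tenant gets what looks like its own cluster, while the host cluster stays a single pool of bare metal capacity.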

On the security side, yes, vCluster can be combined with secure container runtimes like Kata Containers or gVisor to strengthen isolation. We’ve also launched our own project, vNode, which, when used in combination with vCluster, provides an even more robust and secure setup than Kata or gVisor as the runtime.
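For context on how those sandboxed runtimes plug into Kubernetes in general: they are exposed through a RuntimeClass that individual pods opt into. A minimal sketch, assuming the runtime is already installed on your nodes (the handler name depends on that installation):

```python
# Sketch: register a RuntimeClass for a sandboxed runtime and opt a pod into it.
# The handler name ("kata" here) depends on how the runtime was set up on the nodes;
# gVisor installations typically use "runsc".
from kubernetes import client, config

config.load_kube_config()

client.NodeV1Api().create_runtime_class(
    client.V1RuntimeClass(
        metadata=client.V1ObjectMeta(name="kata"),
        handler="kata",
    )
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-app"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",  # this pod's containers run in the sandboxed runtime
        containers=[client.V1Container(name="web", image="nginx:1.27")],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```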

We’ve even documented a real-world case study: Aussie Broadband moved from bare metal + VMs + Kubernetes to bare metal + one large Kubernetes cluster + vClusters on top, and it’s working great.
If this resonates with you, or you’re dealing with a similar setup, feel free to reach out for a demo or just connect to chat! I’d love to hear how your architecture looks, what problems you’re solving, and what tools you're using. You can reach me on Slack or other social platforms.