Securing vCluster with OPA: Implementing Policy Enforcement for Virtual Clusters

Jubril Oyetunji

A common security challenge many teams face when rolling out multiple clusters is ensuring consistency in security policies. As teams and clusters grow, applying policies across all environments can quickly become cumbersome. Each new cluster adds complexity, making it difficult to maintain a uniform security posture. vCluster is an open source project that allows you to create many virtual clusters on top of a host cluster.

In this tutorial, you will learn how to use OPA Gatekeeper, a policy controller for Kubernetes that lets you define and enforce policies written in a declarative language called Rego, to secure your clusters as your infrastructure grows. You will walk through defining and applying policies on the host cluster, which are then automatically enforced across all associated virtual clusters.

Virtual Clusters: A Recap

Virtual clusters provide true isolation, allowing teams to operate independently while sharing the same physical infrastructure. However, another significant benefit of virtual clusters is the ability to inherit policies applied to the host cluster.

This means that a set of base rules can be established for all virtual clusters, simplifying your policy management and ensuring a consistent security baseline across your entire deployment.

Prerequisites

This tutorial assumes some familiarity with Kubernetes. Additionally, you will need the following installed locally in order to follow along:

- A Kubernetes cluster (v1.16 or greater) to act as the host cluster
- The vcluster CLI
- kubectl
- git
- jq

Create a virtual cluster

Begin by creating a virtual cluster. For this demonstration, let's say the backend team wants to deploy a new API, so you create a virtual cluster for them:

vcluster create backend --namespace backend --connect=false

By default, vCluster will attempt to connect to the newly created cluster; setting --connect=false prevents this, since we still need to deploy resources on the host cluster. Additionally, --namespace ensures that the new cluster will live in a namespace called backend.

The CLI output should confirm that the virtual cluster was created.

Verify your cluster is running as intended using:

vcluster ls
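
The exact output format varies between vcluster versions, but you should see the backend cluster listed as Running, roughly like this (column layout and age are illustrative):

NAME      NAMESPACE   STATUS    CONNECTED   AGE
backend   backend     Running               30s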

Install Gatekeeper

With a virtual cluster created, you can install Gatekeeper, your policy controller, on the host cluster.

⚠️ Gatekeeper requires Kubernetes v1.16 or greater to work.

Before installing, verify your cluster is compatible by running:

kubectl version

Output is similar to:

Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.30.0

If your Kubernetes version is less than v1.16, check out the Kubernetes guide on upgrading clusters.

Next, create a cluster role binding for your current user:

# Get the current user
export USER=$(kubectl auth whoami -o json | jq -r '.status.userInfo.username')

Create role binding:

kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole cluster-admin \
    --user "$USER"

Output is similar to:

clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created

Apply the Gatekeeper Manifest:

kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.17.1/deploy/gatekeeper.yaml

This will deploy Gatekeeper in a namespace called gatekeeper-system.
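
The manifest also registers a handful of CRDs, including constrainttemplates.templates.gatekeeper.sh, which you will use shortly. If you want to double-check they were created, a quick filter works:

# List the CRDs installed by the Gatekeeper manifest
kubectl get crds | grep gatekeeper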

Verify all resources are installed correctly using:

kubectl get pods -n gatekeeper-system

Output is similar to the following (pod hashes and ages will differ, and replica counts depend on your Gatekeeper version):
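
NAME                                             READY   STATUS    RESTARTS   AGE
gatekeeper-audit-5cc4d4dcd9-xxxxx                1/1     Running   0          60s
gatekeeper-controller-manager-7c7bc4cdc9-xxxxx   1/1     Running   0          60s
gatekeeper-controller-manager-7c7bc4cdc9-yyyyy   1/1     Running   0          60s
gatekeeper-controller-manager-7c7bc4cdc9-zzzzz   1/1     Running   0          60s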

Constraints and Constraint Templates

Before going further, let's briefly discuss constraints and constraint templates. In OPA Gatekeeper, constraints are the specific policies you want to enforce, while constraint templates define the structure and parameters of those policies.

Constraint templates provide a reusable way to define policies that can be instantiated as constraints with specific values. This allows for a more modular approach to policy management.
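
To make this concrete, here is the canonical K8sRequiredLabels example, abbreviated from the Gatekeeper documentation: the template defines the Rego logic and declares a labels parameter, and the constraint instantiates it to require a team label (an illustrative value) on every namespace:

apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        # Schema for the constraint's `parameters` field
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
---
# A constraint instantiating the template above
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]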

Securing your Host Cluster

As mentioned earlier, vCluster respects policies applied to the host cluster, so we can focus on securing the host. Thankfully, many of the most common use cases are covered in what Open Policy Agent calls the Gatekeeper library.

Let’s step through some of the most common policies you might want to apply. Begin by cloning the Gatekeeper library repository:

git clone https://github.com/open-policy-agent/gatekeeper-library.git && cd gatekeeper-library/library/general/

Disallowing NodePort

For security purposes, it's generally discouraged to expose applications through NodePort services in production: it widens the set of vectors attackers can exploit, and there are only so many ports available before you start running into conflicts.

To enforce this with Gatekeeper, apply the following manifests.

Apply the constraint template:

kubectl apply -f block-nodeport-services/template.yaml

Apply constraint:

kubectl apply -f block-nodeport-services/samples/block-node-port/constraint.yaml
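
For context, the sample constraint is tiny: it instantiates the K8sBlockNodePort kind defined by the template and matches all Service resources. It should look roughly like this:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sBlockNodePort
metadata:
  name: block-node-port
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Service"]   # evaluate every Service in the cluster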

Verify the Policy

Connect to your virtual cluster:

vcluster connect backend 

Apply a service with NodePort:

kubectl apply -f block-nodeport-services/samples/block-node-port/example_disallowed.yaml
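
The disallowed example is just a plain Service of type NodePort, roughly like the following (names and port values may differ slightly in the repository copy):

apiVersion: v1
kind: Service
metadata:
  name: my-service-disallowed
spec:
  type: NodePort        # the field the constraint rejects
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007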

Output is similar to:

Error from server (Forbidden): error when creating "block-nodeport-services/samples/block-node-port/example_disallowed.yaml": admission webhook "validation.gatekeeper.sh" denied the request: [block-node-port] User is not allowed to create service of type NodePort

Great! The policy is binding within the virtual cluster as well. Let's run through a couple more examples.

Return to your host cluster:

vcluster disconnect 

Forbidding Latest

At some point you have probably used the latest tag; it's quick and convenient. In production, however, you want to discourage its use and instead pin your image versions to avoid unexpected breaking changes.

To do this with Gatekeeper, apply the following manifests.

Apply the constraint template:

kubectl apply -f disallowedtags/template.yaml

Apply constraint:

For this to work, we need to modify the sample constraint slightly:

cat <<EOF | kubectl apply -f -
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowedTags
metadata:
  name: container-image-must-not-have-latest-tag
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces:
      - "*"
  parameters:
    tags: ["latest"]
    exemptImages: ["openpolicyagent/opa-exp:latest", "openpolicyagent/opa-exp2:latest"]
EOF

The manifest above specifies which resources to target, in this case only the Pod resource. It also specifies which namespaces the policy applies to; since you want it to bind across the entire cluster, you use an asterisk (*).

Verify the Policy

Connect to your virtual cluster:

vcluster connect backend 

To verify this works, deploy a Redis pod using latest as the image tag:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: plswork
spec:
  containers:
    - name: opa
      image: redis:latest
EOF

Unlike the previous example, your output should look something like this:

pod/plswork created

However, if you check the status of the pod:

kubectl get pod/plswork

Output is similar to the following; the pod stays Pending because it is never synced to the host cluster:
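
NAME      READY   STATUS    RESTARTS   AGE
plswork   0/1     Pending   0          30s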

To uncover why the pod is not being created, run:

kubectl describe pod/plswork

Output is similar to:

Events:
  Type     Reason     Age               From        Message
  ----     ------     ----              ----        -------
  Warning  SyncError  1s (x10 over 4s)  pod-syncer  Error syncing to physical cluster: admission webhook "validation.gatekeeper.sh" denied the request: [container-image-must-not-have-latest-tag] container <opa> uses a disallowed tag <redis:latest>; disallowed tags are ["latest"]

From the output, we see that the request was rejected because the image used the latest tag.

Finally, disconnect from the virtual cluster:

vcluster disconnect 

Trusted Repositories

In a production environment, it's important to have control over the container images being deployed. One way to achieve this is by enforcing a policy that allows only approved image repositories. In a previous blog post, we covered how to achieve this using the Sigstore Policy Controller.

To implement this policy using Gatekeeper, apply the following manifests:

Apply the constraint template:

kubectl apply -f disallowedrepos/template.yaml

Apply constraint:

This particular constraint ensures that images from the repository k8s.gcr.io cannot be deployed; for your own use case, you might list whichever registries you want to block, or use the library's allowedrepos template if you'd rather allow only a trusted registry.

kubectl apply -f disallowedrepos/samples/repo-must-not-be-k8s-gcr-io/constraint.yaml
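
For reference, the sample constraint instantiates the K8sDisallowedRepos kind against Pods and should look roughly like this (the repository copy may also scope it to specific namespaces):

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDisallowedRepos
metadata:
  name: repo-must-not-be-k8s-gcr-io
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "k8s.gcr.io/"   # prefix match; any image from this registry is denied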

Verify the Policy

Connect to your virtual cluster:

vcluster connect backend

Next, deploy a pod with the disallowed image k8s.gcr.io/kustomize/kustomize:v3.8.9.

kubectl apply -f disallowedrepos/samples/repo-must-not-be-k8s-gcr-io/example_disallowed_container.yaml

Output is similar to:

pod/kustomize-disallowed created

Inspect the pod:

kubectl describe pod/kustomize-disallowed

Output is similar to:

Events:
  Type     Reason     Age                   From        Message
  ----     ------     ----                  ----        -------
  Warning  SyncError  62s (x17 over 6m30s)  pod-syncer  Error syncing to physical cluster: admission webhook "validation.gatekeeper.sh" denied the request: [repo-must-not-be-k8s-gcr-io] container <kustomize> has an invalid image repo <k8s.gcr.io/kustomize/kustomize:v3.8.9>, disallowed repos are ["k8s.gcr.io/"]

Wrapping Up

Centralizing your policy management is extremely helpful in multi-tenant environments; with Gatekeeper, you can ensure policies bind across your virtual clusters for greater isolation. If you don't need the full power of Gatekeeper but still need to verify your images, check out this post that uses the Sigstore Policy Controller.

While we demonstrated only a few policies in this post, the Gatekeeper library is extensive; chances are you won't need to write a custom policy, as it covers a wide range of basic and advanced use cases.

Join our Community Slack if you have more questions!
