
November 4, 2021 | Kubernetes

Exploring How Policy-as-Code and OPA Fit into the K8s World

We often read that ‘security is everyone’s responsibility’. For any organisation, big or small, security should always be a primary concern, not a mere afterthought. In Kubernetes, securing a cluster is challenging because it has so many moving parts; beyond hardening the environment itself, we also want to control what an end user can do inside the cluster.

To achieve these goals, we can start with the built-in features provided by Kubernetes, such as Role-Based Access Control (RBAC), Network Policies, Secrets management, and Pod Security Policies (PSP). But these features alone are often not enough. For example, we may want custom policies such as ‘all pods must carry specific labels’. And even once such policies are written down, the next big question is how to enforce them on our Kubernetes cluster in an easy and repeatable manner.

In this blog post, we’ll address this challenge, along with other questions pertaining to OPA and how it integrates into Kubernetes.

WRITTEN BY

Alberto Faedda

Software Consultant


What Is Policy-as-Code?

Policy-as-Code (PaC) is similar to Infrastructure as Code (IaC). But whereas IaC codifies your infrastructure, PaC codifies the standards and rules you want enforced, whether in a specific cluster or across the entire organisation.

These rules, or policies, result from technical or legal requirements as well as architectural decisions, and are created by various stakeholders such as security engineers, product owners, and developers. A well-automated policy reduces maintenance cost and attack surface. It also prevents bad setups and misconfigurations from being pushed to the production environment.

Once we have a policy in place, the next challenge is how to enforce it, and this is where Open Policy Agent (OPA) comes into play.

What Is Open Policy Agent (OPA)?

Open Policy Agent (OPA) is a general-purpose policy engine used to enforce policies on various types of software systems (microservices, CI/CD pipelines, Kubernetes, etc.), and it decouples policy decision-making from business logic. In simple terms, if you want to offload policy decisions from your service, you can use OPA. OPA supports Policy-as-Code using the declarative Rego language.

To understand how this whole process works, let’s see how OPA handles a request.

  • A request arrives at your service, which must decide whether to allow or deny it.
  • Instead of deciding itself, the service queries OPA, passing the request attributes as structured (JSON) input.
  • OPA evaluates the relevant Rego policy against that input and any additional policy data it holds.
  • OPA returns the decision, allow or deny, and the service enforces it.

Figure 1: OPA architecture (Source: Open Policy Agent)
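
To make this concrete, here is a minimal Rego policy of the kind OPA might evaluate for such a query. It is a sketch only: the package name, input fields, and group name are invented for illustration and do not come from this post.

package httpapi.authz

# Deny by default; allow only GET requests to /public paths
# from users in the "readers" group (all names illustrative).
default allow = false

allow {
    input.method == "GET"
    startswith(input.path, "/public")
    input.user.groups[_] == "readers"
}

A service would query this decision through OPA’s REST API, for example POST /v1/data/httpapi/authz/allow with the request attributes as JSON input, and then enforce the returned true or false.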

PaC’s Main Advantages and Uses

Using Policy-as-Code has some significant advantages:

  • You can version-control the code and keep track of all changes.
  • Since the code is in version control, you can easily review it and share it with team members.
  • Policies are expressed in a single unified language, rather than as separate business logic in each application.
  • You can update policy code separately and dynamically, without making any change to the business logic.

Organisations can use policies to:

  • Enforce that deployments have a minimum replica count (see the sketch after this list).
  • Determine which users can perform which operations on specific resources.
  • Require namespaces to carry specific labels.
  • Allow traffic only from particular subnets.
  • Allow container images to be downloaded only from a specific registry.
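
As a taste of what such a policy looks like in practice, here is a hedged Rego sketch of the first item, written in the style of OPA’s Kubernetes admission-control examples (the input shape assumes OPA receives Kubernetes admission requests, covered in the next section, and the threshold of 2 is arbitrary):

package kubernetes.admission

# Deny Deployments that request fewer than 2 replicas
# (threshold and package name are illustrative).
deny[msg] {
    input.request.kind.kind == "Deployment"
    replicas := input.request.object.spec.replicas
    replicas < 2
    msg := sprintf("deployment %v must have at least 2 replicas; got %v", [input.request.object.metadata.name, replicas])
}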

Integrating OPA with Kubernetes to Address Authorisation Challenges

To integrate OPA with Kubernetes, we will use OPA Gatekeeper as an admission controller. Before we do, let’s look at the two types of dynamic admission webhooks.

Figure 2: Kubernetes admission controller (Source: Kubernetes)

  • MutatingAdmissionWebhook: can modify (or reject) a request so that it meets requirements, e.g., patching a pod to use a specific scheduler.
  • ValidatingAdmissionWebhook: validates a request against specific rules and accepts or rejects it without modifying it.
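
Both webhook types receive the same payload: the API server wraps the pending object in an AdmissionReview and expects a verdict back. A heavily trimmed sketch of that exchange (all field values here are placeholders):

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "<request-uid>",
    "kind": {"group": "", "version": "v1", "kind": "Pod"},
    "operation": "CREATE",
    "object": { "...": "the Pod manifest being admitted" }
  }
}

The webhook answers with the same uid and its decision, for example {"response": {"uid": "<request-uid>", "allowed": false, "status": {"message": "reason for denial"}}}.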

There are several use cases for deploying OPA as an admission controller:

  • To prohibit containers from running as the root user, or to check that the root file system is mounted read-only
  • To ensure that only specific users can access resources in certain namespaces
  • To ensure that containers pull images only from a particular registry
  • To enforce OPA policies such as resource limits or required labels

Open Policy Agent Gatekeeper

You can use Gatekeeper to integrate OPA into Kubernetes. Gatekeeper runs as pods inside the cluster and is registered with the API server as an admission webhook.

Figure 3: Admission control flow (Source: Open Policy Agent)

When a user sends a request for a resource to the API server, for example via kubectl, the request first passes through authentication and authorisation and is then handed to the admission controllers. The API server wraps it in an AdmissionReview, which is passed to Gatekeeper. Depending on the policies configured in the form of Custom Resource Definitions (CRDs), Gatekeeper decides whether to admit the request and sends its response back to the API server.
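
You can see this registration for yourself: a default Gatekeeper installation creates a ValidatingWebhookConfiguration (the exact name may vary between Gatekeeper versions):

$ kubectl get validatingwebhookconfigurations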

Gatekeeper has the following CRDs:

  • ConstraintTemplate defines the schema of a constraint and the Rego logic that enforces it.
  • Constraint declares to which resources the template’s policy should be applied, and with which parameters.

Let’s now see an example of how to get Gatekeeper installed and enabled on a Kubernetes cluster.

Prerequisites

Before installing the OPA Gatekeeper CRD, you must have:

  • A Kubernetes cluster up and running
  • Kubernetes version 1.14 or later
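
A quick way to check the server version (on recent kubectl releases the --short flag has been removed and plain kubectl version prints the same information):

$ kubectl version --short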

Installing the OPA Gatekeeper CRD

To deploy the OPA Gatekeeper in your environment, run the command:

$ kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml

The above command will create a new namespace, gatekeeper-system:

$ kubectl get ns
NAME               STATUS   AGE
default            Active   7m14s
gatekeeper-system  Active   48s
kube-node-lease    Active   7m16s
kube-public        Active   7m16s
kube-system        Active   7m16s

Now, let’s verify that the Gatekeeper CRD has been successfully installed:

$ kubectl get crd
NAME                                               CREATED AT
bgpconfigurations.crd.projectcalico.org            2021-10-27T04:40:00Z
bgppeers.crd.projectcalico.org                     2021-10-27T04:40:00Z
blockaffinities.crd.projectcalico.org              2021-10-27T04:40:00Z
clusterinformations.crd.projectcalico.org          2021-10-27T04:40:00Z
configs.config.gatekeeper.sh                       2021-10-27T04:45:58Z
constraintpodstatuses.status.gatekeeper.sh         2021-10-27T04:45:58Z
constrainttemplatepodstatuses.status.gatekeeper.sh 2021-10-27T04:45:58Z
constrainttemplates.templates.gatekeeper.sh        2021-10-27T04:45:58Z
felixconfigurations.crd.projectcalico.org          2021-10-27T04:40:00Z

Next, verify that it created pods and a service in the gatekeeper-system namespace:

$ kubectl get pod,svc -n gatekeeper-system

NAME                                                 READY   STATUS    RESTARTS   AGE
pod/gatekeeper-audit-59979b7588-rtv5q                1/1     Running   0          4m39s
pod/gatekeeper-controller-manager-55c855c6fc-7c4nn   1/1     Running   0          4m39s
pod/gatekeeper-controller-manager-55c855c6fc-bxxph   1/1     Running   0          4m39s
pod/gatekeeper-controller-manager-55c855c6fc-sgptc   1/1     Running   0          4m39s

NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/gatekeeper-webhook-service   ClusterIP   10.96.110.62   <none>        443/TCP   4m40s

Now that we have the Gatekeeper pods and service up and running, the next step is to define the policy.

How Do We Define Policies?

To explain how to define a policy, we will use an example policy that requires a team label to be set whenever a namespace is created.

First, define the ConstraintTemplate CRD, embedding the Rego logic, and save it to a file called template.yaml:

apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
        listKind: K8sRequiredLabelsList
        plural: k8srequiredlabels
        singular: k8srequiredlabels
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }    

In the Rego code above, the violation rule produces an object whose msg field ({"msg": msg, "details": {"missing_labels": missing}}) is the message displayed to the user when the policy is violated.

The provided assignment collects the labels present on the input object’s metadata, the required assignment collects the required labels from the constraint’s parameters, and the missing assignment is the set difference between the two. If missing contains any elements, count(missing) > 0 holds, the rule fires, and msg := sprintf("you must provide labels: %v", [missing]) builds the message returned to the client.
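
As a concrete trace (the app label is invented purely for illustration): if the incoming namespace carries only the label app and the constraint requires team, the rule evaluates as follows:

provided = {"app"}              # labels present on the namespace
required = {"team"}             # labels demanded by the constraint
missing  = required - provided  # = {"team"}, which is non-empty
count(missing) > 0              # true, so the violation fires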

Now, create the template:

$ kubectl create -f template.yaml
constrainttemplate.templates.gatekeeper.sh/k8srequiredlabels created

Check that the constraint template has been created:

$ kubectl get constrainttemplates
NAME                AGE
k8srequiredlabels   108s

Verify the CRD:

$ kubectl get crd | grep -i k8srequiredlabels
k8srequiredlabels.constraints.gatekeeper.sh           2021-10-27T05:03:53Z

Now, it’s time to create the constraint that uses the above constraint template. Save it to a file called constraints.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels: ["team"]

Note: Make sure kind matches the kind defined in the constraint template.

Next, create the constraint:

$ kubectl create -f constraints.yaml
k8srequiredlabels.constraints.gatekeeper.sh/ns-must-have-team-label created

Check the constraint has been created:

$ kubectl get k8srequiredlabels
NAME                      AGE
ns-must-have-team-label   27s

Let’s verify that the policy works as expected by creating a namespace without a team label. Save the following manifest to a file called ns-example.yml, then apply it:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-example

$ kubectl apply -f ns-example.yml
Error from server ([ns-must-have-team-label] you must provide labels: {"team"}): error when creating "ns-example.yml": admission webhook "validation.gatekeeper.sh" denied the request: [ns-must-have-team-label] you must provide labels: {"team"}

Note: Gatekeeper enforces policies at admission time, so only newly created or updated resources are affected; existing resources that already violate a policy are not rejected retroactively (although Gatekeeper’s audit functionality can report them).
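
Conversely, a namespace that carries the required label is admitted. A quick sketch (the label value engineering is invented for illustration; the final line is kubectl’s standard success message):

apiVersion: v1
kind: Namespace
metadata:
  name: ns-example
  labels:
    team: engineering

$ kubectl apply -f ns-example.yml
namespace/ns-example created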

If you want to look at some more policies, please check the official GitHub repo.

You can also use the Rego Playground for policy debugging. For example, in the case below, if an image comes from an untrusted registry (anywhere other than hooli.com), the request should be denied. The Rego Playground makes it easy to test and debug your policies online:

Figure 4: The Rego Playground (Source: Open Policy Agent)
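
The policy behind this example looks along these lines; a minimal sketch, with the package name and input shape following OPA’s standard Kubernetes admission-control examples rather than code from this post:

package kubernetes.admission

# Deny pods whose containers pull images from anywhere other than hooli.com.
deny[msg] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "hooli.com/")
    msg := sprintf("image '%v' comes from an untrusted registry", [image])
}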

Alternative to OPA in the Kubernetes Space

An alternative to OPA, or specifically to Gatekeeper, in the Kubernetes space is Kyverno, which is written specifically for Kubernetes; OPA itself, by contrast, can also be used with other systems such as Linux PAM, Envoy proxy, and more.

As we’ve seen in the example above, Gatekeeper requires you to write policies in the Rego programming language, which can become cumbersome when managing Kubernetes policies at scale. As a trade-off, a dedicated policy language enables very powerful policy definitions.

Kyverno can be seen as a direct reaction to these technical demands. Because it was built specifically for Kubernetes and expresses policies declaratively, its mental model is identical to the way Kubernetes objects are described and reconciled, which makes defining policies easier.
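
For comparison, here is a hedged Kyverno sketch of the same ‘namespaces must have a team label’ policy (field names follow Kyverno’s v1 API at the time of writing; treat it as illustrative rather than a drop-in manifest):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: ns-must-have-team-label
spec:
  validationFailureAction: enforce
  rules:
    - name: check-team-label
      match:
        resources:
          kinds:
            - Namespace
      validate:
        message: "you must provide the label: team"
        pattern:
          metadata:
            labels:
              team: "?*"

Note how the policy is plain YAML: there is no separate language to learn, at the cost of some of Rego’s expressive power.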

Conclusion

Using OPA, you can offload policy decisions from your services. And with OPA Gatekeeper, you can apply policies to your Kubernetes cluster easily and quickly. Implementing policies in OPA does require learning a new language, Rego. Other open-source projects, like Kyverno, work similarly to OPA Gatekeeper but don’t use Rego. Both are promising technologies, and it’s worth learning both so you can make the most informed decision for your use cases.

 
