‘Policy As Code’ — Are you kidding me?

No, I’m NOT actually ;)

But when I heard the term ‘Policy As Code’ for the very first time, to be frank, that was my feeling. I thought someone was playing a tech-joke on me.

I had heard of IaaS, PaaS, SaaS, FaaS and also IaC <Infrastructure As Code> for a good while, but PaC <Policy As Code> was a very new term for me. It has been around for a while, but until I came across a use-case, it never hit my small brain.

As I started really getting to know what exactly it meant, why, how and when, it started making a lot of sense. The more I read and understood, the more I realised how crucial it is, as a concept, for smoother operation and fine-grained access control for applications.

So, here I’m sharing my understanding of Open Policy Agent and how it helps, or can help, your use-case in the software world.

I have a very dumb analogy <apologies if it doesn’t make sense to you :(> to prepare the ground for the context — consider a case where you have a house-cleaning task and you have given that job to an agency. For this, you have handed them the keys already, as it’s a regular monthly/fortnightly task for them to come and clean your house.

  • Giving them the keys is like you’ve ‘Authenticated the agency’, as you know who they are, and you’ve also ‘Authorised the agency’, as they can enter your premises to do the job. But is there any way you can control what they do with that access to your house? Is there any way to have more control over, or validation of, their access to your property beyond just the job? — Can you imagine the risk here ?
  • Let’s take one step further — assume you’ve set up very good controls in your house, where you can provide fine-grained access to each section, such as the hall, kitchen, dining, theatre room, bedroom, restroom etc., for that agency. Each of these sections is like a micro-service of your application. Every time a new cleaner enters your property, you have to adjust your policies to provide the required authorisation so that there is no misuse — Can you imagine the complexity ?
  • If you take this further — as you have more properties to deal with, managing, maintaining and changing or updating these fine-grained policies is not easy, and at the same time it is not flexible — Can you imagine the bottleneck ?

Enough of this dumb analogy….

Let me try to bring this similar context in real Software Application world:

Fig — 1 : Micro-Service Based Application

As shown in [Fig — 1], a Dev/Support engineer has access to each of these micro-services and can do unlimited things, as they are authenticated and authorised. The risk here is that the engineer always has access to these services, irrespective of whether a business case exists or not. On the flip side, he/she can take down any of the services at any time if there are no policies in place, and can access end-users’ personal data as well, which is again a bad situation to be in as an organisation.

In the general use case, to limit the access we implement the authorisation logic in each of these micro-services [Fig — 1] — that way any unintended access is limited and controlled. That sounds perfectly fine and fits the purpose. But

  • What if we want to implement more policies, new policies or policy changes from a compliance/legal or info-sec perspective ?
  • What if we want to test those policies before rolling them out to production use ?
  • What if we want to add more micro-services to this application as we expand towards a richer feature set, and each micro-service is written in a different software stack/programming language ? And many more open questions.

Is there any uniform way of dealing with these policies without worrying about the software stack they are built on ?

That’s where OPA <Open Policy Agent> comes to the rescue: an open-source, general-purpose policy engine that unifies policy enforcement across the software stack.

OPA can be used to enforce policies in micro-services, Kubernetes, CI/CD pipelines, API gateways and many more, by decoupling policy decision-making from policy enforcement.

Fig — 2 : OPA Overview

[Fig — 2] OPA takes the policy query input as JSON, checks it against the policies and data available, and generates the policy decision as JSON. Policy definitions are written in the Rego language.

Compared to a typical authorisation service’s ‘allow/deny’ responses, OPA’s policy decision output can be as elaborate as one can imagine, as it can return any arbitrary JSON value [a string, an entire JSON object and more flexible decision outputs].
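To make that “arbitrary JSON decision” point concrete, here is a toy Python stand-in for a policy. The function and its output fields are made up for illustration; real OPA would evaluate a Rego policy, but the idea is the same: the decision is a JSON document, not just a boolean.

```python
import json

def evaluate_policy(query_input):
    """Toy policy evaluator mimicking the *shape* of an OPA decision.

    Real OPA evaluates Rego; this hypothetical function only illustrates
    that a decision can be any JSON value, not just allow/deny.
    """
    user = query_input.get("user", "")
    method = query_input.get("method", "")

    allowed = user == "alice" and method in ("GET", "HEAD")
    return {
        "allow": allowed,
        # Richer output than a plain boolean: a reason and a filtering hint.
        "reason": "read-only access for alice" if allowed else "denied by default",
        "redact_fields": [] if user == "alice" else ["email", "phone"],
    }

print(json.dumps(evaluate_policy({"user": "alice", "method": "GET"})))
```

A caller could use the `redact_fields` hint, for example, to strip sensitive attributes from a response before it leaves the service.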

OPA agents run as sidecars or host-level daemons [~20MB binary]. That means if you have 100 micro-service instances running, you are expected to have 1 OPA agent running alongside each of those instances; similarly, if you have 100 K8S clusters, you would have 1 OPA agent running per K8S cluster, as an example.

I’m considering 2 use-case scenarios to explain where OPA is used, out of many other possible use cases. Predominantly I’ll cover K8S in more detail though :)

  • Micro-services
  • K8S


Fig — 3 : OPA with Micro-Services

As we know, a micro-service generally gets exposed to the external world through a network proxy. In a simple use case, the client API request comes to the micro-service through the network proxy, and the micro-service decides whether to respond or not depending on the authorisation logic residing inside it.

But when we implement OPA by integrating it with the network proxy, without changing a single line of code at the micro-service level, we hand the policy decision-making responsibility over to OPA [of course, the user has to define the policies and provide the required data for OPA to decide]. The request then follows the steps shown in [Fig — 3], and based on the OPA policy decision output, the response goes to the client. Once the decision returns as ‘allow’, we can also implement ‘response filtering’ logic as a policy, to check whether any sensitive information is going out.
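The proxy-to-OPA exchange can be sketched in Python. This is a hedged sketch, not production code: it assumes an OPA sidecar listening on localhost:8181 and a hypothetical Rego package `httpapi.authz` exposing an `allow` rule. Only the request/response shapes follow OPA’s real Data API (`POST /v1/data/<path>` with an `{"input": ...}` body, and an `{"result": ...}` response).

```python
import json
from urllib import request

# Hypothetical policy path; the package/rule names are assumptions.
OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"

def build_opa_input(method, path, user):
    """Wrap request attributes in the {'input': ...} envelope OPA's Data API expects."""
    return {"input": {"method": method, "path": path, "user": user}}

def parse_opa_decision(response_body, default=False):
    """OPA responds with {'result': <decision>}. An absent 'result' means the
    rule was undefined, which a proxy should normally treat as deny."""
    return json.loads(response_body).get("result", default)

def is_allowed(method, path, user):
    """POST the query to the OPA sidecar and return its boolean decision."""
    payload = json.dumps(build_opa_input(method, path, user)).encode()
    req = request.Request(
        OPA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req) as resp:  # requires a running OPA sidecar
        return parse_opa_decision(resp.read())
```

Note the fail-closed default in `parse_opa_decision`: if the policy produces no result, the request is denied rather than allowed.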


Let’s say I want to create a pod in Kubernetes; I use the ‘kubectl create -f newpod.yaml’ command to do so. With this command, if my schema has no issues, ‘newpod’ gets created for me in the default namespace, and the details are stored in etcd if all goes well. The same is illustrated in the picture below as Fig — 4 [a ‘4-step’ process].

Fig — 4 : K8S Object Creation ‘4-Step’ Simple Process

But a lot more science and logic goes on behind the scenes during the ‘2nd step’, which is illustrated in [Fig — 5], where ‘Kubernetes Admission Controllers’ come into the picture.

Fig — 5 : K8S Admission Controller Phases

Let’s analyse what happens in the ‘2nd step’ and what goes through the K8S API Server <intentionally ignoring the role of other ‘control-plane’ components here, such as kube-scheduler, kube-controller-manager etc., as their context is not required for this concept>.

Once the user enters the ‘kubectl create xxxx’ command, the API handler handles the request and passes it to ‘Authentication and Authorisation’, which is primarily RBAC <Role Based Access Control>: it validates who you are and what you can do within the K8S cluster. If you are authenticated and authorised to perform this task, the request passes through this phase and moves to the next step. This step is essentially a binary decision, ‘allow’ or ‘deny’, on operations like CRUD. I’ll explain how to go beyond this, as a simple ‘allow/deny’ is not enough in our day-to-day development life cycle.

For e.g., we should be able to allow a user to create a pod but enforce ‘non-root’ access; or a user should be able to create a deployment but with a minimum replica count of ‘3’ and a maximum of ‘10’ for availability; or a user should be able to create a service but with no NodePort or external load-balancer; and many more use-case scenarios — simple RBAC is not able to serve these requirements for sure.
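As a sketch of what such fine-grained checks look like beyond allow/deny, here is an illustrative Python validator over a deployment manifest. The field paths mirror Kubernetes manifests (`spec.replicas`, `spec.template.spec.securityContext`), but the function itself is a hypothetical stand-in, not a real admission controller:

```python
def validate_deployment(manifest):
    """Toy admission-style checks that plain RBAC cannot express.

    Illustrative only; in a real cluster these would be enforced by
    admission policies, not application code.
    """
    errors = []

    # Availability guard-rail: replica count must stay within agreed bounds.
    replicas = manifest.get("spec", {}).get("replicas", 0)
    if not 3 <= replicas <= 10:
        errors.append(f"replicas must be between 3 and 10, got {replicas}")

    # Security guard-rail: pods must not run as root.
    pod_spec = manifest.get("spec", {}).get("template", {}).get("spec", {})
    if not pod_spec.get("securityContext", {}).get("runAsNonRoot", False):
        errors.append("pods must set securityContext.runAsNonRoot: true")

    return errors

good = {"spec": {"replicas": 3,
                 "template": {"spec": {"securityContext": {"runAsNonRoot": True}}}}}
bad = {"spec": {"replicas": 1, "template": {"spec": {}}}}
```

Returning a list of human-readable errors (rather than a bare boolean) mirrors how admission decisions usually carry an explanatory message back to the user.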

In order to implement these checks, we need to define a set of policies which can help dictate and also enforce best practices dynamically at the time of object creation — that’s where ‘K8S Admission Controllers’ play a major role.

After the RBAC <Authentication and Authorisation> phase, the Admission Controller process starts, which primarily has 2 phases: the mutating phase is executed first, followed by the validating phase. It can be a combination of the two in some cases, or only one, depending on the use case. But the validating phase is definitely used in most cases, as it is the final stage before the object is created.

Through the ‘mutating’ phase, we can update or modify the schema to enforce policy adherence, and through the ‘validating’ phase, we can ensure the schema defined meets the best-practice criteria before the action happens or the object gets created.

Let’s take a simple example for each phase — ‘mutating’ → let’s say the user is creating a pod but didn’t assign any resource limits in the schema/yaml file. With the correct policy definition, as the request comes to this phase, we can enforce applicable resource limits, such as 1 CPU core and 2GB RAM, and the schema gets modified.
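A minimal Python sketch of that mutating behaviour, using the pod-manifest field layout (`spec.containers[].resources.limits`); the function is illustrative, not a real mutating webhook, and the default values are just the example numbers from the text:

```python
import copy

# Example defaults from the text above (assumed, not prescriptive).
DEFAULT_LIMITS = {"cpu": "1", "memory": "2Gi"}

def mutate_pod(pod):
    """Mimic a mutating admission step: fill in resource limits for any
    container that doesn't declare them, leaving the original untouched."""
    patched = copy.deepcopy(pod)
    for container in patched.get("spec", {}).get("containers", []):
        resources = container.setdefault("resources", {})
        # setdefault keeps limits the user already declared.
        resources.setdefault("limits", dict(DEFAULT_LIMITS))
    return patched

pod = {"spec": {"containers": [{"name": "app", "image": "nginx"}]}}
patched = mutate_pod(pod)
```

Note that containers which already declare limits are left alone; the mutation only fills gaps, which is the usual contract for defaulting webhooks.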

When the same object creation request reaches the ‘validating’ phase → it is checked against the existing policy definitions, and if it fits the criteria defined, the object is created successfully, etcd is updated and the user is allowed to move to the next steps.
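The validating counterpart can be sketched the same way; again an illustrative function over the pod-manifest field layout, not a real validating webhook. It rejects pods whose containers lack resource limits, pairing the boolean verdict with a reason string, as admission responses typically carry a message:

```python
def validate_pod(pod):
    """Mimic a validating admission step: every container must declare
    both CPU and memory limits, otherwise the pod is rejected."""
    for container in pod.get("spec", {}).get("containers", []):
        limits = container.get("resources", {}).get("limits", {})
        if "cpu" not in limits or "memory" not in limits:
            name = container.get("name", "<unnamed>")
            return False, f"container '{name}' is missing resource limits"
    return True, "ok"
```

Run after the mutating phase, a pod that received default limits there would pass this check; a pod that somehow skipped defaulting would be rejected with a message pointing at the offending container.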

It all sounds good, right, when you are thinking about a couple of policies and have only a few users to limit and enforce practices for through these policies ? Yeah, I know. I was thinking the same :) until I explored the next level of complexity ;)

When you think about these policy implementations at the level of an organisation operating K8S, this approach is not flexible enough. You have to look for ways to control what end-users can do on the cluster, ensure that clusters are in compliance with company policies, and meet governance and legal requirements, enforcing best practices across the organisation for cases like:

  • All images must be from approved repositories
  • All pods must have resource limits and also ‘no’ nodePort
  • and many more
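The first guard-rail in that list can be sketched as a one-line check; the registry names here are hypothetical placeholders for whatever an organisation actually approves:

```python
# Hypothetical allow-list; a real one would come from organisation policy data.
APPROVED_REGISTRIES = ("registry.example.com/", "gcr.io/my-team/")

def image_allowed(image):
    """Org-wide guard-rail: container images must come from an approved registry."""
    # str.startswith accepts a tuple, checking each prefix in turn.
    return image.startswith(APPROVED_REGISTRIES)
```

In practice, the allow-list would live in OPA’s data document rather than being hard-coded, so compliance teams can update it without touching any service.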

And all this needs to be achieved without compromising development agility and operational freedom — that’s where OPA plays a major role [Fig — 6]

Fig — 6 : Admissions Controllers with OPA

And the Kubernetes-native implementation of OPA is called ‘OPA Gatekeeper’

Fig — 7 : OPA Gatekeeper for Kubernetes

Through OPA Gatekeeper, we can allow end-users to do what they want in the cluster, but with guard-rails set up such that no one can go beyond those boundaries, and we have full control and visibility across it

The biggest advantages we get out of the OPA Gatekeeper project are:

  • ‘Shift-left’, without worrying much about the end-user/s and how they are going to use the Kubernetes clusters, as long as we have well-defined policies adhering to the compliance, legal and info-sec requirements. It totally removes the need to shadow developers over what they are doing, and instead helps them follow the policies, which are enforced as code. Through this enforcement, the schema or manifest needs to adhere to the policy definitions, and only then is the user able to create a cluster or any other Kubernetes object/s. It is like giving developers full freedom, but with very well-defined guard-rails as boundaries :)
  • It is open-source, so there is ‘no’ lock-in to any vendor or platform — the rest of this point is self-explanatory ;)

Details of the companies contributing to OPA can be found here, and the Grafana dashboard looks like the one below

That’s it from me about ‘OPA — Policy As Code’ as a concept. I hope this information helps and gives you an easy understanding of it

Thanks for taking time to read!

I might come up with a more hands-on demo on this topic soon — keep an eye out for it!


CNCF and Styra

Kubernetes Documentation

OPA Gatekeeper

Lots of YouTube videos about OPA and Kubernetes

Human being First, followed by A Husband and A Father of Two Smiles — Rest is the Magic!