Before getting into today’s topic, let us first understand the general software application scenario, then look at the same concept from a Kubernetes perspective, and finally work through the Why? What? When? and How? kind of analysis and implementation in K8S.
In a typical software application hierarchy, we generally have a 3-tier architecture: a Web Application layer which faces the end-user, an API layer which acts as the backend interface, and finally the Database layer. As a reader, I think you are already aware of these layers, the importance of each, and how they get inter-linked to make a fully functional software application.
Fig-1 illustrates the interactions between all these layers as a simple ‘6-step process’: whenever the end-user sends a request to the web-server, the web-server reaches out to the api-server for the information, and that in turn reaches out to the db-server. The api-server fetches the required information from the db-server and passes it to the web-server, which returns it to the end-user as per the needs.
The above explanation is a very simple kick-start to our conversation. Let’s go one step further to understand the network traffic flows between these layers and ports, as shown in Fig-2.
The web-server listens for traffic on port 80, the api-server on port 5000, and the db-server on port 3306. Each server uses its port to receive requests and send the data back, taking every end-user request to completion.
There are 2 kinds of Traffic flows among all these layers: 1 — Ingress [The traffic coming in to the server] and 2 — Egress [The traffic going out of the server]
For example, the traffic coming from the end-user to the web-server <port-80> is Ingress from the web-server’s perspective and, similarly, the traffic leaving the web-server and reaching the api-server <port-5000> is Egress.
The Egress traffic from the web-server will be the Ingress traffic to the api-server; similarly, the Egress traffic from the api-server will be the Ingress to the db-server. The overall end-to-end network traffic is illustrated in Fig-3.
Depending on the direction of the traffic relative to a given server, it is identified as either ‘Ingress’ or ‘Egress’. To keep it simple: the Egress from the ‘Source’ will be the Ingress for the ‘Destination’ and vice-versa.
Now, let’s take our topic further by exploring how this can be understood from a Kubernetes perspective. To begin with, a simple K8S cluster is shown in Fig-4. I have intentionally avoided representing the Master Node in the cluster, as it is not required to understand the concept; in the background, though, all the control-plane components play a great role in making the entire cluster operational.
Each node has multiple pods and also a service, and all of these get connected through Kubernetes networking via the IP addresses assigned to each.
Fig-5 illustrates each layer of our earlier analysis as pods inside the nodes of the cluster: web-pod, api-pod and db-pod. By default, all the pods within the K8S cluster are able to connect to each other without any bottlenecks. All pods are non-isolated by default and accept traffic from any source within the cluster. This default ‘ALL-ALLOW’ behaviour is depicted in the picture below.
This default behaviour provides flexibility, but it also leaves communication wide open; security teams aiming to secure the cluster may find it troublesome and uncomfortable.
As shown in Fig-6, all the pods are able to connect and communicate with each other flawlessly, which might be acceptable in very controlled environments like Dev/Test, but for a typical production-grade environment it is definitely unacceptable behaviour.
Let’s say we want to limit access to the db-pod to the api-pod only, so that no other pod can connect and communicate with it. Is it achievable? In our example, web-pod access should likewise be limited to the api-pod only. That’s where the ‘Network Policy’ object comes into the picture to provide us the solution, as shown in Fig-7.
Kubernetes Network Policies
In the Kubernetes platform, workloads run in pods. Each pod can contain one or more containers that are deployed together. By default, each pod is routable from all other pods and also from the underlying servers.
Kubernetes provides fine-grained control of traffic flow at the IP address or port level [Layer 3 or 4] through ‘Network Policies’, which enforce segmentation for applications deployed on Kubernetes.
Network Policies are like the ‘Security Groups’ or ‘Firewall Rules’ in cloud environments, which are used to control access to different VM instances. This is one of the core strategies defined under ‘Defense in Depth’ and is considered a best practice.
Network policies are implemented and enforced by network plugins for Kubernetes such as Calico, Weave Net, Kube-router or Romana. If you don’t already have one of those plugins incorporated into your platform, all of your network policies are simply ignored without a warning. Be aware of that fact!
If you are using ‘Flannel’ as your network plugin, keep in mind that as of now it doesn’t support Network Policies. Again, there is no warning message; your network policies simply get ignored. Be aware of this fact too!
Let’s write some Network Policies now by getting our hands dirty.
If you are already familiar with Kubernetes, you are aware of YAML. We use the same language to define our network policies. The code snippet in Fig-8 defines a network policy for the db-pod to allow Ingress only from the api-pod, as an example.
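Since the snippet itself is a figure, here is a sketch of what such a policy could look like. The pod labels (`role: db`, `role: api`) and the policy name are my own assumptions; match them to the labels actually attached to your pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-network-policy
spec:
  # The policy applies to pods carrying this label (label assumed)
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Only pods labelled role=api may connect in
        - podSelector:
            matchLabels:
              role: api
      ports:
        - protocol: TCP
          port: 3306   # the MySQL port the db-pod listens on
```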
Let’s say we save this policy file as db-network-policy-definition.yaml. Just run the command ‘kubectl create -f db-network-policy-definition.yaml’ to apply and enforce the rules — that’s it!
The Network Policy ‘Spec’ consists of below elements:
- podSelector → [mandatory] selects the pod which is the subject of this policy using its labels, in this case ‘db-pod’
- policyTypes → [optional] the direction of the traffic, Ingress and/or Egress. This is optional, but it is best to always mention it to clear the air
- ingress → lists the sources allowed to connect in, in this case ‘api-pod’
- egress → lists the destinations allowed to connect out; ‘not applicable’ in this case, hence not mentioned :) <we’ll cover these scenarios as we move further in our exploration in this blog ;)>
- ports → [optional] protocol and port are self-explanatory
Let’s take this example a little further by adding more namespaces such as DEV, TEST, production, etc., as shown in Fig-9.
Now the .yaml file handles the namespace ‘production’, as shown in Fig-10.
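As a reference sketch of such a policy: note that the namespace label (`name: production`) is an assumption and must match the labels on the namespace object, not just its name; pod labels are assumed as before:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # podSelector and namespaceSelector in the SAME 'from' entry must
        # both match: api-pods in the 'production' namespace only
        - podSelector:
            matchLabels:
              role: api
          namespaceSelector:
            matchLabels:
              name: production
      ports:
        - protocol: TCP
          port: 3306
```

If the namespaceSelector were written as a separate list item under `from`, the two selectors would be OR-ed instead of AND-ed, which is a common source of over-permissive policies.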
Let’s update the network policy such that the db-pod allows Ingress traffic from a ‘backup’ server identified by a CIDR IP block, as shown in Fig-11.
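A sketch of such a rule using `ipBlock`; the backup server’s address (192.168.5.10/32) is a made-up example, so substitute your server’s real CIDR:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-backup-ingress-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Allow the backup server by IP range (CIDR below is a placeholder)
        - ipBlock:
            cidr: 192.168.5.10/32
      ports:
        - protocol: TCP
          port: 3306
```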
Now let’s say we want to allow Egress from the db-pod to the ‘backup’ server on port 80, rather than ‘Ingress’ like before. Let’s implement it as shown in Fig-12 and Fig-13.
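A sketch of the Egress variant, again with a placeholder address for the backup server:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-backup-egress-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        # Placeholder backup-server address; replace with the real CIDR
        - ipBlock:
            cidr: 192.168.5.10/32
      ports:
        - protocol: TCP
          port: 80
```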
Let’s say we want to allow Ingress from all namespaces rather than limiting it to the ‘production’ namespace only. This can be implemented as shown in Fig-14 and Fig-15.
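The trick here is an empty `namespaceSelector`, which matches every namespace in the cluster. A sketch, with the same assumed pod labels as before:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        # An empty namespaceSelector matches ALL namespaces; combined with
        # the podSelector, this admits api-pods from any namespace
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              role: api
      ports:
        - protocol: TCP
          port: 3306
```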
By default, NO policies exist in a namespace; in this situation, all Ingress and Egress traffic is allowed to and from the pods within that namespace.
Let’s first implement a ‘Default Deny-All-Ingress Traffic’ as shown in Fig-16
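This is a standard pattern from the Kubernetes documentation; only the metadata name is my own choice:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all inbound traffic is denied
```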
Similarly, ‘Default Allow-All-Ingress Traffic’ can be implemented as shown in Fig-17
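The allow-all variant adds a single empty ingress rule, which matches traffic from everywhere (again, the name is my own choice):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}
  ingress:
    - {}             # an empty rule matches all sources
  policyTypes:
    - Ingress
```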
‘Default Deny-All-Egress Traffic’ can be implemented as shown in Fig-18
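This is the mirror image of the ingress case: selecting Egress in policyTypes while listing no egress rules denies all outbound traffic:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress         # no egress rules listed, so all outbound traffic is denied
```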
Similarly ‘Default-Allow-All-Egress Traffic’ can be implemented as shown in Fig-19
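And the allow-all-egress counterpart, with a single empty egress rule:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-egress
spec:
  podSelector: {}
  egress:
    - {}             # an empty rule matches all destinations
  policyTypes:
    - Egress
```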
The best way to start security measures for the cluster is by defining a ‘Default-Deny-All-Ingress-AND-Egress Traffic’ policy for the required namespace, as shown in Fig-20.
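This is the standard combined deny-all pattern: listing both directions in policyTypes with no rules blocks everything for every pod in the namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}    # every pod in the namespace
  policyTypes:
    - Ingress        # no rules for either direction: all traffic is denied
    - Egress
```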
After this implementation, network policies can be defined step-by-step to open up Ingress/Egress for different pods. This way, nothing gets unintentionally exposed to the outside world in case of any attack at the surface level.
Let’s try to clear the air for some assumptions regarding ‘Network Policies’
- Connections are always Duplex [2-way] — Look at the illustration shown below:
As shown, once A connects to B, B can send data back to A on the same connection. But it doesn’t mean that B can initiate a new connection to A and send data, unless a policy is defined allowing traffic from B to A.
- Network Policy acts as a Connection Filter, which means it doesn’t terminate already-established connections. If you already have an existing connection between pod-A and pod-B and you implement a new blocking network policy, it will not kill the existing connection; only new connections from then on will adhere to the new blocking policy.
- We discussed Network Policies at length, but do we know whether they are for pods only? The answer is ‘YES’. Network Policies enable control of access only to and between pods, NOT services. Please keep this in mind.
- Network Policies cannot generate any traffic logs, which is a bit inconvenient when we want to verify the working status of a policy.
We have spent good time learning Network Policies and their limitations and caveats, and I think it’s high time to look at the ‘Best Practices’ for the same.
- Always start with a ‘Default-Deny-All’ policy. It helps you start from zero and build up the security measures step-by-step. This approach is highly recommended.
- Always test your network policies before pushing them to production.
To summarise, Network Policies help provide good segmentation but come with limitations too. Administrators and developers who are unaware of these limitations might end up exposing their clusters with minimal or no network isolation. So, be aware of these facts!
We could go further into this topic by showing how to implement the network plugins, but that is best understood by going through the documentation use-case by use-case. Good luck!
That’s all for now about Network Policies in Kubernetes. Thanks for taking the time to read!