Kubernetes Interview Questions

Prepare better for your application developer interview with the top Kubernetes interview questions curated by our experts. These Kubernetes interview questions and answers will help convert your application developer/DevOps engineer interview into a top job offer. The following list covers conceptual questions for freshers and experts alike, and helps you answer questions such as the difference between a config map and a secret, how to ensure a Pod is always running, and how to test a manifest without actually executing it. Get well prepared with these Kubernetes interview questions and answers.


Advanced

When we create a Job spec, we can set the activeDeadlineSeconds field. This field caps the total runtime of the Job: once the Job has been active longer than the threshold, Kubernetes terminates it. In a CronJob, the field goes under jobTemplate.spec, as in the example below.

kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: mycronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    metadata:
      name: google-check-job
    spec:
      activeDeadlineSeconds: 200
      template:
        metadata:
          name: mypod
        spec:
          restartPolicy: OnFailure
          containers:
            - name: mycontainer
              image: alpine
              command: ["/bin/sh"]
              args: ["-c", "ping -w 1 google.com"]

Use the --dry-run flag to test the manifest. This is really useful not only to verify that the YAML syntax is correct for a particular Kubernetes object, but also that the spec has the required key-value pairs. (On newer kubectl versions the flag takes a value, e.g. --dry-run=client.)

kubectl create -f <test.yaml> --dry-run

Let us now look at an example Pod spec that will launch an nginx pod:

○ → cat example_pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  namespace: mynamespace
spec:
  containers:
    - name: my-nginx
      image: nginx
○ → kubectl create -f example_pod.yaml --dry-run
pod/my-nginx created (dry run)

Rollbacks and rolling updates are a feature of the Deployment object in Kubernetes. We roll back to an earlier Deployment revision if the current state of the Deployment is not stable, whether due to the application code or the configuration. Each rollback creates a new revision of the Deployment.

○ → kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           15h
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
kubectl rollout undo deploy <deploymentname>
○ → kubectl rollout undo deploy nginx
deployment.extensions/nginx
○ → kubectl rollout history deploy nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

We can also check the history of the changes with the command below:

kubectl rollout history deploy <deploymentname>
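If you need to return to a specific revision rather than just the previous one, rollout undo accepts a --to-revision flag; for example, to go back to revision 1 of the nginx deployment above:

kubectl rollout undo deploy nginx --to-revision=1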

Helm is a package manager that allows users to package, configure, and deploy applications and services to a Kubernetes cluster.

helm init  # the client creates a deployment in the cluster, and that deployment installs Tiller, the server side of Helm (Helm 2)

The packages we install through the client are called charts: bundles of templatized manifests. All the templating work is done by Tiller.

helm search redis          # searches for a specific application
helm install stable/redis  # installs the application
helm ls                    # lists installed releases
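Note that Helm 3 removed Tiller entirely, so helm init no longer exists there. Assuming Helm 3, the equivalent workflow looks roughly like this (the release name my-redis is arbitrary):

helm repo add stable https://charts.helm.sh/stable  # register the chart repository
helm search repo redis                              # search the added repositories
helm install my-redis stable/redis                  # install under a release name
helm list                                           # list installed releases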

Generally, in Kubernetes, a pod can have many containers. An init container runs to completion before any of the app containers in the pod start. (The old pod.beta.kubernetes.io/init-containers annotation has been removed; init containers are now declared with the initContainers field in the spec.)

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  initContainers:
  - name: init-myservice
    image: busybox
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  containers:
  - name: myapp-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
Node Affinity ensures that pods are scheduled onto particular nodes.

Pod Affinity ensures that two pods are co-located on a single node.

Node Affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/e2e-az-name
            operator: In
            values:
            - e2e-az1

Pod Affinity

apiVersion: v1
kind: Pod
metadata:
  name: with-pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: failure-domain.beta.kubernetes.io/zone

The pod affinity rule says that the pod can be scheduled onto a node only if that node is in the same zone as at least one already-running pod that has the label security=S1 (hence the required topologyKey above).
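The inverse, podAffinity's sibling podAntiAffinity, keeps pods apart instead. As a minimal sketch (the app=web label is an assumption for illustration), this pod spec fragment prevents two matching pods from being scheduled onto the same node:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app      # illustrative label; use your own pod labels
            operator: In
            values:
            - web
        topologyKey: kubernetes.io/hostname  # "apart" is measured per node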

Reference: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

When we take a node down for maintenance, the pods on that node take a hit as well. However, we can avoid this by using the command below:

kubectl drain <nodename>

Running this command marks the node unschedulable for new pods, then evicts the existing pods if the API server supports eviction; otherwise it deletes them.
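In practice, drain usually needs a couple of extra flags; a typical invocation might look like this (node1 is a placeholder, and on older kubectl versions --delete-emptydir-data is spelled --delete-local-data):

kubectl drain node1 --ignore-daemonsets --delete-emptydir-data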

Once the node is back up and running and you want to add it back into rotation, run:

kubectl uncordon <nodename>

Note: If you prefer not to use kubectl drain (such as to avoid calling to an external command, or to get finer control over the pod eviction process), you can also programmatically cause evictions using the eviction API.

More info: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/

Intermediate

Config maps ideally store application configuration in plain text, whereas Secrets store sensitive data such as passwords in a base64-encoded form (and can additionally be encrypted at rest). Both config maps and secrets can be used as volumes and mounted inside a pod through the pod definition file.

Config map:

kubectl create configmap myconfigmap --from-literal=env=dev

Secret:

echo -n 'admin' > ./username.txt
echo -n 'abcd1234' > ./password.txt
kubectl create secret generic mysecret --from-file=./username.txt --from-file=./password.txt
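As a sketch of how both are consumed (the pod and volume names here are illustrative), the config map above can be injected as environment variables and the secret mounted as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx
    envFrom:
    - configMapRef:
        name: myconfigmap   # exposes env=dev as an environment variable
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: mysecret  # username.txt and password.txt appear as files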

When a node is tainted, pods are not scheduled onto it by default. However, if we still have to schedule a pod onto a tainted node, we apply a matching toleration to the pod spec.

Apply a taint to a node:

kubectl taint nodes node1 key=value:NoSchedule

Apply toleration to a pod:

spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
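To remove the taint again later, append a minus sign to the same key and effect:

kubectl taint nodes node1 key=value:NoSchedule-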

The mapping between a PersistentVolume and a PersistentVolumeClaim is always one to one. Even when you delete the claim, the PersistentVolume remains, since persistentVolumeReclaimPolicy is set to Retain, and it will not be reused by any other claim. Below is the spec to create the PersistentVolume.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
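A claim that could bind to this volume might look like the sketch below (the name mypvc is arbitrary; depending on your cluster's default StorageClass you may also need to set storageClassName explicitly so the claim binds to mypv instead of provisioning a new volume):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi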

You should create a ServiceAccount. A service account gets a token, and tokens are stored inside a Secret object. By default Kubernetes automatically mounts the default service account token. However, we can disable this behavior by setting automountServiceAccountToken: false in the spec. Also note that each namespace has its own default service account.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-sa
automountServiceAccountToken: false
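To use this account, reference it from a pod spec (the pod name sa-demo is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: my-sa  # the token for my-sa is used instead of default
  containers:
  - name: app
    image: nginx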

A Pod always ensures that its container keeps running, whereas a Job ensures that its pods run to completion. A Job is for finite tasks.

Examples:

kubectl run mypod1 --image=nginx --restart=Never
kubectl run mypod2 --image=nginx --restart=OnFailure
○ → kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
mypod1   1/1     Running   0          59s
○ → kubectl get job
NAME     DESIRED   SUCCESSFUL   AGE
mypod2   1         0            19s
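For completeness, the same kind of finite task is usually written declaratively; this is the classic pi Job example from the Kubernetes docs:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  backoffLimit: 4          # retry up to 4 times before marking the Job failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]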

By default, a Deployment in Kubernetes uses RollingUpdate as its strategy. Let's take an example that creates a deployment in Kubernetes:

kubectl run nginx --image=nginx # creates a deployment
○ → kubectl get deploy
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            0           7s

Now let’s assume we are going to update the nginx image

kubectl set image deployment nginx nginx=nginx:1.15 # updates the image 

Now when we check the replica sets

kubectl get replicasets # get replica sets
NAME               DESIRED   CURRENT   READY   AGE
nginx-65899c769f   0         0         0       7m
nginx-6c9655f5bb   1         1         1       13s

From the above, we can see that a new replica set was added and the old replica set was then scaled down.

kubectl rollout status deployment nginx   # check the status of a deployment rollout
kubectl rollout history deployment nginx  # check the revisions of a deployment

○ → kubectl rollout history deployment nginx
deployment.extensions/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
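The update behavior itself can be tuned under the Deployment's strategy field. As a sketch, this spec fragment adds at most one extra pod at a time and never takes a ready pod out of service during the update:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one pod above the desired replica count
      maxUnavailable: 0  # never drop below the desired replica count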

We can introduce probes. A liveness probe on the Pod is ideal in this scenario.

A liveness probe periodically checks whether the application in a pod is still healthy; if the check fails, the container is restarted. This is ideal for the many scenarios where the container keeps running but the application inside it has crashed.

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
  • sidecar:

A pod spec which runs the main container and a helper container that does some utility work, but that is not necessarily needed for the main container to work.

  • adapter:

The adapter container inspects the contents of the app's files, restructures and reformats them, and writes the correctly formatted output to a location the main container or an external consumer expects.

  • ambassador:

It connects containers with the outside world. It is a proxy that allows other containers to connect to a port on localhost.
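As a minimal sketch of the sidecar pattern (all names here are illustrative), the main container and a helper share an emptyDir volume so the helper can stream the app's logs:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx   # nginx writes its logs here
  - name: log-tailer              # sidecar: streams the app's logs
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}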

Reference: https://matthewpalmer.net/kubernetes-app-developer/articles/multi-container-pod-design-patterns.html

The main difference between replication controllers and replica sets is the selector support: replication controllers only support equality-based selectors, whereas replica sets also support set-based selectors (matchLabels and matchExpressions). Also note that replication controllers are effectively obsolete in recent versions of Kubernetes, superseded by replica sets managed through Deployments.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3

Reference: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

By declaring pods with a label (or labels), and by giving the service a selector, which acts as the glue that binds the service to those pods.

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80

Let's say we have a set of Pods that carry the label "app=MyApp"; the service will start routing traffic to those pods.
