
Expose Pod Information to Containers Through Files

This page shows how a Pod can use a DownwardAPIVolumeFile to expose information about itself to Containers running in the Pod. A DownwardAPIVolumeFile can expose Pod fields and Container fields.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of the Kubernetes playgrounds, such as Katacoda or Play with Kubernetes.

To check the version, enter kubectl version.

The Downward API

There are two ways to expose Pod and Container fields to a running Container: environment variables, and files in a downwardAPI volume.

Together, these two ways of exposing Pod and Container fields are called the Downward API.
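This page covers the file-based approach. For comparison, here is a minimal sketch of the environment-variable approach; the Pod name, Container name, and variable name are illustrative and are not used in the exercises below:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example        # illustrative name
spec:
  containers:
    - name: example-container           # illustrative name
      image: k8s.gcr.io/busybox
      # Print the injected Pod name, then keep the Container running
      command: ["sh", "-c", "echo $MY_POD_NAME; sleep 3600"]
      env:
        - name: MY_POD_NAME             # illustrative variable name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name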

Store Pod fields

In this exercise, you create a Pod that has one Container. Here is the configuration file for the Pod:

pods/inject/dapi-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    zone: us-est-coast
    cluster: test-cluster1
    rack: rack-22
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args:
      - while true; do
          if [[ -e /etc/podinfo/labels ]]; then
            echo -en '\n\n'; cat /etc/podinfo/labels; fi;
          if [[ -e /etc/podinfo/annotations ]]; then
            echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
          sleep 5;
        done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations

In the configuration file, you can see that the Pod has a downwardAPI Volume, and the Container mounts the Volume at /etc/podinfo.

Look at the items array under downwardAPI. Each element of the array is a DownwardAPIVolumeFile. The first element specifies that the value of the Pod's metadata.labels field should be stored in a file named labels. The second element specifies that the value of the Pod's metadata.annotations field should be stored in a file named annotations.

Note: The fields in this example are Pod fields. They are not fields of the Container in the Pod.

Create the Pod:

kubectl create -f https://k8s.io/examples/pods/inject/dapi-volume.yaml

Verify that the Container in the Pod is running:

kubectl get pods
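The output should show the Pod with a STATUS of Running, similar to the following (the AGE value will differ in your cluster):

NAME                                     READY   STATUS    RESTARTS   AGE
kubernetes-downwardapi-volume-example    1/1     Running   0          10s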

View the Container’s logs:

kubectl logs kubernetes-downwardapi-volume-example

The output shows the contents of the labels file and the annotations file:

cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"

build="two"
builder="john-doe"

Get a shell into the Container that is running in your Pod:

kubectl exec -it kubernetes-downwardapi-volume-example -- sh

In your shell, view the labels file:

/# cat /etc/podinfo/labels

The output shows that all of the Pod’s labels have been written to the labels file:

cluster="test-cluster1"
rack="rack-22"
zone="us-est-coast"

Similarly, view the annotations file:

/# cat /etc/podinfo/annotations
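The output should include the two annotations set in the Pod manifest (the kubelet may add further annotations of its own):

build="two"
builder="john-doe"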

View the files in the /etc/podinfo directory:

/# ls -laR /etc/podinfo

In the output, you can see that the labels and annotations files are in a temporary subdirectory: in this example, ..2982_06_02_21_47_53.299460680. In the /etc/podinfo directory, ..data is a symbolic link to the temporary subdirectory. Also in the /etc/podinfo directory, labels and annotations are symbolic links.

drwxr-xr-x  ... Feb 6 21:47 ..2982_06_02_21_47_53.299460680
lrwxrwxrwx  ... Feb 6 21:47 ..data -> ..2982_06_02_21_47_53.299460680
lrwxrwxrwx  ... Feb 6 21:47 annotations -> ..data/annotations
lrwxrwxrwx  ... Feb 6 21:47 labels -> ..data/labels

/etc/podinfo/..2982_06_02_21_47_53.299460680:
total 8
-rw-r--r--  ... Feb  6 21:47 annotations
-rw-r--r--  ... Feb  6 21:47 labels

Using symbolic links enables dynamic atomic refresh of the metadata; updates are written to a new temporary directory, and the ..data symlink is updated atomically using rename(2).

Note: A container using Downward API as a subPath volume mount will not receive Downward API updates.
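Because of this symlink scheme, changes to the Pod's metadata propagate into the mounted files. For example, from another terminal you can change one of the labels (the value rack-99 is used here only for illustration):

kubectl label pods kubernetes-downwardapi-volume-example rack=rack-99 --overwrite

After the kubelet's next sync period, viewing the file again in your shell should show the updated value:

/# cat /etc/podinfo/labels

cluster="test-cluster1"
rack="rack-99"
zone="us-est-coast"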

Exit the shell:

/# exit
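When you have finished with this exercise, you can delete the Pod:

kubectl delete pod kubernetes-downwardapi-volume-example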

Store Container fields

In the preceding exercise, you stored Pod fields in a DownwardAPIVolumeFile. In this exercise, you store Container fields. Here is the configuration file for a Pod that has one Container:

pods/inject/dapi-volume-resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example-2
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox:1.24
      command: ["sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          if [[ -e /etc/podinfo/cpu_limit ]]; then
            echo -en '\n'; cat /etc/podinfo/cpu_limit; fi;
          if [[ -e /etc/podinfo/cpu_request ]]; then
            echo -en '\n'; cat /etc/podinfo/cpu_request; fi;
          if [[ -e /etc/podinfo/mem_limit ]]; then
            echo -en '\n'; cat /etc/podinfo/mem_limit; fi;
          if [[ -e /etc/podinfo/mem_request ]]; then
            echo -en '\n'; cat /etc/podinfo/mem_request; fi;
          sleep 5;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
          readOnly: false
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
          - path: "cpu_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
          - path: "mem_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi
          - path: "mem_request"
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi

In the configuration file, you can see that the Pod has a downwardAPI Volume, and the Container mounts the Volume at /etc/podinfo.

Look at the items array under downwardAPI. Each element of the array is a DownwardAPIVolumeFile.

The first element specifies that in the Container named client-container, the value of the limits.cpu field should be stored in a file named cpu_limit, expressed in units of the divisor 1m (millicores). For example, a limits.cpu of 250m divided by a divisor of 1m is written as 250. The divisor field is optional; its default value of 1 means cores for cpu and bytes for memory.

Create the Pod:

kubectl create -f https://k8s.io/examples/pods/inject/dapi-volume-resources.yaml

Get a shell into the Container that is running in your Pod:

kubectl exec -it kubernetes-downwardapi-volume-example-2 -- sh

In your shell, view the cpu_limit file:

/# cat /etc/podinfo/cpu_limit

You can use similar commands to view the cpu_request, mem_limit and mem_request files.
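Given the resources section of this Pod's manifest and the divisors above, you should see values along these lines; each file contains a single number (the resource quantity divided by its divisor):

/# cat /etc/podinfo/cpu_limit
250
/# cat /etc/podinfo/cpu_request
125
/# cat /etc/podinfo/mem_limit
64
/# cat /etc/podinfo/mem_request
32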

Capabilities of the Downward API

The following information is available to containers through both environment variables and downwardAPI volumes: the Pod's name, namespace, and UID (metadata.name, metadata.namespace, metadata.uid) via fieldRef, and a Container's CPU and memory limits and requests via resourceFieldRef.

In addition, the Pod's labels (metadata.labels) and annotations (metadata.annotations) are available through the downwardAPI volume fieldRef.

Note: If CPU and memory limits are not specified for a Container, the Downward API defaults to the node allocatable value for CPU and memory.

Project keys to specific paths and file permissions

You can project keys to specific paths and specific permissions on a per-file basis. For more information, see Secrets.
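As a sketch, a downwardAPI volume definition along the following lines projects the labels into a subdirectory of the mount and restricts the permissions of the annotations file; the paths and modes shown are illustrative:

volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644             # default permission bits for projected files
      items:
        - path: "pod/labels"        # written to <mountPath>/pod/labels
          fieldRef:
            fieldPath: metadata.labels
        - path: "annotations"
          mode: 0400                # owner read-only for this file
          fieldRef:
            fieldPath: metadata.annotations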

Motivation for the Downward API

It is sometimes useful for a Container to have information about itself, without being overly coupled to Kubernetes. The Downward API allows containers to consume information about themselves or the cluster without using the Kubernetes client or API server.

An example is an existing application that assumes a particular well-known environment variable holds a unique identifier. One possibility is to wrap the application, but that is tedious and error prone, and it violates the goal of low coupling. A better option would be to use the Pod’s name as an identifier, and inject the Pod’s name into the well-known environment variable.
