A modern All-In-One guide to become proficient with Kubernetes core concepts and pass the Certified Kubernetes Application Developer (CKAD) exam
Hi, I'm Pius Lawal, and this course is part of my Hybrid and Multi-Cloud Developer bootcamp series.
A hybrid and multi-cloud skill is useful on-prem as well as on any cloud platform. You might learn these skills locally, or on a particular cloud platform, yet remain a ninja 🥷 in any cloud environment - Docker is a popular example.
If you like this project, but just don't have time to contribute, that's fine. There are other easy ways to support the project and show your appreciation:
- Star this project
- Tweet about it
- Reference this project in your own work
- Mention this project at local meetups and to your family/friends/colleagues
This bootcamp covers the Certified Kubernetes Application Developer (CKAD) exam curriculum plus more. In summary, you will be learning cloud application development, which is a modern approach to building and running software applications that exploits the flexibility, scalability, and resilience of cloud computing. Some highlights include:
Passing the CKAD exam with confidence should be a simple 4-stage process, all of which is covered in this bootcamp:
- `kubectl` and related CLI tools
- Follow the Labs, that's all!
No prior experience required, and it's okay if you're not yet confident on the command-line!
Each chapter contains several Labs to help you slowly build confidence and proficiency around the concepts covered. There are command snippet blocks provided to help you through the Labs - use them if you're stuck on any Lab and aren't yet confident using `help` on the terminal.
There are Tasks provided at the end of most chapters with content designed to challenge your critical understanding and troubleshooting strategy of the core concepts in that chapter. These Tasks are longer and require more time to solve than standard exam questions, which makes them more difficult. Therefore, you know you are exam-ready if you can complete all 16 Tasks under 2 hours.
Nothing else, this bootcamp is an All-In-One-Guide! Simply working through this bootcamp will make you proficient with Kubernetes as well as prepare you for the CKAD exam!
The Exam Readiness Mode, where you simulate the exam by completing all 16 Tasks under 2 hours, will help you identify your weak areas. Then you simply repeat those chapters/sections, and make sure to review all links to resources from the official Kubernetes documentation, until you are confident.
If you have completed step [1] above, for example, you have completed a CKAD course prior or use Kubernetes day-to-day, etc, and just wish to dive into Exam Readiness Mode, skip to Ch15 - Exam tips.
Hey! CKAD is entry-level Kubernetes and covers the basic features and core components of Kubernetes. This bootcamp covers everything you need from NOOB setup to mastery. Preparing for the CKAD exam is a structured approach to learning Kubernetes. When you finish this bootcamp, you may choose not to pay for and sit the exam, but you will have acquired the ability to pass regardless.
In the CKAD exam, you will have 2 hours to complete 15-20 performance-based questions around the areas below.
GitHub has native TOC support for markdown files with filtering built-in. The TOC Header sticks to the top of the page as you scroll through the document.
A Unix-based environment running docker (Docker Engine or Docker Desktop).
# 1. install xcode tools
sudo xcode-select --install
# 2. install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# 3. install docker
brew install --cask docker
# powershell as administrator
# 1. install wsl2
wsl --install
# 2. install terminal
winget install Microsoft.WindowsTerminal
# 3. install docker
winget install Docker.DockerDesktop
# restart device
After device restart:
Complete Ubuntu user setup - Ubuntu terminal should auto-open
sudo nano /etc/wsl.conf
# /etc/wsl.conf
[boot]
systemd=true
wsl.exe --terminate Ubuntu
Perform Internet connection test in WSL2 by running:
curl google.com
💡 If connection fails with `Could not resolve host`, and you have a VPN program installed, see WSL2 VPN fix below
See wsl-vpnkit documentation for more details.
# powershell as administrator
wget -o wsl-vpnkit.tar.gz https://github.com/sakai135/wsl-vpnkit/releases/latest/download/wsl-vpnkit.tar.gz
wsl --import wsl-vpnkit $env:USERPROFILE\wsl-vpnkit wsl-vpnkit.tar.gz --version 2
# wsl2 ubuntu
wsl.exe -d wsl-vpnkit --cd /app cat /app/wsl-vpnkit.service | sudo tee /etc/systemd/system/wsl-vpnkit.service
sudo systemctl enable wsl-vpnkit
sudo systemctl start wsl-vpnkit
systemctl status wsl-vpnkit # should be Active
# test internet connection again
curl google.com
See Install Docker Engine documentation for more details and other distro steps.
This is also an alternative for Windows users running WSL2.
💡 If using WSL2, be sure to:
- Enable `systemd` - see the Windows users section
- If installed, disable Docker Desktop integration with WSL2
# 1. uninstall old docker versions
sudo apt-get remove docker docker-engine docker.io containerd runc
# 2. setup docker repository
sudo apt-get update
sudo apt-get -y install ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# 3. install docker engine
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
# 4. manage docker as non-root user
sudo groupadd docker
sudo usermod -aG docker $USER
# 5. start a new terminal to update group membership
docker run hello-world
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A container-runtime, which relies on the host kernel, is required to run a container.
Docker is the most popular container-runtime and container-solution, but there are other runtimes like runc, cri-o, containerd, etc. However, the only significant container-solutions today are Docker and Podman.
A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime.
The Open Container Initiative (OCI) creates open industry standards around container formats and runtimes.
A container registry is a repository, or collection of repositories, used to store and access container images. Container registries are a big player in cloud application development, often as part of GitOps processes.
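To make the registry workflow concrete, here is a minimal sketch of how an image typically moves through a registry; `registry.example.com` is a placeholder hostname, not a real registry:

```sh
# pull an image from the default registry (Docker Hub)
docker pull nginx
# re-tag it for a (hypothetical) private registry
docker tag nginx registry.example.com/myteam/nginx:1.0
# push it so other environments/clusters can pull the same image
docker push registry.example.com/myteam/nginx:1.0
```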
# run busybox container, see `docker run --help`
docker run busybox
# run in interactive mode
docker run -it busybox
# run in interactive mode and delete container when stopped
docker run -it --rm busybox
# run in detached mode
docker run -d busybox
# list running containers
docker ps
# list all containers
docker ps -a
# start a stopped container, see `docker container start --help`
docker container start $CONTAINER_NAME_OR_ID
# stop a running container, see `docker container stop --help`
docker container stop $CONTAINER_NAME_OR_ID
# restart a running container, see `docker container restart --help`
docker container restart $CONTAINER_NAME_OR_ID
# delete a stopped container, see `docker container rm --help`
docker container rm $CONTAINER_NAME_OR_ID
# exit running container - container is stopped if connected to entrypoint
exit
# exit running container without stopping it
ctrl-p ctrl-q
💡 See possible container statuses to understand more about container states

- Run `docker info` to confirm docker client and server statuses
- Run `docker run hello-world`
# view kernel details
uname -r # or `cat /proc/version` or `hostnamectl`
# view os version
cat /etc/*-release # or redhat `/etc/redhat-release`, other unix-based os `/etc/os-release`
# view running processes, see `ps --help`
ps aux
# view processes, alternative to `ps`
ls /proc # to find PID, then
cat /proc/$PID/cmdline
- Run `ps aux` to review running processes on your host device
- Run a `busybox` container in interactive mode with `docker run -it busybox`
# host terminal
ps aux
docker run --name box1 -it busybox
# container terminal
ps aux
uname -r
cat /proc/version
hostnamectl # not found
cat /etc/*-release # not found
busybox | head
exit
# host terminal
docker ps
docker ps -a
docker run --name box2 -it busybox
# container terminal
ctrl+p ctrl+q
# host terminal
docker ps
docker ps -a
docker stop box2
docker rm box1 box2
docker ps
💡 A container showing a STATUS of `Exited (0)` means exit OK, but an Exit STATUS that's not 0 should be investigated with `docker logs`
💡 `CTRL+P, CTRL+Q` only works when running a container in interactive mode, see how to attach/detach containers for more details
# run container with specified name
docker run -d --name webserver httpd
# run command `date` in a new container
docker run busybox date
# get a "dash" shell to a running container, see `docker exec --help`
docker exec -it $CONTAINER_NAME_OR_ID sh
# get a "bash" shell to a running container
docker exec -it $CONTAINER_NAME_OR_ID bash
# view open ports, the commands below only work if installed in the container
netstat -tupln # see `netstat --help` - t tcp, u udp, p program-names, l listening, n port-numbers-only
ss -tulpn # see `ss --help`, alternative to netstat
- Run an `nginx` container
- Run another `nginx` container in interactive mode
- Run another `nginx` container in detached mode
# host terminal
docker run --name webserver1 nginx
# host second terminal
docker ps
# host terminal
ctrl+c
docker ps
docker run --name webserver2 -it --rm nginx bash
# container terminal
cat /etc/*-release
ps aux # not found
ls /proc
ls /proc/1 # list processes running on PID 1
cat /proc/1/$PROCESS_NAME
exit
# host terminal
docker run --name webserver3 -d nginx
docker ps
docker exec -it webserver3 bash
# container terminal
netstat -tupln
ss -tulpn
exit
# host terminal
docker ps
docker stop webserver3
docker rm webserver1 webserver2 webserver3
💡 Containers may not always have the `bash` shell, but will usually have the dash shell `sh`

- Run a `busybox` container with command `sleep 30` as argument, see `sleep --help`
- Run a `busybox` container in detached mode with command `sleep 300` as argument
- Run a `busybox` container in detached mode, no commands
# host terminal
docker run --name box1 busybox sleep 30
# host second terminal
docker ps
docker stop box1
# host terminal
docker run --name box2 -d busybox sleep 300
docker ps
docker exec -it box2 sh
# container terminal
exit
# host terminal
docker ps
docker run --name box3 -d busybox
docker ps
docker ps -a
docker stop box2
docker rm box1 box2 box3
The `Entrypoint` of a container is the init process and allows the container to run as an executable. Commands passed to a container are passed to the container's entrypoint process.

💡 Note that `docker` commands after `$IMAGE_NAME` are passed to the container's entrypoint as arguments.
❌ `docker run -it mysql -e MYSQL_PASSWORD=hello` will pass `-e MYSQL_PASSWORD=hello` to the container
✔️ `docker run -it -e MYSQL_PASSWORD=hello mysql`
# run container with port, see `docker run --help`
docker run -d -p 8080:80 httpd # visit localhost:8080
# run container with mounted volume
docker run -d -p 8080:80 -v ~/html:/usr/local/apache2/htdocs httpd
# run container with environment variable
docker run -e MONGO_INITDB_ROOT_USERNAME=admin -e MONGO_INITDB_ROOT_PASSWORD=secret mongo
# inspect container, see `docker container inspect --help | docker inspect --help`
docker inspect $CONTAINER_NAME_OR_ID | less # press Q key to quit from less
docker container inspect $CONTAINER_NAME_OR_ID
# format inspect output to view container network information
docker inspect --format="{{.NetworkSettings.IPAddress}}" $CONTAINER_NAME_OR_ID
# format inspect output to view container status information
docker inspect --format="{{.State.Pid}}" $CONTAINER_NAME_OR_ID
# view container logs, see `docker logs --help`
docker logs $CONTAINER_NAME_OR_ID
# remove all unused data (including dangling images)
docker system prune
# remove all unused data (including unused images, dangling or not, and volumes)
docker system prune --all --volumes
# manage images, see `docker image --help`
docker image ls # or `docker images`
docker image inspect $IMAGE_ID
docker image rm $IMAGE_ID
# see `docker --help` for complete resources
- Run an `nginx` container with name `webserver`
- Inspect the container (pipe the output to `less` to avoid console clutter) and review the `State` and `NetworkSettings` fields, quit with `q`
- Visit `http://$CONTAINER_IP_ADDRESS` in your browser (this may not work depending on your environment network settings)
- Run another `nginx` container with name `webserver` and exposed on port 80
# host terminal
docker run -d --name webserver nginx
docker inspect webserver | grep -A 13 '"State"' | less
docker inspect webserver | grep -A 50 '"NetworkSettings"' | less
curl http://$(docker inspect webserver --format "{{.NetworkSettings.IPAddress}}") | less
docker stop webserver
docker rm webserver
docker run -d --name webserver -p 80:80 nginx
curl localhost | less
docker ps
docker ps -a
docker stop webserver
docker rm webserver
💡 Always run containers in detached mode to avoid getting stuck in the container `STDOUT`

- Create an `html/index.html` file with some content
- Run a container with the `html` folder mounted to the DocumentRoot
  - nginx DocumentRoot - `/usr/share/nginx/html`
  - httpd DocumentRoot - `/usr/local/apache2/htdocs`
# host terminal
cd ~
mkdir html
echo "Welcome to Lab 1.6 Container volumes" >> html/index.html
# with nginx
docker run -d --name webserver -v ~/html:/usr/share/nginx/html -p 8080:80 nginx
# with httpd
# docker run -d --name webserver -v ~/html:/usr/local/apache2/htdocs -p 8080:80 httpd
curl localhost:8080
docker ps
docker ps -a
docker stop webserver
docker rm webserver
- Run a `mysql` container in detached mode
- Clean up all unused data with `docker system prune`
# host terminal
docker run -d --name db mysql
docker exec -it db bash # error not running
docker logs db
docker rm db
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql
docker ps
docker ps -a
docker image ls
docker volume ls
docker stop db
docker ps # no containers running
docker system prune --all --volumes
docker image ls
docker volume ls
💡 You don't always have to run a new container; we have had to do this to apply new configuration. You can restart an existing container (`docker ps -a`), if it meets your needs, with `docker start $CONTAINER`
Explore Docker Hub and search for images you've used so far or images/applications you use day-to-day, like databases, environment tools, etc.
Container images are created with instructions that determine the default container behaviour at runtime. A familiarity with specific images/applications may be required to understand their default behaviours
A docker image consists of layers, and each image layer is its own image. An image layer is a change on an image - every command (FROM, RUN, COPY, etc.) in your Dockerfile (aka Containerfile by OCI) causes a change, thus creating a new layer. It is recommended to keep your image layers to a minimum, e.g. replace multiple `RUN` commands with "command chaining" `apt update && apt upgrade -y`.
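As a rough illustration of command chaining (the package installed here is arbitrary), running update, install and clean as three separate `RUN` commands would create three layers, while the chained form creates only one:

```Dockerfile
# chaining update/install/clean into a single RUN produces one layer
# (running them as three separate RUN commands would produce three layers)
FROM ubuntu
RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean
```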
A name can be assigned to an image by "tagging" the image. This is often used to identify the image version and/or registry.
# to view image layers/history, see `docker image history --help`
docker image history $IMAGE_ID
# tagging images, see `docker tag --help`
docker tag $IMAGE_NAME $NEW_NAME:$TAG # if tag is omitted, `latest` is used
docker tag nginx nginx:1.1
# tags can also be used to add repository location
docker tag nginx domain.com/nginx:1.1
- Inspect the image (pipe the output to `less` to avoid console clutter) and review the `ContainerConfig` and `Config` fields
- Tag the image with `localhost` and a version
# host terminal
docker image ls
# using nginx image
docker image inspect nginx | grep -A 40 ContainerConfig | less
docker image inspect nginx | grep -A 40 '"Config"' | less
docker image history nginx
docker tag nginx localhost/nginx:1.1
docker image ls
docker image history localhost/nginx:1.1 # tagging isn't a change
docker image rm $IMAGE_ID # error conflict
docker image rm localhost/nginx:1.1 # deleting removes tag
Although we can also create an image from a running container using `docker commit`, we will only focus on using a Dockerfile, which is the recommended method.

Build the below Dockerfile with `docker build -t $IMAGE_NAME:$TAG /path/to/Dockerfile/directory`, see `docker build --help`
# Example Dockerfile
FROM ubuntu
MAINTAINER Piouson
RUN apt-get update && \
apt-get install -y nmap iproute2 && \
apt-get clean
ENTRYPOINT ["/usr/bin/nmap"]
CMD ["-sn", "172.17.0.0/16"] # nmap will scan docker network subnet `172.17.0.0/16` for running containers
FROM # specify base image
RUN # execute commands
ENV # specify environment variables used by container
ADD # copy files from the local project directory or a remote URL to the image, can also auto-extract tar archives
COPY # copy files from the local project directory to the image - COPY is recommended over ADD unless ADD's extra features are needed
ADD /path/to/local/file /path/to/container/directory # specify commands in shell form - space separated
ADD ["/path/to/local/file", "/path/to/container/directory"] # specify commands in exec form - as array (recommended)
USER # specify username (or UID) for RUN, CMD and ENTRYPOINT commands
ENTRYPOINT ["command"] # specify default command, `/bin/sh -c` is used if not specified - cannot be overwritten, so CMD is recommended for flexibility
CMD ["arg1", "arg2"] # specfify arguments to the ENTRYPOINT - if ENTRYPOINT is not specified, args will be passed to `/bin/sh -c`
EXPOSE $PORT # specify container should listen on port $PORT
See best practices for writing Dockerfile.
# find a package containing an app (debian-based)
apt-file search --regex <filepath-pattern> # requires `apt-file` installation, see `apt-file --help`
apt-file search --regex ".*/sshd$"
# find a package containing an app, if app already installed (debian-based)
dpkg -S /path/to/file/or/pattern # see `dpkg --help`
dpkg -S */$APP_NAME
# find a package containing an app (rpm-based)
dnf provides /path/to/file/or/pattern
dnf provides */sshd
- Find packages providing the `ps` application and network utilities like `ip`, `ss` and `arp`
- Build an image that runs the `nmap` process as the `ENTRYPOINT` with arguments `-sn 172.17.0.0/16`, tagged with registry `local` and version `1.0`
- Build another version of the image without an `ENTRYPOINT`
# run ubuntu container to find debian-based packages
docker run -it --rm ubuntu
# container terminal
apt update
apt install -y apt-file
apt-file update
apt-file search --regex "bin/ip$"
apt-file search --regex "bin/ss$"
apt-file search --regex "bin/arp$"
# found `iproute2` and `net-tools`
exit
# alternatively, run fedora container to find rpm-based packages
docker run -it --rm fedora
# container terminal
dnf provides *bin/ip
dnf provides *bin/ss
dnf provides *bin/arp
# found `iproute` and `net-tools`
exit
# host terminal
mkdir test
nano test/Dockerfile
# Dockerfile
FROM alpine
RUN apk add --no-cache nmap iproute2 net-tools
ENTRYPOINT ["/usr/bin/nmap"]
CMD ["-sn", "172.17.0.0/16"]
# host terminal
docker build -t local/alpine:1.0 ./test
docker run --name alps1 local/alpine:1.0
docker run --name alps2 -it local/alpine:1.0 sh
docker run --name alps3 -d local/alpine:1.0
docker logs alps3
nano test/Dockerfile
# Dockerfile
FROM alpine
RUN apk add --no-cache nmap iproute2 net-tools
CMD ["/usr/bin/nmap", "-sn", "172.17.0.0/16"]
# host terminal
docker build -t local/alpine:1.1 ./test
docker run --name alps4 local/alpine:1.1
docker run --name alps5 -it local/alpine:1.1 sh
# container terminal
exit
# host terminal
docker run --name alps6 -d local/alpine:1.1
docker logs alps6
docker stop alps3 alps5 alps6
docker rm alps1 alps2 alps3 alps4 alps5 alps6
docker image rm local/alpine:1.0 local/alpine:1.1
In most cases, building an image goes beyond a successful build. Some installed packages require additional steps to run containers successfully
See the official language-specific getting started guides which includes NodeJS, Python, Java and Go examples.
# host terminal
npx express-generator --no-view test-app
cd test-app
yarn
yarn start # visit localhost:3000 if OK, ctrl+c to exit
echo node_modules > .dockerignore
nano Dockerfile
# Dockerfile
FROM node:alpine
ENV NODE_ENV=production
WORKDIR /app
COPY ["package.json", "yarn.lock", "./"]
RUN yarn --frozen-lockfile --prod
COPY . .
CMD ["node", "bin/www"]
EXPOSE 3000
# host terminal
docker build -t local/app:1.0 .
docker run -d --name app -p 8080:3000 local/app:1.0
curl localhost:8080
docker stop app
docker rm app
docker image rm local/app:1.0
cd ..
rm -rf test-app
Before we finally go into Kubernetes, it would be advantageous to have a basic understanding of unix-based systems file permissions and access control.
A user identifier (UID) is a unique number assigned to each user. This is how the system identifies each user. The root user has UID of 0, UID 1-500 are often reserved for system users and UID for new users commonly start at 1000. UIDs are stored in the plain-text /etc/passwd
file: each line represents a user account, and has seven fields delimited by colons account:password:UID:GID:GECOS:directory:shell
.
A group identifier (GID) is similar to UIDs - used by the system to identify groups. A group consists of several users and the root group has GID of 0. GIDs are stored in the plain-text /etc/group
file: each line represents a group, and has four fields delimited by colons group:password:GID:comma-separated-list-of-members
. An example of creating and assigning a group was covered in requirements - docker installation for debian users where we created and assigned the docker
group.
UIDs and GIDs are used to implement Discretionary Access Control (DAC) in unix-based systems by assigning them to files and processes to denote ownership - left at the owner's discretion. This can be seen by running `ls -l` or `ls -ln`: the output has seven fields delimited by spaces `file_permissions number_of_links user group size date_time_created file_or_folder_name`. See unix file permissions for more details.
- Review the output of `ls -l` in detail
# show current user
whoami
# view my UID and GID, and my group memberships
id
# view the local user database on system
cat /etc/passwd
# output - `account:password:UID:GID:GECOS:directory:shell`
root:x:0:0:root:/root:/bin/bash
piouson:x:1000:1000:,,,:/home/dev:/bin/bash
# view the local group database on system
cat /etc/group
# output - `group:password:GID:comma-separated-list-of-member`
root:x:0:
piouson:x:1000:
docker:x:1001:piouson
# list folder contents and their owner (user/group) names
ls -l
# show ownership by ids, output - `permission number_of_links user group size date_time_created file_or_folder_name`
ls -ln
In the context of permission checks, processes running on unix-based systems are traditionally categorised as:
Starting with kernel 2.2, Linux further divides traditional root privileges into distinct units known as capabilities as a way to control root user powers. Each root capability can be independently enabled and disabled.
See the overview of Linux capabilities for more details, including a comprehensive list of capabilities.
💡 `CAP_SYS_ADMIN` is an overloaded capability that grants privileges similar to traditional root privileges
By default, Docker containers are unprivileged and root in a docker container uses restricted capabilities
❌ `docker run --privileged` gives all capabilities to the container, allowing nearly all the same access to the host as processes running on the host
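A quick way to see capabilities in action (network access permitting) is the classic `ping` demo - `busybox` ping needs the `NET_RAW` capability, which Docker grants by default but which can be dropped:

```sh
# the default capability set includes NET_RAW, so ping works
docker run -it --rm busybox ping -c 1 8.8.8.8
# dropping NET_RAW makes the same ping fail with "permission denied"
docker run -it --rm --cap-drop=NET_RAW busybox ping -c 1 8.8.8.8
```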
For practical reasons, most containers run as root by default. However, in a security context, this is bad practice:
We can control the users containers run with by:
- default: when no `USER` command is specified in the Dockerfile, root is assigned
- build-time: specifying a user in the `Dockerfile` with the `USER` command
- runtime: specifying a user with `docker run --user $UID`
# Dockerfile
FROM ubuntu
# create group `piouson`, and create user `piouson` as member of group `piouson`, see `groupadd -h` and `useradd -h`
RUN groupadd piouson && useradd piouson --gid piouson
# specify GID/UID when creating/assigning a group/user
RUN groupadd --gid 1004 piouson && useradd --uid 1004 piouson --gid piouson
# assign user `piouson` for subsequent commands
USER piouson
# create system-group `myapp`, and create system-user `myapp` as member of group `myapp`
RUN groupadd --system myapp && useradd --system --no-log-init myapp --gid myapp
# assign system-user `myapp` for subsequent commands
USER myapp
- Run an `ubuntu` container interactively, and in the container shell: review the current user and group databases, create a file `test-file` and display the file ownership info
- Run an `ubuntu` container interactively with UID 1004, and in the container shell: review the current user
- Build an image based on `ubuntu` with a non-root user as the default user
# host terminal
whoami
id
docker run -it --rm ubuntu
# container terminal
whoami
id
cat /etc/passwd
cat /etc/group
touch test-file
ls -l
ls -ln
exit
# host terminal
docker run -it --rm --user 1004 ubuntu
# container terminal
whoami
id
exit
# test/Dockerfile
FROM ubuntu
RUN groupadd --gid 1000 piouson && useradd --uid 1000 piouson --gid 1000
USER piouson
# host terminal
docker build -t test-image test/
docker run -it --rm test-image
# container terminal
whoami
id
exit
# host terminal
docker image rm test-image
If a containerized application can run without privileges, change to a non-root user
It is recommended to explicitly specify GID/UID when creating a group/user
FROM nginx:1.22-alpine
EXPOSE 80
Using docker and the Dockerfile above, build an image with tag bootcamp/nginx:v1
and tag ckad/nginx:latest
. Once complete, export a tar file of the image to /home/$USER/ckad-tasks/docker/nginx.tar
.
Run a container named web-test
from the image bootcamp/nginx:v1
accessible on port 2000, and another container named web-test2
from image ckad/nginx:latest
accessible on port 2001. Leave both containers running.
What commands would you use to perform the above operations using `podman`? Specify these commands on separate lines in the file `/home/$USER/ckad-tasks/docker/podman-commands`
You can specify multiple tags when building an image `docker build -t tag1 -t tag2 /path/to/dockerfile-directory`
Try to find the command for exporting a docker image with docker image --help
Did you run the containers in detached mode?
You can export a docker image to a tar file with docker image save -o /path/to/output/file $IMAGE_NAME
Did you expose the required ports when creating the containers? You can use docker run -p $HOST_PORT:$CONTAINER_PORT
Did you verify the containers running at exposed ports curl localhost:2000
and curl localhost:2001
?
Docker and Podman have interchangeable commands, therefore, the only change is `docker -> podman`. For example, `docker run -> podman run`, `docker build -> podman build`, etc.
K8s is an open-source system for automating the deployment, scaling and management of containerized applications, currently owned by the Cloud Native Computing Foundation (CNCF).
K8s release cycle is 3 months and deprecated features are supported for a minimum of 2 release cycles (6 months).
You can watch kubernetes in 1 minute for a quick overview
When you've got more time, watch/listen to Kubernetes: The Documentary (PART 1 & PART 2)
A local lab setup is covered in chapter 4 with minikube
Skip this lab if you do not currently have a Google Cloud account with Billing enabled
kubectl get all
Entities in Kubernetes are recorded in the Kubernetes system as Objects, and they represent the state of your cluster. Kubernetes objects can describe:
Some common Kubernetes objects include:
# help
kubectl --help | less
# view available resources
kubectl get all # see `kubectl get --help`
# create a deployment, see `kubectl create deploy -h`
kubectl create deploy myapp --image=nginx
# create a deployment with six replicas
kubectl create deploy myapp --image=nginx --replicas=6
# view complete list of supported API resources, shows api-versions and their resource types
kubectl api-resources
# view api-versions only
kubectl api-versions
# delete a deployment, see `kubectl delete --help`
kubectl delete deploy myapp
This lab is repeated in chapter 4 with minikube
Skip this lab if you do not currently have a Google Cloud account with Billing enabled
- Create an `nginx` application with three replicas
kubectl create deploy webserver --image=nginx --replicas=3
kubectl get all
kubectl delete pod $POD_NAME
kubectl get all # new pod auto created to replace deleted
kubectl api-resources
kubectl delete deploy webserver
kubectl get all
kubectl delete svc kubernetes
kubectl get all # new kubernetes service is auto created to replace deleted
Remember to delete Google cloud cluster to avoid charges if you wish to use a local environment detailed in the next chapter
# check kubernetes version
kubectl version
# list kubernetes context (available kubernetes clusters - docker-desktop, minikube, etc)
kubectl config get-contexts
# switch kubernetes context
kubectl config use-context docker-desktop
See Docker's Deploy on Kubernetes for more details
Note that using Docker Desktop will have network limitations when exposing your applications publicly, see alternative Minikube option below
Minikube is the recommended Kubernetes solution for this course on a local lab environment. See the official minikube installation docs.
# 1. install minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube
rm minikube-darwin-amd64
# 2. start a minikube cluster
minikube start
# 1. install minikube
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x ./minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
rm minikube-linux-amd64
# 2. install minikube prereqs - conntrack
sudo apt install conntrack
sudo sysctl fs.protected_regular=0
# 3. start a minikube cluster with the latest kubernetes version and default docker driver
minikube start
# if [3] doesn't work, e.g. vpn issue, etc, try `--driver=none`
# sudo minikube start --driver=none
# 4. change the owner of the .kube and .minikube directories
sudo chown -R $USER $HOME/.kube $HOME/.minikube
# show current status, see `minikube --help`
minikube status
# open K8s dashboard in local browser
minikube dashboard
# start a minikube cluster with latest k8s version and default driver, see `minikube --help`
minikube start
# start minikube with a specified driver and specified kubernetes version
minikube start --driver=docker --kubernetes-version=1.23.9
# show current IP address
minikube ip
# show current version
minikube version
# connect to minikube cluster
minikube ssh
# list addons
minikube addons list
# enable addons
minikube addons enable $ADDON_NAME
# stop running minikube cluster
minikube stop
# delete stopped minikube cluster
minikube delete
minikube status
- Create a `kubectl` alias in `.bashrc`
printf "
# minikube kubectl
alias kubectl='minikube kubectl --'
" >> ~/.bashrc
exec bash
kubectl version
kubectl get all
kubectl completion --help
echo "source <(kubectl completion bash)" >> ~/.bashrc # macos replace bash with zsh
exec bash
💡 The default `kubectl edit` text editor is `vi`. To change this:
export KUBE_EDITOR="nano" # use nano
export KUBE_EDITOR="vim" # use vim
- Open the Kubernetes dashboard with `minikube dashboard`
- Use "Create from form" to create a Deployment with App name `app`, Container image `nginx`, Number of pods 3, then click Deploy
ctrl+c # to terminate dashboard
kubectl get all
kubectl delete deploy app
kubectl config get-contexts
kubectl config use-context docker-desktop
kubectl config use-context minikube
- Create an `nginx` Pod
kubectl run webserver --image=nginx
kubectl get all
kubectl delete pod webserver
kubectl get all # pod gone
# see `lab3.2 solution` for remaining steps
Pods started without a deployment are called Naked Pods - these are not managed by a ReplicaSet and are therefore not rescheduled on failure, not eligible for rolling updates, cannot be scaled, and cannot be replaced automatically.
Although Naked Pods are not recommended in live environments, they are crucial for learning how to manage Pods, which is a big part of CKAD.
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
# run a pod, see `kubectl run --help`
kubectl run $POD_NAME $IMAGE_NAME
# run a nginx pod with custom args, args are passed to the pod's container's `ENTRYPOINT`
kubectl run mypod --image=nginx -- <arg1> <arg2> ... <argN>
# run a command in an nginx pod
kubectl run mypod --image=nginx --command -- <command>
# run a busybox pod interactively and delete after task completion
kubectl run -it mypod --image=busybox --rm --restart=Never -- date
# to specify the port exposed by the image is 8080
kubectl run mypod --port=8080 --image=image-that-uses-port-8080
# connect a shell to a running pod `mypod`
kubectl exec mypod -it -- sh
# list pods, see `kubectl get --help`
kubectl get pods # using `pod` or `pods` will work
# only show resource names when listing pods
kubectl get pods -o name | less
# display full details of pod in YAML form
kubectl get pods $POD_NAME -o yaml | less
# show details of pod in readable form, see `kubectl describe --help`
kubectl describe pods $POD_NAME | less
# view the pod spec
kubectl explain pod.spec | less
💡 With `kubectl`, everything after the `--` flag is passed to the Pod
💡 `-- <args>` corresponds to the Dockerfile `CMD`, while `--command -- <args>` corresponds to `ENTRYPOINT`
💡 See the answer to "kubectl run --command vs -- arguments" for more details
- Create a Pod with the `nginx:alpine` image and confirm creation

💡 Not all images expose their applications on port 80. Kubernetes doesn't have a native way to check the ports exposed on a running container, however, you can connect a shell to a Pod with `kubectl exec` and try one of `netstat -tulpn` or `ss -tulpn` in the container, if installed, to show open ports.
# host terminal
kubectl run mypod --image=nginx:alpine
kubectl get pods
kubectl describe pods mypod | less
kubectl get pods -o name
kubectl exec -it mypod -- sh
# container terminal
curl localhost # or curl localhost:80, can omit since 80 is the default
netstat -tulpn
ss -tulpn
exit
# host terminal
kubectl delete pods mypod
kubectl explain pod.spec
kubectl api-resources # pods were introduced in v1 - the first version of kubernetes
Example of a Pod manifest file with a `busybox` image and a mounted empty-directory volume.
apiVersion: v1 # api version
kind: Pod # type of resource, pod, deployment, configmap, etc
metadata:
name: box # metadata information, including labels, namespace, etc
spec:
volumes: # create an empty-directory volume
- name: varlog
emptyDir: {}
containers:
- name: box
image: busybox:1.28
volumeMounts: # mount created volume
- name: varlog
mountPath: /var/log
Volumes are covered in more detail in Chapter 10 - Storage. For now it will suffice to know how to create and mount an empty-directory volume
# view description of a Kubernetes Object with `kubectl explain <object>[.field]`, see `kubectl explain --help`
kubectl explain pod
kubectl explain pod.metadata # or `pod.spec`, `pod.status` etc
# include nested fields with `--recursive`
kubectl explain --recursive pod.spec | less
# perform actions on a resource with a YAML file
kubectl {create|apply|replace|delete} -f pod.yaml
# generate YAML file of a specific command with `--dry-run`
kubectl run mynginx --image=nginx -o yaml --dry-run=client > pod.yaml
💡 Object fields are case sensitive, always generate manifest files to avoid typos
💡 `kubectl apply` creates a new resource, or updates an existing one if previously created by `kubectl apply`
Always create single container Pods! However, some special scenarios require a multi-container Pod pattern, e.g. init containers and sidecar containers, both covered in the labs below.
💡 In the official k8s docs, you will often find example code referenced by a relative path, e.g. `pods/commands.yaml`. The file can be downloaded by prepending `https://k8s.io/examples/` to the path, thus: `https://k8s.io/examples/pods/commands.yaml`
# download file `pods/commands.yaml`
wget https://k8s.io/examples/pods/commands.yaml
# save downloaded file with a new name `comm.yaml`
wget https://k8s.io/examples/pods/commands.yaml -O comm.yaml
# hide output while downloading
wget -q https://k8s.io/examples/pods/commands.yaml
# view contents of a downloaded file without saving
wget -O- https://k8s.io/examples/pods/commands.yaml
# view contents quietly without saving
wget -qO- https://k8s.io/examples/pods/commands.yaml
- Create a `busybox` Pod that runs the command `sleep 60`, see the create Pod with command and args docs
kubectl run mypod --image=busybox --dry-run=client -o yaml --command -- sleep 60 > lab5-2.yaml
kubectl apply -f lab5-2.yaml
kubectl get pods
kubectl describe pods mypod | less
kubectl delete -f lab5-2.yaml
💡 Some images, like busybox, do not remain in running state by default. An extra command is required, e.g. `sleep 60`, to keep containers using these images in running state for as long as you need. In the CKAD exam, make sure your Pods remain in running states unless stated otherwise
Note that the main container will only be started after the init container enters STATUS=completed
# view logs of pod `mypod`
kubectl logs mypod
# view logs of specific container `mypod-container-1` in pod `mypod`
kubectl logs mypod -c mypod-container-1
- Create a Pod `myapp` that prints `App is running!` to `STDOUT`, using the `busybox:1.28` image and a `Never` restart policy
- Add an Init Container `myapp-init` that prints `App is initialising...` to `STDOUT` before sleeping
- Review the Pod `STATUS` and the `State` of both containers, then review the final `STATUS`
# partially generate pod manifest
kubectl run myapp --image=busybox:1.28 --restart=Never --dry-run=client -o yaml --command -- sh -c "echo App is running!" > lab5-3.yaml
# edit lab5-3.yaml to add init container spec
apiVersion: v1
kind: Pod
metadata:
labels:
run: myapp
name: myapp
spec:
containers:
- name: myapp
image: busybox:1.28
command: ["sh", "-c", "echo App is running!"]
initContainers:
- name: myapp-init
image: busybox:1.28
command: ["sh", "-c", 'echo "App is initialising..." && sleep 60']
restartPolicy: Never
kubectl apply -f lab5-3.yaml
kubectl get pods
kubectl logs myapp # not created until after 60secs
kubectl logs myapp -c myapp-init
kubectl describe -f lab5-3.yaml | less
kubectl get pods
kubectl delete -f lab5-3.yaml
# lab5-4.yaml
apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- name: myapp-1
image: busybox:1.28
volumeMounts:
- name: logs
mountPath: /var/log
- name: myapp-2
image: busybox:1.28
volumeMounts:
- name: logs
mountPath: /var/log
volumes:
- name: logs
emptyDir: {}
kubectl apply -f lab5-4.yaml
kubectl get pods
kubectl describe pods myapp | less
kubectl logs myapp -c myapp-1
kubectl logs myapp -c myapp-2
kubectl delete -f lab5-4.yaml
Always create single container Pods!
💡 Remember you can prepend `https://k8s.io/examples/` to any example manifest names from the official docs for direct download of the YAML file

- Create a `busybox` Pod that logs `date` to a file every second, with a sidecar container streaming the log file - you may use `https://k8s.io/examples/admin/logging/two-files-counter-pod-streaming-sidecar.yaml` as base
# lab5-5.yaml
apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- name: myapp
image: busybox:1.28
args:
- /bin/sh
- -c
- >
while true;
do
echo $(date) >> /var/log/date.log;
sleep 1;
done
volumeMounts:
- name: logs
mountPath: /var/log
- name: myapp-logs
image: busybox:1.28
args: [/bin/sh, -c, "tail -F /var/log/date.log"]
volumeMounts:
- name: logs
mountPath: /var/log
volumes:
- name: logs
emptyDir: {}
kubectl apply -f lab5-5.yaml
kubectl get pods
kubectl describe pods myapp | less
kubectl logs myapp -c myapp
kubectl logs myapp -c myapp-logs
kubectl delete -f lab5-5.yaml
Namespaces are a way to divide/isolate cluster resources between multiple users. Names of resources need to be unique within a Namespace, but not across namespaces.
Not all Kubernetes resources are in a Namespace and Namespace-based scoping is only applicable for namespaced objects.
Namespaces should be used sensibly, you can read more about understanding the motivation for using namespaces
# create namespace called `myns`, see `kubectl create namespace -h`
kubectl create namespace myns
# run a pod in the `myns` namespace with `-n myns`
kubectl run mypod --image=imageName -n myns
# view pods in the `myns` namespaces
kubectl get pods -n myns
# list pods in all namespaces with `--all-namespaces` or `-A`
kubectl get pods --all-namespaces
# list all resources in all namespaces
kubectl get all --all-namespaces
# view the current namespace in use for commands
kubectl config view --minify | grep namespace:
# set `myns` namespace to be the namespace used for subsequent commands
kubectl config set-context --current --namespace=myns
# view kubernetes api resources in a namespace
kubectl api-resources --namespaced=true
# view kubernetes api resources not in a namespace
kubectl api-resources --namespaced=false
# view the namespace object
kubectl explain namespace | less
# view the namespace object recursively
kubectl explain namespace --recursive | less
You can also follow the admin guide doc for namespaces
💡 Remember you can connect a shell to a Pod with `kubectl exec` and try one of `netstat -tulpn` or `ss -tulpn` in the container, if installed, to show open ports.

- Create a Namespace `myns`
- Create a Pod in the `myns` Namespace
- Confirm the `myns` Namespace is assigned to the Pod
- Review the `NAMESPACED` column of the Kubernetes API resources
kubectl create ns myns --dry-run=client -o yaml > lab5-6.yaml
echo --- >> lab5-6.yaml
kubectl run mypod --image=httpd:alpine -n myns --dry-run=client -o yaml >> lab5-6.yaml
kubectl apply -f lab5-6.yaml
kubectl get pods
kubectl describe -f lab5-6.yaml | less
kubectl delete -f lab5-6.yaml
kubectl api-resources | less
kubectl explain namespace | less
kubectl explain namespace --recursive | less
kubectl explain namespace.spec | less
Remember that namespaced resources are not visible by default unless the namespace is specified
💡 `kubectl get pods` - only shows resources in the `default` namespace
💡 `kubectl get pods -n mynamespace` - shows resources in the `mynamespace` namespace
Imagine a student in the CKAD Bootcamp training reached out to you for assistance to finish their homework. Their task was to create a webserver
with a sidecar container for logging in the cow
namespace. Find this Pod, which could be located in one of the Namespaces ape
, cow
or fox
, and ensure it is configured as required.
At the end of your task, copy the log file used by the logging container to directory /home/$USER/ckad-tasks/pods/
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"v1","items":[{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"fox"}},{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"ape"}},{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"cow"}},{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"run":"box"},"name":"box","namespace":"ape"},"spec":{"containers":[{"args":["sleep","3600"],"image":"busybox","name":"box"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"}},{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"run":"for-testing"},"name":"for-testing","namespace":"fox"},"spec":{"containers":[{"args":["sleep","3600"],"image":"busybox","name":"for-testing"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"}},{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"run":"webserver"},"name":"webserver","namespace":"fox"},"spec":{"containers":[{"name":"server","image":"ngnx:1.20-alpine","volumeMounts":[{"name":"serverlog","mountPath":"/usr/share/nginx/html"}]},{"name":"logger","image":"busybox:1.28","args":["/bin/sh","-c","while true; do echo $(date) >> /usr/share/nginx/html/1.log;\n sleep 30;\ndone\n"],"volumeMounts":[{"name":"serverlog","mountPath":"/usr/share/nginx/html"}]}],"volumes":[{"name":"serverlog","emptyDir":{}}]}}],"metadata":{"resourceVersion":""},"kind":"List"}' | kubectl apply -f - >/dev/null; echo 'lab: environment setup complete!'
kubectl delete ns ape cow fox
Did you search for Pods in specific namespaces, e.g. kubectl get pod -n ape
?
Did you review the Pod error message under the STATUS column of the `kubectl get po` command? You can reveal more information with `kubectl get po -o wide`.
Did you review more details of the Pod, especially details under Containers section of kubectl describe po
command?
Is the webserver
Pod up and running in the cow
Namespace? Remember this is the requirement, so migrate the Pod if not in correct Namespace. No other resources should be migrated.
Did you delete the webserver
Pod in wrong Namespace fox
?
You can use kubectl cp --help
to copy files and directories to and from containers. See kubectl cheatsheet for more details.
In the rat
Namespace (create if required), create a Pod named webapp
that runs nginx:1.22-alpine
image and has env-var NGINX_PORT=3005
which determines the port exposed by the container. The Pod container should be named web
and should mount an emptyDir
volume to /etc/nginx/templates
.
The Pod should have an Init Container named web-init
, running busybox:1.28
image, that creates a file in the same emptyDir
volume, mounted to /tempdir
, with below command:
echo -e "server {\n\tlisten\t\${NGINX_PORT};\n\n\tlocation / {\n\t\troot\t/usr/share/nginx/html;\n\t}\n}" > /tempdir/default.conf.template
Did you create the Pod in Namespace rat
?
Did you set environment variable NGINX_PORT=3005
in container web
? See kubectl run --help
for how to set an environment variable in a container.
Did you set Pod's containerPort
parameter to be same value as env-var NGINX_PORT
? Since the env-var NGINX_PORT
determines the container port, you must set the `containerPort`
parameter to this value. See kubectl run --help
for how to set port exposed by container.
Did you specify an emptyDir
volume and mounted it to /etc/nginx/templates
in Pod container web
? See example pod manifest.
Did you create web-init
as an Init Container under pod.spec.initContainers
? See lab 5.3 - init containers.
Did you run appropriate command in Init Container? You can use list-form, or array-form with single quotes.
# list form
command:
- /bin/sh
- -c
- echo -e "..." > /temp...
# array form with single quotes
command: ["/bin/sh", "-c", "echo -e '...' > /temp..."]
Did you specify an emptyDir
volume, mounted to /tempdir
in Init Container web-init
? See example pod manifest.
Did you confirm that a webpage is being served by container web
on specified port? Connect a shell to the container and run curl localhost:3005
.
Whilst a Pod is running, the kubelet is able to restart containers to handle some faults. Within a Pod, Kubernetes tracks different container states and determines what action to take to make the Pod healthy again.
Kubernetes tracks the phase of a Pod
Kubernetes also tracks the state of containers running in a Pod
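If you only need the raw values rather than the full `kubectl describe` output, a JSONPath query is a handy sketch (the Pod name `mypod` is just an example):

```sh
# print the Pod phase (Pending, Running, Succeeded, Failed, Unknown)
kubectl get pod mypod -o jsonpath='{.status.phase}'
# print the state of each container (Waiting, Running or Terminated)
kubectl get pod mypod -o jsonpath='{.status.containerStatuses[*].state}'
```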
The first step in debugging a Pod is taking a look at it. Check the current state of the Pod and recent events with:
kubectl describe pods $POD_NAME
When running commands locally in a Terminal, you can immediately see the output STDOUT
. However, applications running in a cloud environment have their own way of showing their outputs - for Kubernetes, you can view a Pod STDOUT
with:
kubectl logs $POD_NAME
# to view only events
kubectl get events --field-selector=involvedObject.name=$POD_NAME
💡 A Pod `STATUS=CrashLoopBackOff` means the Pod is in a cool-off period following container failure. The container will be restarted after cool off
💡 You will usually find more clues in the logs when a Pod shows a non-zero `Exit Code`
💡 See the official debug running pods tutorial for more details
💡 `STATES` might continue to change for containers in error due to the default `restartPolicy=Always`
kubectl run mydb --image=mysql --dry-run=client -o yaml > lab6-1.yaml
kubectl apply -f lab6-1.yaml
kubectl get pods
kubectl describe -f lab6-1.yaml | less
kubectl get pods --watch # watch pods for changes
ctrl+c
kubectl delete -f lab6-1.yaml
kubectl run mydb --image=mysql --env="MYSQL_ROOT_PASSWORD=secret" --dry-run=client -o yaml > lab6-1.yaml
kubectl apply -f lab6-1.yaml
kubectl get pods
kubectl describe -f lab6-1.yaml | less
kubectl delete -f lab6-1.yaml
Ephemeral containers are useful for interactive troubleshooting when kubectl exec
is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
# create a `mysql` Pod called `mydb` (assume the pod fails to start)
kubectl run mydb --image=mysql
# add an ephemeral debug container to Pod `mydb`, targeting its `mydb` container
kubectl debug -it mydb --image=busybox:1.28 --target=mydb
💡 The `EphemeralContainers` feature must be enabled in the cluster and the `--target` parameter must be supported by the container runtime
💡 When not supported, the Ephemeral Container may not be started, or started without revealing processes
Port forwarding in Kubernetes should only be used for testing purposes.
# get a list of pods with extra information, including IP Address
kubectl get pods -o wide
# view port forwarding help
kubectl port-forward --help
# forward host port 8080 to container `mypod` port 80, requires `ctrl+c` to terminate
kubectl port-forward mypod 8080:80
When a program runs in a unix-based environment, it starts a process. A foreground process prevents further execution of commands, e.g. `sleep`
# run any foreground command in the background by adding an ampersand &
sleep 60 &
# view running background processes and their ids
jobs
# bring a background process to the foreground
fg $ID
# run the `kubectl port-forward` command in the background
kubectl port-forward mypod 8080:80 &
- Run an `httpd` Pod and access it from the host via its IP address (e.g. with `curl`), then via port forwarding
kubectl run webserver --image=httpd
kubectl get pods -o wide
curl $POD_IP_ADDRESS
kubectl port-forward webserver 5000:80 &
curl localhost:5000
fg 1
ctrl+c
kubectl delete pods webserver
This section requires a basic understanding of unix-based systems file permissions and access control covered in ch2 - container access control
A security context defines privilege and access control settings for a Pod or Container. Security context can be controlled at Pod-level pod.spec.securityContext
as well as at container-level pod.spec.containers.securityContext
. A detailed explanation of security context is provided in the linked docs, however, for CKAD, we will only focus on the following:
- `runAsGroup: $GID` - specifies the GID of the logged-in user in pod containers (pod and container level)
- `runAsNonRoot: $boolean` - specifies whether the containers must run as a non-root user - containers will not start if set to `true` while the image uses root (pod and container level)
- `runAsUser: $UID` - specifies the UID of the logged-in user in pod containers (pod and container level)
- `fsGroup: $GID` - specifies an additional GID used for the filesystem (mounted volumes) in pod containers (pod level)
- `privileged: $boolean` - controls whether containers will run as privileged or unprivileged (container level)
- `allowPrivilegeEscalation: $boolean` - controls whether a process can gain more privileges than its parent process - always `true` when the container runs as privileged or has `CAP_SYS_ADMIN` (container level)
- `readOnlyRootFilesystem: $boolean` - controls whether the container has a read-only root filesystem (container level)
# show pod-level security context options
kubectl explain pod.spec.securityContext | less
# show container-level security context options
kubectl explain pod.spec.containers.securityContext | less
# view pod details for `mypod`
kubectl get pods mypod -o yaml
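Before the lab, here is a minimal sketch showing where pod-level and container-level options sit in a manifest - the values below are arbitrary, not the lab's required values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:            # pod-level - applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    securityContext:          # container-level - overrides/extends pod-level
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
```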
Using the official docs manifest example `pods/security/security-context.yaml` as base:

- Create a Pod manifest with these security context options: logged-in user `UID: 1010, GID: 1020`, filesystem group `GID: 1110`
- Connect a shell to the container and review the logged-in user and the ownership of the mounted volume `/data/demo`
- Create a file `/data/demo/new-file` and confirm file ownership
- Try to gain root privileges with `sudo su`
- Compare the `securityContext` options available at pod-level vs container-level
# host terminal
kubectl explain pod.spec.securityContext | less
kubectl explain pod.spec.containers.securityContext | less
wget -qO lab6-3.yaml https://k8s.io/examples/pods/security/security-context.yaml
nano lab6-3.yaml
# lab6-3.yaml
spec:
securityContext:
runAsUser: 1010
runAsGroup: 1020
fsGroup: 1110
containers:
- name: sec-ctx-demo
securityContext:
allowPrivilegeEscalation: false
# etc
# host terminal
kubectl apply -f lab6-3.yaml
kubectl describe pods security-context-demo | less
kubectl get pods security-context-demo -o yaml | grep -A 4 -E "spec:|securityContext:" | less
kubectl exec -it security-context-demo -- sh
# container terminal
whoami
id # uid=1010 gid=1020 groups=1110
ps
ls -l /data # root 1110
touch /data/demo/new-file
ls -l /data/demo # 1010 1110
sudo su # sudo not found - an attacker might try other ways to gain root privileges
exit
# host terminal
nano lab6-3.yaml
# lab6-3.yaml
spec:
securityContext:
runAsNonRoot: true
fsGroup: 1110
containers:
- name: sec-ctx-demo
securityContext:
allowPrivilegeEscalation: false
# etc
# host terminal
kubectl delete -f lab6-3.yaml
kubectl apply -f lab6-3.yaml
kubectl get pods security-context-demo
kubectl describe pods security-context-demo | less
# found error creating container - avoid conflicting rules, enforcing non-root user `runAsNonRoot: true` requires a non-root user specified `runAsUser: $UID`
A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate - a Completed status. Deleting a Job will clean up the Pods it created. Suspending a Job will delete its active Pods until the Job is resumed again. The default restartPolicy
for Pods is Always, while the default restartPolicy
for Jobs is Never.
A Job type is determined by the values of the completions
and parallelism
fields - you can view all Job fields with kubectl explain job.spec
:
- `completions=1; parallelism=1` - one pod started per job, unless failure
- `completions=1; parallelism=x` - multiple pods started, until one successfully completes the task
- `completions=n; parallelism=x` - multiple pods started, until `n` successful task completions
- `ttlSecondsAfterFinished=x` - automatically delete a job after `x` seconds
# view resource types you can create in kubernetes
kubectl create -h
# create a job `myjob` that runs `date` command, see `kubectl create job -h`
kubectl create job myjob --image=busybox -- date
# generate a job manifest
kubectl create job myjob --image=busybox --dry-run=client -o yaml -- date
# list jobs
kubectl get jobs
# list jobs and pods
kubectl get jobs,pods
# view the manifest of an existing job `myjob`
kubectl get jobs myjob -o yaml
# view details of a job `myjob`
kubectl describe job myjob
# view the job spec
kubectl explain job.spec | less
- Create a Job `myjob1` with a suitable image that runs the command `echo Lab 6.4. Jobs!`
- Review the details and YAML form of Job `myjob1`
- Create a second Job `myjob2` with a suitable image that runs the command `date`
- Generate a manifest for a third Job `myjob3` that runs the command `date` with 5 completions and auto-delete after 30 seconds
kubectl explain job.spec | less
kubectl create job myjob1 --image=busybox -- echo Lab 6.4. Jobs!
kubectl get jobs,pods
kubectl describe job myjob1
kubectl get jobs myjob1 -o yaml
kubectl create job myjob2 --image=busybox -- date
kubectl get jobs,pods
kubectl create job myjob3 --image=busybox --dry-run=client -o yaml -- date >> lab6-4.yaml
kubectl apply -f lab6-4.yaml
kubectl get jobs,pods # so many pods!
kubectl delete jobs myjob1 myjob2 myjob3
kubectl get jobs,pods # pods auto deleted!
nano lab6-4.yaml
# lab6-4.yaml
kind: Job
spec:
  completions: 5
  ttlSecondsAfterFinished: 30
  template:
    spec:
      containers:
      # etc
kubectl apply -f lab6-4.yaml
kubectl get jobs,pods
kubectl get pods --watch # watch pods for 30secs
A CronJob creates Jobs on a repeating schedule. It runs a job periodically on a given schedule, written in Cron format. This isn't very different from the Linux/Unix crontab (cron table).
Note that 1 minute is the lowest you can set a crontab schedule; anything lower will require additional logic or a hack. If you are not familiar with Linux/Unix crontab, have a look at this beginner guide or this beginner tutorial
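A few example schedules as a quick reference (the CronJob name below is arbitrary):

```sh
# fields: minute hour day-of-month month day-of-week
# "*/5 * * * *" - every 5 minutes
# "0 2 * * *"   - every day at 02:00
# "30 14 * * 1" - every Monday at 14:30
kubectl create cronjob nightly --image=busybox --schedule="0 2 * * *" -- date
```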
# cronjob time syntax: * * * * * - minute hour day_of_month month day_of_week
kubectl create cronjob -h
# create a cronjob `cj` that run a job every minute
kubectl create cronjob cj --image=busybox --schedule="* * * * *" -- date
# view the cronjob spec
kubectl explain cronjob.spec | less
# view the job spec of cronjobs
kubectl explain cronjobs.spec.jobTemplate.spec
kubectl api-resources # jobs was introduced in batch/v1
- Create a CronJob that runs the `date` command every minute
kubectl explain cronjob.spec | less
kubectl explain cronjob.spec.jobTemplate.spec | less
kubectl create cronjob mycj --image=busybox --schedule="* * * * *" -- date
kubectl describe cj mycj | less
kubectl get cj mycj -o yaml | less
kubectl get all
kubectl get pods --watch # watch pods for 60s to see changes
kubectl delete cj mycj # deletes associated jobs and pods!
kubectl api-resources # cronjobs was introduced in batch/v1
💡 All CronJob `schedule` times are based on the timezone of the kube-controller-manager
💡 Since a CronJob runs a Job periodically, the Job spec auto-delete feature `ttlSecondsAfterFinished` is quite handy
By default, Linux will not limit resources available to processes - containers are processes running on Linux. However, when creating Pod, you can optionally specify how much of each resource a container needs. The most common resources to specify are CPU and RAM, but there are others.
Request is the initial/minimum amount of a particular resource provided to a container, while Limit is the maximum amount of the resource available - the container cannot exceed this value. See resource management for pods and containers for more details.
💡 A Pod resource request/limit is the sum of the resource requests/limits of the containers in the Pod
💡 A Pod remains in "Pending" status until a Node with sufficient resources becomes available
Note that Requests and Limits management at the Namespace-level is not for CKAD but covered in CKA
- `spec.containers[].resources.limits.cpu` - in cores and millicores, 500m = 0.5 CPU
- `spec.containers[].resources.limits.memory` - Ki (1024) / k (1000) | Mi/M | Gi/G | Ti/T | Pi/P | Ei/E
- `spec.containers[].resources.limits.hugepages-<size>`
- `spec.containers[].resources.requests.cpu`
- `spec.containers[].resources.requests.memory`
- `spec.containers[].resources.requests.hugepages-<size>`
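As a quick illustration, a minimal Pod manifest sketch with requests and limits might look like this (the name, image and values are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:          # minimum guaranteed to the container
        cpu: 200m        # 0.2 CPU
        memory: 128Mi
      limits:            # maximum the container may use
        cpu: 500m
        memory: 256Mi
```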
# view container resources object within the pod spec
kubectl explain pod.spec.containers.resources
# pod resource update is forbidden, but you can generate YAML, see `kubectl set -h`
kubectl set resources pod --help
# generate YAML for pod `mypod` that requests 0.2 CPU and 128Mi memory
kubectl set resources pod mypod --requests=cpu=200m,memory=128Mi --dry-run=client -oyaml|less
# generate YAML for requests 0.2 CPU, 128Mi memory, and limits 0.5 CPU, 256Mi memory
kubectl set resources pod mypod --requests=cpu=200m,memory=128Mi --limits=cpu=500m,memory=256Mi --dry-run=client -oyaml|less
You may use the official container resource example manifest or generate a manifest file with `kubectl set resources`.
1. Create a `dev` Namespace
2. Create a Pod `webapp` in the `dev` Namespace with a suitable restart policy (`pod.spec.restartPolicy`) and two containers, specifying resource requests and limits for each container
3. Review the Pod's resource details with `kubectl describe`
4. Lower the memory limits, re-apply, and review what happens to the Pod
5. Check the host memory (`cat /proc/meminfo` or `free -h`) and CPU (`cat /proc/cpuinfo` or `lscpu`)
6. Raise the resource requests above the host capacity, re-apply, and review what happens to the Pod
)kubectl create ns dev --dry-run=client -o yaml >> lab6-6.yaml
echo --- >> lab6-6.yaml
# add the contents of the example manifest to lab6-6.yaml and modify accordingly
nano lab6-6.yaml
# lab6-6.yaml
kind: Namespace
metadata:
name: dev
# etc
---
kind: Pod
metadata:
name: webapp
namespace: dev
spec:
restartPolicy: OnFailure
containers:
- image: mongo
name: database
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "256Mi"
cpu: 1
- image: nginx
name: frontend
resources: # same as above
# etc
kubectl apply -f lab6-6.yaml
kubectl get pods -n dev
kubectl describe pods webapp -n dev | less
kubectl describe pods webapp -n dev | grep -A 4 -E "Containers:|State:|Limits:|Requests:" | less
nano lab6-6.yaml
# lab6-6.yaml
kind: Pod
spec:
containers:
- resources:
requests:
memory: "4Mi"
cpu: "250m"
limits:
memory: "8Mi"
cpu: 1
# etc - use above resources for both containers
kubectl delete -f lab6-6.yaml
kubectl apply -f lab6-6.yaml
kubectl get pods -n dev --watch # watch for OOMKilled | CrashLoopBackOff
kubectl logs webapp -n dev -c database # not very helpful logs
kubectl logs webapp -n dev -c frontend
kubectl describe pods webapp -n dev | less # helpful - Last State: Terminated, Reason: OOMKilled
kubectl describe pods webapp -n dev | grep -A 4 -E "Containers:|State:|Limits:|Requests:" | less
cat /proc/cpuinfo # check the host CPU
cat /proc/meminfo # check the host RAM
nano lab6-6.yaml
# lab6-6.yaml
kind: Pod
spec:
containers:
- resources:
requests:
memory: "8Gi" # use value from `cat /proc/meminfo`
cpu: 2 # use value from `cat /proc/cpuinfo`
limits:
memory: "16Gi"
cpu: 4
# etc - use above resources for both containers
kubectl delete -f lab6-6.yaml
kubectl apply -f lab6-6.yaml
kubectl get pods -n dev --watch # remains in Pending until enough resources available
kubectl describe pods webapp -n dev
kubectl delete -f lab6-6.yaml
kubectl explain pod.spec.containers.resources | less
Remember a multi-container Pod is not recommended in live environments but only used here for learning purposes
This lab requires a Metrics Server running in your cluster; please run `minikube addons enable metrics-server` to enable Metrics calculation.
# enable metrics-server on minikube
minikube addons enable metrics-server
# list available nodes
kubectl get nodes
# view allocated resources for node and % resource usage for running (non-terminated) pods
kubectl describe node $NODE_NAME
# view nodes resource usage
kubectl top node
# view pods resource usage
kubectl top pod
1. Enable the metrics-server and review your Node's allocated resources with `kubectl describe node`
2. Create a Pod `mypod` with an `nginx:alpine` image
3. Set resource requests and limits for the Pod that fit within the Node's remaining allocatable resources - see `kubectl explain pod.spec` and `kubectl explain pod.spec.containers`
4. Review the Pod's resource configuration, then clean up
minikube addons enable metrics-server
kubectl get node # show node name
kubectl describe node $NODE_NAME | grep -iA10 "allocated resources:" # cpu 0.95, memory 460Mi
kubectl run mypod --image=nginx:alpine --restart=Never --image-pull-policy=IfNotPresent --dry-run=client -oyaml>lab6-7.yml
kubectl apply -f lab6-7.yml # cannot use `kubectl set` if the pod doesn't exist
kubectl set resources pod mypod --requests=cpu=200m,memory=64Mi --limits=cpu=475m,memory=230Mi --dry-run=client -oyaml|less
nano lab6-7.yml # copy resources section of above output to pod yaml
kind: Pod
spec:
containers:
- name: mypod
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 475m
memory: 230Mi
requests:
cpu: 200m
memory: 64Mi
kubectl delete -f lab6-7.yml
kubectl apply -f lab6-7.yml
kubectl describe -f lab6-7.yml | grep -iA6 limits:
kubectl delete -f lab6-7.yml
In the `boa` Namespace, create a Pod that runs the shell command `date`, in a busybox container, once every hour, regardless of success or failure. The Job should terminate after 20s even if the command is still running. Jobs should be automatically deleted after 12 hours. A record of 5 successful Jobs and 5 failed Jobs should be kept. All resources should be named `bootcamp`, including the container. You may create a new Namespace if required.
At the end of your task, to avoid waiting an hour to confirm all works, manually run the Job from the Cronjob and verify expected outcome.
Did you create the Cronjob in the `boa` Namespace? You can generate YAML with Namespace specified, see lab 5.6
You can generate YAML for Cronjob schedule and command, see lab 6.5 - working with cronjobs
See `kubectl explain job.spec` for terminating and auto-deleting Jobs after specified time.
See `kubectl explain cronjob.spec` for keeping successful/failed Jobs.
You can create a Job to manually run a Cronjob, see `kubectl create job --help`
Did you create the Job in the `boa` Namespace?
Did you specify `cronjob.spec.jobTemplate.spec.activeDeadlineSeconds` and `cronjob.spec.jobTemplate.spec.ttlSecondsAfterFinished`?
Did you specify `cronjob.spec.failedJobsHistoryLimit` and `cronjob.spec.successfulJobsHistoryLimit`?
After Cronjob creation, did you verify configured parameters in `kubectl describe`?
After manual Job creation, did you verify Job successfully triggered?
A client requires a Pod running the `nginx:1.21-alpine` image with name `webapp` in the `dog` Namespace. The Pod should start with 0.25 CPU and 128Mi memory, but shouldn't exceed 0.5 CPU and half of the Node's memory. All processes in Pod containers should run with user ID 1002 and group ID 1003. Containers mustn't run in `privileged` mode and privilege escalation should be disabled. You may create a new Namespace if required.
When you are finished with the task, the client would also like to know the Pod with the highest memory consumption in the `default` Namespace. Save the name of the Pod in the format `<namespace>/<pod-name>` to a file `/home/$USER/ckad-tasks/resources/pod-with-highest-memory`
Did you create the resource in the `dog` Namespace? You can generate YAML with Namespace specified, see lab 5.6
You can separately generate YAML for the `pod.spec.containers.resources` section, see lab 6.7 - resource allocation and usage
See lab 6.3 for security context. You will need to add four separate rules for user ID, group ID, privileged and privilege escalation.
You can use a combination of the name output format and sorting, `kubectl get -oname --sort-by=$JSON_PATH_TO_FIELD`. The JSON path can be derived from viewing the resource in JSON output `-ojson`. See the kubectl cheatsheet for more details
Deployments manage Pods with scalability and reliability. This is the standard way to manage Pods and ReplicaSets in live environments.
# create a deployment `myapp` with 1 pod, see `kubectl create deploy --help`
kubectl create deployment myapp --image=nginx
# create a deployment `myapp` with 3 pods
kubectl create deploy myapp --image=nginx --replicas=3
# list existing resources in `default` namespace
kubectl get all
# list existing resources filtered by selector `app=myapp`
kubectl get all --selector="app=myapp" # or `--selector app=myapp`
# show details of deployment `myapp`, see `kubectl describe deploy --help`
kubectl describe deploy myapp
# scale deployment `myapp`, see `kubectl scale deploy --help`
kubectl scale deploy myapp --replicas=4
# edit deployment `myapp` (not all fields are editable), see `kubectl edit deploy --help`
kubectl edit deploy myapp
# edit deployment `myapp` with specified editor
KUBE_EDITOR=nano kubectl edit deploy myapp
# set deployment image for `webserver` container to `nginx:1.8`, see `kubectl set --help` for editable fields
kubectl set image deployment/myapp webserver=nginx:1.8
# set deployment image for all containers to `nginx:1.8`, see `kubectl set image --help`
kubectl set image deployment/myapp *=nginx:1.8
# view the deployment spec
kubectl explain deploy.spec
Deployments can be used to rollout a ReplicaSet which manages the number of Pods. In CKAD you will only work with ReplicaSets via Deployments
Create a Deployment, explore how it manages Pods through a ReplicaSet, then delete the ReplicaSet with `kubectl delete rs $rsName` and monitor the results.
kubectl create deploy myapp --image=httpd --replicas=3
kubectl describe deploy myapp | less
kubectl get all
kubectl delete pod $POD_NAME
kubectl get all
kubectl get pods --watch # watch replicaset create new pod to replace deleted
kubectl run mypod --image=httpd
kubectl get all
kubectl delete pod mypod
kubectl get all # naked pod not recreated
kubectl delete replicaset $REPLICASET_NAME # pods and replicaset deleted
kubectl get all
kubectl get pods --watch # deployment creates new replicaset, and replicaset creates new pods
kubectl delete deploy myapp nginx-deployment
kubectl explain deploy.spec
kubectl api-resources # deployments & replicasets were introduced in apps/v1
# replicasets replaced v1 replicationcontrollers
A deployment creates a ReplicaSet that manages scalability. Do not manage replicasets outside of deployments.
1. Apply the official Deployment manifest example `controllers/nginx-deployment.yaml`
2. Use `kubectl edit` and try to change the `namespace` to dev
3. Review the Deployment's `READY`, `UP-TO-DATE` and `AVAILABLE` columns
4. Review the ReplicaSet's `DESIRED`, `CURRENT` and `READY` columns
5. Review the Pods' `NAME`, `READY` and `STATUS` columns
6. Edit the Deployment's replicas and image, then review the resources again
7. Scale the Deployment with `kubectl scale` and review same in [6]
8. Change the `apiVersion` of the manifest example file to `apps/v0`, apply it, and review the errors
wget -O lab7-2.yaml https://k8s.io/examples/controllers/nginx-deployment.yaml
kubectl apply -f lab7-2.yaml
kubectl get all
kubectl edit -f lab7-2.yaml
kind: Deployment
metadata:
name: nginx-deployment
namespace: dev
# etc (save failed: not all fields are editable - cancel edit)
KUBE_EDITOR=nano kubectl edit -f lab7-2.yaml
kind: Deployment
spec:
replicas: 12
template:
spec:
containers:
- image: nginx:1.3
# etc - save successful
kubectl get all
kubectl describe -f lab7-2.yaml | less
kubectl scale deploy myapp --replicas=3
kubectl get all
kubectl delete -f lab7-2.yaml
nano lab7-2.yaml
apiVersion: apps/v0
kind: Deployment
# etc
kubectl apply -f lab7-2.yaml # recognise errors related to incorrect manifest fields
Labels are used for grouping, filtering and providing metadata. Selectors are used to group related resources. Annotations are used to provide additional metadata but are not used in queries.
When a deployment is created, a default Label `app=$appName` is assigned, and a similar Selector is also created. When a pod is created, a default Label `run=$podName` is assigned.
Labels added after creating a deployment are not inherited by its resources.
# add new label `state: test` to deployment `myapp`, see `kubectl label --help`
kubectl label deployment myapp state=test
# list deployments and their labels, see `kubectl get deploy --help`
kubectl get deployments --show-labels
# list all resources and their labels
kubectl get all --show-labels
# list deployments filtered by specific label
kubectl get deployments --selector="state=test"
# list all resources filtered by specific label
kubectl get all --selector="app=myapp"
# remove the `app` label from deployment `myapp`
kubectl label deploy myapp app-
# remove the `run` label from pod `mypod`
kubectl label pod mypod run-
Create a deployment `myapp` with three replicas using a suitable image, then add a new label `pipeline: test` to the deployment and explore Labels and Selectors as shown below.
kubectl create deploy myapp --image=httpd --replicas=3 --dry-run=client -o yaml >> lab7-3.yaml
kubectl apply -f lab7-3.yaml
kubectl get deploy --show-labels
kubectl label deploy myapp pipeline=test
kubectl get deploy --show-labels
kubectl describe -f lab7-3.yaml
kubectl get -o yaml -f lab7-3.yaml | less
kubectl run mypod --image=nginx --dry-run=client -o yaml | less
kubectl get all --selector="app=myapp"
kubectl get all --selector="pipeline=test"
kubectl label pod $POD_NAME app- # pod becomes naked/dangling and unmanaged by deployment
kubectl get pods --show-labels # new pod created to replace one with label removed
kubectl get pods --selector="app=myapp" # shows 3 pods
kubectl delete -f lab7-3.yaml # $POD_NAME not deleted! `deploy.spec.selector` is how a deployment find pods to manage!
Rolling update is the default update strategy, triggered when a field in the Deployment's Pod template `deployment.spec.template` is changed. A new ReplicaSet is created that creates updated Pods one after the other, and the old ReplicaSet is scaled to 0 after a successful update. At some point during the update, both the old version and the new version of the app will be live. By default, ten old ReplicaSets will be kept, see `deployment.spec.revisionHistoryLimit`
The other type of update strategy is Recreate, where all Pods are killed before new Pods are created. This is useful when you cannot have different versions of an app running simultaneously, e.g. a database.
- `deploy.spec.strategy.rollingUpdate.maxUnavailable`: controls the number of Pods upgraded simultaneously
- `deploy.spec.strategy.rollingUpdate.maxSurge`: controls the number of additional Pods, more than the specified replicas, created during the update. Aim to have a higher `maxSurge` than `maxUnavailable` (a strategy sketch follows the notes below).
A Deployment's rollout is only triggered if a field within the Pod template `deploy.spec.template` is changed
Scaling down a Deployment to 0 is another way to delete all resources, saving costs, while keeping the config for a quick scale up when required
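As a rough sketch, the two strategies look like this in a Deployment manifest (replica and surge values are illustrative assumptions):

```yaml
kind: Deployment
spec:
  replicas: 5
  strategy:
    type: RollingUpdate     # default strategy
    rollingUpdate:
      maxSurge: 2           # up to 2 Pods above `replicas` during an update
      maxUnavailable: 1     # at most 1 Pod below `replicas` during an update
# or, for apps that cannot run two versions side by side:
#  strategy:
#    type: Recreate
```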
# view the update strategy field under deployment spec
kubectl explain deployment.spec.strategy
# view update strategy field recursively
kubectl explain deployment.spec.strategy --recursive
# edit the image of deployment `myapp` by setting directly, see `kubectl set -h`
kubectl set image deployment myapp nginx=nginx:1.24
# edit the environment variable of deployment `myapp` by setting directly
kubectl set env deployment myapp dept=MAN
# show recent update history - entries added when fields under `deploy.spec.template` change, see `kubectl rollout history -h`
kubectl rollout history deployment myapp
# show update events
kubectl describe deployment myapp
# view rolling update options
kubectl get deploy myapp -o yaml
# view all deployments history, see `kubectl rollout -h`
kubectl rollout history deployment
# view `myapp` deployment history
kubectl rollout history deployment myapp
# view specific change revision/log for `myapp` deployment (note this shows fields that affect rollout)
kubectl rollout history deployment myapp --revision=n
# revert `myapp` deployment to previous version/revision, see `kubectl rollout undo -h`
kubectl rollout undo deployment myapp --to-revision=n
Create a deployment `myapp`, then update it to use image `nginx:1.18`, five replicas, an extra label `updates: feature`, and a rolling-update strategy with `maxSurge: 3` and `maxUnavailable: 2`; afterwards, perform image updates and rollbacks as shown below.
kubectl explain deploy.spec.strategy | less
kubectl create deploy myapp --image=nginx --dry-run=client -o yaml > lab7-4.yaml
kubectl apply -f lab7-4.yaml
kubectl describe -f lab7-4.yaml
kubectl get deploy myapp -o yaml | less # for manifest example to use in next step
nano lab7-4.yaml # edit to new parameters
kind: Deployment
metadata:
labels: # labels is `map` not `array` so no `-` like containers
app: myapp
updates: feature
name: myapp
spec:
replicas: 5
strategy:
rollingUpdate:
maxSurge: 3
maxUnavailable: 2
template:
spec:
containers:
- image: nginx:1.18
name: webserver
# etc
kubectl get all --selector="app=myapp"
kubectl get all --selector="updates=feature" # extra deployment label not applied on pods
kubectl rollout history deploy
kubectl set image deploy myapp nginx=n -f lab7-4.yaml
kubectl set image deploy myapp webserver=nginx:1.23
kubectl get all --selector="app=myapp"
kubectl rollout history deploy myapp # 2 revisions
kubectl describe deploy myapp
kubectl rollout history deploy myapp --revision=2
kubectl rollout history deploy myapp --revision=1
kubectl rollout undo deploy myapp --to-revision=1
kubectl get all --selector="app=myapp"
kubectl rollout history deploy myapp # 2 revisions, but revision count incremented
kubectl scale deploy myapp --replicas=0
kubectl rollout history deploy myapp # replicas change does not trigger rollout, only `deploy.spec.template` fields
kubectl get all --selector="app=myapp"
kubectl delete -f lab7-4.yaml
A DaemonSet is a kind of deployment that ensures that all (or some) Nodes run a copy of a particular Pod. This is useful in a multi-node cluster where a specific application is required on all nodes, e.g. running a cluster-storage, log-collection, node-monitoring or network-agent daemon on every node. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
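A minimal DaemonSet manifest sketch is shown below (the name and image are illustrative assumptions) - note there is no `replicas` field since one Pod is scheduled per eligible Node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: agent
        image: busybox
        args: ["sh", "-c", "while true; do sleep 3600; done"]
```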
# create daemonset via yaml file
kubectl create -f daemonset.yaml
# view daemonsets pods
kubectl get ds,pods
# view daemonset in kube system namespace
kubectl get ds,pods -n kube-system
# view the daemonset spec
kubectl explain daemonset.spec | less
# view the daemonset spec recursively
kubectl explain daemonset.spec --recursive | less
DaemonSets can only be created from a YAML manifest file; see the official example manifest `controllers/daemonset.yaml`.
kubectl create deploy myapp --image=nginx --dry-run=client -o yaml | less # view fields required
wget -qO- https://k8s.io/examples/controllers/daemonset.yaml | less # similar to deployment, except Kind and replicas
kubectl apply -f https://k8s.io/examples/controllers/daemonset.yaml
kubectl get all # note daemonset and related pod
kubectl describe -f https://k8s.io/examples/controllers/daemonset.yaml
kubectl delete -f https://k8s.io/examples/controllers/daemonset.yaml
kubectl api-resources # introduced in version apps/v1
kubectl get ds -n=kube-system --show-labels # used to add network agent `kube-proxy` to all cluster nodes
kubectl get all -n=kube-system --selector="k8s-app=kube-proxy"
kubectl explain daemonset.spec | less
kubectl explain daemonset.spec --recursive | less
Autoscaling is very important in live environments but not covered in CKAD. Visit HorizontalPodAutoscaler Walkthrough for a complete lab on autoscaling.
The lab requires a metrics-server so install one via Minikube if you plan to complete the lab
# list minikube addons
minikube addons list
# enable minikube metrics-server
minikube addons enable metrics-server
# disable minikube metrics-server
minikube addons disable metrics-server
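If you do want a quick peek at autoscaling (not required for CKAD), a minimal sketch with a hypothetical `myapp` Deployment could look like this:

```sh
# requires the metrics-server addon above
kubectl autoscale deployment myapp --min=2 --max=5 --cpu-percent=80
kubectl get hpa
kubectl delete hpa myapp
```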
Some bootcamp students have been messing with the `webapp` Deployment for the test environment's webpage in the `default` Namespace, leaving it broken. Please rollback the Deployment to the last fully functional version. Once on the fully functional version, update the Deployment to have a total of 10 Pods, and ensure that the total number of old and new Pods, during a rolling update, does not exceed 13 or go below 7.
Update the Deployment to `nginx:1.22-alpine` to confirm the Pod count stays within these thresholds. Then rollback the Deployment to the fully functional version. Before you leave, set the Replicas to 4, and just to be safe, Annotate all the Pods with `description="Bootcamp Test Env - Please Do Not Change Image!"`.
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"appid":"webapp"},"name":"webapp"},"spec":{"replicas":2,"revisionHistoryLimit":15,"selector":{"matchLabels":{"appid":"webapp"}},"template":{"metadata":{"labels":{"appid":"webapp"}},"spec":{"volumes":[{"name":"varlog","emptyDir":{}}],"containers":[{"image":"nginx:1.12-alpine","name":"nginx","volumeMounts":[{"name":"varlog","mountPath":"/var/logs"}]}]}}}}' > k8s-task-6.yml; kubectl apply -f k8s-task-6.yml >/dev/null; cp k8s-task-6.yml k8s-task-6-bak.yml; sed -i -e 's/nginx:1.12-alpine/nginx:1.13alpine/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/nginx:1.12-alpine/nginx:1.13alpine/g' k8s-task-6.yml 2>/dev/null; kubectl apply -f k8s-task-6.yml >/dev/null; sleep 1; sed -i -e 's/nginx:1.13alpine/nginx:1.14-alpine/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/nginx:1.13alpine/nginx:1.14-alpine/g' k8s-task-6.yml 2>/dev/null; kubectl apply -f k8s-task-6.yml >/dev/null; sleep 4; sed -i -e 's/nginx:1.14-alpine/nginx:1.15-alpine/g' k8s-task-6.yml 2>/dev/null; sed -i -e 's/\/var\/logs/\/usr\/share\/nginx\/html/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/nginx:1.14-alpine/nginx:1.15-alpine/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/\/var\/logs/\/usr\/share\/nginx\/html/g' k8s-task-6.yml 2>/dev/null; kubectl apply -f k8s-task-6.yml >/dev/null; sleep 2; sed -i -e 's/nginx:1.15-alpine/ngnx:1.16-alpine/g' k8s-task-6.yml 2>/dev/null; sed -i -e 's/\/var\/logs/\/usr\/share\/nginx\/html/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/nginx:1.15-alpine/ngnx:1.16-alpine/g' k8s-task-6.yml 2>/dev/null; sed -i '' 's/\/var\/logs/\/usr\/share\/nginx\/html/g' k8s-task-6.yml 2>/dev/null; kubectl apply -f k8s-task-6.yml >/dev/null; sleep 4; kubectl apply -f k8s-task-6-bak.yml >/dev/null; sleep 4; kubectl rollout undo deploy webapp --to-revision=5 >/dev/null; kubectl delete $(kubectl get rs --sort-by=".spec.replicas" -oname | tail -n1) >/dev/null; rm k8s-task-6.yml k8s-task-6-bak.yml; echo 'lab: environment setup complete!'
kubectl delete deploy webapp
ReplicaSets store the Pod configuration used by a Deployment.
You can reveal more resource details with `kubectl get -owide`. You might be able to find defective Pods/ReplicaSets quicker this way.
You will need to review the Deployment's rollout history, see lab 7.4 - rolling updates
You can view more details of a rollout revision with `kubectl rollout history --revision=$REVISION_NUMBER`
Did you test that the Pods are serving an actual webpage? This task isn't complete without testing the webpage - Pods in Running state don't necessarily mean a fully functional version.
You can test a Pod with `kubectl port-forward`, by creating a temporary Pod `kubectl run --rm -it --image=nginx:alpine -- sh` and running `curl $POD_IP`, etc.
Always remember `kubectl explain` when you encounter new requirements. Use this to figure out what rolling update parameters are required.
You can update a Deployment's image quickly with `kubectl set image --help`. You're not required to count Pods during the rolling update; all should be fine as long as you have `maxSurge` and `maxUnavailable` set correctly.
Any change that triggers a rollout (changing anything under `deploy.spec.template`) will create a new ReplicaSet, which becomes visible with `kubectl rollout history`.
Be sure to perform updates one after the other, without batching, as an exam question dictates, especially if the changes trigger a rollout. For example, apply replicas and update strategy changes before applying image changes.
You can set replicas quickly with `kubectl scale --help`.
You can Annotate all 4 Pods in a single command, see `kubectl annotate --help`.
A Service provides access to applications running on a set of Pods. A Deployment creates and destroys Pods dynamically, so you cannot rely on Pod IP. This is where Services come in, to provide access and load balancing to the Pods.
Like Deployments, Services target Pods by selector but exist independently of a Deployment - a Service is not deleted during Deployment deletion and can provide access to Pods in different Deployments.
A NodePort Service is reachable from outside the cluster at `$NodeIP:$NodePort` - useful for testing purposes.
Kubernetes supports two primary modes of finding a Service - environment variables and DNS.
In the env-vars mode, the kubelet adds a set of env-vars (`{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT`) to each Pod for each active Service. Services must be created before Pods to auto-populate the env-vars. You can disable this mode by setting the `pod.spec` field `enableServiceLinks: false`.
The DNS mode is the recommended discovery method. A cluster-aware DNS server, such as CoreDNS, watches the Kubernetes API for new Services and creates a set of DNS records for each one. If DNS has been enabled throughout your cluster, then for a Service called `my-service` in a Kubernetes namespace `my-ns`, Pods in the `my-ns` namespace can find the service by a name lookup for `my-service`, while Pods in other namespaces must qualify the name as `my-service.my-ns`.
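A minimal sketch of both lookups, assuming a Service `my-service` in the `my-ns` Namespace:

```sh
# from a Pod in the same namespace
kubectl run tmp --rm -it --image=busybox -n my-ns -- nslookup my-service
# from a Pod in another namespace - qualify with the service namespace
kubectl run tmp --rm -it --image=busybox -- nslookup my-service.my-ns
# fully qualified form
kubectl run tmp --rm -it --image=busybox -- nslookup my-service.my-ns.svc.cluster.local
```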
Always remember that a Service will only target Pods that have Labels matching the Service's Label Selector
Not all images expose their applications on port 80. When unsure, try one of `netstat -tulpn` or `ss -tulpn` in the container.
# service
kind: Service
metadata:
name: webapp
spec:
selector:
appid: webapp # this must match the label of a pod to be targeted by a Service
ports:
- nodePort: 32500 # node port
port: 80 # service port
targetPort: 8080 # container port - do not assume port 80, always check container
---
# pod targeted
kind: Pod
metadata:
labels:
appid: webapp # matches label selector of service
name: mypod
---
# pod not targeted
kind: Pod
metadata:
labels:
app: webapp # does not match label selector of service
name: mypod
# view the service spec
kubectl explain svc.spec | less
# create a ClusterIP service by exposing a deployment `myapp` on port 80, see `kubectl expose -h`
kubectl expose deploy myapp --port=80
# specify a different service name, the deployment name is used if not specified
kubectl expose deploy myapp --port=80 --name=myappsvc
# specify container port 8000
kubectl expose deploy myapp --port=80 --target-port=8000
# create a NodePort service
kubectl expose deploy myapp --type=NodePort --port=80
# print a pod's service environment variables
kubectl exec $POD_NAME -- printenv | grep SERVICE
# view more details of the service exposing deployment `myapp`
kubectl describe svc myapp
# view the service in YAML format
kubectl get svc myapp -o yaml | less
# edit service
kubectl edit svc myapp
# list all endpoints
kubectl get endpoints
# list pods and their IPs
kubectl get pods -o wide
1. Create a Deployment `webserver` with a suitable image and expose it with a Service
2. List the Service - review the `TYPE`, `CLUSTER-IP`, `EXTERNAL-IP` and `PORT(S)` columns
3. View the Service details - review the `IPs`, `Port`, `TargetPort` and `Endpoints` fields
4. Access the app with `curl $ClusterIP:$Port` (with Docker Desktop, first `minikube ssh` then `curl $ClusterIP:$Port`)
5. Create a temporary `busybox` Pod with a shell connected interactively and perform the following commands:
   - run `cat /etc/resolv.conf` and review the output
   - run `nslookup webserver` (service name) and review the output
6. Create a temporary `nginx:alpine` Pod to query the Service by name:
   - `kubectl run mypod --rm -it --image=nginx:alpine -- sh`
   - `curl $SERVICE_NAME:$PORT`
   - `curl $SERVICE_NAME.$SERVICE_NAMESPACE:$PORT` if the Service and the temporary Pod are in separate Namespaces
# host terminal
kubectl create deploy webserver --image=httpd --dry-run=client -o yaml > lab8-1.yaml
kubectl apply -f lab8-1.yaml
kubectl get all
kubectl get svc,ep,po -o wide # endpoints have <ip_address:port> of pods targeted by service
echo --- >> lab8-1.yaml
kubectl expose deploy webserver --port=80 --dry-run=client -o yaml >> lab8-1.yaml
kubectl apply -f lab8-1.yaml
kubectl get svc,pods
kubectl describe svc webserver | less
kubectl get svc webserver -o yaml | less # missing endpoints IPs
kubectl exec $POD_NAME -- printenv | grep SERVICE # no service env-vars
kubectl scale deploy webserver --replicas=0; kubectl scale deploy webserver --replicas=2
kubectl get pods -o wide # service env-vars applied to pods created after service
kubectl exec $POD_NAME -- printenv | grep SERVICE
kubectl get endpoints,pods -o wide
curl $CLUSTER_IP # docker-desktop connection error, docker-engine success
minikube ssh
# cluster node terminal
curl $CLUSTER_IP # success with both docker-desktop and docker-engine
exit
# host terminal
kubectl run mypod --rm -it --image=busybox
# container terminal
cat /etc/resolv.conf # shows service ip as dns server
nslookup webserver # shows dns search results, read more at https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#namespaces-of-services
exit
# host terminal
kubectl run mypod --rm -it --image=nginx:alpine -- sh
# container terminal
curl webserver # no need to add port cos default is 80
curl webserver.default # this uses the namespace of the service
exit
# host terminal
kubectl delete -f lab8-1.yaml
kubectl explain service | less
kubectl explain service.spec | less
In this lab, we will implement a naive example of a backend-frontend microservices architecture - expose the frontend to external traffic with a `NodePort` Service while keeping the backend hidden with a `ClusterIP` Service.
Note that live environments typically use Ingress (covered in the next chapter) to expose applications to external traffic
1. Create a `backend` app with the following spec:
   - image `httpd` (for simplicity)
   - name `backend`
   - labels `app: backend` and `tier: webapp`
   - selector `app: backend` and `tier: webapp`
2. Expose the `backend` app with a `ClusterIP` Service
3. Confirm you can access the app via `$CLUSTER-IP` or `$SERVICE_NAME`
4. Create an nginx config file `nginx/default.conf` to redirect traffic for the `/` route to the backend service
# nginx/default.conf
upstream backend-server {
server backend; # dns service discovery within the same namespace use service name
}
server {
listen 80;
location / {
proxy_pass http://backend-server;
}
}
5. Create a `frontend` app with the following spec:
   - image `nginx`
   - name `frontend`
   - labels `app: webapp` and `tier: frontend`
   - selector `app: webapp` and `tier: frontend`
   - mount the config file to `/etc/nginx/conf.d/default.conf` (use fullpath `$(pwd)/nginx/default.conf`)
6. Expose the `frontend` app with a `NodePort` Service
7. Confirm you can access the frontend app via `$(minikube ip):NodePort`
kubectl create deploy backend --image=httpd --dry-run=client -o yaml > lab8-2.yaml
echo --- >> lab8-2.yaml
kubectl expose deploy backend --port=80 --dry-run=client -o yaml >> lab8-2.yaml
nano lab8-2.yaml
# backend deployment
kind: Deployment
metadata:
labels:
app: backend
tier: webapp
name: backend
spec:
selector:
matchLabels:
app: backend
tier: webapp
template:
metadata:
labels:
app: backend
tier: webapp
# backend service
kind: Service
metadata:
labels:
app: backend
tier: webapp
name: backend
spec:
selector:
app: backend
tier: webapp
kubectl apply -f lab8-2.yaml
curl $CLUSTER_IP # or run in node terminal `minikube ssh`
mkdir nginx
nano nginx/default.conf # use snippet from step [4]
echo --- >> lab8-2.yaml
kubectl create deploy frontend --image=nginx --dry-run=client -o yaml >> lab8-2.yaml
echo --- >> lab8-2.yaml
kubectl expose deploy frontend --port=80 --dry-run=client -o yaml >> lab8-2.yaml
nano lab8-2.yaml
# frontend deployment
kind: Deployment
metadata:
labels:
app: frontend
tier: webapp
name: frontend
spec:
selector:
matchLabels:
app: frontend
tier: webapp
template:
metadata:
labels:
app: frontend
tier: webapp
spec:
containers:
- image: nginx
volumeMounts:
- mountPath: /etc/nginx/conf.d/default.conf
name: conf-volume
volumes:
- name: conf-volume
hostPath:
path: /full/path/to/nginx/default.conf # `$(pwd)/nginx/default.conf`
# frontend service
kind: Service
metadata:
labels:
app: frontend
tier: webapp
name: frontend
spec:
type: NodePort
selector:
app: frontend
tier: webapp
kubectl apply -f lab8-2.yaml
kubectl get svc,pods
curl $(minikube ip):$NODE_PORT # shows backend httpd page
kubectl delete -f lab8-2.yaml
Create a Pod named `webapp` in the `pig` Namespace (create new if required), running the `nginx:1.20-alpine` image. The Pod should have an Annotation `motd="Welcome to Piouson's CKAD Bootcamp"`. Expose the Pod on port 8080.
Did you create the Pod in the `pig` Namespace? You should create the Namespace if it doesn't exist.
You can set an Annotation when creating a Pod, see `kubectl run --help`
Actually, besides creating the Namespace, you can complete the rest of the task in a single command. Completing this task any other way is nothing but time wasting. Have a deeper look at `kubectl run --help`.
Did you test you are able to access the app via the Service? This task is not complete until you confirm the application is accessible via the Service.
You can test the Service by connecting a shell to a temporary Pod `kubectl run -it --rm --image=nginx:alpine -n $NAMESPACE -- sh` and running `curl $SERVICE_NAME:$PORT`. If you did not create the temporary Pod in the same Namespace, you will need to add the Namespace to the hostname `curl $SERVICE_NAME.$NAMESPACE:$PORT`.
Testing this way, with the Service hostname, is also a way to confirm DNS is working in the cluster.
A bootcamp student is stuck on a simple task and would appreciate your expertise. Their goal is to create a `webapp` Deployment running the `gcr.io/google-samples/node-hello:1.0` image in the `bat` Namespace, exposed on port 80 and NodePort 32500. The student claims everything was set up as explained in class but they are still unable to access the application via the Service. Swoop down like a superhero and save the day by picking up where the student left off.
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"v1","kind":"List","items":[{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"bat"}},{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"appid":"webapp"},"name":"webapp","namespace":"bat"},"spec":{"replicas":2,"selector":{"matchLabels":{"appid":"webapp"}},"template":{"metadata":{"labels":{"appid":"webapp"}},"spec":{"containers":[{"image":"gcr.io/google-samples/node-hello:1.0","name":"nginx"}]}}}},{"apiVersion":"v1","kind":"Service","metadata":{"labels":{"appid":"webapp"},"name":"webapp","namespace":"bat"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"webapp"}}}]}' | kubectl apply -f - >/dev/null; echo 'lab: environment setup complete!'
kubectl delete ns bat
Did you check the relationship between the Service, Endpoint and Pods? When a Service with a Selector is created, an Endpoint with the same name is automatically created. See lab 8.1 - connecting applications with services.
Did you confirm that the Service configuration matches the requirements with `kubectl describe svc`? You should also run some tests, see discovering services and lab 8.1 - connecting applications with services.
If you're still unable to access the app but the Endpoints have correct IP addresses, you might want to check if there is a working application to begin with. See lab 5.1 - creating pods
Now that you have the container port: is the Service configured to use this container port? Is the Pod configured to use this container port? 💡
Remember a Service can specify three types of ports: `port | targetPort | nodePort`. Which is the container port?
For a Service, you can quickly verify the configured container port by reviewing the IP addresses of the Service Endpoint; they should be of the form `$POD_IP:$CONTAINER_PORT`. Once resolved, you should be able to access the application via the Service with `curl`.
For a Pod, you can quickly verify the configured container port by reviewing the ReplicaSet config with `kubectl describe rs`. Once resolved, you should be able to access the application via the Service with `curl`.
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL/TLS, and offer name-based virtual hosting.
💡 Only creating an Ingress resource has no effect! You must have an Ingress controller to satisfy an Ingress. In our local lab, we will use the Minikube Ingress controller
# list existing minikube addons
minikube addons list
# enable ingress on minikube
minikube addons enable ingress
# enable ingress manually
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
# list existing namespaces
kubectl get ns
# list resources in the ingress namespace
kubectl get all -n ingress-nginx
# list ingressclass resource in the ingress namespace
kubectl get ingressclass -n ingress-nginx
# view ingress spec
kubectl explain ingress.spec | less
You can remove the need for a trailing slash `/` in URLs by adding the annotation `nginx.ingress.kubernetes.io/rewrite-target: /` to the Ingress `ingress.metadata.annotations`.
kubectl get ns # not showing ingress-nginx namespace
minikube addons list # ingress not enabled
minikube addons enable ingress
minikube addons list # ingress enabled
kubectl get ns # shows ingress-nginx namespace
kubectl get all,ingressclass -n ingress-nginx # shows pods, services, deployment, replicaset, jobs and ingressclass
kubectl get svc ingress-nginx-controller -o yaml | less
kubectl get ingressclass nginx -o yaml | less # annotations - ingressclass.kubernetes.io/is-default-class: "true"
kubectl explain ingress.spec | less
# create ingress with a specified rule, see `kubectl create ingress -h`
kubectl create ingress $INGRESS_NAME --rule="$PATH=$SERVICE_NAME:$PORT"
# create single-service ingress `myingress`
kubectl create ingress myingress --rule="/=app1:80"
# create simple-fanout ingress
kubectl create ingress myingress --rule="/=app1:80" --rule="/about=app2:3000" --rule="/contact=app3:8080"
# create name-based-virtual-hosting ingress
kubectl create ingress myingress --rule="api.domain.com/*=apiservice:80" --rule="db.domain.com/*=dbservice:80" --rule="app.domain.com/*=appservice:80"
1. Create a Deployment `web` using a `httpd` image
2. Expose the Deployment with a Service `web-svc`
3. Create an Ingress `web-ing` with a Prefix rule to redirect `/` requests to the Service
4. List all created resources - what values do you see for the Ingress `CLASS`, `HOSTS` & `ADDRESS`?
5. Can you explain why `CLASS` and `HOSTS` have such values..
6. Access the `web` app via the Ingress `curl $(minikube ip)`
7. If we want another app served at the `/test` path, will this work? Repeat steps 3-7 to confirm:
   - create a new Deployment `web2` with image `httpd`
   - expose the Deployment with a Service `web2-svc`
   - add a new Ingress rule to redirect `/test` to `web2-svc`
   - can you access the `web2` app via `curl $(minikube ip)/test`?
   - can you still access the `web` app via `curl $(minikube ip)`?
8. Add a rewrite-target annotation to the Ingress with `kubectl edit ingress web-ing`:
   metadata:
     name: web-ing
     annotations:
       nginx.ingress.kubernetes.io/rewrite-target: /
9. Try `curl $(minikube ip)/test` and `curl $(minikube ip)` again
10. Review the Service in the `ingress-nginx` Namespace with `kubectl get svc -n ingress-nginx` - can you access the apps via the NodePorts for HTTP `80` and HTTPS `443`?
?kubectl create deploy web --image=httpd --dry-run=client -oyaml > lab9-2.yml
kubectl apply -f lab9-2.yml
echo --- >> lab9-2.yml
kubectl expose deploy web --name=web-svc --port=80 --dry-run=client -oyaml >> lab9-2.yml
echo --- >> lab9-2.yml
kubectl create ingress web-ing --rule="/*=web-svc:80" --dry-run=client -oyaml >> lab9-2.yml
kubectl apply -f lab9-2.yml
kubectl get deploy,po,svc,ing,ingressclass # CLASS=nginx, HOSTS=*, ADDRESS starts empty then populated later
curl $(minikube ip) # it works
echo --- >> lab9-2.yml
kubectl create deploy web2 --image=httpd --dry-run=client -oyaml >> lab9-2.yml
kubectl apply -f lab9-2.yml
echo --- >> lab9-2.yml
kubectl expose deploy web2 --name=web2-svc --port=80 --dry-run=client -oyaml >> lab9-2.yml
KUBE_EDITOR=nano kubectl edit ingress web-ing
kind: Ingress
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
...
- path: /test
pathType: Prefix
backend:
service:
name: web2-svc
port:
number: 80
# etc
curl $(minikube ip)/test # 404 not found ???
curl $(minikube ip) # it works
KUBE_EDITOR=nano kubectl edit ingress web-ing
kind: Ingress
metadata:
name: web-ing
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
# etc
curl $(minikube ip)/test # it works
curl $(minikube ip) # it works
curl https://$(minikube ip)/test --insecure # it works, see `curl --help`
curl https://$(minikube ip) --insecure # it works
kubectl get svc -n ingress-nginx # NodePort, 80:$HTTP_NODE_PORT/TCP,443:$HTTPS_NODE_PORT/TCP
curl $(minikube ip):$HTTP_NODE_PORT
curl $(minikube ip):$HTTP_NODE_PORT/test
curl https://$(minikube ip):$HTTPS_NODE_PORT --insecure
curl https://$(minikube ip):$HTTPS_NODE_PORT/test --insecure
kubectl delete deploy web web2
kubectl delete svc web-svc web2-svc
kubectl delete ingress web-ing web2-ing
Ingress relies on Annotations to specify additional configuration. The supported Annotations depend on the Ingress controller type in use - in this case Ingress-Nginx. Please visit the Ingress-Nginx official Rewrite documentation for more details.
1. Create an Ingress `webapp-ingress` that:
   - redirects requests for `myawesomesite.com/` to a Service `webappsvc:80`
   - redirects requests for `myawesomesite.com/hello` to a Service `hellosvc:8080`
2. Review the Ingress - compare its `HOSTS` value to the previous lab
3. Create a Deployment `webapp` with image `httpd`
4. Expose the `webapp` Deployment as NodePort with service name `webappsvc`
5. Review the Ingress again - can you access `webapp` via the minikube Node `curl $(minikube ip)` or `curl myawesomesite.com`?
6. Create a second Deployment `hello` with image `gcr.io/google-samples/hello-app:1.0`
7. Expose `hello` as NodePort with service name `hellosvc`
8. Can you access `hello` via `curl $(minikube ip)/hello` or `myawesomesite.com/hello`?
9. Add an entry to `/etc/hosts` that maps the minikube Node IP to an hostname `$(minikube ip) myawesomesite.com`
10. Can you access `webapp` via `curl $(minikube ip)` or `myawesomesite.com` with HTTP and HTTPS?
11. Can you access `hello` via `curl $(minikube ip)/hello` or `myawesomesite.com/hello` with HTTP and HTTPS?
12. Can you access `webapp` and `hello` on `myawesomesite.com` via the NodePorts specified by the `ingress-nginx-controller`, `webappsvc` and `hellosvc` Services?
Services?kubectl create ingress webapp-ingress --rule="myawesomesite.com/*=webappsvc:80" --rule="myawesomesite.com/hello/*=hellosvc:8080" --dry-run=client -oyaml > lab9-3.yaml
echo --- >> lab9-3.yaml
kubectl apply -f lab9-3.yaml
kubectl get ingress
kubectl describe ingress webapp-ingress | less # endpoints not found
kubectl get ingress webapp-ingress -oyaml | less
kubectl create deploy webapp --image=httpd --dry-run=client -oyaml >> lab9-3.yaml
echo --- >> lab9-3.yaml
kubectl apply -f lab9-3.yaml
kubectl expose deploy webapp --name=webappsvc --type=NodePort --port=80 --dry-run=client -o yaml >> lab9-3.yaml
echo --- >> lab9-3.yaml
kubectl apply -f lab9-3.yaml
kubectl get ingress,all
kubectl describe ingress webapp-ingress | less # only webappsvc endpoint found
curl $(minikube ip) # 404 not found
curl myawesomesite.com # 404 not found
kubectl create deploy hello --image=gcr.io/google-samples/hello-app:1.0 --dry-run=client -o yaml >> lab9-3.yaml
echo --- >> lab9-3.yaml
kubectl apply -f lab9-3.yaml
kubectl expose deploy hello --name=hellosvc --type=NodePort --port=8080 --dry-run=client -o yaml >> lab9-3.yaml
echo --- >> lab9-3.yaml
kubectl apply -f lab9-3.yaml
kubectl get all --selector="app=hello"
kubectl describe ingress webapp-ingress | less # both endpoints found
curl $(minikube ip)/hello # 404 not found
curl myawesomesite.com/hello # 404 not found
echo "$(minikube ip) myawesomesite.com" | sudo tee -a /etc/hosts # see `tee --help`
curl $(minikube ip) # 404 not found
curl $(minikube ip)/hello # 404 not found
curl myawesomesite.com # it works
curl myawesomesite.com/hello # hello world
curl https://myawesomesite.com --insecure # it works
curl https://myawesomesite.com/hello --insecure # hello world
kubectl get svc -A # find NodePorts for ingress-nginx-controller, webappsvc and hellosvc
curl myawesomesite.com:$NODE_PORT_FOR_WEBAPPSVC # it works
curl myawesomesite.com:$NODE_PORT_FOR_HELLOSVC # hello world
curl myawesomesite.com:$HTTP_NODE_PORT_FOR_NGINX_CONTROLLER # it works
curl myawesomesite.com:$HTTP_NODE_PORT_FOR_NGINX_CONTROLLER/hello # hello world
curl https://myawesomesite.com:$HTTPS_NODE_PORT_FOR_NGINX_CONTROLLER --insecure
curl https://myawesomesite.com:$HTTPS_NODE_PORT_FOR_NGINX_CONTROLLER/hello --insecure
kubectl delete -f lab9-3.yaml
This is similar to defining API routes on a backend application, except each defined route points to a separate application/service/deployment.
Each HTTP rule path points to a backend defined by a `service.name` and a `service.port.name` or `service.port.number`.
A `spec.defaultBackend` can be defined on the Ingress or Ingress controller for traffic that doesn't match any known paths, similar to a 404 route - if `defaultBackend` is not set, the default 404 behaviour will depend on the type of Ingress controller in use.
Each rule-path in an Ingress must have a `pathType`. Paths without a `pathType` will fail validation.
There are three supported path types (see the sketch after this list):
- `ImplementationSpecific` - matching is up to the `IngressClass`
- `Exact` - case sensitive matching of exact URL path
- `Prefix` - case sensitive matching of URL path prefix, split into elements by `/`, on an element by element basis
Please read the official docs on path matching examples and using wildcards
Ingresses can be implemented by different controllers, often with different configuration. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
Depending on your ingress controller, you may be able to use parameters that you set cluster-wide, or just for one namespace.
- cluster-wide parameters: set the `ingressclass.spec.parameters` field without setting `ingressclass.spec.parameters.scope`, or set `ingressclass.spec.parameters.scope: Cluster`
- namespace-scoped parameters: set the `ingressclass.spec.parameters` field and set `ingressclass.spec.parameters.scope: Namespace`
A particular IngressClass can be configured as default for a cluster by setting the `ingressclass.kubernetes.io/is-default-class` annotation to `true`
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
annotations:
ingressclass.kubernetes.io/is-default-class: "true"
# etc, see https://k8s.io/examples/service/networking/external-lb.yaml
# list existing namespaces
kubectl get ns
# list ingressclasses in the ingress namespace
kubectl get ingressclass -n ingress-nginx
# list ingressclasses in the default namespace - present in all namespaces
kubectl get ingressclass
# view ingressclass object
kubectl explain ingressclass | less
1. Create two Deployments `nginx` and `httpd` using the respective images
2. Expose both Deployments with `Cluster-IP` Services on port 80
3. Create an Ingress that:
   - redirects `nginx.yourchosenhostname.com` to the `nginx` Service
   - redirects `httpd.yourchosenhostname.com` to the `httpd` Service
   - uses the `Prefix` path type for both rules
4. Add entries to `/etc/hosts` that map the minikube Node IP to the hostnames below:
   - `$(minikube ip) nginx.yourchosenhostname.com`
   - `$(minikube ip) httpd.yourchosenhostname.com`
5. Confirm you can access both apps via their hostnames
kubectl explain ingressclass | less
kubectl explain ingressclass --recursive | less
kubectl create deploy nginx --image=nginx --dry-run=client -o yaml > lab9-4.yaml
echo --- >> lab9-4.yaml
kubectl expose deploy nginx --port=80 --dry-run=client -o yaml >> lab9-4.yaml
echo --- >> lab9-4.yaml
kubectl create deploy httpd --image=httpd --dry-run=client -o yaml >> lab9-4.yaml
echo --- >> lab9-4.yaml
kubectl expose deploy httpd --port=80 --dry-run=client -o yaml >> lab9-4.yaml
echo --- >> lab9-4.yaml
kubectl create ingress myingress --rule="nginx.yourchosenhostname.com/*=nginx:80" --rule="httpd.yourchosenhostname.com/*=httpd:80" --dry-run=client -o yaml >> lab9-4.yaml
echo --- >> lab9-4.yaml
kubectl apply -f lab9-4.yaml
kubectl get ingress,all
kubectl get ingress myingress -o yaml | less # `pathType: Prefix` and `ingressClassName: nginx`
kubectl get ingressclass nginx -o yaml | less # annotation `ingressclass.kubernetes.io/is-default-class: "true"` makes this class the default
echo "
$(minikube ip) nginx.yourchosenhostname.com
$(minikube ip) httpd.yourchosenhostname.com
" | sudo tee -a /etc/hosts
curl nginx.yourchosenhostname.com
curl httpd.yourchosenhostname.com
kubectl delete -f lab9-4.yaml
# note that when specifying ingress path, `/*` creates a `Prefix` path type and `/` creates an `Exact` path type
There are two kinds of Pod isolation: isolation for egress (outbound), and isolation for ingress (inbound). By default, all ingress and egress traffic is allowed to and from pods in a namespace, until you have a NetworkPolicy in that namespace.
Network policies are implemented by a network plugin. A NetworkPolicy will have no effect if a network plugin that supports NetworkPolicy is not installed in the cluster.
There are three different identifiers that control the entities a Pod can communicate with:
- `podSelector`: selects pods within the NetworkPolicy namespace allowed for ingress/egress using selector matching (note: a pod cannot block itself)
- `namespaceSelector`: selects all pods in specific namespaces allowed for ingress/egress using selector matching
- `ipBlock`: selects IP CIDR ranges (cluster-external IPs) allowed for ingress/egress (note: node traffic is always allowed - not for CKAD)
minikube stop
minikube delete
# start minikube with calico plugin
minikube start --kubernetes-version=1.23.9 --cni=calico
# verify calico plugin running, allow enough time (+5mins) for all pods to enter `running` status
kubectl get pods -n kube-system --watch
# create network policy
kubectl apply -f /path/to/networkpolicy/manifest/file
# list network policies
kubectl get networkpolicy
# view more details of network policies `mynetpol`
kubectl describe networkpolicy mynetpol
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: test-netpol
# create default deny all ingress/egress traffic
spec:
podSelector: {}
policyTypes:
- Ingress # or Egress
# create allow all ingress/egress traffic
spec:
podSelector: {}
ingress: # or egress
- {}
You may follow the official declare network policy walkthrough
⚠ A Network Policy will have no effect without a network provider with network policy support (e.g. Calico)
⚠ Minikube Calico plugin might conflict with future labs, so remember to disable Calico after this lab
ℹ You can prepend `https://k8s.io/examples/` to example filepaths from the official docs to use the file locally
1. Create a Deployment `webapp` using image `httpd` and expose it on port 80
2. Create a temporary Pod and confirm you can access the app with `wget --spider --timeout=1 webapp`
3. Create a NetworkPolicy that only allows Pods with label `tier=frontend` to have access - see the official manifest example `service/networking/nginx-policy.yaml`
4. Create a temporary Pod and confirm you can no longer access the app with `wget --spider --timeout=1 webapp`
5. Create a temporary Pod with label `tier=frontend` and connect an interactive shell, then confirm you can access the app with `wget --spider --timeout=1 webapp`
# host terminal
minikube stop
minikube delete
minikube start --kubernetes-version=1.23.9 --driver=docker --cni=calico
kubectl get pods -n kube-system --watch # allow enough time, under 5mins if lucky, more than 10mins if you have bad karma 😼
kubectl create deploy webapp --image=httpd --dry-run=client -o yaml > lab9-5.yaml
kubectl apply -f lab9-5.yaml
echo --- >> lab9-5.yaml
kubectl expose deploy webapp --port=80 --dry-run=client -o yaml >> lab9-5.yaml
kubectl apply -f lab9-5.yaml
kubectl get svc,pod
kubectl get pod --watch # wait if pod not in running status
kubectl run mypod --rm -it --image=busybox
# container terminal
wget --spider --timeout=1 webapp # remote file exists
exit
# host terminal
echo --- >> lab9-5.yaml
wget -qO- https://k8s.io/examples/service/networking/nginx-policy.yaml >> lab9-5.yaml
nano lab9-5.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: mynetpol
spec:
podSelector:
matchLabels:
app: webapp
ingress:
- from:
- podSelector:
matchLabels:
tier: frontend
kubectl apply -f lab9-5.yaml
kubectl describe networkpolicy mynetpol | less
kubectl run mypod --rm -it --image=busybox
# container terminal
wget --spider --timeout=1 webapp # wget: download timed out
exit
# host terminal
kubectl run mypod --rm -it --image=busybox --labels="tier=frontend"
# container terminal
wget --spider --timeout=1 webapp # remote file exists
exit
# host terminal
kubectl delete -f lab9-5.yaml
minikube stop
minikube delete
minikube start --kubernetes-version=1.23.9 --driver=docker
The application is meant to be accessible at `ckad-bootcamp.local`. Please debug and resolve the issue without creating any new resource.
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"v1","kind":"List","items":[{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"bat"}},{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"appid":"webapp"},"name":"webapp","namespace":"bat"},"spec":{"replicas":2,"selector":{"matchLabels":{"appid":"webapp"}},"template":{"metadata":{"labels":{"appid":"webapp"}},"spec":{"containers":[{"image":"gcr.io/google-samples/node-hello:1.0","name":"nginx"}]}}}},{"apiVersion":"v1","kind":"Service","metadata":{"labels":{"appid":"webapp"},"name":"webapp","namespace":"bat"},"spec":{"ports":[{"port":80,"protocol":"TCP","targetPort":80}],"selector":{"app":"webapp"}}},{"kind":"Ingress","apiVersion":"networking.k8s.io/v1","metadata":{"name":"webapp","namespace":"bat"},"spec":{"ingressClassName":"ngnx","rules":[{"http":{"paths":[{"path":"/","pathType":"Prefix","backend":{"service":{"name":"webapp","port":{"number":80}}}}]}}]}}]}' | kubectl apply -f - >/dev/null; echo 'lab: environment setup complete!'
kubectl delete ns bat
Given several Pods in Namespaces `pup` and `cat`, create network policies as follows:
- Pods in the same Namespace can communicate together
- the `webapp` Pod in the `pup` Namespace can communicate with the `microservice` Pod in the `cat` Namespace
- DNS resolution on UDP/TCP port 53 is allowed for all Pods in all Namespaces
Command to setup environment:
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"v1","kind":"List","items":[{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"pup"}},{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"cat"}},{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"server":"frontend"},"name":"webapp","namespace":"pup"},"spec":{"containers":[{"image":"nginx:1.22-alpine","name":"nginx"}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"}},{"apiVersion":"v1","kind":"Pod","metadata":{"labels":{"server":"backend"},"name":"microservice","namespace":"cat"},"spec":{"containers":[{"image":"node:16-alpine","name":"nodejs","args":["sleep","7200"]}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"}}]}' | kubectl apply -f - >/dev/null; echo 'lab: environment setup complete!'
Command to destroy environment:
kubectl delete ns cat pup
PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes, with a lifecycle independent of any individual Pod that uses the PV.
PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Claims can request specific size and access modes (ReadWriteOnce, ReadOnlyMany, ReadWriteMany, or ReadWriteOncePod).
A PVC remains in `STATUS=Pending` until it finds and connects to a matching PV, and thus moves to `STATUS=Bound`
| PV attributes | PVC attributes |
|---|---|
| capacity | resources |
| volume modes | volume modes |
| access modes | access modes |
| storageClassName | storageClassName |
| mount options | selector |
| reclaim policy | |
| node affinity | |
| phase | |
A StorageClass provides a way for administrators to describe the "classes" of storage they offer. It enables automatic PV provisioning to meet PVC requests, thus removing the need to manually create PVs. StorageClass must have a specified provisioner that determines what volume plugin is used for provisioning PVs.
- a PV of a particular `storageClassName` can only be bound to PVCs that request that `storageClassName`
- a PV with the `storageClassName` attribute not set is interpreted as a PV with no class, and can only be bound to PVCs that request a PV with no class
- a PVC with `storageClassName=""` (empty string) is interpreted as a PVC requesting a PV with no class
- a PVC with the `storageClassName` attribute not set is not quite the same and behaves differently depending on whether the `DefaultStorageClass` admission plugin is enabled:
  - with the plugin enabled and a default StorageClass present, a PVC with no `storageClassName` can be bound to PVs of that default
  - without a default StorageClass, a PVC with no `storageClassName` can only be bound to PVs with no class (a minimal StorageClass sketch follows the note below)
can only be bound to PVs with no classIf a PVC doesn't find a PV with matching access modes and storage, StorageClass may dynamically create a matching PV
`hostPath` volumes are created on the host; in minikube use the `minikube ssh` command to access the host (requires starting the cluster with `--driver=docker`)
# list PVCs, PVs
kubectl get {pvc|pv|storageclass}
# view more details of a PVC
kubectl describe {pvc|pv|storageclass} $NAME
1. Create a PV from the official `pods/storage/pv-volume.yaml` manifest file as base
2. Create a PVC from the official `pods/storage/pv-claim.yaml` manifest file as base
3. What `STATUS` and `VOLUME` does the PVC have?
4. Review the PVC `StorageClass`
5. Repeat the lab after removing `storageClassName` from both YAML files
from both YAML fileswget -q https://k8s.io/examples/pods/storage/pv-volume.yaml
wget -q https://k8s.io/examples/pods/storage/pv-claim.yaml
nano pv-volume.yaml
nano pv-claim.yaml
# pv-volume.yaml
kind: PersistentVolume
spec:
storageClassName: manual
capacity:
storage: 3Gi
# pv-claim.yaml
kind: PersistentVolumeClaim
spec:
storageClassName: manual
resources:
requests:
storage: 1Gi
# etc
kubectl get pv,pvc # STATUS=Bound, task-pv-volume uses task-pv-claim
# when `storageClassName` is not specified, the StorageClass creates a new PV for the PVC
The benefit of configuring Pods with PVCs is to decouple site-specific details.
You can follow the official configure a Pod to use a PersistentVolume for storage docs to complete this lab.
1. Create a `/mnt/data/index.html` file on the cluster host (`minikube ssh`) with some message, e.g. "Hello, World!"
2. Create a PV from the official `https://k8s.io/examples/pods/storage/pv-volume.yaml` manifest as base, backed by `hostPath` storage
3. Create a Pod from the official `https://k8s.io/examples/pods/storage/pv-pod.yaml` manifest as base, with its PVC from `https://k8s.io/examples/pods/storage/pv-claim.yaml` - note the image is `httpd` and the default documentroot is `/usr/local/apache2/htdocs` or `/var/www/html`
4. List `pod,pv,pvc,storageclass`, and also review each resource's detailed information
5. Review the `STATUS` for the PV and PVC
6. Connect an interactive shell to the Pod and confirm the app serves the `index.html` file with `curl localhost`
# host terminal
minikube ssh
# node terminal
sudo mkdir /mnt/data
sudo sh -c "echo 'Hello from Kubernetes storage' > /mnt/data/index.html"
cat /mnt/data/index.html
exit
# host terminal
echo --- > lab10-2.yaml
wget https://k8s.io/examples/pods/storage/pv-volume.yaml -O- >> lab10-2.yaml
echo --- >> lab10-2.yaml
wget https://k8s.io/examples/pods/storage/pv-claim.yaml -O- >> lab10-2.yaml
echo --- >> lab10-2.yaml
wget https://k8s.io/examples/pods/storage/pv-pod.yaml -O- >> lab10-2.yaml
echo --- >> lab10-2.yaml
nano lab10-2.yaml # edit the final file accordingly
kubectl apply -f lab10-2.yaml
kubectl get pod,pv,pvc,storageclass
kubectl describe pod,pv,pvc,storageclass | less
kubectl exec -it task-pv-pod -- /bin/bash
kubectl delete -f lab10-2.yaml
For further learning, see mounting the same persistentVolume in two places and access control
In the kid
Namespace (create if required), create a Deployment webapp
with two replicas, running the nginx:1.22-alpine
image, that serves an index.html
HTML document (see below) from the Cluster Node's /mnt/data
directory. The HTML document should be made available via a Persistent Volume with 5Gi storage and no class name specified. The Deployment should use Persistent Volume claim with 2Gi storage.
<!-- index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=Edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>K8s Bootcamp (CKAD)</title>
</head>
<body>
<h1>Welcome to K8s Bootcamp!</h1>
</body>
</html>
Variables can be specified via the command-line when creating a naked Pod with kubectl run mypod --image=nginx --env="MY_VARIABLE=myvalue"
. However, naked Pods are not recommended in live environments, so our main focus is creating variables for Deployments.
The kubectl create deploy
command does not currently support the --env
option, thus the easiest way to add variables to a deployment is to use kubectl set env deploy
command after the deployment is created.
Note that doing
kubectl set env deploy --dry-run=client
will only work if the deployment is already created
To generate a YAML file with variables via the command-line, first run `kubectl create deploy`, then `kubectl set env deploy --dry-run=client -o yaml` and edit the output to remove unnecessary metadata and statuses
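For reference, the env-var entry that `kubectl set env` adds under the Deployment's container spec looks like the sketch below; the variable name and value are placeholders.

```yaml
# deploy.spec.template.spec.containers[0] (excerpt)
env:
- name: MY_VARIABLE    # placeholder name
  value: myvalue       # placeholder value
```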
- Create a `db` Deployment using the `mysql` image
- Check the Pod `STATUS` and troubleshoot any issue found
- Create a naked `db` Pod with an appropriate environment variable specified
kubectl create deploy db --image=mysql
kubectl get po --watch # status=containercreating->error->crashloopbackoff->error->etc, ctrl+c to quit
kubectl describe po $POD_NAME # not enough info to find issue, so check logs
kubectl logs $POD_NAME|less # found issue, must specify one of `MYSQL_ROOT_PASSWORD|MYSQL_ALLOW_EMPTY_PASSWORD|MYSQL_RANDOM_ROOT_PASSWORD`
kubectl set env deploy db MYSQL_ROOT_PASSWORD=mysecret
kubectl get po # status=running
kubectl describe deploy db # review deployment env-var format
kubectl get deploy db -oyaml|less # review deployment env-var format
kubectl run db --image=mysql --env=MYSQL_ROOT_PASSWORD=mypwd
kubectl get po # status=running
kubectl describe po db # review pod env-var format
kubectl describe deploy,po db | grep -iEA15 "pod template:|containers:" | less # see `grep -h`
kubectl get po db -oyaml|less # review pod env-var format
kubectl delete deploy,po db
Note that you can use Pod fields as env-vars, as well as use container fields as env-vars
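As a sketch of that idea, the container snippet below exposes the Pod IP via `fieldRef` and the container's CPU limit via `resourceFieldRef`; the variable names and the container name `mycontainer` are assumptions for illustration.

```yaml
# container spec (excerpt)
env:
- name: POD_IP                    # arbitrary variable name
  valueFrom:
    fieldRef:
      fieldPath: status.podIP     # a Pod field
- name: CPU_LIMIT                 # arbitrary variable name
  valueFrom:
    resourceFieldRef:
      containerName: mycontainer  # container whose resource field is read
      resource: limits.cpu        # a container field
```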
ConfigMaps are used to decouple configuration data from application code. The configuration data may be variables, files or command-line args.
# create configmap `mycm` from file or directory, see `kubectl create cm -h`
kubectl create configmap mycm --from-file=path/to/file/or/directory
# create configmap from file with specified key
kubectl create configmap mycm --from-file=key=path/to/file
# create configmap from a variables file (file contains KEY=VALUE on each line)
kubectl create configmap mycm --from-env-file=path/to/file.env
# create configmap from literal values
kubectl create configmap mycm --from-literal=KEY1=value1 --from-literal=KEY2=value2
# display details of configmap `mycm`
kubectl describe cm mycm
kubectl get cm mycm -o yaml
# use configmap `mycm` in deployment `web`, see `kubectl set env -h`
kubectl set env deploy web --from=configmap/mycm
# use specific keys from a configmap with multiple env-vars, see `kubectl set env deploy -h`
kubectl set env deploy web --keys=KEY1,KEY2 --from=configmap/mycm
# remove env-var KEY1 from deployment web
kubectl set env deploy web KEY1-
Create a `file.env` file with the following content:
MYSQL_ROOT_PASSWORD=pwd
MYSQL_ALLOW_EMPTY_PASSWORD=true
- Create a ConfigMap `mycm-file` from the file using the `--from-file` option
- Create a ConfigMap `mycm-env` from the file using the `--from-env-file` option
- Create two Deployments with the `mysql` image using the ConfigMaps as env-vars:
  - `web-file` for ConfigMap `mycm-file`
  - `web-env` for ConfigMap `mycm-env`
- Run `printenv` in each Deployment's Pod to confirm the env-vars
echo "MYSQL_ROOT_PASSWORD=mypwd
MYSQL_ALLOW_EMPTY_PASSWORD=true" > file.env
kubectl create cm mycm-file --from-file=file.env
kubectl create cm mycm-env --from-env-file=file.env
kubectl describe cm mycm-file mycm-env |less # mycm-file has one filename key while mycm-env has two env-var keys
kubectl get cm mycm-file mycm-env -oyaml|less
kubectl create deploy web-file --image=mysql --dry-run=client -oyaml > webfile.yml
kubectl apply -f webfile.yml # need an existing deployment to generate yaml for env-vars
kubectl set env deploy web-file --keys=MYSQL_ROOT_PASSWORD,MYSQL_ALLOW_EMPTY_PASSWORD --from=configmap/mycm-file --dry-run=client -oyaml
kubectl create deploy web-env --image=mysql --dry-run=client -oyaml | less # no output = keys not found in configmap
kubectl create deploy web-env --image=mysql --dry-run=client -oyaml > webenv.yml
kubectl apply -f webenv.yml # need an existing deployment to generate yaml for env-vars
kubectl set env deploy web-env --keys=MYSQL_ROOT_PASSWORD,MYSQL_ALLOW_EMPTY_PASSWORD --from=configmap/mycm-env --dry-run=client -oyaml|less # output OK and two env-var keys set
# copy the working env-var within the container spec to webenv.yml to avoid adding unnecessary fields
kubectl apply -f webenv.yml
kubectl get deploy,po # deployment web-env shows 1/1 READY, copy pod name
kubectl exec -it $POD_NAME -- printenv # shows MYSQL_ROOT_PASSWORD,MYSQL_ALLOW_EMPTY_PASSWORD
kubectl run mypod --image=mysql --dry-run=client -oyaml > pod.yml
kubectl apply -f pod.yml # need existing pod to generate yaml for env-vars
kubectl set env pod mypod --keys=MYSQL_ROOT_PASSWORD,MYSQL_ALLOW_EMPTY_PASSWORD --from=configmap/mycm-env --dry-run=client -oyaml|less
# copy env-var from output container spec to pod.yml to avoid clutter
kubectl delete -f pod.yml # naked pod cannot update env-var, only deployment
kubectl apply -f pod.yml
kubectl get all,cm # mypod in running state
kubectl exec -it mypod -- printenv
kubectl delete deploy,po,cm mycm-file mycm-env web-file web-env mypod
rm file.env
In the previous lab, only the Env-Var ConfigMap worked for our use-case. In this lab we will see how we can use the File ConfigMap.
You may also follow the official add ConfigMap data to a Volume docs
Create a `file.env` file with the following content:
MYSQL_ROOT_PASSWORD=pwd
- Create a ConfigMap `mycm` from the file and verify resource details
- Create a Pod with the `mysql` image:
  - with env-var `MYSQL_ROOT_PASSWORD_FILE=/etc/config/file.env`, see the Docker Secrets section of the MYSQL image docs
  - mount ConfigMap `mycm` as a volume to `/etc/config/`, see Populate a volume with ConfigMap
- Create an `html/index.html` file with any content
- Create a `webserver` Deployment with an appropriate image and mount the file to the DocumentRoot via ConfigMap
  - `nginx` DocumentRoot - /usr/share/nginx/html
  - `httpd` DocumentRoot - /usr/local/apache2/htdocs
echo "MYSQL_ROOT_PASSWORD=pwd" > file.env
kubectl create cm mycm --from-file=file.env --dry-run=client -oyaml > lab11-3.yml
echo --- >> lab11-3.yml
kubectl run mypod --image=mysql --env=MYSQL_ROOT_PASSWORD_FILE=/etc/config/file.env --dry-run=client -oyaml >> lab11-3.yml
wget -qO- https://k8s.io/examples/pods/pod-configmap-volume.yaml | less # copy relevant details to lab11-3.yml
nano lab11-3.yml
kind: Pod
spec:
volumes:
- name: config-volume
configMap:
name: mycm
containers:
- name: mypod
volumeMounts:
- name: config-volume
mountPath: /etc/config
# etc, rest same as generated
kubectl apply -f lab11-3.yml
kubectl get po # mypod in running state
kubectl exec mypod -it -- printenv # shows MYSQL_ROOT_PASSWORD_FILE
# part 2 of lab
mkdir html
echo "Welcome to Lab 11.3 - Part 2" > html/index.html
kubectl create cm webcm --from-file=html/index.html
echo --- >> lab11-3.yml
kubectl create deploy webserver --image=httpd --dry-run=client -oyaml >> lab11-3.yml
nano lab11-3.yml # copy yaml format above and fix indentation
kind: Deployment
spec:
template:
spec:
volumes:
- name: config-volume
configMap:
name: webcm
containers:
- name: httpd
volumeMounts:
- name: config-volume
mountPath: /usr/local/apache2/htdocs
kubectl apply -f lab11-3.yml
kubectl get deploy,po # note pod name and running status
kubectl exec $POD_NAME -it -- ls /usr/local/apache2/htdocs # index.html
kubectl port-forward pod/$POD_NAME 3000:80 & # bind port 3000 in background
curl localhost:3000 # Welcome to Lab 11.3 - Part 2
fg # bring job to fore-ground, then ctrl+c to terminate
kubectl delete -f lab11-3.yml
Pay attention to the types of ConfigMaps, File vs Env-Var, and also note their YAML form differences
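To illustrate the difference, here is a sketch (assuming the `file.env` content from lab 11.2) of how each ConfigMap type stores the same data:

```yaml
# created with --from-file: a single key named after the file, holding the whole file as a string
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycm-file
data:
  file.env: |
    MYSQL_ROOT_PASSWORD=pwd
    MYSQL_ALLOW_EMPTY_PASSWORD=true
---
# created with --from-env-file: one key per variable, usable directly as env-vars
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycm-env
data:
  MYSQL_ROOT_PASSWORD: pwd
  MYSQL_ALLOW_EMPTY_PASSWORD: "true"
```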
Secrets are similar to ConfigMaps but specifically intended to hold sensitive data such as passwords, auth tokens, etc. By default, Kubernetes Secrets are not encrypted but base64 encoded.
To safely use Secrets, ensure to:

- set `defaultMode` when mounting Secrets to set file permissions to user:readonly - `0400`
- take care when exposing Secrets as env-vars with `kubectl set env`
- use Secrets for image registry credentials, e.g. docker image registry creds

Secrets are basically encoded ConfigMaps and are both managed with `kubectl` in a similar way, see `kubectl create secret -h` for more details
# secret `myscrt` as file for tls keys, see `kubectl create secret tls -h`
kubectl create secret tls myscrt --cert=path/to/file.crt --key=path/to/file.key
# secret as file for ssh private key, see `kubectl create secret generic -h`
kubectl create secret generic myscrt --from-file=ssh-private-key=path/to/id_rsa
# secret as env-var for passwords, ADMIN_PWD=shush
kubectl create secret generic myscrt --from-literal=ADMIN_PWD=shush
# secrets as image registry creds, `docker-registry` works for other registry types
kubectl create secret docker-registry myscrt --docker-username=dev --docker-password=shush [email protected] --docker-server=localhost:3333
# view details of the secret, shows base64 encoded value
kubectl describe secret myscrt
kubectl get secret myscrt -o yaml
# view the base64 encoded contents of secret `myscrt`
kubectl get secret myscrt -o jsonpath='{.data}'
# for secret with nested data, '{"game":{".config":"yI6eyJkb2NrZXIua"}}'
kubectl get secret myscrt -o jsonpath='{.data.game.\.config}'
# decode secret ".config" in '{"game":{".config":"yI6eyJkb2NrZXIua"}}'
kubectl get secret myscrt -o jsonpath='{.data.game.\.config}' | base64 --decode
# get a service account `mysa`
kubectl get serviceaccount mysa -o yaml
See the Kubernetes JSONPath support docs to learn more about using
jsonpath
You may follow the official managing secrets using kubectl docs
- Review the coredns Pod in the `kube-system` namespace and determine its `serviceAccountName`
- Review the ServiceAccount to determine the `Secret` in use
- View the `Secret` and decode the value of its keys: `ca.crt`, `namespace` and `token`
kubectl -nkube-system get po # shows name of coredns pod
kubectl -nkube-system get po $COREDNS_POD_NAME -oyaml | grep serviceAccountName
kubectl -nkube-system get sa $SERVICE_ACCOUNT_NAME -oyaml # shows secret name
kubectl -nkube-system get secret $SECRET_NAME -ojsonpath="{.data}" | less # shows the secret keys
kubectl -nkube-system get secret $SECRET_NAME -ojsonpath="{.data.ca\.crt}" | base64 -d # decode ca.crt, BEGIN CERTIFICATE... long string
kubectl -nkube-system get secret $SECRET_NAME -ojsonpath="{.data.namespace}" | base64 -d # decode namespace, kube-system
kubectl -nkube-system get secret $SECRET_NAME -ojsonpath="{.data.token}" | base64 -d # decode token, ey... long string
# very similar to configmap solution, accepting pull-requests
# very similar to configmap solution, accepting pull-requests
- Create an image registry Secret and review its details with `kubectl describe` and as `yaml`
- Decode the `.dockerconfigjson` key with `jsonpath`
# one-line command can be found in `kubectl create secret -h` examples, accepting pull-requests
The latest Bootcamp cohort have requested a new database in the rig
Namespace. This should be created as a single replica Deployment named db
running the mysql:8.0.22
image with container named mysql
. The container should start with 128Mi memory and 0.25 CPU but should not exceed 512Mi memory and 1 CPU.
The Resource limit values should be available in the containers as env-vars MY_CPU_LIMIT
and MY_MEM_LIMIT
for the values of the cpu limit and memory limit respectively. The Pod IP address should also be available as env-var MY_POD_IP
in the container.
A Secret named db-secret
should be created with variables MYSQL_DATABASE=bootcamp
and MYSQL_ROOT_PASSWORD="shhhh!"
to be used by the Deployment as the database credentials. A ConfigMap named db-config
should be used to load the .env
file (see below) and provide environment variable DEPLOY_ENVIRONMENT
to the Deployment.
# .env
DEPLOY_CITY=manchester
DEPLOY_REGION=north-west
DEPLOY_ENVIRONMENT=staging
Whilst a Pod is running, the kubelet is able to restart containers to handle some kinds of faults. Within a Pod, Kubernetes tracks different container states and determines what action to take to make the Pod healthy again. See Pod lifecycle for more details.
Pod states can be viewed with kubectl get pods
under STATUS
column:
A Pod's status
field is a PodStatus object, which has a phase
field that can have the values: Pending | Running | Succeeded | Failed | Unknown
.
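To read the phase on its own, you can query it with `jsonpath`, for example:

```sh
# print only the phase of a Pod, e.g. Running
kubectl get pod $POD_NAME -o jsonpath='{.status.phase}'
```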
A probe is a diagnostic performed periodically by the kubelet on a container, either by executing code within the container, or by network request. A probe will either return: Success | Failure | Unknown
. There are four different ways to check a container using a probe:
- `exec`: executes a specified command within the container, status code 0 means `Success`
- `grpc`: performs a remote procedure call using gRPC, this feature is in `alpha` stage (not for CKAD)
- `httpGet`: performs an HTTP GET request against the Pod's IP on a specified port and path, status code greater than or equal to 200 and less than 400 means `Success`
- `tcpSocket`: performs a TCP check against the Pod's IP on a specified port, port is open means `Success`, even if the connection is closed immediately

The kubelet can optionally perform and react to three kinds of probes on running containers:
- `livenessProbe`: indicates if the container is running. On failure, the kubelet kills the container, which triggers the restart policy. Defaults to `Success` if not set. See when to use liveness probe?
- `readinessProbe`: indicates if the container is ready to respond to requests. On failure, the endpoints controller removes the Pod's IP address from the endpoints of all Services that match the Pod. Defaults to `Success` if not set. If set, starts as `Failure`. See when to use readiness probe?
- `startupProbe`: indicates if the application within the container is started. All other probes are disabled if a startup probe is set, until it succeeds. On failure, the kubelet kills the container, which triggers the restart policy. Defaults to `Success` if not set. See when to use startup probe?

For more details, see configuring Liveness, Readiness and Startup Probes
Many applications running for long periods of time eventually transition to broken states, and cannot recover except by being restarted. Kubernetes provides liveness probes to detect and remedy such situations.
You may follow the official define a liveness command tutorial to complete this lab.
# get events
kubectl get events
# get events of a specific resource, pod, deployment, etc
kubectl get events --field-selector=involvedObject.name=$RESOURCE_NAME
# watch events for updates
kubectl get events --watch
Using `pods/probe/exec-liveness.yaml` as base, create a Deployment `myapp` manifest file as follows:

- the container should run the command: `mkdir /tmp/healthy; sleep 30; rm -d /tmp/healthy; sleep 60; mkdir /tmp/healthy; sleep 600;`
- the liveness probe should check for the existence of the `/tmp/healthy` directory

The container creates a directory `/tmp/healthy` on startup, deletes the directory 30secs later, and recreates the directory 60secs later. Your goal is to monitor the Pod behaviour/statuses during these events; you can repeat this lab until you understand liveness probes.
kubectl create deploy myapp --image=busybox --dry-run=client -oyaml -- /bin/sh -c "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 60; touch /tmp/healthy; sleep 600;" >lab12-1.yml
wget -qO- https://k8s.io/examples/pods/probe/exec-liveness.yaml | less # copy the liveness probe section
nano lab12-1.yml # paste, edit and fix indentation
kind: Deployment
spec:
template:
spec:
containers:
livenessProbe:
exec:
command:
- ls # `cat` for file
- /tmp/healthy
initialDelaySeconds: 10
periodSeconds: 10
kubectl apply -f lab12-1.yml
kubectl get po # find pod name
kubectl get events --field-selector=involvedObject.name=$POD_NAME --watch
kubectl delete -f lab12-1.yml
Probes have a number of fields that you can use to more precisely control the behavior of liveness and readiness checks:
- `initialDelaySeconds`: seconds to wait after the container starts before initiating liveness/readiness probes - default 0, minimum 0
- `periodSeconds`: how often (in seconds) to perform the probe - default 10, minimum 1
- `timeoutSeconds`: seconds after which the probe times out - default 1, minimum 1
- `successThreshold`: number of consecutive successes after a failure for the probe to be considered successful - default 1, minimum 1, must be 1 for liveness/startup probes
- `failureThreshold`: number of consecutive retries on failure before giving up, the liveness probe restarts the container after giving up, the readiness probe marks the Pod as Unready - default 3, minimum 1

Sometimes, applications are temporarily unable to serve traffic, for example, when a third-party service becomes unavailable. In such cases, you don't want to kill the application, but you don't want to send it requests either. Kubernetes provides readiness probes to detect and mitigate these situations. Both readiness probes and liveness probes use similar configuration.
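Bringing these probe fields together, here is a sketch of a readiness probe with arbitrary example values: wait 5s after the container starts, check every 10s, time out each check after 2s, and mark the Pod Unready after 3 consecutive failures.

```yaml
# container spec (excerpt) - example values only
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 2
  successThreshold: 1
  failureThreshold: 3
```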
Using `pods/probe/http-liveness.yaml` as base, create a Deployment `myapp` manifest file as follows:

- two replicas of the `nginx:1.22-alpine` image
- a readiness probe that checks that `/` on port 80 returns a success status code
- a liveness probe that checks that `/` on port 80 returns a success status code
- review the probe settings `delay | timeout | period | success | failure` shown in the Pod details - how do you set these values?

Then, using `pods/probe/tcp-liveness-readiness.yaml` as example, edit the Deployment as follows:

- replace the HTTP-based probes with TCP-based probes in `deploy.spec.template`
kubectl create deploy myapp --image=nginx:1.22-alpine --replicas=2 --dry-run=client -oyaml > lab12-2.yml
wget -qO- https://k8s.io/examples/pods/probe/http-liveness.yaml | less # copy probe section
nano lab12-2.yml # paste, fix indentation, edit correctly
kind: Deployment
spec:
template:
spec:
containers:
- name: nginx
readinessProbe:
httpGet:
path: /
port: 80 # change this to 8080 in later steps
initialDelaySeconds: 3
periodSeconds: 5
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 8
periodSeconds: 6
# etc
kubectl apply -f lab12-2.yml
kubectl describe deploy myapp # review `Pod Template > Containers` and Events
kubectl get po
kubectl describe po $POD_NAME # review Containers and Events
KUBE_EDITOR=nano kubectl edit deploy myapp # change port to 8080 and save
kubectl describe deploy myapp # only shows rollout events
kubectl get po # get new pod names
kubectl describe po $NEW_POD_NAME # review Containers and Events
KUBE_EDITOR=nano kubectl edit deploy myapp # replace probes config with below
kind: Deployment
spec:
template:
spec:
containers:
- name: nginx
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
periodSeconds: 10
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 15
periodSeconds: 10
# etc
kubectl get po # get new pod names
kubectl describe po $ANOTHER_NEW_POD_NAME # review Containers and Events, no news is good news
kubectl delete -f lab12-2.yml
Sometimes, you have to deal with legacy applications that might require additional startup time on first initialization. In such cases, it can be tricky to set up a liveness probe without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a `failureThreshold * periodSeconds` long enough to cover the worst-case startup time.
Using `pods/probe/http-liveness.yaml` as base, create a Deployment `myapp` manifest file as follows:

- two replicas of the `nginx:1.22-alpine` image
- a readiness probe that checks that `/` on port 80 returns a success status code
- a liveness probe that checks that `/` on port 80 returns a success status code
- a startup probe with the same HTTP check, allowing for a worst-case startup time of 3 minutes
kubectl create deploy myapp --image=nginx:1.22-alpine --replicas=2 --dry-run=client -oyaml > lab12-3.yml
nano lab12-3.yml # add probes
kind: Deployment
spec:
template:
spec:
containers:
- name: nginx
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 3
periodSeconds: 5
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 8
periodSeconds: 6
startupProbe:
httpGet:
path: /
port: 80
periodSeconds: 5
failureThreshold: 36 # 5secs x 36 = 3mins
# etc
kubectl apply -f lab12-3.yml
kubectl describe deploy myapp
kubectl get po
kubectl describe po $POD_NAME
kubectl delete -f lab12-3.yml
Blue/green deployment is an update strategy used to accomplish zero-downtime deployments. The current application version is marked blue and the new application version is marked green. In Kubernetes, blue/green deployments can be easily implemented with Services.
- Create a `blue` Deployment as follows:
  - image `nginx:1.19-alpine`
  - create `/mnt/data/index.html` on the cluster node with any content
  - mount the `index.html` file to the DocumentRoot as a HostPath volume
- Expose the `blue` Deployment on port 80 with Service name `bg-svc`
- Verify the webpage with `curl`
- Create a `green` Deployment using [1] as base:
  - image `nginx:1.21-alpine`
  - create `/mnt/data2/index.html` on the cluster node with different content
  - mount the `index.html` file to the DocumentRoot as a HostPath volume
- Verify with `curl` that traffic is still served by the `blue` Deployment
- Edit the `bg-svc` Service Selector as `app=green` to redirect traffic to the `green` Deployment
- Verify the webpage again with `curl`
# host terminal
minikube ssh
# node terminal
sudo mkdir /mnt/data /mnt/data2
echo "This is blue deployment" | sudo tee /mnt/data/index.html
echo "Green deployment" | sudo tee /mnt/data2/index.html
exit
# host terminal
kubectl create deploy blue --image=nginx:1.19-alpine --replicas=3 --dry-run=client -oyaml>lab12-4.yml
nano lab12-4.yml # add hostpath volume and pod template label
kind: Deployment
spec:
template:
spec:
containers:
volumeMounts:
- mountPath: /usr/share/nginx/html
name: testvol
volumes:
- name: testvol
hostPath:
path: /mnt/data
kubectl apply -f lab12-4.yml
kubectl expose deploy blue --name=bg-svc --port=80
kubectl get all,ep -owide
kubectl run mypod --rm -it --image=nginx:alpine -- sh
# container terminal
curl bg-svc # This is blue deployment
exit
# host terminal
cp lab12-4.yml lab12-4b.yml
nano lab12-4b.yml # change `blue -> green`, hostpath `/mnt/data2`, image `nginx:1.21-alpine`
kubectl apply -f lab12-4b.yml
kubectl edit svc bg-svc # change selector `blue -> green`
kubectl run mypod --rm -it --image=nginx:alpine -- sh
# container terminal
curl bg-svc # Green deployment
exit
kubectl delete -f lab12-4.yml,lab12-4b.yml
Canary deployment is an update strategy where updates are deployed to a subset of users/servers (canary application) for testing prior to full deployment. This is a scenario where Labels are required to distinguish deployments by release or configuration.
- Create a Deployment with Pod Template label `updateType=canary`:
  - image `nginx:1.19-alpine`
  - create an `index.html` file with any content
  - mount the `index.html` file to the DocumentRoot as a ConfigMap volume
- Expose the Deployment with a Service named `canary-svc` and verify the webpage with `curl`
- Create a second Deployment, also with Pod Template label `updateType=canary`:
  - image `nginx:1.22-alpine`
  - create an `index.html` file with different content
  - mount the `index.html` file to the DocumentRoot as a ConfigMap volume
- Run `curl` requests to the IP in [2] and confirm access to both webservers

Scaling down to zero instead of deleting provides an easy option to revert changes when there are issues.
# host terminal
kubectl create cm cm-web1 --from-literal=index.html="This is current version"
kubectl create deploy web1 --image=nginx:1.19-alpine --replicas=3 --dry-run=client -oyaml>lab12-5.yml
nano lab12-5.yml # add configmap volume and pod template label
kind: Deployment
spec:
selector:
matchLabels:
app: web1
updateType: canary
template:
metadata:
labels:
app: web1
updateType: canary
spec:
containers:
volumeMounts:
- mountPath: /usr/share/nginx/html
name: testvol
volumes:
- name: testvol
configMap:
name: cm-web1
kubectl apply -f lab12-5.yml
kubectl expose deploy web1 --name=canary-svc --port=80
kubectl get all,ep -owide
kubectl run mypod --rm -it --image=nginx:alpine -- sh
# container terminal
curl canary-svc # This is current version
exit
# host terminal
cp lab12-5.yml lab12-5b.yml
kubectl create cm cm-web2 --from-literal=index.html="New version"
nano lab12-5b.yml # change `web1 -> web2`, image `nginx:1.22-alpine`, replicas 1, add pod template label
kubectl apply -f lab12-5b.yml
kubectl get all,ep -owide # more ip addresses added to endpoint
kubectl run mypod --rm -it --image=nginx:alpine -- sh
# container terminal
watch "curl canary-svc" # both "New version" and "This is current version"
kubectl scale deploy web2 --replicas=3
kubectl get rs,po -owide
kubectl scale deploy web1 --replicas=0
kubectl get rs,po -owide
kubectl delete -f lab12-5.yml,lab12-5b.yml
You have a legacy application legacy
running in the dam
Namespace that has a long startup time. Once startup is complete, the /healthz:8080
endpoint returns a 200 status. If this application is down at any time or starting up, this endpoint returns a 500 status. The container port for this application often changes and will not always be 8080
.
Create a probe for the existing Deployment that checks the endpoint every 10secs, for a maximum of 5mins, to ensure that the application does not receive traffic until startup is complete. 20 secs after startup, a probe should continue to check, every 30secs, that the application is up and running, otherwise, the Pod should be killed and restarted anytime the application is down.
You do not need to test that the probes work, you only need to configure them. Another test engineer will perform all tests.
printf '\nlab: lab environment setup in progress...\n'; echo '{"apiVersion":"v1","kind":"List","items":[{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"dam"}},{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"app":"legacy"},"name":"legacy","namespace":"dam"},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"legacy"}},"template":{"metadata":{"labels":{"app":"legacy"}},"spec":{"containers":[{"args":["/server"],"image":"registry.k8s.io/liveness","name":"probes","ports":[{"containerPort":8080}]}],"restartPolicy":"OnFailure"}}}}]}' | kubectl apply -f - >/dev/null; echo 'lab: environment setup complete!'
kubectl delete ns dam
In the hog
Namespace, you will find a Deployment named high-app
, and a Service named high-svc
. It is currently unknown if these resources are working together as expected. Make sure the Service is a NodePort type exposed on TCP port 8080 and that you're able to reach the application via the NodePort.
Create a single replica Deployment named high-appv2
based on high-app.json
file running nginx:1.18-alpine
.
- Scale the `high-appv2` Deployment such that 20% of all traffic going to the existing `high-svc` Service is routed to `high-appv2`. The total Pods between `high-app` and `high-appv2` should be 5.
- Scale the `high-app` and `high-appv2` Deployments such that 100% of all traffic going to the `high-svc` Service is routed to `high-appv2`. The total Pods between `high-app` and `high-appv2` should be 5.

Finally, create a new Deployment named `high-appv3`
based on high-app.json
file running nginx:1.20-alpine
with 5 replicas and Pod Template label box: high-app-new
.
Update high-svc
Service such that 100% of all incoming traffic is routed to high-appv3
.
Since high-appv2
Deployment will no longer be used, perform a cleanup to delete all Pods related to `high-appv2`, keeping only the Deployment and ReplicaSet.
Command to setup environment (also creates high-app.json
file):
printf '\nlab: environment setup in progress...\n'; echo '{"apiVersion":"v1","kind":"List","items":[{"apiVersion":"v1","kind":"Namespace","metadata":{"name":"hog"}},{"apiVersion":"v1","kind":"Service","metadata":{"labels":{"kit":"high-app"},"name":"high-svc","namespace":"hog"},"spec":{"ports":[{"port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"box":"high-svc-child"}}}]}' | kubectl apply -f - >/dev/null; echo '{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"labels":{"kit":"high-app"},"name":"high-app","namespace":"hog"},"spec":{"replicas":4,"selector":{"matchLabels":{"box":"high-app-child"}},"template":{"metadata":{"labels":{"box":"high-app-child"}},"spec":{"containers":[{"image":"nginx:1.15-alpine","name":"nginx","ports":[{"containerPort":80}]}]}}}}' > high-app.json; kubectl apply -f high-app.json >/dev/null; echo 'lab: environment setup complete!';
Command to destroy environment:
kubectl delete ns hog
When you deploy Kubernetes, you get a cluster. See Kubernetes cluster components for more details.
Use `kubectl api-resources | less` for an overview of available API resources.
- `APIVERSION`
  - `v1` - core Kubernetes API group
  - `apps/v1` - first extension to the core group
  - `policy/v1` and `policy/v1beta1`
- `NAMESPACED` - controls visibility

The Kubernetes release cycle is 3 months and deprecated features are supported for a minimum of 2 release cycles (6 months). Respond to deprecation messages swiftly; you may use `kubectl api-versions` to view a short list of API versions and `kubectl explain --recursive` to get more details on affected resources. The current API docs at the time of writing is https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.25/
The Kubernetes API server kube-apiserver
is the interface to access all Kubernetes features, which include pods, services, replicationcontrollers, and others.
From within a Pod, the API server is accessible via a Service named kubernetes
in the default
namespace. Therefore, Pods can use the kubernetes.default.svc
hostname to query the API server.
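You can confirm this from the host - the API server is exposed as a Service named `kubernetes`:

```sh
# the API server Service in the default namespace
kubectl get svc kubernetes
# from within a Pod, the same Service resolves via cluster DNS as https://kubernetes.default.svc
```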
In our minikube lab so far, we have been working with direct access to a cluster node, which removes the need for kube-proxy
. When using the Kubernetes CLI kubectl
, it uses stored TLS certificates in ~/.kube/config
to make secured requests to the kube-apiserver
.
However, direct access is not always possible with K8s in the cloud. The Kubernetes network proxy kube-proxy
runs on each node and makes it possible to access kube-apiserver
securely by other applications like curl
or programmatically.
See the official so many proxies docs for the different proxies you may encounter when using Kubernetes.
# view more verbose pod details
kubectl --v=10 get pods
# start kube-proxy
kubectl proxy --port=PORT
# explore the k8s API with curl
curl localhost:PORT/api
# get k8s version with curl
curl localhost:PORT/version
# list pods with curl
curl localhost:PORT/api/v1/namespaces/default/pods
# get specific pod with curl
curl localhost:PORT/api/v1/namespaces/default/pods/$POD_NAME
# delete specific pod with curl
curl -XDELETE localhost:PORT/api/v1/namespaces/default/pods/$POD_NAME
Two things are required to access a cluster - the location of the cluster and the credentials to access it. Thus far, we have used kubectl
to access the API by running kubectl
commands. The location and credentials that kubectl
uses were automatically configured by Minikube during our Minikube environment setup.
Run kubectl config view
to see the location and credentials configured for kubectl
.
kubectl config view
Rather than run kubectl
commands directly, we can use kubectl
as a reverse proxy to provide the location and authenticate requests. See access the API using kubectl proxy for more details.
You may follow the official accessing the rest api docs
- Start the `kube-proxy`
- Explore the Kubernetes API with `curl`
- Get the Kubernetes version with `curl`
- List Pods with `curl`
- Delete a specific Pod with `curl`
`kubectl` provides the `auth can-i` subcommand for quickly querying the API authorization layer.
# check if deployments can be created in a namespace
kubectl auth can-i create deployments --namespace dev
# check if pods can be listed
kubectl auth can-i get pods
# check if a specific user can list secrets
kubectl auth can-i list secrets --namespace dev --as dave
Just as user accounts identifies humans, a service account identifies processes running in a Pod.
- each namespace has a `default` ServiceAccount
- each Pod is assigned the `default` ServiceAccount if none is specified, and its credentials are automounted in the container at:
  - `/var/run/secrets/kubernetes.io/serviceaccount/token`
  - `/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
  - `/var/run/secrets/kubernetes.io/serviceaccount/namespace`
- automounting can be disabled with `automountServiceAccountToken: false` on the ServiceAccount. Note that the pod spec takes precedence over the service account if both specify an `automountServiceAccountToken` value

This requires using the token of the default ServiceAccount. The token can be read directly (see lab 11.4 - decoding secrets), but the recommended way to get the token is via the TokenRequest API.
You may follow the official access the API without kubectl proxy docs.
- Generate a token for the default ServiceAccount, e.g. with `kubectl create token $SERVICE_ACCOUNT_NAME` on Kubernetes v1.24+
- Use `curl` to access the API with the generated token as credentials
# request token
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: default-token
annotations:
kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
# confirm token generated (optional)
kubectl get secret default-token -o yaml
# use token
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
TOKEN=$(kubectl get secret default-token -o jsonpath='{.data.token}' | base64 --decode)
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
Using `curl` with the `--insecure` option skips TLS certificate validation
You may follow the official access the API from within a Pod docs.
From within a Pod, the Kubernetes API is accessible via the
kubernetes.default.svc
hostname
- Use `curl` from within a Pod to access the API at `kubernetes.default.svc/api` with the automounted ServiceAccount credentials (`token` and `certificate`)
- Can you access the PodList API at `kubernetes.default.svc/api/v1/namespaces/default/pods`?
# connect an interactive shell to a container within the Pod
kubectl exec -it $POD_NAME -- /bin/sh
# use token stored within container to access API
SA=/var/run/secrets/kubernetes.io/serviceaccount
CERT_FILE=$SA/ca.crt
TOKEN=$(cat $SA/token)
curl --cacert $CERT_FILE --header "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/api
Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. RBAC authorization uses the rbac.authorization.k8s.io
API group for dynamic policy configuration through the Kubernetes API. RBAC is beyond CKAD, however, a basic understanding of RBAC can help understand ServiceAccount permissions.
The RBAC API declares four kinds of Kubernetes objects: Role, ClusterRole, RoleBinding and ClusterRoleBinding.
The Default RBAC policies grant scoped permissions to control-plane components, nodes, and controllers, but grant no permissions to service accounts outside the kube-system namespace (beyond discovery permissions given to all authenticated users).
There are different ServiceAccount permission approaches, but we will only go over two:
- grant a role to an application-specific ServiceAccount - this requires the `serviceAccountName` to be specified in the pod spec, and for the ServiceAccount to have been created
- grant a role to the `default` service account in a namespace

Permissions granted to the `default` service account are available to any pod in the namespace that does not specify a `serviceAccountName`. This is a security concern in live environments without RBAC.
# create a service account imperatively
kubectl create serviceaccount $SERVICE_ACCOUNT_NAME
# assign service account to a deployment
kubectl set serviceaccount deploy $DEPLOYMENT_NAME $SERVICE_ACCOUNT_NAME
# create a role that allows users to perform get, watch and list on pods, see `kubectl create role -h`
kubectl create role $ROLE_NAME --verb=get --verb=list --verb=watch --resource=pods
# grant permissions in a Role to a user within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --role=$ROLE_NAME --user=$USER --namespace=$NAMESPACE
# grant permissions in a ClusterRole to a user within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --user=$USER --namespace=$NAMESPACE
# grant permissions in a ClusterRole to a user across the entire cluster
kubectl create clusterrolebinding $CLUSTERROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --user=$USER
# grant permissions in a ClusterRole to an application-specific service account within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --serviceaccount=$NAMESPACE:$SERVICE_ACCOUNT_NAME --namespace=$NAMESPACE
# grant permissions in a ClusterRole to the "default" service account within a namespace
kubectl create rolebinding $ROLE_BINDING_NAME --clusterrole=$CLUSTERROLE_NAME --serviceaccount=$NAMESPACE:default --namespace=$NAMESPACE
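For reference, a declarative sketch roughly equivalent to the `kubectl create role` and `kubectl create rolebinding` commands above might look like this; the names `pod-reader` and `pod-reader-binding` are hypothetical.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # hypothetical name
  namespace: default
rules:
- apiGroups: [""]             # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding    # hypothetical name
  namespace: default
subjects:
- kind: ServiceAccount
  name: default               # the `default` ServiceAccount in this namespace
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```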
In lab 13.3 we were unable to access the PodList API at `kubernetes.default.svc/api/v1/namespaces/default/pods`. Let's apply the required permissions to make this work.

- Create a ServiceAccount, and a Role and RoleBinding that allow listing Pods, all in the `default` namespace, and verify
- Create a Pod that uses the ServiceAccount and use `curl` to access the PodList API
# create service account yaml
kubectl create serviceaccount test-sa --dry-run=client -o yaml > lab13-4.yaml
echo --- >> lab13-4.yaml
# create role yaml
kubectl create role test-role --resource=pods --verb=list --dry-run=client -o yaml >> lab13-4.yaml
echo --- >> lab13-4.yaml
# create rolebinding yaml
kubectl create rolebinding test-rolebinding --role=test-role --serviceaccount=default:test-sa --namespace=default --dry-run=client -o yaml >> lab13-4.yaml
echo --- >> lab13-4.yaml
# create configmap yaml
kubectl create configmap test-cm --from-literal="SA=/var/run/secrets/kubernetes.io/serviceaccount" --dry-run=client -o yaml >> lab13-4.yaml
echo --- >> lab13-4.yaml
# create pod yaml
kubectl run test-pod --image=nginx --dry-run=client -o yaml >> lab13-4.yaml
# review & edit yaml to add configmap and service account in pod spec, see `https://k8s.io/examples/pods/pod-single-configmap-env-variable.yaml`
nano lab13-4.yaml
# create all resources
kubectl apply -f lab13-4.yaml
# verify resources
kubectl get sa test-sa
kubectl describe sa test-sa | less
kubectl get role test-role
kubectl describe role test-role | less
kubectl get rolebinding test-rolebinding
kubectl describe rolebinding test-rolebinding | less
kubectl get configmap test-cm
kubectl describe configmap test-cm | less
kubectl get pod test-pod
kubectl describe pod test-pod | less
# access k8s API from within the pod
kubectl exec -it test-pod -- bash
TOKEN=$(cat $SA/token)
HEADER="Authorization: Bearer $TOKEN"
curl -H "$HEADER" https://kubernetes.default.svc/api --insecure
curl -H "$HEADER" https://kubernetes.default.svc/api/v1/namespaces/default/pods --insecure
curl -H "$HEADER" https://kubernetes.default.svc/api/v1/namespaces/default/pods/$POD_NAME --insecure
curl -H "$HEADER" https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments --insecure # deployments live under /apis/apps/v1; expect this to be forbidden since the Role only covers pods
exit
# clean up
kubectl delete -f lab13-4.yaml