A node is a worker machine in Kubernetes, typically a virtual machine (though it can be a physical machine). There are two types of nodes: master nodes (the control plane) and worker nodes. Worker nodes host pods, and pods run containers. The goal of a deployment is to manage and deploy pods.
# Create or apply changes to a deployment
k apply -f file.deployment.yml
# Scale the deployment pods to 5
k scale deployment [deployment-name] --replicas=5
Rolling updates allow a deployment's update to take place with zero downtime by incrementally replacing pod instances with new ones.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 2                   # number of pod replicas
  minReadySeconds: 1            # time a pod must be ready before it is considered healthy
  progressDeadlineSeconds: 60   # time to wait before reporting a stalled deployment
  revisionHistoryLimit: 5       # number of ReplicaSets that can be rolled back to (default is 10)
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # max number of pods that can exceed the replicas count
      maxUnavailable: 1   # max number of pods that can be unavailable
# Create initial deployment and record in the revision history
k apply -f file.deployment.yml --record
# Get the rollout status of a deployment
k rollout status deployment [deployment-name]
Rolling update revisions can be tracked using the --record flag (note: --record is deprecated in newer kubectl versions).
If a deployment has issues, you can apply a new deployment or revert to a previous revision.
# Get the rollout history of a deployment
k rollout history deployment [deployment-name]
# Describe a deployment revision
k rollout history deployment [deployment-name] --revision=2
# Check status
k rollout status -f file.deployment.yml
# Roll back a deployment
k rollout undo -f file.deployment.yml
# Roll back to a specific revision
k rollout undo deployment [deployment-name] --to-revision=2
Canary deployment
A canary deployment runs a small number of pods with the new version alongside the stable version, so the new version receives only a fraction of live traffic. It involves 3 main Kubernetes resources:
- A service that load-balances across all pods matching its selector
- A stable deployment running the current version
- A canary deployment running the new version with a small replica count
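A minimal sketch of the pattern (the names, labels, and images below are illustrative assumptions, not from the original notes): the service selects only on app, so it balances across both deployments, while a track label and the replica counts control the traffic split.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend            # matches both stable and canary pods
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-stable
spec:
  replicas: 4                # most traffic lands here
  selector:
    matchLabels:
      app: frontend
      track: stable
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
        - name: frontend
          image: nginx:1.20  # current version (placeholder image)
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-canary
spec:
  replicas: 1                # small share of traffic goes to the canary
  selector:
    matchLabels:
      app: frontend
      track: canary
  template:
    metadata:
      labels:
        app: frontend
        track: canary
    spec:
      containers:
        - name: frontend
          image: nginx:1.21  # new version (placeholder image)
```

With 4 stable replicas and 1 canary replica, roughly one fifth of requests hit the new version; scale the canary up (and stable down) as confidence grows.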
Blue-green deployment
Blue: the old application
Green: the new application
Traffic is switched from blue to green once checks on the green environment have passed.

Blue-green deployment flow:
1. Define the blue and green deployments and services
2. Roll out the green deployment
3. Switch traffic from blue to green
4. Remove the blue deployment and services; the green environment technically becomes the blue environment

Key considerations:
- Define a blue test service
- Define a blue public service
- Define a blue deployment

Changing from blue to green
Once the green deployment has been successfully rolled out and tested, change the public service's selector to green, either declaratively (YAML) or imperatively (CLI), from role: blue to role: green.
# Change the role in service's YAML (declarative)
k apply -f file.service.yml
# Change the role using CLI (imperative)
k set selector svc [service-name] 'role=green'
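As a sketch of the service being switched (the names, ports, and app label here are assumptions for illustration): the public service selects role: blue, and cutting traffic over means changing that one selector value to green, as the two commands above do.

```yaml
# Hypothetical public service for a blue-green setup.
# Switching traffic = changing spec.selector.role from blue to green.
apiVersion: v1
kind: Service
metadata:
  name: frontend-public
spec:
  type: LoadBalancer
  selector:
    app: frontend
    role: blue       # change to "green" to cut traffic over
  ports:
    - port: 80
      targetPort: 8080
```

Because only the selector changes, the switch is near-instant, and rolling back is just pointing the selector at role: blue again.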
Jobs
A Job creates one or more pods and ensures that a specified number of them complete successfully. Unlike pods managed by a deployment, Job pods are expected to run to completion rather than run indefinitely.
CronJobs
A CronJob runs a Job on a time-based schedule, specified in cron format.
Understanding the cron format
| Cron | Schedule |
| --- | --- |
| 0 * * * * | @hourly - run once every hour |
| 0 0 * * * | @daily - run once every day at midnight |
| 0 0 * * 0 | @weekly - run once every week |
| 0 0 1 * * | @monthly - run once every month |
| 0 0 1 1 * | @yearly - run once every year |
| 30 22 * * 1 | Run at 22:30 every Monday |
| 1 0 1 * * | Run at 00:01 on the first day of each month |
| */1 * * * * | Run once every minute |
apiVersion: batch/v1   # Batch API
kind: Job              # Job kind
metadata:
  name: pie-counter
spec:
  template:
    metadata:
      name: pie-counter
    spec:
      restartPolicy: Never   # never try to restart when it fails (Never | OnFailure)
      containers:
        - name: pie-counter
          image: alpine
          command:   # job command to run
            - "sh"
            - "-c"
            - "echo 'scale=1000; 4*a(1)' | bc -l; sleep 2;"
apiVersion: batch/v1
kind: Job
metadata:
  name: pie-counter
spec:
  completions: 4               # run 4 pods sequentially (the number of pods that must complete successfully)
  activeDeadlineSeconds: 240   # how long the job can be active before it is terminated
  template:
    ...
apiVersion: batch/v1
kind: Job
metadata:
  name: pie-counter
spec:
  completions: 4               # 4 pods must complete successfully
  parallelism: 2               # 2 pods can run in parallel at a time
  activeDeadlineSeconds: 240
  template:
    ...
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pie-counter
spec:
  concurrencyPolicy: Allow    # allow multiple pods to run even if their scheduling overlaps (Allow | Forbid | Replace)
  schedule: "*/1 * * * *"     # run the job every minute
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # restart if it fails
          containers:
            ...
# Create a new job
k create -f file.job.yml --save-config
# Create or modify a job
k apply -f file.job.yml
# Get jobs
k get jobs
k get job [job-name] -o yaml
# Get CronJobs
k get cronjobs
k get cronjob [job-name] -o yaml
# Describe jobs
k describe job [job-name]
The Web UI (Dashboard) is a web-based Kubernetes user interface. It hooks into the Kubernetes API to visualize your cluster. Everything in the dashboard can also be done through kubectl.
# Dashboard UI
k apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
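The token command below greps for an admin-user secret, which assumes an admin-user ServiceAccount with cluster-admin rights already exists in the kubernetes-dashboard namespace. A minimal sketch of creating one, following the dashboard project's sample-user approach (note: cluster-admin grants full cluster access, so this is for local/dev use only):

```yaml
# ServiceAccount the dashboard token is issued for
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Bind the account to the built-in cluster-admin role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
```

Apply it with k apply -f before running the token command.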
# Get token
k -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Run proxy
k proxy
# Visit http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
Metrics Server
is a cluster-wide aggregator of resource usage data. It is deployed by default, as a Deployment object, in clusters created by the kube-up.sh script.
kube-state-metrics
is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. It is not focused on the health of the individual Kubernetes components, but rather on the health of the various objects inside the cluster, such as deployments, nodes, and pods.
Prometheus
is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus joined the Cloud Native Computing Foundation in 2016 as the second hosted project, after Kubernetes.
Follow instructions here if you want to set it up locally: https://github.com/DanWahlin/DockerAndKubernetesCourseCode/blob/master/samples/prometheus/readme.md
# Create namespace for monitoring resources
k create namespace monitoring
k get namespace
# Monitor resource usage from kubectl
k top nodes
k top pods
Grafana allows you to query, visualize, alert on, and understand your metrics no matter where they are stored.
# Get the YAML manifest for a pod
k get pod [pod-name] -o yaml
# View the events of a pod (among other details)
k describe pod [pod-name]
# Shell into a pod
k exec -it [pod-name] -- sh
# View logs
k logs [pod-name]
k logs -f [pod-name]
# View logs of a specific container in a multi-container pod
k logs [pod-name] -c [container-name]
# View logs of the previous container instance (e.g. after a crash)
k logs -p [pod-name]