Course by Dan Wahlin (Pluralsight)
Why Kubernetes?
Wouldn't it be nice if something could manage and orchestrate our containers for us?
Kubernetes acts as the conductor of a container orchestra.
Key Kubernetes features
Kubernetes is a container and cluster management tool that provides a declarative way to define a cluster's state: if you define a desired state, Kubernetes will get you there.
Master and worker nodes
The master node is the boss of the operation; it knows how to manage the different employees, which we call worker nodes.
The master node and the worker nodes form a cluster.
The master node will start pods inside the worker nodes.
Pods and Containers
Pods can be seen as packaging for the containers.
Nodes can run one or more pods.
Kubernetes building blocks
Deployment / ReplicaSet
Service
Master node and kubectl
etcd (store): a database for the master node, used to track things that happen in the cluster
controller manager: when a request comes in, the controller manager acts upon that request and schedules it using the scheduler
scheduler: determines when the different nodes and pods come to life or go away, etc.
kubectl: command-line tool that we can use to send requests to the master node; those requests can then be scheduled to run on our different worker nodes within the cluster
Worker node
kubelet: an agent that registers the worker node with the cluster and reports back and forth to the master (lives inside the worker node)
container runtime: the runtime used to run containers within the pods
kube-proxy: ensures each pod gets a unique IP address and ties into the services
Key container benefits
Key Kubernetes benefits
Developer use cases for Kubernetes
docker-compose to Kubernetes
Installing and running Kubernetes locally
# common kubectl commands (k is used here as a shell alias for kubectl)
k version
k cluster-info
k get all
k get pods
k get services
Pods
Pods, IPs, and Ports
# run the nginx:alpine container in a pod
k run [pod-name] --image=nginx:alpine
# list pods
k get pods
# list all resources
k get all
Expose a pod port
# enable pod container to be called externally (host port : internal port)
k port-forward [pod-name] 8080:80
# delete pod
k delete pod [pod-name]
# A deployment is responsible for maintaining the desired state. That is why, if you delete a pod that is managed by a deployment, a new pod is spun up right away.
# delete deployment
k delete deployment [deployment-name]
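To see that self-healing behavior, delete a pod managed by a deployment and list pods again (a quick sketch; pod names are placeholders):
# watch the deployment replace a deleted pod
k get pods                # note the current pod name
k delete pod [pod-name]   # delete that pod
k get pods                # a replacement pod appears with a new generated name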
YAML files are composed of maps and lists
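A minimal illustrative snippet showing both (keys and values here are made up):
someMap:        # a map: key/value pairs, nested via indentation
  name: my-nginx
  labels:
    app: nginx
someList:       # a list: each item starts with a dash
- item1
- item2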
file.pod.yml
apiVersion: v1
kind: Pod # type of Kubernetes resource
metadata: # metadata about the pod
  name: my-nginx
  labels: # used to link with other resources (services, deployments)
    app: nginx
    rel: stable
spec: # blueprint for the pod
  containers: # info about the container that will run in the pod
  - name: my-nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
    resources: # add resource constraints to the container
      limits:
        memory: "128Mi" # 128 MB
        cpu: "200m" # 200 millicpu (0.2 cpu)
# to create a pod using YAML, use the kubectl create command along with --filename
k create -f file.pod.yml
# perform a dry run
k create -f file.pod.yml --dry-run=client --validate=true
# to create or apply changes to a pod using YAML, use the kubectl apply command along with --filename
# use this over `create`
k apply -f file.pod.yml
# use --save-config when you want to use `apply` in the future; it stores the current properties in the resource's annotations
k create -f file.pod.yml --save-config
# delete pod using YAML file that created it
k delete -f file.pod.yml
# get output of a pod (you can see that there are annotations if you created your pod with --save-config)
k get pod my-nginx -o yaml
# describe pod (the events at the very end are useful for debugging)
k describe pod my-nginx
# interactive mode
k exec -it my-nginx -- sh
# edit pod
k edit -f nginx.pod.yml
# delete pod (this will actually delete the pod since we do not have a deployment)
k delete -f nginx.pod.yml
Kubernetes relies on probes to determine the health of a pod container. A probe is a diagnostic performed periodically by the kubelet on a container.
Types of probes
Liveness probe: determines if a pod container is healthy and running as expected
Readiness probe: determines if a pod container is ready to receive requests
Probe Actions
ExecAction: executes a command inside the container
TCPSocketAction: performs a TCP check against the container's IP address on a specified port
HTTPGetAction: performs an HTTP GET request against the container
Probes can have the following results: Success, Failure, or Unknown
Liveness probe
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
    livenessProbe: # define liveness probe
      httpGet: # check liveness by sending an HTTP GET request
        path: /index.html # check /index.html on port 80
        port: 80
      initialDelaySeconds: 15 # wait 15 seconds before sending the first request
      timeoutSeconds: 2 # time out after 2 seconds
      periodSeconds: 5 # check every 5 seconds
      failureThreshold: 1 # allow 1 failure before failing the pod
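For comparison, a rough sketch of an exec-based liveness probe (the command and file path are assumptions for illustration):
livenessProbe:
  exec:                 # ExecAction: run a command inside the container
    command:
    - cat
    - /tmp/healthy      # hypothetical file the app writes while healthy
  initialDelaySeconds: 5
  periodSeconds: 5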
Readiness probe
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
    readinessProbe: # define readiness probe
      httpGet:
        path: /index.html # check /index.html on port 80
        port: 80
      initialDelaySeconds: 2 # wait 2 seconds before the first check
      periodSeconds: 5 # check every 5 seconds
A ReplicaSet is a declarative way to manage pods. A Deployment is a declarative way to manage pods using a ReplicaSet.
Pods, Deployments, and ReplicaSets
The Role of ReplicaSets
The Role of Deployments
A Deployment is a higher-level wrapper around a ReplicaSet.
apiVersion: apps/v1
kind: Deployment # resource type
metadata: # metadata about the deployment
  name: frontend
  labels: # labels can be used with selectors to tie resources together
    app: my-nginx
    tier: frontend
spec: # define the deployment spec
  selector: # the selector is used to select the pod template used to create the pods
    matchLabels:
      tier: frontend # select the pod template with the label "tier: frontend" (the template below)
  template: # pod template
    metadata:
      labels:
        tier: frontend
    spec: # define the pod spec
      containers: # container that will run in the pod
      - name: my-nginx
        image: nginx:alpine
# Start Deployments
k apply -f nginx.deployment.yml
# Describe Deployments
k describe deployment my-nginx
# List all Deployments and their labels
k get deployments --show-labels
# Get all Deployments with a specific label
k get deployments -l app=nginx
# Delete Deployment
k delete deployment [deployment-name]
# Scale the Deployment pods
k scale deployment [deployment-name] --replicas=5
Zero-downtime deployments allow software updates to be deployed to production without impacting end users.
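A sketch of the strategy portion of a Deployment spec that supports this (standard RollingUpdate options; the values are examples):
spec:
  replicas: 3
  strategy:
    type: RollingUpdate    # replace pods gradually
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod during the rollout
      maxUnavailable: 0    # keep all existing pods until replacements are ready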
A service provides a single point of entry for accessing one or more pods. We cannot rely on the IP addresses of pods because they live and die; services manage them at a higher level.
Pods are "mortal" and may only live for a short time (they are ephemeral). You can't rely on a pod's IP address staying the same. Pods can also scale horizontally, and a pod only gets an IP address after it has been scheduled, so there is no way for clients to know the IP address ahead of time.
The Role of Services
ClusterIP Service: exposes the service on an internal cluster IP (only reachable from within the cluster)
NodePort Service: exposes the service on a static port on each node
LoadBalancer Service: provisions an external load balancer and exposes the service through it (useful in cloud environments)
ExternalName Service: acts as an alias (CNAME) for an external service
# Listen on port 8080 locally and forward to service's pod
k port-forward service/[service-name] 8080
apiVersion: v1
kind: Service
metadata:
  name: frontend # name of the service (each service gets a DNS entry, which can be used in place of the actual IP address)
ClusterIP Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip # DNS name
spec:
  type: ClusterIP
  selector:
    app: my-nginx
  ports:
  - port: 8080
    targetPort: 80
NodePort Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport # example name; every Service needs a metadata.name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
LoadBalancer Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer # example name; every Service needs a metadata.name
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
ExternalName Service
apiVersion: v1
kind: Service
metadata:
  name: external-service # other pods can use this alias to access the external service
spec:
  type: ExternalName
  externalName: api.acmecorp.com
  ports:
  - port: 9000
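Usage sketch: other pods can then reach the external API through the alias (assuming the external host listens on the declared port):
# from inside another pod in the same namespace
curl http://external-service:9000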
# Update a Service
k apply -f file.service.yml
# Delete a Service
k delete -f file.service.yml
k delete service [service-name]
# Get services
k get services
# Testing a Service and Pod with curl
# Grab the IP address of a pod (podIP)
k get pod [pod-name] -o yaml
# Shell into a Pod and test a URL
k exec [pod-name] -- curl -s http://podIP
# Install and use curl
k exec -it [pod-name] -- sh
> apk add curl
> curl -s http://podIP
Q: How do you store application state/data and exchange it between pods with Kubernetes? A: Volumes (although other data storage options exist, such as a database).
A volume can be used to hold data and state for pods and containers
A volume references a storage location
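A minimal sketch of an emptyDir volume mounted into the nginx container used in these notes (the mount path is an example):
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx:alpine
    volumeMounts:
    - name: html                    # reference the volume defined below
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    emptyDir: {}                    # empty directory that lives as long as the pod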
Volume Types
PersistentVolume & PersistentVolumeClaim examples
Any properties from the StorageClass template will be available to the PersistentVolume and PersistentVolumeClaim.
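A rough sketch of the pair (names, size, and the hostPath backing are assumptions for illustration):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:              # simple local backing store for demos
    path: /tmp/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi       # request storage from a matching PersistentVolume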
ConfigMap
Accessing ConfigMap Data in a Pod
Defining values in a ConfigMap manifest
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
  labels:
    app: app-settings
data:
  enemies: aliens
  lives: "3"
  enemies.cheat: "true"
  enemies.cheat.level: noGoodRotten
Ways to create a ConfigMap
# Create a ConfigMap using data from a file
k create configmap [configmap-name] --from-file=[path-to-file]
# Create ConfigMap using data from an env file
k create configmap [configmap-name] --from-env-file=[path-to-file]
# Create ConfigMap from individual data values
k create configmap [cm-name] --from-literal=exampleKey=exampleValue
# Create from a ConfigMap manifest
k create -f file.configmap.yml
# get all configmaps
k get configmap
# get configmap info
k get configmap [configmap-name] -o yaml
k get cm [configmap-name] -o yaml
Accessing a ConfigMap: environment variables
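A sketch of pulling ConfigMap data into environment variables (container name/image are placeholders; the ConfigMap is the app-settings one above):
spec:
  containers:
  - name: my-app
    image: nginx:alpine
    env:
    - name: ENEMIES                # single key -> single env var
      valueFrom:
        configMapKeyRef:
          name: app-settings
          key: enemies
    envFrom:                       # or load every key as an env var
    - configMapRef:
        name: app-settings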
Accessing a ConfigMap: volume
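A sketch of mounting the same ConfigMap as a volume; each key becomes a file under the mount path:
spec:
  containers:
  - name: my-app
    image: nginx:alpine
    volumeMounts:
    - name: app-config
      mountPath: /etc/config       # e.g. /etc/config/enemies would contain "aliens"
  volumes:
  - name: app-config
    configMap:
      name: app-settings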
Secret
Secrets Best Practices
Creating a secret
# Create a secret and store securely in k8s
k create secret generic [secret-name] --from-literal=pwd=my-password
# Create a secret from a file
k create secret generic [secret-name] \
--from-file=ssh-privatekey=~/.ssh/id_rsa \
--from-file=ssh-publickey=~/.ssh/id_rsa.pub
# Create a secret from a key pair
k create secret tls [secret-name] --cert=path/to/tls.cert \
--key=path/to/tls.key
# Get secrets
k get secrets
# Get YAML for specific secret
k get secrets [secret-name] -o yaml
Accessing a secret: environment variables
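A sketch of exposing a secret key as an environment variable (matches the generic secret created above with --from-literal=pwd=...):
spec:
  containers:
  - name: my-app
    image: nginx:alpine
    env:
    - name: DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: [secret-name]      # secret created earlier
          key: pwd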
Accessing a secret: volumes
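And a sketch of mounting a secret as a volume; each key becomes a file in the mount path:
spec:
  containers:
  - name: my-app
    image: nginx:alpine
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: [secret-name]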
# View the logs for a pod's container
k logs [pod-name]
# View the logs for a specific container within a pod
k logs [pod-name] -c [container-name]
# View the logs for a previously running pod
k logs -p [pod-name]
# Stream a pod's logs
k logs -f [pod-name]
# Describe a pod
k describe pod [pod-name]
# Change a pod's output format
k get pod [pod-name] -o yaml
# Change a deployment's output format
k get deployment [deployment-name] -o yaml
# Shell into a pod container
k exec -it [pod-name] -- sh