
CIS Benchmarks K8s


Overview CIS Benchmarks on K8s

The CKS curriculum has a chapter about CIS Benchmarks with kube-bench.

Benchmarks come from https://www.cisecurity.org/benchmark/kubernetes/. You have to go to "Download Latest CIS Benchmark" and register; then you'll get a link and can download all relevant benchmarks. For this how-to, the fixes are based on "CIS Kubernetes V1.20 Benchmark - v1.0.0" from 2021-05-19.

This document describes the changes needed to fix all FAILs.


Get Benchmarks

First check https://github.com/aquasecurity/kube-bench/blob/main/docs/platforms.md and note which benchmark version you want to check against.

For my cluster running K8s version 1.23.1 we use the cis-1.20 benchmark.

wget -O kube-bench-control-plane.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml
wget -O kube-bench-node.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml

Now you can change the startup command in the job YAMLs to set the benchmark you want to use. You could also pass --version instead of --benchmark, but kube-bench should detect the right version automatically:

      containers:
        - name: kube-bench
          image: aquasec/kube-bench:latest
          command: ["kube-bench", "run", "--benchmark", "cis-1.20", "--targets", "master"]

Now create the jobs:

kubectl create -f kube-bench-control-plane.yaml
kubectl create -f kube-bench-node.yaml

kubectl get pods

kubectl logs kube-bench-master-<RAND> > bench-master.log
kubectl logs kube-bench-node-<RAND> > bench-worker.log
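
If the pods are not Completed yet, you can also wait for the jobs to finish before grabbing the logs (job names taken from the pod names above):

kubectl wait --for=condition=complete job/kube-bench-master job/kube-bench-node --timeout=120s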

Control Plane

Get the output for master:

$ grep FAIL bench-master.log 
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
[FAIL] 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[FAIL] 1.2.15 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[FAIL] 1.2.18 Ensure that the --insecure-port argument is set to 0 (Automated)
[FAIL] 1.2.20 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.2.21 Ensure that the --audit-log-path argument is set (Automated)
[FAIL] 1.2.22 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
[FAIL] 1.2.23 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
[FAIL] 1.2.24 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)

Worker Nodes

Get the output of the worker:

$ grep FAIL bench-worker.log 
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)

Fixing Control Plane

The failed controls are documented below.

Fixes are based on "CIS Kubernetes V1.20 Benchmark" v1.0.0 - 2021-05-19.

Summary

After fixing, three controls remain open which cannot be fixed right now:

[FAIL] 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[FAIL] 1.2.15 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[FAIL] 1.2.21 Ensure that the --audit-log-path argument is set (Automated)

See their respective sections below for the reasons.

1.1.12

By default there is no etcd user, so first add one (from https://devopscube.com/setup-etcd-cluster-linux/):

groupadd -f -g 1501 etcd
useradd -c "etcd user" -d /var/lib/etcd -s /bin/false -g etcd -u 1501 etcd
chown etcd:etcd /var/lib/etcd/
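
Afterwards verify the ownership, which is essentially what the benchmark checks:

stat -c %U:%G /var/lib/etcd/
# expected: etcd:etcd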

1.2.5

1.2.5 Follow the Kubernetes documentation and setup the TLS connection between the apiserver and kubelets. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the
--kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>

The obvious solution of adding - --kubelet-certificate-authority=/etc/kubernetes/pki/etcd/ca.crt to /etc/kubernetes/manifests/kube-apiserver.yaml does not really work. After rerunning the kube-bench job, fetching the logs of the master pod fails:

$ kubectl logs kube-bench-master-jhnp9 
Error from server: Get "https://213.167.224.157:10250/containerLogs/default/kube-bench-master-jhnp9/kube-bench": x509: cannot validate certificate for 213.167.224.157 because it doesn't contain any IP SANs

According to https://stackoverflow.com/q/63994701/7311363 "you first need to make sure you got Kubelet authentication and Kubelet authorization enabled. After that you can follow the Kubernetes documentation and setup the TLS connection between the apiserver and kubelet"

Sounds interesting, to be revisited once all FAILs are fixed.

For now, set it back to the default.
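
For a later attempt, a rough sketch of the commonly documented route (untested here): let the kubelets request serving certificates signed by the cluster CA via serverTLSBootstrap in the kubelet configuration, approve the resulting CSRs, and only then point the apiserver at the cluster CA instead of the etcd CA.

# /var/lib/kubelet/config.yaml on every node (untested sketch)
serverTLSBootstrap: true

# restart kubelet, then approve the kubelet serving CSRs
kubectl get csr
kubectl certificate approve <csr-name>

# /etc/kubernetes/manifests/kube-apiserver.yaml
# - --kubelet-certificate-authority=/etc/kubernetes/pki/ca.crt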

1.2.15

PodSecurityPolicy is deprecated and will be removed in v1.25:

https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/

1.2.15 Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --enable-admission-plugins parameter to a value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server.

Check ps -ef | grep kube-apiserver and ensure PodSecurityPolicy is included in --enable-admission-plugins.

If not, add it to /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy

After setting that, the pod for our CIS benchmark job cannot be started:

  Type     Reason        Age                From            Message
  ----     ------        ----               ----            -------
  Warning  FailedCreate  27s (x3 over 57s)  job-controller  Error creating: pods "kube-bench-master-" is forbidden: PodSecurityPolicy: no providers available to validate pod request

Checking for PodSecurityPolicies:

$ kubectl get podsecuritypolicies.policy 
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
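
Since no PSP exists, nothing can be admitted once the plugin is enabled. A minimal, wide-open PodSecurityPolicy plus a binding for all service accounts and authenticated users would get pods admitted again; this is only an illustrative sketch (a fully permissive PSP defeats the purpose of the control), and because PSP is being removed, this control is left open instead:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-privileged
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated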

1.2.18

1.2.18 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--insecure-port=0

check

ps -ef | grep kube-apiserver

Verify that the --insecure-port argument is set to 0.

fix

If not, add it to /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --insecure-port=0
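
As an additional check (the default insecure port is 8080), nothing should be listening there any more:

sudo ss -tlnp | grep 8080
# no output expected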

1.2.21 - 1.2.24 Audit Log

The audit log is important for recording security-relevant actions.

Edit /etc/kubernetes/manifests/kube-apiserver.yaml

spec:
  containers:
  - command:
    - kube-apiserver
...
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100

WARNING:
If you set audit-log-path, kubelet cannot start and will show errors like:

Jan 15 19:43:28 server.example.com kubelet[13157]: E0115 19:43:28.784974   13157 kubelet.go:2422] "Error getting node" err="node \"server.example.com\" not found"
Jan 15 19:43:28 server.example.com kubelet[13157]: E0115 19:43:28.886336   13157 kubelet.go:2422] "Error getting node" err="node \"server.example.com\" not found"
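
One plausible cause (not verified here) is that the log directory is not mounted into the static pod. The Kubernetes auditing documentation mounts it via hostPath, roughly like this (paths as used above):

spec:
  containers:
  - command:
    - kube-apiserver
    ...
    volumeMounts:
    - mountPath: /var/log/apiserver
      name: audit-log
  volumes:
  - name: audit-log
    hostPath:
      path: /var/log/apiserver
      type: DirectoryOrCreate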

1.2.20, 1.3.2, 1.4.1 Disable Profiling

Disable profiling to reduce the potential attack surface. Profiling is used for the identification of specific performance bottlenecks, and if we aren't actively troubleshooting, we can disable it.

We have to do this for:

  • /etc/kubernetes/manifests/kube-apiserver.yaml (1.2.20)
  • /etc/kubernetes/manifests/kube-controller-manager.yaml (1.3.2)
  • /etc/kubernetes/manifests/kube-scheduler.yaml (1.4.1)
spec:
  containers:
  - command:
    - kube-<SYSTEMPOD>
    ...
    - --profiling=false
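
Once the static pods have restarted, a quick check that the flag is in place for all three components:

ps -ef | egrep 'kube-(apiserver|controller-manager|scheduler)' | grep -- --profiling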

Fixing Worker Node

There is only one FAIL in the default configuration.
On my cluster the control plane also acts as a worker node, so the fix must be applied there as well.

4.2.6

4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

According to the CIS Benchmark we should add --protect-kernel-defaults=true to KUBELET_SYSTEM_PODS_ARGS. This environment variable does not exist, so we create it and edit the ExecStart command (an alternative via the kubelet config file is sketched after these steps):

  1. Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  2. Add
    Environment="KUBELET_SYSTEM_PODS_ARGS=--protect-kernel-defaults=true"
  3. Add $KUBELET_SYSTEM_PODS_ARGS to ExecStart so that it looks like this:
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_SYSTEM_PODS_ARGS
  4. sudo systemctl daemon-reload
  5. sudo systemctl restart kubelet.service
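
Alternatively (untested here), the same setting can go into the kubelet config file referenced by --config, on kubeadm clusters typically /var/lib/kubelet/config.yaml, followed by the same kubelet restart:

# /var/lib/kubelet/config.yaml
protectKernelDefaults: true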

Now the CIS benchmark should show everything OK for the nodes.


K8s at Hosttech


This how-to documents how to set up a K8s cluster at Hosttech.

Base setup Ubuntu VMs

There are two VMs:

  • saanen.chloesoe.ch
  • lauenen.chloesoe.ch

Via https://www.myhosttech.eu/user-products/ it's possible to re-install the operating system.

Configure after re-install:

  • vigr and add user to sudo group
  • visudo and ensure
    %sudo   ALL=(ALL:ALL) NOPASSWD: ALL
  • Set hostname
    • /etc/hosts 127.0.1.1 xyz.chloesoe.ch xyz
    • hostnamectl set-hostname xyz.chloesoe.ch
  • update-alternatives --config editor
  • enable bash completion in interactive shells in /etc/bash.bashrc
  • ~/.vimrc
    set laststatus=2
    set hlsearch
    set backup
    set backupdir=~/.vim/tmp,/tmp,~/
    set history=5000
  • ~/.bashrc
    • alias ls='ls --color --group-directories-first'
  • /etc/ssh/sshd_config (restart the SSH daemon afterwards, see after this list)
    • PasswordAuthentication no
    • PermitRootLogin no
  • copy your key to ~/.ssh/authorized_keys
  • echo "source <(kubectl completion bash)" >> ~/.bashrc

Install K8s

See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Commands are from the acloud.guru course, adjusted where needed.

On all nodes, set up containerd. You will need to load some kernel modules and modify some system settings as part of this process:

cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
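
Verify that the modules are loaded and the settings were applied:

lsmod | grep -e overlay -e br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables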

Install and configure containerd.

sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd

On all nodes, disable swap:

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

On all nodes, install kubeadm, kubelet, and kubectl.

sudo apt-get update && sudo apt-get install -y apt-transport-https curl gnupg2

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

sudo apt-get update

export kversion=1.23.1-00
sudo apt install -y kubelet=$kversion kubeadm=$kversion kubectl=$kversion

sudo apt-mark hold kubelet kubeadm kubectl

Only on the control plane:

sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.23.1

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the cluster is working.

kubectl get nodes

Install the Calico network add-on.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
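
It can take a minute until the Calico and CoreDNS pods are Running:

kubectl get pods -n kube-system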

Get join command for worker node:

kubeadm token create --print-join-command

Worker node

sudo kubeadm join 213.xxx.yyy.zzz:6443 --token <hash> --discovery-token-ca-cert-hash sha256:<shahash> 

after joining

Label worker nodes:

kubectl label node lauenen.chloesoe.ch node-role.kubernetes.io/worker=worker
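
Finally, verify that both nodes are Ready and the worker role shows up:

kubectl get nodes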