Overview
This page should be a collection of useful git commands I usually forget about
Commands
Find branches ordered by last commit date
git for-each-ref --sort=committerdate refs/heads/ --format='%(committerdate:short) %(refname:short)'
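A variant I also tend to forget, same command but sorted newest first:
git for-each-ref --sort=-committerdate refs/heads/ --format='%(committerdate:short) %(refname:short)'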
Randomly collected topics
The CKS curriculum has a chapter about CIS Benchmarks with kube-bench.
The benchmarks come from https://www.cisecurity.org/benchmark/kubernetes/. You have to go to "Download Latest CIS Benchmark" and register, then you'll get a link and can download all relevant benchmarks. For this how-to we used "CIS Kubernetes V1.20 Benchmark - v1.0.0" from 2021-05-19.
This documents which changes are needed to fix all FAILs.
First check https://github.com/aquasecurity/kube-bench/blob/main/docs/platforms.md and take note of which benchmark version you want to check against.
For my cluster running K8s version 1.23.1 we use benchmark version cis-1.20.
wget -O kube-bench-control-plane.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-master.yaml
wget -O kube-bench-node.yaml https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job-node.yaml
Now you can change the startup command in the job YAMLs to add the benchmark you want to use. You could also define --version instead of --benchmark, but normally the detection is done automatically:
containers:
- name: kube-bench
  image: aquasec/kube-bench:latest
  command: ["kube-bench", "run", "--benchmark", "cis-1.20", "--targets", "master"]
Now create the jobs:
kubectl create -f kube-bench-control-plane.yaml
kubectl create -f kube-bench-node.yaml
kubectl get pods
kubectl logs kube-bench-master-<RAND> > bench-master.log
kubectl logs kube-bench-node-<RAND> > bench-worker.log
Get the output for master:
$ grep FAIL bench-master.log
[FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)
[FAIL] 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[FAIL] 1.2.15 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[FAIL] 1.2.18 Ensure that the --insecure-port argument is set to 0 (Automated)
[FAIL] 1.2.20 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.2.21 Ensure that the --audit-log-path argument is set (Automated)
[FAIL] 1.2.22 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)
[FAIL] 1.2.23 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)
[FAIL] 1.2.24 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)
[FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated)
Get the output of the worker:
$ grep FAIL bench-worker.log
[FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
The failed controls are documented below.
Fixes are based on "CIS Kubernetes V1.20 Benchmark" v1.0.0 - 2021-05-19.
After fixing, we have three open controls, which cannot be fixed right now:
[FAIL] 1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)
[FAIL] 1.2.15 Ensure that the admission control plugin PodSecurityPolicy is set (Automated)
[FAIL] 1.2.21 Ensure that the --audit-log-path argument is set (Automated)
The reasons are explained below in their respective sections.
1.1.12: By default there is no etcd user.
First add the user (from https://devopscube.com/setup-etcd-cluster-linux/):
groupadd -f -g 1501 etcd
useradd -c "etcd user" -d /var/lib/etcd -s /bin/false -g etcd -u 1501 etcd
chown etcd:etcd /var/lib/etcd/
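A quick check that the ownership change took effect (assuming the default data directory /var/lib/etcd):
$ stat -c '%U:%G' /var/lib/etcd/
etcd:etcd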
1.2.5 Follow the Kubernetes documentation and setup the TLS connection between the apiserver and kubelets. Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --kubelet-certificate-authority parameter to the path to the cert file for the certificate authority.
--kubelet-certificate-authority=<ca-string>
The obvious solution of adding --kubelet-certificate-authority=/etc/kubernetes/pki/etcd/ca.crt to /etc/kubernetes/manifests/kube-apiserver.yaml does not really work. After rerunning the kube-bench job, the master pod will have these logs:
$ kubectl logs kube-bench-master-jhnp9
Error from server: Get "https://213.167.224.157:10250/containerLogs/default/kube-bench-master-jhnp9/kube-bench": x509: cannot validate certificate for 213.167.224.157 because it doesn't contain any IP SANs
According to https://stackoverflow.com/q/63994701/7311363 "you first need to make sure you got Kubelet authentication and Kubelet authorization enabled. After that you can follow the Kubernetes documentation and setup the TLS connection between the apiserver and kubelet"
Sounds interesting, to be revisited once all FAILs are fixed.
Set it back to the default.
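For reference, the route the stackoverflow answer points at would roughly be to let the kubelets request serving certificates signed by the cluster CA and then approve the CSRs; this is only a sketch and was not applied on this cluster:
# in the kubelet config file (/var/lib/kubelet/config.yaml), then restart the kubelet
serverTLSBootstrap: true
# afterwards approve the pending kubelet serving CSRs
kubectl get csr
kubectl certificate approve <csr-name>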
PodSecurityPolicy will be deprecated soon:
https://kubernetes.io/blog/2021/04/06/podsecuritypolicy-deprecation-past-present-and-future/
1.2.15 Follow the documentation and create Pod Security Policy objects as per your environment.
Then, edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --enable-admission-plugins parameter to a value that includes PodSecurityPolicy:
--enable-admission-plugins=...,PodSecurityPolicy,...
Then restart the API Server.
Check ps -ef | grep kube-apiserver and ensure PodSecurityPolicy is included in --enable-admission-plugins.
If not, add it to /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
After setting that, the pod for our CIS benchmark job cannot be started:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 27s (x3 over 57s) job-controller Error creating: pods "kube-bench-master-" is forbidden: PodSecurityPolicy: no providers available to validate pod request
Checking for PodSecurityPolicies:
$ kubectl get podsecuritypolicies.policy
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
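For completeness: before enabling the plugin, at least one permissive PodSecurityPolicy plus RBAC allowing the service accounts to use it would be needed, roughly like the sketch below (the names are mine; we did not apply this since PSP is deprecated anyway):
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: allow-all           # hypothetical name
spec:
  privileged: true
  allowPrivilegeEscalation: true
  volumes: ['*']
  hostNetwork: true
  hostPID: true
  hostIPC: true
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
# service accounts additionally need RBAC permission to "use" the policy
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-allow-all
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  resourceNames: ['allow-all']
  verbs: ['use']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-allow-all
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-allow-all
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts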
1.2.18 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--insecure-port=0
Check ps -ef | grep kube-apiserver and verify that the --insecure-port argument is set to 0.
If not, add it to /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --insecure-port=0
The audit log (1.2.21 - 1.2.24) is important for logging security-relevant actions.
Edit /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --audit-log-path=/var/log/apiserver/audit.log
    - --audit-log-maxage=30
    - --audit-log-maxbackup=10
    - --audit-log-maxsize=100
WARNING:
If you set audit-log-path, the kubelet cannot start and will show errors like:
Jan 15 19:43:28 server.example.com kubelet[13157]: E0115 19:43:28.784974 13157 kubelet.go:2422] "Error getting node" err="node \"server.example.com\" not found"
Jan 15 19:43:28 server.example.com kubelet[13157]: E0115 19:43:28.886336 13157 kubelet.go:2422] "Error getting node" err="node \"server.example.com\" not found"
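A likely cause (an assumption on my side, not verified on this cluster) is that the kube-apiserver static pod has no access to the configured log directory on the host; the Kubernetes audit documentation mounts it via hostPath, roughly like this:
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    volumeMounts:
    - mountPath: /var/log/apiserver
      name: audit-log
      readOnly: false
  volumes:
  - name: audit-log
    hostPath:
      path: /var/log/apiserver
      type: DirectoryOrCreate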
Disable profiling to reduce the potential attack surface. Profiling is used for the identification of specific performance bottlenecks, and if we aren't actively troubleshooting, we can disable it.
We have to do this for:
/etc/kubernetes/manifests/kube-apiserver.yaml (1.2.20)
/etc/kubernetes/manifests/kube-controller-manager.yaml (1.3.2)
/etc/kubernetes/manifests/kube-scheduler.yaml (1.4.1)
spec:
  containers:
  - command:
    - kube-<SYSTEMPOD>
    ...
    - --profiling=false
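A quick check after the kubelet has restarted the static pods:
grep profiling /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/manifests/kube-controller-manager.yaml /etc/kubernetes/manifests/kube-scheduler.yaml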
For the worker node there is only one FAIL in the default configuration.
On my cluster the control plane node also works as a worker node, therefore the fix must be applied there as well.
4.2.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true.
If using command line arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
According to the CIS Benchmark we should add --protect-kernel-defaults=true to KUBELET_SYSTEM_PODS_ARGS. This environment variable does not exist yet, so we create it and edit the ExecStart command in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
Environment="KUBELET_SYSTEM_PODS_ARGS=--protect-kernel-defaults=true"
Then add $KUBELET_SYSTEM_PODS_ARGS to the ExecStart line, so it looks like this:
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_SYSTEM_PODS_ARGS
sudo systemctl daemon-reload
sudo systemctl restart kubelet.service
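To double-check that the flag was picked up:
ps -ef | grep kubelet | grep protect-kernel-defaults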
Now CIS Benchmarks should show everything ok for the nodes.
This how-to documents how to set up a K8s cluster at hosttech.
There are two VMs:
Via https://www.myhosttech.eu/user-products/ it's possible to re-install the operating system.
Configure after re-install:
vigr and add the user to the sudo group
visudo and ensure:
%sudo ALL=(ALL:ALL) NOPASSWD: ALL
Add to /etc/hosts:
127.0.1.1 xyz.chloesoe.ch xyz
hostnamectl set-hostname xyz.chloesoe.ch
update-alternatives --config editor
/etc/bash.bashrc
~/.vimrc
set laststatus=2
set hlsearch
set backup
set backupdir=~/.vim/tmp,/tmp,~/
set history=5000
~/.bashrc
alias ls='ls --color --group-directories-first'
/etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
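Restart the SSH daemon afterwards so the new settings apply (the service is called ssh on Debian/Ubuntu, sshd on some other distributions):
sudo systemctl restart ssh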
~/.ssh/authorized_keys
echo "source <(kubectl completion bash)" >> ~/.bashrc
See https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
Commands are from the acloud.guru course, adjusted where needed.
On all nodes, set up containerd. You will need to load some kernel modules and modify some system settings as part of this process:
cat << EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system
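To verify the settings are active:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables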
Install and configure containerd.
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
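Optionally check that containerd came up cleanly:
sudo systemctl status containerd --no-pager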
On all nodes, disable swap:
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
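Quick check that swap is really off (the command should print nothing):
swapon --show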
On all nodes, install kubeadm, kubelet, and kubectl.
sudo apt-get update && sudo apt-get install -y apt-transport-https curl gnupg2
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
export kversion=1.23.1-00
sudo apt install -y kubelet=$kversion kubeadm=$kversion kubectl=$kversion
sudo apt-mark hold kubelet kubeadm kubectl
sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.23.1
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify the cluster is working.
kubectl get nodes
Install the Calico network add-on.
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Get join command for worker node:
kubeadm token create --print-join-command
sudo kubeadm join 213.xxx.yyy.zzz:6443 --token <hash> --discovery-token-ca-cert-hash sha256:<shahash>
Label worker nodes:
kubectl label node lauenen.chloesoe.ch node-role.kubernetes.io/worker=worker
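Verify the label; the worker node should now show worker in the ROLES column:
kubectl get nodes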
After you have installed PiHole according to "Install PiHole in docker-compose on Ubuntu Server" you probably want to run regular updates.
With docker-compose you can simply run this:
cd /opt/pihole/
sudo docker-compose stop
sudo docker-compose rm -f
sudo docker-compose pull
sudo docker-compose up -d
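After the container is back up you can check the running Pi-hole version (assuming the container name pihole from the compose file):
sudo docker exec pihole pihole -v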
This will document how to install PiHole on an Ubuntu server. PiHole will run in docker-compose, including some volumes from the host, so data is preserved across updates. The docker container for pihole itself should be ephemeral.
The following steps are done according to pi-hole/docker-pi-hole.
Run these steps:
Install docker-compose on yourserver.example.com with sudo apt install docker-compose
For the following we use /opt/pihole as the install folder.
Create docker-compose.yaml in /opt/pihole/; below is the final version including the volumes which are added later:
version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    hostname: yourserver-pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
      - "443:443/tcp"
    environment:
      ADMIN_EMAIL: 'pihole@example.com'
      DNS1: '9.9.9.9'
      DNS2: '1.1.1.1'
      PIHOLE_BASE: '/opt/pihole'
      TZ: 'Europe/Zurich'
      WEBPASSWORD: '...'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
      - './letsencrypt:/opt/letsencrypt/'
      - './letsencrypt/lighttpd-external.conf:/etc/lighttpd/external.conf'
      - './fakewebroot/.well-known:/var/www/html/.well-known'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
You can now start it with: docker-compose up --detach
You can now connect to http://yourserver.example.com/admin. Make sure you don't log in with the defined WEBPASSWORD yet, since your connection isn't encrypted.
The admin interface isn't encrypted yet, therefore we want to run Let's Encrypt (certbot) on the host machine.
Below was done with information from https://discourse.pi-hole.net/t/enabling-https-for-your-pi-hole-web-interface/5771
Create the folders /opt/pihole/fakewebroot and /opt/pihole/letsencrypt.
Add the volume ./letsencrypt:/opt/letsencrypt/ to copy the combined.pem and fullchain.pem into.
Add the volume ./fakewebroot/.well-known:/var/www/html/.well-known which will be used by certbot to save the challenge.
sudo certbot certonly --webroot /opt/pihole/fakewebroot/ -d yourserver.example.com
sudo cat /etc/letsencrypt/live/yourserver.example.com/privkey.pem /etc/letsencrypt/live/yourserver.example.com/cert.pem > /opt/pihole/letsencrypt/combined.pem
Add the volume ./letsencrypt/lighttpd-external.conf:/etc/lighttpd/external.conf.
Add the following to the lighttpd-external.conf, and make sure you have the correct file names for ssl.pemfile and ssl.ca-file:
$HTTP["host"] == "yourserver.example.com" {
# Ensure the Pi-hole Block Page knows that this is not a blocked domain
setenv.add-environment = ("fqdn" => "true")
# Enable the SSL engine with a LE cert, only for this specific host
$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.pemfile = "/opt/letsencrypt/combined.pem"
ssl.ca-file = "/opt/letsencrypt/fullchain.pem"
ssl.honor-cipher-order = "enable"
ssl.cipher-list = "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH"
ssl.use-sslv2 = "disable"
ssl.use-sslv3 = "disable"
}
# Redirect HTTP to HTTPS
$HTTP["scheme"] == "http" {
$HTTP["host"] =~ ".*" {
url.redirect = (".*" => "https://%0$0")
}
}
}
In the section before, we already added the well-known folder /opt/pihole/fakewebroot/ and it is already added as a volume in docker-compose.yaml.
We now need a post action for the renewal timer, so create a post hook file with:
sudo vim /etc/letsencrypt/renewal-hooks/post/redeploy-docker.sh
With this content:
#!/bin/bash
cat /etc/letsencrypt/live/yourserver.example.com/privkey.pem /etc/letsencrypt/live/yourserver.example.com/cert.pem > /opt/pihole/letsencrypt/combined.pem
cat /etc/letsencrypt/live/yourserver.example.com/fullchain.pem /etc/letsencrypt/live/yourserver.example.com/cert.pem > /opt/pihole/letsencrypt/fullchain.pem
/usr/bin/docker-compose -f /opt/pihole/docker-compose.yaml down &>/dev/null
/usr/bin/docker-compose -f /opt/pihole/docker-compose.yaml up --detach &>/dev/null
And make it executable
sudo chmod +x /etc/letsencrypt/renewal-hooks/post/redeploy-docker.sh
This will copy the new certificates into the correct folder and ensures the docker container is restarted, so it will have the new certificate.
You can test whether your script works properly with a dry run:
sudo certbot renew --dry-run
If docker ps shows a new container id after that, the container was restarted successfully.
With sudo openssl x509 -noout -text -in /opt/letsencrypt/combined.pem | grep Validity -A3 you will see whether the new certificate was copied correctly (this doesn't really work shortly after the installation, because you don't have a new certificate yet).
Now you can use the IP address of yourserver.example.com as your DNS server address.
You can now use https://yourserver.example.com/admin/ to check your server.
To add your non-privileged user to the docker group, so it can run docker commands, run this command:
sudo usermod -a -G docker $USER
source:
https://techoverflow.net/...
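Note that the new group membership only applies to new sessions; log out and back in (or use newgrp), then a quick check:
newgrp docker
docker ps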