kubectl cheat sheet
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8
Initialise a Kubernetes control-plane node. The pod network CIDR is required by the flannel networking set up later.
root@nano1:/# kubeadm init --pod-network-cidr=10.244.0.0/16
W0311 12:36:59.643230 8155 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0311 12:36:59.643396 8155 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [nano1.home.lan kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.21]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [nano1.home.lan localhost] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [nano1.home.lan localhost] and IPs [192.168.2.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0311 12:39:37.242102 8155 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0311 12:39:37.247664 8155 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 64.509661 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node nano1.home.lan as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node nano1.home.lan as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7r9n64.3c3x2j73e0axqgxk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.21:6443 --token 7r9n64.3c3x2j73e0axqgxk \
--discovery-token-ca-cert-hash sha256:5d4fb62d6372b4bc2cfc044943d79ef8b8aa7ac342570d20c6dc48e1dbb44c67
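If the token later expires (bootstrap tokens are valid for 24h by default), a fresh join command can be printed on the control-plane node:
kubeadm token create --print-join-command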
troubleshoot
kubectl describe po coredns-6955765f44-847hz -n kube-system
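Other useful checks when a pod is stuck (the pod name is the same one from the describe example above; events are sorted oldest first):
kubectl logs -n kube-system coredns-6955765f44-847hz
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp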
Setup environment
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check control-plane status
root@nanopifire3:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-ln6b2 1/1 Running 0 8m39s
kube-system coredns-6955765f44-ml8bc 1/1 Running 0 8m39s
kube-system etcd-nanopifire3 1/1 Running 0 8m39s
kube-system kube-apiserver-nanopifire3 1/1 Running 0 8m39s
kube-system kube-controller-manager-nanopifire3 1/1 Running 2 8m39s
kube-system kube-proxy-sfp8f 1/1 Running 0 8m39s
kube-system kube-scheduler-nanopifire3 1/1 Running 2 8m39s
Flannel network - Flannel is a very simple overlay network that satisfies Kubernetes requirements.
https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md
When initialising, the pod network CIDR was defined:
--pod-network-cidr=10.244.0.0/16
review config
kubectl -n kube-system get cm kubeadm-config -oyaml
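Optional quick checks that the CIDR made it into the cluster config and onto the node (node name as in this cluster):
kubectl -n kube-system get cm kubeadm-config -o yaml | grep -i podSubnet
kubectl get node nano1.home.lan -o jsonpath='{.spec.podCIDR}'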
launch flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@nanopifire3:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
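The flannel pods are created by the DaemonSets above; to watch them come up (assuming the upstream manifest's app=flannel pod label):
kubectl -n kube-system get pods -l app=flannel -w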
show pods
kubectl get pods --all-namespaces
root@nanopifire3:~# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-ln6b2 1/1 Running 0 11m
kube-system coredns-6955765f44-ml8bc 1/1 Running 0 11m
kube-system etcd-nanopifire3 1/1 Running 0 11m
kube-system kube-apiserver-nanopifire3 1/1 Running 0 11m
kube-system kube-controller-manager-nanopifire3 1/1 Running 2 11m
kube-system kube-flannel-ds-arm64-8dn2k 1/1 Running 0 3m17s
kube-system kube-proxy-sfp8f 1/1 Running 0 11m
kube-system kube-scheduler-nanopifire3 1/1 Running 2 11m
troubleshoot
kubectl logs --namespace kube-system kube-flannel-ds-arm64-vhmvb -c kube-flannel
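If no flannel pod is running at all, check the DaemonSets and describe the pod (pod name as above):
kubectl -n kube-system get ds
kubectl -n kube-system describe pod kube-flannel-ds-arm64-vhmvb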
example config
/etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ],
  "cniVersion": "0.2.0"
}
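On each node you can verify that the CNI config was written and that flannel picked up its subnet (paths are the flannel defaults):
cat /etc/cni/net.d/10-flannel.conflist
cat /run/flannel/subnet.env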
add nodes
root@nano2:~# kubeadm join 192.168.2.21:6443 --token 7r9n64.3c3x2j73e0axqgxk \
> --discovery-token-ca-cert-hash sha256:5d4fb62d6372b4bc2cfc044943d79ef8b8aa7ac342570d20c6dc48e1dbb44c67
W0311 12:52:53.410629 14815 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@nano3:~# kubeadm join 192.168.2.21:6443 --token 7r9n64.3c3x2j73e0axqgxk \
> --discovery-token-ca-cert-hash sha256:5d4fb62d6372b4bc2cfc044943d79ef8b8aa7ac342570d20c6dc48e1dbb44c67
W0311 12:52:54.731761 14780 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
From the control-plane node (nano1.home.lan)
root@nano1:/etc# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nano1.home.lan Ready master 14m v1.17.3
nano2.home.lan Ready <none> 118s v1.17.3
nano3.home.lan Ready <none> 116s v1.17.3
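Optionally label the workers so ROLES shows something other than <none> (cosmetic only; node names from this cluster):
kubectl label node nano2.home.lan node-role.kubernetes.io/worker=
kubectl label node nano3.home.lan node-role.kubernetes.io/worker=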
nginx-ingress
https://kubernetes.github.io/ingress-nginx/deploy/
curl -O https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
kubectl apply -f mandatory.yaml
Update the image to the arm64 build:
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-arm64:0.30.0
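A one-liner to patch the manifest, assuming the stock mandatory.yaml references the non-arm64 image at tag 0.30.0 (check the file first):
sed -i 's|nginx-ingress-controller:0.30.0|nginx-ingress-controller-arm64:0.30.0|' mandatory.yaml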
root@nano1:~# kubectl apply -f mandatory.yaml
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
get status
root@nano1:~# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-5c486bc575-hmkh5 1/1 Running 0 2m52s
root@nano1:~# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/baremetal/service-nodeport.yaml
service/ingress-nginx created
root@nano1:~# kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-5c486bc575-hmkh5 1/1 Running 0 5m31s
root@nano1:~# POD_NAMESPACE=ingress-nginx
root@nano1:~# POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
root@nano1:~#
root@nano1:~# kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.30.0
Build: git-7e65b90c4
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.8
-------------------------------------------------------------------------------
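The NodePort service created above is assigned random high ports on every node; check which ones were allocated before testing:
kubectl get svc -n ingress-nginx
The example Ingress and NodePort Service manifests used here follow.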
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nano-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nginx-deployment
        backend:
          serviceName: nginx-deployment
          servicePort: 8080
      - path: /hello-world-go
        backend:
          serviceName: helloworld-v1
          servicePort: 8080
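The Ingress above assumes backend Services named nginx-deployment and helloworld-v1 listening on port 8080; they must exist before the paths resolve. A minimal sketch for the nginx backend (names and ports chosen to match the Ingress, stock nginx image):
kubectl create deployment nginx-deployment --image=nginx
kubectl expose deployment nginx-deployment --port=8080 --target-port=80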
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  externalIPs:
  - 192.168.0.71
  - 192.168.0.72
  - 192.168.0.73
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
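Save the manifests to a file, apply them, then test a path through one of the external IPs (file name is arbitrary; the IP is one of those listed in the Service):
kubectl apply -f nano-ingress.yaml
curl http://192.168.0.71/nginx-deployment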
prometheus monitoring
docker clean ups
docker rm -f $(docker ps -qa)
docker rmi -f $(docker images -q)
docker volume rm $(docker volume ls -q)
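Alternatively, docker's built-in prune does roughly the same in one command (it prompts before deleting):
docker system prune -a --volumes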
If you have reset the cluster and then get certificate errors:
root@nanopifire3:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
unset KUBECONFIG
export KUBECONFIG=/etc/kubernetes/admin.conf
mv $HOME/.kube $HOME/.kube.bak
mkdir $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
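Then confirm the refreshed kubeconfig works:
kubectl cluster-info
kubectl get nodes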
https://docs.min.io/minio/baremetal/