一、Master Node Initialization
1. On the Master01 node, create the kubeadm-config.yaml configuration file as follows:
$ vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.60 # IP address of the Master01 node
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.1.65 # VIP address / cloud load-balancer address
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.1.65:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.26.9 # must match the installed kubeadm version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 10.0.0.0/16
scheduler: {}
2. On Master01, migrate the config to the API version used by the installed kubeadm (kubeadm 1.26 uses kubeadm.k8s.io/v1beta3):
$ kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
3. From Master01, copy the new.yaml file to the other master nodes:
$ for i in master02 master03; do scp new.yaml $i:/root/; done
4. Pre-pull the images on all master nodes to speed up initialization (the other nodes need no configuration changes, not even the IP address):
$ kubeadm config images pull --config /root/new.yaml
5. Enable kubelet to start on boot on all nodes:
$ systemctl enable --now kubelet
6. Initialize Master01. Initialization generates the certificates and configuration files under /etc/kubernetes; the other master nodes can then join Master01:
$ kubeadm init --config /root/new.yaml --upload-certs

Note:
If initialization fails, reset and then initialize again with the commands below (do not run them unless initialization failed):
$ kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube
7. On Master01, set up the kubeconfig used to access the Kubernetes cluster:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
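As kubeadm's init output also notes, root can instead point the KUBECONFIG environment variable at the admin config without copying it; a minimal sketch:

```shell
# Use the admin kubeconfig for this shell session only (no copy needed).
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```

The copy-to-$HOME approach above survives reboots and works for non-root users, so it is usually preferable.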
# Check node status
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 4m5s v1.26.9
二、Adding Masters and Nodes to the Cluster
1. Join Master02 and Master03 to the cluster:
$ kubeadm join 192.168.1.65:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:c43fc602f2c19d3322d48dc68b150f940e4679cd74d5fc1542183a7f1dc7fa62 \
--control-plane --certificate-key f2309d5858bb593a047e797831f23ba18cafd234bccdd138f7d0279004abd34e
If the token has expired (its TTL is 24 hours), generate a new one:
$ kubeadm token create --print-join-command
You also need to regenerate the --certificate-key:
$ kubeadm init phase upload-certs --upload-certs
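Putting the two regenerated values together: the output of `kubeadm token create --print-join-command` joins a worker as-is, while a new control-plane node additionally needs --control-plane and the fresh --certificate-key. A sketch with placeholder values (substitute the real values printed by the two commands above):

```shell
# Placeholders stand in for the values printed by the two commands above.
JOIN_CMD="kubeadm join 192.168.1.65:16443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>"
CERT_KEY="<key printed by 'kubeadm init phase upload-certs --upload-certs'>"

# Workers run JOIN_CMD unchanged; control-plane nodes append two extra flags:
echo "${JOIN_CMD} --control-plane --certificate-key ${CERT_KEY}"
```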
2. Join Node01 and Node02 to the cluster:
$ kubeadm join 192.168.1.65:16443 --token 7t2weq.bjbawausm0jaxury \
--discovery-token-ca-cert-hash sha256:c43fc602f2c19d3322d48dc68b150f940e4679cd74d5fc1542183a7f1dc7fa62
If the token has expired (24 hours), generate a new one; it must match the token used for the master nodes:
$ kubeadm token create --print-join-command
3. On Master01, check node status:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 7m11s v1.26.9
master02 NotReady control-plane,master 2m28s v1.26.9
master03 NotReady control-plane,master 102s v1.26.9
node01 NotReady <none> 106s v1.26.9
node02 NotReady <none> 84s v1.26.9
三、Installing Calico
1. On Master01, check out the matching branch and enter the calico directory:
$ cd /root/k8s-ha-install && git checkout manual-installation-v1.26.x && cd calico/
2. On Master01, extract the Pod subnet into a variable:
$ POD_SUBNET=$(grep cluster-cidr= /etc/kubernetes/manifests/kube-controller-manager.yaml | awk -F= '{print $NF}')
3. On Master01, substitute the Pod CIDR into calico.yaml:
$ sed -i "s#POD_CIDR#${POD_SUBNET}#g" calico.yaml
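Steps 2 and 3 can be sanity-checked on stand-in files before touching the real manifests; a sketch (the /tmp file names are illustrative):

```shell
# Mimic the relevant line of kube-controller-manager.yaml and the placeholder in calico.yaml.
printf '    - --cluster-cidr=172.16.0.0/12\n' > /tmp/kcm-demo.yaml
printf 'value: "POD_CIDR"\n' > /tmp/calico-demo.yaml

# Same extraction as step 2: keep everything after the last '='.
POD_SUBNET=$(grep cluster-cidr= /tmp/kcm-demo.yaml | awk -F= '{print $NF}')

# Same substitution as step 3, then confirm the CIDR landed in the file.
sed -i "s#POD_CIDR#${POD_SUBNET}#g" /tmp/calico-demo.yaml
grep '172.16.0.0/12' /tmp/calico-demo.yaml   # now reads: value: "172.16.0.0/12"
```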
4. On Master01, install Calico:
$ kubectl apply -f calico.yaml
5. On Master01, check node status; all nodes should now be Ready:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 9h v1.26.9
master02 Ready control-plane,master 9h v1.26.9
master03 Ready control-plane,master 9h v1.26.9
node01 Ready <none> 9h v1.26.9
node02 Ready <none> 9h v1.26.9
6. Check pod status; all pods should be Running:
$ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6f6595874c-tntnr 1/1 Running 0 8m52s
calico-node-5mj9g 1/1 Running 1 (41s ago) 8m52s
calico-node-hhjrv 1/1 Running 2 (61s ago) 8m52s
calico-node-szjm7 1/1 Running 0 8m52s
calico-node-xcgwq 1/1 Running 0 8m52s
calico-node-ztbkj 1/1 Running 1 (11s ago) 8m52s
calico-typha-6b6cf8cbdf-8qj8z 1/1 Running 0 8m52s
coredns-65c54cc984-nrhlg 1/1 Running 0 9h
coredns-65c54cc984-xkx7w 1/1 Running 0 9h
etcd-master01 1/1 Running 1 (29m ago) 9h
etcd-master02 1/1 Running 1 (29m ago) 9h
etcd-master03 1/1 Running 1 (29m ago) 9h
kube-apiserver-master01 1/1 Running 1 (29m ago) 9h
kube-apiserver-master02 1/1 Running 1 (29m ago) 9h
kube-apiserver-master03 1/1 Running 2 (29m ago) 9h
kube-controller-manager-master01 1/1 Running 2 (29m ago) 9h
kube-controller-manager-master02 1/1 Running 1 (29m ago) 9h
kube-controller-manager-master03 1/1 Running 1 (29m ago) 9h
kube-proxy-7rmrs 1/1 Running 1 (29m ago) 9h
kube-proxy-bmqhr 1/1 Running 1 (29m ago) 9h
kube-proxy-l9rqg 1/1 Running 1 (29m ago) 9h
kube-proxy-nn465 1/1 Running 1 (29m ago) 9h
kube-proxy-sghfb 1/1 Running 1 (29m ago) 9h
kube-scheduler-master01 1/1 Running 2 (29m ago) 9h
kube-scheduler-master02 1/1 Running 1 (29m ago) 9h
kube-scheduler-master03 1/1 Running 1 (29m ago) 9h
四、Deploying Metrics Server
In recent Kubernetes versions, system resource metrics are collected by metrics-server, which reports CPU and memory usage for nodes and Pods.
1. Copy front-proxy-ca.crt from Master01 to node01 and node02:
$ scp /etc/kubernetes/pki/front-proxy-ca.crt node01:/etc/kubernetes/pki/front-proxy-ca.crt
$ scp /etc/kubernetes/pki/front-proxy-ca.crt node02:/etc/kubernetes/pki/front-proxy-ca.crt
2. On Master01, install metrics-server:
$ cd /root/k8s-ha-install/kubeadm-metrics-server
$ kubectl create -f comp.yaml
3. On Master01, check the metrics-server deployment:
$ kubectl get po -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE
metrics-server-5cf8885b66-jdjtb 1/1 Running 0 115s
4. On Master01, view node resource usage:
$ kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master01 130m 0% 1019Mi 12%
master02 102m 0% 1064Mi 13%
master03 93m 0% 971Mi 12%
node01 45m 0% 541Mi 6%
node02 57m 0% 544Mi 6%
五、Re-enabling Automatic Token Generation (required on 1.24+)
If the installed Kubernetes version is 1.24 or later, modify the following configuration:
1. On all master nodes, edit /etc/kubernetes/manifests/kube-apiserver.yaml and add - --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false as the second entry of the command list (if a --feature-gates flag already exists, just append ,LegacyServiceAccountTokenNoAutoGeneration=false to it):
$ vim /etc/kubernetes/manifests/kube-apiserver.yaml
...
...
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false
    - --advertise-address=192.168.1.60
    - --allow-privileged=true
...
...

2. On all master nodes, edit /etc/kubernetes/manifests/kube-controller-manager.yaml and likewise add - --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false as the second entry of the command list (if a --feature-gates flag already exists, just append ,LegacyServiceAccountTokenNoAutoGeneration=false to it):
$ vim /etc/kubernetes/manifests/kube-controller-manager.yaml
...
...
spec:
  containers:
  - command:
    - kube-controller-manager
    - --feature-gates=LegacyServiceAccountTokenNoAutoGeneration=false
    - --allocate-node-cidrs=true
...
...

3. Restart kubelet on all master nodes:
$ systemctl restart kubelet
六、Deploying the Dashboard
Official GitHub repository: https://github.com/kubernetes/dashboard
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives an overview of applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, initiate a rolling update, restart a Pod, or use a wizard to deploy a new application. Dashboard also shows the state of resources in the cluster and any errors that have occurred.
1. On Master01, install the Dashboard:
$ cd /root/k8s-ha-install/dashboard/
$ kubectl create -f .
2. On Master01, check the Dashboard service:
[root@master01 dashboard]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.0.75.7 <none> 8000/TCP 27s
kubernetes-dashboard NodePort 10.0.245.201 <none> 443:31468/TCP 27s
3. Add launch flags to the Chrome shortcut to work around certificate errors that block access to the Dashboard:
(1) Right-click the Chrome shortcut and choose Properties.
(2) Append the following flags to the Target field (note: leave a space between the existing target and --test-type):
--test-type --ignore-certificate-errors

4. On Master01, retrieve the token:
[root@master01 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-2crl2
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 61e058ad-8095-42bd-aa60-bbb58e4475a5
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1107 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjBHaWNuNTlDVVBFcUR6Zl83SVA2cXFRMlZvYTh4QklmT1JpLTJsSmNObUUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTJjcmwyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2MWUwNThhZC04MDk1LTQyYmQtYWE2MC1iYmI1OGU0NDc1YTUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.yYMviv8KFb1wGTxjTUYG7J0TtMoC0Q96U53T9yFQFYPvoU7ct37EWp_ocsK-qqY8MU2qjym854rBaBsApHvuA0yUzWRQVGBI4amgBIhnSzyhmzETchjsUCwd8_d00toAMhKwuEGe1p1YZOsVcvtdicHcpmBP6zzdzD9E8a2KSwRi29O5TryqXPh5eSJy9g51uTo-fjkiaWUaxAmzvouSPxOPJfpIbGTiVvTzinZ0CMDIfqU5zScZDoPhxk7wd-H6uTqDQZ1gz15tLHRnyBdT1KomKmuuGoRXEZd7ZRNXnbrtarAW49ZB896HRHQ9pBWVH_PXJBSOpl5gV3aQNMD4fA
5. Open Chrome and browse to https://<any-node-IP>:<NodePort>; here the Master01 IP is used as an example.

6. Switch the namespace to kube-system; the default namespace contains no resources.

七、Switching kube-proxy to ipvs Mode
1. On Master01, change kube-proxy from the default iptables mode to ipvs:
$ kubectl edit cm kube-proxy -n kube-system
# around line 48
...
...
kind: KubeProxyConfiguration
metricsBindAddress: ""
mode: "ipvs"
...
...

2. On Master01, trigger a rollout of the kube-proxy Pods:
$ kubectl patch daemonset kube-proxy -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"`date +'%s'`\"}}}}}" -n kube-system
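The patch command above works by stamping the current epoch time into a pod-template annotation; any change to the template makes the DaemonSet roll out fresh Pods, which then pick up the edited ConfigMap. A sketch of the JSON body it generates:

```shell
# Build the same patch body the kubectl command passes (the timestamp varies per run).
STAMP=$(date +'%s')
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"date\":\"${STAMP}\"}}}}}"
echo "$PATCH"
```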
3. On Master01, watch the kube-proxy rolling update:
$ kubectl get po -n kube-system | grep kube-proxy
kube-proxy-2kz9g 1/1 Running 0 58s
kube-proxy-b54gh 1/1 Running 0 63s
kube-proxy-kclcc 1/1 Running 0 61s
kube-proxy-pv8gc 1/1 Running 0 59s
kube-proxy-xt52m 1/1 Running 0 56s
4. On Master01, verify the kube-proxy mode:
$ curl 127.0.0.1:10249/proxyMode
ipvs
八、kubectl Auto-completion
1. On Master01, enable kubectl auto-completion.
For the current user:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
System-wide:
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
sudo chmod a+r /etc/bash_completion.d/kubectl
2. On Master01, set up a shorthand alias for kubectl:
$ alias k=kubectl
$ complete -o default -F __start_kubectl k
3. On Master01, add some shorthand aliases:
echo "alias kg='kubectl get' " >> ~/.bashrc
echo "alias k='kubectl' " >> ~/.bashrc
echo "alias kgp='kubectl get pod' " >> ~/.bashrc
echo "alias kgs='kubectl get svc' " >> ~/.bashrc
echo "alias kgd='kubectl get deploy' " >> ~/.bashrc
echo "alias kdp='kubectl describe pod' " >> ~/.bashrc
echo "alias klf='kubectl logs -f' " >> ~/.bashrc
echo "alias kaf='kubectl apply -f' " >> ~/.bashrc
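The alias definitions can be spot-checked in the current shell before relying on them; a quick sketch (note the subcommand is spelled logs, not log):

```shell
# Define two of the aliases and print one back to confirm it registered.
alias kgp='kubectl get pod'
alias klf='kubectl logs -f'
alias kgp   # prints the stored definition
```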
九、Removing Taints from the Master Nodes
After a kubeadm install, master nodes do not schedule Pods by default. Remove the taint as follows to allow Pods to run there:
[root@master01 ~]# kubectl taint node -l node-role.kubernetes.io/control-plane node-role.kubernetes.io/control-plane:NoSchedule-
Verify after removal:
[root@master01 ~]# k describe node | grep -i taints
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>