1. Deploying the Same Application to Different Hosts
A Kubernetes cluster usually has many nodes running containers. Pod anti-affinity can spread replicas of the same application across different nodes, which improves availability and avoids the risk of all replicas landing on the same host.
This splits into two cases:
- 1) The same application must be deployed to different hosts
- 2) The same application should preferably be deployed to different hosts
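The two cases map to `requiredDuringSchedulingIgnoredDuringExecution` (a hard rule) and `preferredDuringSchedulingIgnoredDuringExecution` (a soft rule). A minimal sketch of the difference, assuming a toy `feasible_nodes` helper (hypothetical, not real scheduler code):

```python
# Sketch of how pod anti-affinity treats nodes that already run a
# matching pod: a "required" rule filters them out entirely, while a
# "preferred" rule only ranks them lower but keeps them schedulable.

def feasible_nodes(nodes, placed, label, required):
    """nodes: node names; placed: {node: [labels of pods already there]}."""
    conflict = {n for n in nodes if label in placed.get(n, [])}
    if required:
        return [n for n in nodes if n not in conflict]  # hard filter
    # soft rule: order conflict-free nodes first, but drop none of them
    return sorted(nodes, key=lambda n: n in conflict)

nodes = ["node01", "node02", "node03"]
placed = {"node01": ["app"], "node02": ["app"], "node03": ["app"]}

print(feasible_nodes(nodes, placed, "app", required=True))   # [] -> Pending
print(feasible_nodes(nodes, placed, "app", required=False))  # all nodes kept
```

With every node already hosting a matching pod, the hard rule leaves no candidates, while the soft rule still returns all three nodes.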
1.1 The Same Application Must Be Deployed to Different Hosts
1. Check the node taints and make sure there are none
[root@k8s-master01 ~]# kubectl describe node | grep Taint
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>
If a taint exists, remove it as follows (`k` here is an alias for `kubectl`):
[root@k8s-master01 ~]# k taint node k8s-master01 node-role.kubernetes.io/control-plane-
2. Define a YAML file named podAntiAffinity01.yaml
[root@k8s-master01 Affinity]# vim podAntiAffinity01.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: must-be-diff-nodes
  name: must-be-diff-nodes
  namespace: kube-public
spec:
  replicas: 3
  selector:
    matchLabels:
      app: must-be-diff-nodes
  template:
    metadata:
      labels:
        app: must-be-diff-nodes
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - must-be-diff-nodes
            topologyKey: kubernetes.io/hostname
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/zq-demo/nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: must-be-diff-nodes
3. Deploy it
[root@k8s-master01 Affinity]# kubectl create -f podAntiAffinity01.yaml
4. Check the Pod status; the Pods are scheduled onto different nodes
[root@k8s-master01 ~]# kubectl get po -n kube-public -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
must-be-diff-nodes-7cb8b45d66-78n96 1/1 Running 0 2m55s 192.168.58.246 k8s-node02 <none> <none>
must-be-diff-nodes-7cb8b45d66-ltwps 1/1 Running 0 2m55s 192.168.32.129 k8s-master01 <none> <none>
must-be-diff-nodes-7cb8b45d66-q5mml 1/1 Running 0 2m55s 192.168.85.241 k8s-node01 <none> <none>
5. Scale the replicas to 4
[root@k8s-master01 Affinity]# kubectl scale deployment must-be-diff-nodes --replicas=4 -n kube-public
Besides the scale-up test above, you can also try restarting the Deployment:
[root@k8s-master01 ~]# k rollout restart deployment must-be-diff-nodes -n kube-public
6. Check the Pod status again; one Pod is now stuck in Pending
[root@k8s-master01 ~]# kubectl get po -n kube-public -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
must-be-diff-nodes-7cb8b45d66-78n96 1/1 Running 0 3m25s 192.168.58.246 k8s-node02 <none> <none>
must-be-diff-nodes-7cb8b45d66-ltwps 1/1 Running 0 3m25s 192.168.32.129 k8s-master01 <none> <none>
must-be-diff-nodes-7cb8b45d66-q5mml 1/1 Running 0 3m25s 192.168.85.241 k8s-node01 <none> <none>
must-be-diff-nodes-7cb8b45d66-xhzp6 0/1 Pending 0 4s <none> <none> <none> <none>
7. Inspect why the Pod is Pending: we used a hard (required) anti-affinity rule, so every replica must run on a different host. With only 3 nodes and 4 replicas, the 4th Pod has no feasible node and stays Pending
[root@k8s-master01 Affinity]# kubectl describe po must-be-diff-nodes-bdbb64998-58ngr -n kube-public
...
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 57s default-scheduler 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
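The event above matches what a simple simulation predicts: each of the 3 nodes already hosts one matching replica, so the hard filter rejects all of them for the 4th Pod. A rough sketch (hypothetical `place_replicas` helper, not scheduler source):

```python
# Simulate placing 4 replicas under a hard (required) anti-affinity
# rule on 3 nodes: each node may hold at most one matching pod.

def place_replicas(nodes, replicas):
    used, pending = set(), 0
    for _ in range(replicas):
        free = [n for n in nodes if n not in used]
        if free:
            used.add(free[0])   # schedule onto a conflict-free node
        else:
            pending += 1        # no feasible node -> pod stays Pending
    return used, pending

scheduled, pending = place_replicas(
    ["k8s-master01", "k8s-node01", "k8s-node02"], 4)
print(len(scheduled), pending)  # 3 scheduled, 1 Pending
```

This mirrors the `0/3 nodes are available` message: preemption cannot help either, since evicting a victim would still leave a matching pod's anti-affinity violated.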
8. Clean up the environment
[root@k8s-master01 ~]# kubectl delete -f podAntiAffinity01.yaml
1.2 The Same Application Should Preferably Be Deployed to Different Hosts
1. Check the node taints and make sure there are none
[root@k8s-master01 ~]# kubectl describe node | grep Taint
Taints: <none>
Taints: <none>
Taints: <none>
Taints: <none>
If a taint exists, remove it as before:
[root@k8s-master01 ~]# k taint node k8s-master01 node-role.kubernetes.io/control-plane-
2. Define a YAML file named podAntiAffinity02.yaml
[root@k8s-master01 Affinity]# vim podAntiAffinity02.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: must-be-diff-nodes
  name: must-be-diff-nodes
  namespace: kube-public
spec:
  replicas: 3
  selector:
    matchLabels:
      app: must-be-diff-nodes
  template:
    metadata:
      labels:
        app: must-be-diff-nodes
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - must-be-diff-nodes
              topologyKey: kubernetes.io/hostname
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/zq-demo/nginx:1.14.2
        imagePullPolicy: IfNotPresent
        name: must-be-diff-nodes
3. Deploy it (`kaf` is a common alias for `kubectl apply -f`)
[root@k8s-master01 Affinity]# kaf podAntiAffinity02.yaml
4. Check the Pod status; the Pods are scheduled onto different nodes
[root@k8s-master01 ~]# kubectl get po -n kube-public -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
must-be-diff-nodes-75c74cd9d6-jfbtt 1/1 Running 0 23s 192.168.58.247 k8s-node02 <none> <none>
must-be-diff-nodes-75c74cd9d6-kk6bc 1/1 Running 0 23s 192.168.85.242 k8s-node01 <none> <none>
must-be-diff-nodes-75c74cd9d6-rlv9b 1/1 Running 0 23s 192.168.32.130 k8s-master01 <none> <none>
5. Scale the replicas to 4
[root@k8s-master01 Affinity]# kubectl scale deployment must-be-diff-nodes --replicas=4 -n kube-public
Besides the scale-up test above, you can also try restarting the Deployment:
[root@k8s-master01 ~]# k rollout restart deployment must-be-diff-nodes -n kube-public
6. Check the Pod status again; this time no Pod is Pending — because the rule is only preferred, two replicas share k8s-node02
[root@k8s-master01 ~]# kubectl get po -n kube-public -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
must-be-diff-nodes-75c74cd9d6-jfbtt 1/1 Running 0 95s 192.168.58.247 k8s-node02 <none> <none>
must-be-diff-nodes-75c74cd9d6-kk6bc 1/1 Running 0 95s 192.168.85.242 k8s-node01 <none> <none>
must-be-diff-nodes-75c74cd9d6-pp69w 1/1 Running 0 33s 192.168.58.248 k8s-node02 <none> <none>
must-be-diff-nodes-75c74cd9d6-rlv9b 1/1 Running 0 95s 192.168.32.130 k8s-master01 <none> <none>
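With a preferred rule, an anti-affinity violation only lowers a node's score instead of disqualifying it, so the 4th replica still gets placed. A rough scoring sketch (the `pick_node` helper and scoring formula are illustrative assumptions, not the scheduler's actual algorithm):

```python
# Soft anti-affinity as a scoring problem: each matching pod already on
# a node subtracts `weight` from that node's score; the scheduler picks
# the highest-scoring node, so some node is always chosen.

def pick_node(nodes, counts, weight=100):
    # Hypothetical scoring: penalize nodes per matching pod present.
    scores = {n: -weight * counts.get(n, 0) for n in nodes}
    return max(scores, key=scores.get)

counts = {"k8s-master01": 1, "k8s-node01": 1, "k8s-node02": 1}
node = pick_node(list(counts), counts)  # all tied, so any node works
counts[node] += 1
print(node, counts)
```

All three nodes score equally here, so the 4th replica doubles up on whichever node wins the tie — exactly the behavior seen in the output above.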
7. Clean up the environment
[root@k8s-master01 ~]# kubectl delete -f podAntiAffinity02.yaml