ROUND_ROBIN Load Balancing (DestinationRule)
This one is easy to understand: it is a pure round-robin algorithm. It ignores how busy or idle each backend is and hands requests to the backends in strict rotation, so every backend receives an equal share of the traffic.
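The rotation logic can be sketched in a few lines of Python (a toy model of the policy, not Envoy's actual implementation; the endpoint addresses are stand-ins for the advertisement pod IPs seen later in this section):

```python
from itertools import cycle
from collections import Counter

# Hypothetical backend endpoints standing in for the two advertisement pods.
endpoints = ["172.18.71.29:3003", "172.29.55.31:3003"]

# Round-robin: walk the endpoint list in a fixed rotation,
# paying no attention to how busy each backend is.
rr = cycle(endpoints)
hits = Counter(next(rr) for _ in range(20))
print(hits)  # each endpoint gets exactly 10 of the 20 requests
```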
1. Scale the advertisement service out to two pods
[root@master01 10.5]# kubectl edit deploy advertisement-v1 -n weather
Change replicas: 1 to replicas: 2:
...
...
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
...
...
Check the result (`kgp` is a shell alias for `kubectl get pods`):
[root@master01 11.1]# kgp -n weather | grep advertisement-v1
NAME READY STATUS RESTARTS AGE
advertisement-v1-6b65cd7c78-ddrnk 2/2 Running 0 2d20h
advertisement-v1-6b65cd7c78-hfxrv 2/2 Running 0 3m47s
2. Update the DestinationRule for advertisement
[root@master01 ~]# cd cloud-native-istio/11_traffic-management/11.1
[root@master01 11.1]# kubectl apply -f dr-advertisement-round-robin.yaml -n weather
3. Inspect the resulting DestinationRule
[root@master01 11.1]# kubectl get dr advertisement-dr -n weather -o yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"advertisement-dr","namespace":"weather"},"spec":{"host":"advertisement","subsets":[{"labels":{"version":"v1"},"name":"v1"}],"trafficPolicy":{"loadBalancer":{"simple":"ROUND_ROBIN"}}}}
  creationTimestamp: "2023-11-09T01:59:24Z"
  generation: 2
  name: advertisement-dr
  namespace: weather
  resourceVersion: "915645"
  uid: c1066046-d2dc-4638-9b18-e38c644e8fc0
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
4. Verify the load-balancing behavior. Exec into the frontend pod and use curl to hit the advertisement service. To make the effect obvious, send 20 requests with a for loop:
[root@master01 11.1]# kubectl exec -it frontend-v1-58896bbfdd-khzxt -n weather -- bash
root@frontend-v1-58896bbfdd-khzxt:/app# for i in `seq 1 20`; do curl advertisement:3003/ad ; done
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
5. Inspect the istio-proxy container's logs to see where the traffic went
[root@master01 11.1]# kubectl -n weather logs frontend-v1-58896bbfdd-khzxt -c istio-proxy |grep curl |awk '{print $18}' |sort |uniq -c
10 "172.18.71.29:3003"
10 "172.29.55.31:3003"
Note: 172.18.71.29 and 172.29.55.31 are the IP addresses of the two advertisement pods; each received exactly 10 of the 20 requests.
RANDOM Load Balancing (DestinationRule)
Compared with ROUND_ROBIN, RANDOM picks a backend at random for each request; it makes no attempt to be fair or evenly balanced.
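By contrast with the rotation above, a random policy picks an endpoint independently for each request, so the split usually drifts away from 50/50. A toy Python model (the endpoint addresses and seed are made up for illustration):

```python
import random
from collections import Counter

# Hypothetical backend endpoints standing in for the two advertisement pods.
endpoints = ["172.18.71.29:3003", "172.29.55.31:3003"]

random.seed(7)  # fixed seed only so the run is reproducible
hits = Counter(random.choice(endpoints) for _ in range(100))
print(hits)  # the counts sum to 100 but are generally not an even 50/50
```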
1. Modify the DestinationRule for advertisement
[root@master01 ~]# cd cloud-native-istio/11_traffic-management/11.2
[root@master01 11.2]# kubectl apply -f dr-advertisement-random.yaml -n weather
2. Inspect the rule
[root@master01 11.2]# kubectl get dr advertisement-dr -n weather -o yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"advertisement-dr","namespace":"weather"},"spec":{"host":"advertisement","subsets":[{"labels":{"version":"v1"},"name":"v1"}],"trafficPolicy":{"loadBalancer":{"simple":"RANDOM"}}}}
  creationTimestamp: "2023-11-09T01:59:24Z"
  generation: 3
  name: advertisement-dr
  namespace: weather
  resourceVersion: "917613"
  uid: c1066046-d2dc-4638-9b18-e38c644e8fc0
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
3. Clear the earlier test logs. The quickest way is to delete the frontend-v1 pod and let the Deployment recreate it:
[root@master01 11.2]# kubectl get po -n weather|grep frontend-v1 |awk '{print $1}' |xargs -i kubectl -n weather delete po {}
4. Exec into the new frontend-v1 pod and curl 100 times:
[root@master01 11.2]# kubectl -n weather exec -it `kubectl get po -n weather|grep frontend-v1|awk '{print $1}'` -- bash
root@frontend-v1-58896bbfdd-n2v6t:/app# for i in `seq 1 100`; do curl advertisement:3003/ad ; done
5. Inspect the istio-proxy container's logs to see where the traffic went. Note that the two pods received a noticeably unequal number of requests:
[root@master01 1]# kubectl -n weather logs `kubectl get po -n weather|grep frontend-v1|awk '{print $1}'` -c istio-proxy |grep curl |awk '{print $18}' |sort |uniq -c
63 "172.18.71.29:3003"
37 "172.29.55.31:3003"
Session Affinity (DestinationRule)
Sometimes the load-balancing policy needs session affinity: whichever pod a client's first request lands on, its subsequent requests should land on that same pod, so the same user is always served by the same backend.
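Istio implements this with consistent hashing on a request property, in this section an HTTP cookie named user. A simplified Python model of the idea (not Envoy's actual ring-hash implementation; the endpoint addresses are made up, and a plain modulo replaces the hash ring):

```python
import hashlib

# Hypothetical backend endpoints standing in for the two advertisement pods.
endpoints = ["172.18.71.29:3003", "172.29.55.31:3003"]

def pick(cookie_value: str) -> str:
    # Hash the cookie value and map it onto the endpoint list.
    # Envoy uses a ring hash so endpoints can come and go with minimal
    # remapping; a modulo over a stable hash is enough to show the idea.
    h = int(hashlib.md5(cookie_value.encode()).hexdigest(), 16)
    return endpoints[h % len(endpoints)]

# The same cookie always lands on the same backend.
choices = {pick("user=test") for _ in range(10)}
print(choices)  # a single endpoint, no matter how many times we ask
```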
1. Apply the DestinationRule
[root@master01 ~]# cd cloud-native-istio/11_traffic-management/11.4
[root@master01 11.4]# kubectl apply -f dr-advertisement-consistenthash.yaml -n weather
2. Inspect the rule
[root@master01 11.4]# kubectl get dr advertisement-dr -n weather -o yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"advertisement-dr","namespace":"weather"},"spec":{"host":"advertisement","subsets":[{"labels":{"version":"v1"},"name":"v1"}],"trafficPolicy":{"loadBalancer":{"consistentHash":{"httpCookie":{"name":"user","ttl":"60s"}}}}}}
  creationTimestamp: "2023-11-09T01:59:24Z"
  generation: 4
  name: advertisement-dr
  namespace: weather
  resourceVersion: "958117"
  uid: c1066046-d2dc-4638-9b18-e38c644e8fc0
spec:
  host: advertisement
  subsets:
  - labels:
      version: v1
    name: v1
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 60s
3. Clear the earlier test logs. The quickest way is to delete the frontend-v1 pod and let the Deployment recreate it:
[root@master01 11.4]# kubectl get po -n weather|grep frontend-v1 |awk '{print $1}' |xargs -i kubectl -n weather delete po {}
4. Test
Exec into the new frontend-v1 pod:
[root@master01 11.4]# kubectl -n weather exec -it `kubectl get po -n weather|grep frontend-v1|awk '{print $1}'` -- bash
Run curl 10 times, sending a cookie that contains the user key:
root@frontend-v1-58896bbfdd-p88jn:/app# for i in `seq 1 10`; do curl advertisement:3003/ad --cookie "user=test"; done
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
{"adImgName":"airCleanImg"}
Inspect the istio-proxy container's logs again to analyze the traffic. This time, all of the requests went to a single pod:
[root@master01 11.4]# kubectl -n weather logs `kubectl get po -n weather|grep frontend-v1|awk '{print $1}'` -c istio-proxy |grep curl |awk '{print $18}' |sort |uniq -c
10 "172.29.55.31:3003"