1. ResourceQuota Configuration Explained

apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 2
    requests.cpu: 0.5
    requests.memory: 512Mi
    limits.cpu: 5
    limits.memory: 16Gi
    configmaps: 2
    requests.storage: 40Gi
    persistentvolumeclaims: 20
    replicationcontrollers: 20
    secrets: 20
    services: 50
    services.loadbalancers: "2"
    services.nodeports: "10"

Users can limit the total amount of compute resources that may be requested in a given namespace. The resource types supported by the quota mechanism are:

Resource name      Description
limits.cpu         Across all Pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
limits.memory      Across all Pods in a non-terminal state, the sum of memory limits cannot exceed this value.
requests.cpu       Across all Pods in a non-terminal state, the sum of CPU requests cannot exceed this value.
requests.memory    Across all Pods in a non-terminal state, the sum of memory requests cannot exceed this value.
hugepages-<size>   Across all Pods in a non-terminal state, the total huge-page requests of the given size cannot exceed this value.
cpu                Same as requests.cpu.
memory             Same as requests.memory.
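The hugepages-<size> key from the table can sit alongside the other compute keys in the same manifest style used above; a minimal sketch, with an illustrative name and values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota       # illustrative name
spec:
  hard:
    requests.cpu: "1"       # total CPU requests in the namespace
    requests.memory: 1Gi    # total memory requests in the namespace
    hugepages-2Mi: 100Mi    # total 2Mi huge-page requests in the namespace
```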

When a count/* resource quota is used, an object counts against the quota as soon as it exists in the API server's storage. Quotas of this kind help prevent storage exhaustion: for example, you may want to cap the number of Secrets on a server based on its storage capacity, since too many Secrets in a cluster can actually prevent the server and its controllers from starting. You can also place a quota on Jobs to keep a misconfigured CronJob from creating so many Jobs in a namespace that it causes a denial of service.
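A count/* quota is written in the same style as the manifest above; a sketch with illustrative values, including the Secret and Job caps just described:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts          # illustrative name
spec:
  hard:
    count/secrets: "4"         # at most 4 Secrets in the namespace
    count/jobs.batch: "8"      # guards against a runaway CronJob
    count/deployments.apps: "4"
```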

It is also possible to enforce generic object-count quotas on a limited set of resources.

The following types are supported:

Resource name              Description
configmaps                 The maximum number of ConfigMaps allowed in the namespace.
persistentvolumeclaims     The maximum number of PersistentVolumeClaims allowed in the namespace.
pods                       The maximum number of Pods in a non-terminal state allowed in the namespace. A Pod is in a terminal state when .status.phase in (Failed, Succeeded) is true.
replicationcontrollers     The maximum number of ReplicationControllers allowed in the namespace.
resourcequotas             The maximum number of ResourceQuotas allowed in the namespace.
services                   The maximum number of Services allowed in the namespace.
services.loadbalancers     The maximum number of Services of type LoadBalancer allowed in the namespace.
services.nodeports         The maximum number of Services of type NodePort allowed in the namespace.
secrets                    The maximum number of Secrets allowed in the namespace.

Note: a ResourceQuota is namespace-scoped — it only constrains objects created in its own namespace!
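Because the quota is namespace-scoped, the target namespace can also be pinned in the manifest itself instead of being passed with -n at creation time; a sketch (using the rq-test namespace that appears later):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  namespace: rq-test   # scopes this quota to the rq-test namespace
spec:
  hard:
    pods: "2"
```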

2. How to Use ResourceQuota

The examples below limit, in turn, the total number of ConfigMaps and the total number of non-terminal Pods allowed in a namespace:

2.1 Limiting the number of ConfigMaps

1. Define a YAML file

[root@k8s-master01 study]# vim resourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 2
   # requests.cpu: 0.5
   # requests.memory: 512Mi
   # limits.cpu: 5
   # limits.memory: 16Gi
    configmaps: 2
   # requests.storage: 40Gi
   # persistentvolumeclaims: 20
   # replicationcontrollers: 20
   # secrets: 20
   # services: 50
   # services.loadbalancers: "2"
   # services.nodeports: "10"

# pods: caps the number of Pods that can be started
# requests.cpu: caps the total CPU requests
# requests.memory: caps the total memory requests
# limits.cpu: caps the total CPU limits
# limits.memory: caps the total memory limits

2. Create a new namespace

[root@k8s-master01 study]# kubectl create ns rq-test 

3. Create the ResourceQuota

[root@k8s-master01 study]# kubectl create -f resourceQuota.yaml -n rq-test

4. Check the result

[root@k8s-master01 study]# kubectl get resourcequota -n rq-test 
NAME            AGE   REQUEST                      LIMIT
resource-test   69s   configmaps: 1/2, pods: 0/2   

[root@k8s-master01 study]# kubectl get resourcequota -n rq-test -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    creationTimestamp: "2022-12-11T01:03:55Z"
    labels:
      app: resourcequota
    name: resource-test
    namespace: rq-test
    resourceVersion: "14348"
    uid: c58788f1-6b16-4d6f-b5c2-16522af71574
  spec:
    hard:
      configmaps: "2" # configured limit
      pods: "2"       # configured limit
  status:
    hard:
      configmaps: "2"
      pods: "2"
    used:
      configmaps: "1" # current usage
      pods: "0"       # current usage
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

5. Note that the namespace already contains one ConfigMap (kube-root-ca.crt) by default, so usage starts at 1

[root@k8s-master01 study]# kubectl get cm -n rq-test
NAME               DATA   AGE
kube-root-ca.crt   1      8m29s

6. Create two more ConfigMaps to verify: once the count reaches two, the next creation fails with an error

[root@k8s-master01 study]# kubectl create cm test-cm01 --from-file=job.yaml -n rq-test
configmap/test-cm01 created
[root@k8s-master01 study]# kubectl get cm -n rq-test
NAME               DATA   AGE
kube-root-ca.crt   1      12m
test-cm01          1      11s
[root@k8s-master01 study]# kubectl create cm test-cm02 --from-file=job.yaml -n rq-test
error: failed to create configmap: configmaps "test-cm02" is forbidden: exceeded quota: resource-test, requested: configmaps=1, used: configmaps=2, limited: configmaps=2
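The error above comes from the quota admission check: a create request is admitted only if, for every tracked resource, used + requested stays within hard. A minimal pure-shell sketch of that rule (quota_admit is a hypothetical helper for illustration, not a real kubectl command):

```shell
# Hypothetical sketch of the quota admission rule:
# admit a request only when used + requested <= hard.
quota_admit() {
  local used=$1 requested=$2 hard=$3
  if [ $((used + requested)) -le "$hard" ]; then
    echo "admitted"
  else
    echo "forbidden: exceeded quota (requested: $requested, used: $used, limited: $hard)"
  fi
}

quota_admit 1 1 2   # test-cm01 fits next to kube-root-ca.crt: prints "admitted"
quota_admit 2 1 2   # test-cm02 would exceed configmaps: 2 and is rejected
```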

2.2 Limiting the number of Pods

1. Define a YAML file

[root@k8s-master01 study]# vim resourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 2
   # requests.cpu: 0.5
   # requests.memory: 512Mi
   # limits.cpu: 5
   # limits.memory: 16Gi
    configmaps: 2
   # requests.storage: 40Gi
   # persistentvolumeclaims: 20
   # replicationcontrollers: 20
   # secrets: 20
   # services: 50
   # services.loadbalancers: "2"
   # services.nodeports: "10"

# pods: caps the number of Pods that can be started
# requests.cpu: caps the total CPU requests
# requests.memory: caps the total memory requests
# limits.cpu: caps the total CPU limits
# limits.memory: caps the total memory limits

2. Create a Deployment with three replicas

[root@k8s-master01 study]# kubectl create deploy test-01 --image=registry.cn-hangzhou.aliyuncs.com/zq-demo/nginx:1.14.2 --replicas=3 -n rq-test

3. Check the result: only two Pods come up

[root@k8s-master01 study]# kubectl get deploy -n rq-test
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
test-01   0/3     2            0           17s

[root@k8s-master01 study]# kubectl get po -n rq-test
NAME                       READY   STATUS              RESTARTS   AGE
test-01-5465c8b4fd-5x4wg   0/1     ContainerCreating   0          92s
test-01-5465c8b4fd-jx4wt   0/1     ContainerCreating   0          92s

4. Inspect the error messages

[root@k8s-master01 study]# kubectl describe deployment test-01 -n rq-test
...
...
...
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m25s  deployment-controller  Scaled up replica set test-01-5465c8b4fd to 3

[root@k8s-master01 study]# kubectl describe rs test-01-5465c8b4fd  -n rq-test
...
...
...
Events:
  Type     Reason            Age                  From                   Message
  ----     ------            ----                 ----                   -------
  Normal   SuccessfulCreate  6m15s                replicaset-controller  Created pod: test-01-5465c8b4fd-jx4wt
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-q5cmt" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Normal   SuccessfulCreate  6m15s                replicaset-controller  Created pod: test-01-5465c8b4fd-5x4wg
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-87tkn" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-9ptwh" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-4dv8h" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-2qr6p" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m15s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-jjqsn" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m14s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-sgx8b" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m14s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-hk4lk" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      6m14s                replicaset-controller  Error creating: pods "test-01-5465c8b4fd-xhdvs" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2
  Warning  FailedCreate      51s (x8 over 6m14s)  replicaset-controller  (combined from similar events): Error creating: pods "test-01-5465c8b4fd-hp6cq" is forbidden: exceeded quota: resource-test, requested: pods=1, used: pods=2, limited: pods=2

5. Raise the Pod limit to 3

[root@k8s-master01 study]# vim resourceQuota.yaml 
apiVersion: v1
kind: ResourceQuota
metadata:
  name: resource-test
  labels:
    app: resourcequota
spec:
  hard:
    pods: 3
   # requests.cpu: 0.5
   # requests.memory: 512Mi
   # limits.cpu: 5
   # limits.memory: 16Gi
    configmaps: 2
   # requests.storage: 40Gi
   # persistentvolumeclaims: 20
   # replicationcontrollers: 20
   # secrets: 20
   # services: 50
   # services.loadbalancers: "2"
   # services.nodeports: "10"

# pods: caps the number of Pods that can be started
# requests.cpu: caps the total CPU requests
# requests.memory: caps the total memory requests
# limits.cpu: caps the total CPU limits
# limits.memory: caps the total memory limits

6. Replace the existing ResourceQuota

[root@k8s-master01 study]# kubectl replace -f resourceQuota.yaml -n rq-test 

7. Wait for the controllers' next sync; the remaining Pod is then created automatically

[root@k8s-master01 study]# kubectl get po -n rq-test 
NAME                       READY   STATUS              RESTARTS   AGE
test-01-5465c8b4fd-5x4wg   0/1     ContainerCreating   0          10m
test-01-5465c8b4fd-jx4wt   0/1     ContainerCreating   0          10m