[Title]: Kubernetes HPA doesn't scale down after decreasing the loads
[Posted]: 2020-09-30 13:00:37
[Question]:

The Kubernetes HPA works correctly when the load on the pods increases, but after the load drops the deployment's scale does not change. This is my HPA file:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80

My Kubernetes version:

> kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"d647ddbd755faf07169599a625faf302ffc34458", GitTreeState:"clean", BuildDate:"2019-10-02T17:01:15Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:22:30Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}

This is the describe output for my HPA:

> kubectl describe hpa baseinformationmanagement
Name:                                                     baseinformationmanagement
Namespace:                                                default
Labels:                                                   <none>
Annotations:                                              kubectl.kubernetes.io/last-applied-configuration:
                                                            {"apiVersion":"autoscaling/v2beta2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"baseinformationmanagement","name...
CreationTimestamp:                                        Sun, 27 Sep 2020 06:09:07 +0000
Reference:                                                Deployment/baseinformationmanagement
Metrics:                                                  ( current / target )
  resource memory on pods  (as a percentage of request):  49% (1337899008) / 70%
  resource cpu on pods  (as a percentage of request):     2% (13m) / 50%
Min replicas:                                             1
Max replicas:                                             3
Deployment pods:                                          2 current / 2 desired
Conditions:
  Type            Status  Reason              Message
  ----            ------  ------              -------
  AbleToScale     True    ReadyForNewScale    recommended size matches current size
  ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from memory resource utilization (percentage of request)
  ScalingLimited  False   DesiredWithinRange  the desired count is within the acceptable range
Events:           <none>

[Comments]:

  • Maybe kubernetes.io/docs/tasks/run-application/… can help you, if you are able to upgrade to 1.18
  • How long did you wait? The HPA is deliberately somewhat conservative about scaling down; when I've tried it in the past it could take 5+ minutes before it started terminating pods.
  • Could you show the output of $ kubectl describe hpa bankchannel? Do you observe values that should scale the deployment down?
  • @DavidMaze About 23 hours
  • @DawidKruk I've added it to my question.

Tags: kubernetes hpa


[Solution 1]:

Your HPA specifies both memory and CPU targets. The Horizontal Pod Autoscaler documentation states:

If multiple metrics are specified in a HorizontalPodAutoscaler, this calculation is done for each metric, and then the largest of the desired replica counts is chosen.

The actual replica target is a function of the current replica count and the current and target utilization (same link):

desiredReplicas = ceil[currentReplicas * ( currentMetricValue / desiredMetricValue )]

Specifically for memory: currentReplicas is 2, currentMetricValue is 49, and desiredMetricValue is 80. So the desired replica count is

desiredReplicas = ceil[       2        * (         49        /         80         )]
desiredReplicas = ceil[       2        *                 0.6125                    ]
desiredReplicas = ceil[                          1.225                             ]
desiredReplicas = 2
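The calculation above, together with the take-the-maximum rule for multiple metrics, can be reproduced in a few lines (a sketch using the numbers from the question; the function name is my own, not part of any Kubernetes API):

```python
import math

def desired_replicas(current_replicas, current_value, target_value):
    # HPA formula: ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    return math.ceil(current_replicas * current_value / target_value)

memory = desired_replicas(2, 49, 80)  # memory at 49% of an 80% target -> 2
cpu = desired_replicas(2, 2, 80)      # cpu at 2% of an 80% target -> 1

# With multiple metrics, the HPA picks the largest recommendation,
# so the memory metric keeps the deployment pinned at 2 replicas.
print(max(memory, cpu))
```

This is why the deployment never drops below 2 even though CPU alone would recommend 1.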

This will result in (at least) 2 replicas even if your service is completely idle, unless the service chooses to release memory back to the operating system; that usually depends on the language runtime and is somewhat out of your control.

You will probably get behavior closer to what you expect by removing the memory target and autoscaling on CPU alone.
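That would amount to dropping the memory entry from the metrics list in the manifest above. A sketch of the resulting HPA (same names and targets as the original; not verified against your cluster):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: baseinformationmanagement
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: baseinformationmanagement
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```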

[Discussion]:

  • Thanks, this worked for me. Can I split the memory and CPU HPAs for each deployment?
  • I don't think it makes sense to have two different autoscalers targeting the same deployment, if that's what you're asking.