【Question Title】: Horizontal Pod Autoscaler unable to read metrics
【Posted】: 2019-06-12 09:39:15
【Question Description】:

I am using the Kafka Helm chart from here. I am also trying out the Horizontal Pod Autoscaler.

I added the hpa.yaml file shown below to the templates folder.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: kafka-hpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: {{ include "kafka.fullname" . }}
  minReplicas: {{ .Values.replicas }}
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 8000Mi
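
One quick way to see what this template actually renders to, assuming Helm 2 (current when this was asked) and that the chart has been fetched locally to ./kafka (the path is an assumption, not taken from the question):

$ helm template ./kafka -x templates/hpa.yaml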

I also tried the above YAML with kind: StatefulSet, but the same issue persists.

My intention is to initially have 3 Kafka pods and scale them up to 5 based on the CPU and memory target values mentioned above.

However, although the HPA is deployed, as far as I understand it cannot read the metrics, because the current usage shows <unknown>, as seen below.

NAME        REFERENCE                          TARGETS                          MINPODS   MAXPODS   REPLICAS   AGE
kafka-hpa   Deployment/whopping-walrus-kafka   <unknown>/8000Mi, <unknown>/50%   3         5         0          1h

I am new to Helm and Kubernetes, so I assume there may be gaps in my understanding.

I have also deployed metrics-server.

$ kubectl get deployments
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
metrics-server                   1         1         1            1           1d
whopping-walrus-kafka-exporter   1         1         1            1           1h

Pod output:

$ kubectl get pods
NAME                                              READY     STATUS    RESTARTS   AGE
metrics-server-55cbf87bbb-vm2v5                   1/1       Running   0          15m
whopping-walrus-kafka-0                           1/1       Running   1          1h
whopping-walrus-kafka-1                           1/1       Running   0          1h
whopping-walrus-kafka-2                           1/1       Running   0          1h
whopping-walrus-kafka-exporter-5c66b5b4f9-mv5kv   1/1       Running   1          1h
whopping-walrus-zookeeper-0                       1/1       Running   0          1h

I expect the whopping-walrus-kafka pods to scale up to 5 under load, but there is no Deployment corresponding to them.

StatefulSet output:

$ kubectl get statefulset
NAME                        DESIRED   CURRENT   AGE
original-bobcat-kafka       3         2         2m
original-bobcat-zookeeper   1         1         2m

Output of describing the HPA when kind is StatefulSet in hpa.yaml:

$ kubectl describe hpa
Name:                                                  kafka-hpa
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Fri, 18 Jan 2019 12:13:59 +0530
Reference:                                             StatefulSet/original-bobcat-kafka
Metrics:                                               ( current / target )
  resource memory on pods:                             <unknown> / 8000Mi
  resource cpu on pods  (as a percentage of request):  <unknown> / 5%
Min replicas:                                          3
Max replicas:                                          5
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: no matches for kind "StatefulSet" in group "extensions"
Events:
  Type     Reason          Age                From                       Message
  ----     ------          ----               ----                       -------
  Warning  FailedGetScale  15s (x17 over 8m)  horizontal-pod-autoscaler  no matches for kind "StatefulSet" in group "extensions"
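
This particular message points at the scaleTargetRef in the hpa.yaml above: StatefulSet has never been part of the extensions API group, so the controller cannot resolve extensions/v1beta1 together with kind: StatefulSet. A minimal sketch of a reference the controller can resolve, assuming the StatefulSet is the intended target:

spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: {{ include "kafka.fullname" . }}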

Output of describing the HPA when kind is Deployment in hpa.yaml:

$ kubectl describe hpa
Name:                                                  kafka-hpa
Namespace:                                             default
Labels:                                                <none>
Annotations:                                           <none>
CreationTimestamp:                                     Fri, 18 Jan 2019 12:30:07 +0530
Reference:                                             Deployment/good-elephant-kafka
Metrics:                                               ( current / target )
  resource memory on pods:                             <unknown> / 8000Mi
  resource cpu on pods  (as a percentage of request):  <unknown> / 5%
Min replicas:                                          3
Max replicas:                                          5
Conditions:
  Type         Status  Reason          Message
  ----         ------  ------          -------
  AbleToScale  False   FailedGetScale  the HPA controller was unable to get the target's current scale: could not fetch the scale for deployments.extensions good-elephant-kafka: deployments/scale.extensions "good-elephant-kafka" not found
Events:
  Type     Reason          Age   From                       Message
  ----     ------          ----  ----                       -------
  Warning  FailedGetScale  9s    horizontal-pod-autoscaler  could not fetch the scale for deployments.extensions good-elephant-kafka: deployments/scale.extensions "good-elephant-kafka" not found
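
This error is consistent with the one above: the chart creates a StatefulSet rather than a Deployment (see the kubectl get statefulset output earlier), so there is no deployments/scale resource named good-elephant-kafka to fetch. One way to confirm which workload kind a release actually created (the release name here follows the output above):

$ kubectl get deployment good-elephant-kafka
$ kubectl get statefulset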

Output of describing the metrics-server pod:

$ kubectl describe pods metrics-server-55cbf87bbb-vm2v5
Name:           metrics-server-55cbf87bbb-vm2v5
Namespace:      default
Node:           docker-for-desktop/192.168.65.3
Start Time:     Fri, 18 Jan 2019 11:26:33 +0530
Labels:         app=metrics-server
                pod-template-hash=1176943666
                release=metrics-server
Annotations:    <none>
Status:         Running
IP:             10.1.0.119
Controlled By:  ReplicaSet/metrics-server-55cbf87bbb
Containers:
  metrics-server:
    Container ID:  docker://ee4b3d9ed1b15c2c8783345b0ffbbc565ad25f1493dec0148f245c9581443631
    Image:         gcr.io/google_containers/metrics-server-amd64:v0.3.1
    Image ID:      docker-pullable://gcr.io/google_containers/metrics-server-amd64@sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b
    Port:          <none>
    Host Port:     <none>
    Command:
      /metrics-server
      --kubelet-insecure-tls
      --kubelet-preferred-address-types=InternalIP
      --logtostderr
    State:          Running
      Started:      Fri, 18 Jan 2019 11:26:35 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from metrics-server-token-d2g7b (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          True 
  PodScheduled   True 
Volumes:
  metrics-server-token-d2g7b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  metrics-server-token-d2g7b
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

If I have gone wrong somewhere, please feel free to correct my understanding.

Any help would be greatly appreciated.

【Question Discussion】:

  • Could you check whether resources: requests: memory: "64Mi" cpu: "250m" is set for your StatefulSet? You can check with kubectl describe statefulset <sf-name>. If not, that may be one of the reasons.
  • @PrafullLadha Yes, they were commented out, but I uncommented them and tested again. The same error still persists.
  • Could you please describe your metrics-server pod and share the output?
  • @PrafullLadha I have added the metrics-server pod output.

Tags: kubernetes apache-kafka kubernetes-helm


【Solution 1】:

You need to add the following command flags to the metrics-server deployment file:

containers:
- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
  name: metrics-server

I believe metrics-server could not reach the kubelet via its InternalIP, hence the problem. For more details, see my answer below for step-by-step instructions on setting up HPA:

How to Enable KubeAPI server for HPA Autoscaling Metrics
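
Once metrics-server is restarted with those flags, it is worth confirming that the metrics API is actually serving data before re-checking the HPA. A minimal check with standard kubectl commands (no chart-specific names assumed):

$ kubectl top pods
$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods

If both return per-pod CPU and memory figures instead of errors, the HPA targets should move from <unknown> to real values within about a minute.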

【Discussion】:

  • I have read about this resolution in other posts and tried it.

【Solution 2】:

I performed some steps similar to those mentioned by @PrafullLadha above.

Modify the metrics-server deployment file and add the following:

containers:
- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP

Also, uncomment the following section in the statefulset.yaml file:

resources:
  requests:
    cpu: 200m
    memory: 256Mi

It worked perfectly fine.
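
The resources.requests part matters because targetAverageUtilization is computed as a percentage of the container's CPU request; with no request set, the HPA has nothing to divide by and keeps reporting <unknown>. A sketch of where that block sits inside the chart's statefulset.yaml (the container name below is illustrative, not taken from the chart):

containers:
  - name: kafka-broker   # illustrative name; use the container name from your chart
    resources:
      requests:
        cpu: 200m
        memory: 256Mi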

【Discussion】:

【Solution 3】:

This error can also occur if your deployment has no pods up so far and no node available in the cluster can satisfy its resource requests. In that case, there are obviously no metrics available.
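
One way to check whether this is the case, using only standard kubectl commands:

$ kubectl get pods --field-selector=status.phase=Pending
$ kubectl describe nodes | grep -A 5 "Allocated resources"

Pods stuck in Pending alongside nodes whose allocatable CPU and memory are already claimed point to this cause rather than a metrics-server problem.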

【Discussion】:
