Title: nginx-ingress-controller - CrashLoopBackOff - kubernetes on proxmox (kvm)
Posted: 2020-07-05 12:45:07
Question:

I am running a Kubernetes cluster hosted in 4 KVM guests managed by Proxmox. After installing the nginx-ingress-controller with

helm install nginx-ingress stable/nginx-ingress --set controller.publishService.enabled=true -n nginx-ingress

the controller keeps crashing (CrashLoopBackOff). The logs are not really helpful (or I don't know where to look).

Thanks, Peter

Here are the cluster pods:

root@sedeka78:~# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                             READY   STATUS             RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
kube-system            coredns-66bff467f8-jv2mx                         1/1     Running            0          83m   10.244.0.9    sedeka78   <none>           <none>
kube-system            coredns-66bff467f8-vwrzb                         1/1     Running            0          83m   10.244.0.6    sedeka78   <none>           <none>
kube-system            etcd-sedeka78                                    1/1     Running            2          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-apiserver-sedeka78                          1/1     Running            2          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-controller-manager-sedeka78                 1/1     Running            4          84m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-flannel-ds-amd64-fxvfh                      1/1     Running            0          83m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-flannel-ds-amd64-h6btb                      1/1     Running            1          78m   10.10.10.79   sedeka79   <none>           <none>
kube-system            kube-flannel-ds-amd64-m6dw2                      1/1     Running            1          78m   10.10.10.80   sedeka80   <none>           <none>
kube-system            kube-flannel-ds-amd64-wgtqb                      1/1     Running            1          78m   10.10.10.81   sedeka81   <none>           <none>
kube-system            kube-proxy-5dvdg                                 1/1     Running            1          78m   10.10.10.80   sedeka80   <none>           <none>
kube-system            kube-proxy-89pf7                                 1/1     Running            0          83m   10.10.10.78   sedeka78   <none>           <none>
kube-system            kube-proxy-hhgtf                                 1/1     Running            1          78m   10.10.10.79   sedeka79   <none>           <none>
kube-system            kube-proxy-kshnn                                 1/1     Running            1          78m   10.10.10.81   sedeka81   <none>           <none>
kube-system            kube-scheduler-sedeka78                          1/1     Running            5          84m   10.10.10.78   sedeka78   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-4trgg       1/1     Running            0          80m   10.244.0.8    sedeka78   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7bfbb48676-q6c2t            1/1     Running            0          80m   10.244.0.7    sedeka78   <none>           <none>
nginx-ingress          nginx-ingress-controller-57f4b84b5-ldkk5         0/1     CrashLoopBackOff   19         45m   10.244.1.2    sedeka81   <none>           <none>
nginx-ingress          nginx-ingress-default-backend-7c868597f4-8q9n7   1/1     Running            0          45m   10.244.4.2    sedeka80   <none>           <none>
root@sedeka78:~#

Here is the controller log:

root@sedeka78:~# kubectl logs nginx-ingress-controller-57f4b84b5-ldkk5 -n nginx-ingress -v10
I0705 14:31:41.152337   11692 loader.go:375] Config loaded from file:  /home/kubernetes/.kube/config
I0705 14:31:41.170664   11692 cached_discovery.go:114] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/servergroups.json
I0705 14:31:41.174651   11692 cached_discovery.go:71] returning cached discovery info from

...

I0705 14:31:41.189379   11692 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/batch/v1beta1/serverresources.json
I0705 14:31:41.189481   11692 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/batch/v1/serverresources.json
I0705 14:31:41.189560   11692 cached_discovery.go:71] returning cached discovery info from /root/.kube/cache/discovery/10.10.10.78_6443/certificates.k8s.io/v1beta1/serverresources.json
I0705 14:31:41.192043   11692 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8" 'https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5'
I0705 14:31:41.222314   11692 round_trippers.go:443] GET https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5 200 OK in 30 milliseconds
I0705 14:31:41.222588   11692 round_trippers.go:449] Response Headers:
I0705 14:31:41.222611   11692 round_trippers.go:452]     Cache-Control: no-cache, private
I0705 14:31:41.222771   11692 round_trippers.go:452]     Content-Type: application/json
I0705 14:31:41.222812   11692 round_trippers.go:452]     Date: Sun, 05 Jul 2020 12:31:41 GMT
I0705 14:31:41.223225   11692 request.go:1068] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"nginx-ingress-controller-57f4b84b5-ldkk5","generateName":"nginx-ingress-controller-57f4b84b5-","namespace":"nginx-ingress","selfLink":"/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5","uid":"778a9c24-9785-462e-9e1e-137a1aa08c87","resourceVersion":"10435","creationTimestamp":"2020-07-05T11:54:55Z","labels":{"app":"nginx-ingress","app.kubernetes.io/component":"controller","component":"controller","pod-template-hash":"57f4b84b5","release":"nginx-ingress"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"nginx-ingress-controller-57f4b84b5","uid":"b9c42590-7efb-46d2-b37c-cec3a994bf4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2020-07-05T11:54:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:app.kubernetes.io/component":{},"f:component":{},"f:pod-template-hash":{},"f:release":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b9c42590-7efb-46d2-b37c-cec3a994bf4e\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"nginx-ingress-controller\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"POD_NAME\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{".":{},"f:apiVersion":{},"f:fieldPath":{}}}},"k:{\"name\":\"POD_NAMESPACE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:fieldRef":{".":{},"f:apiVersion":{},"f:fieldPath":{}}}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":
{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2020-07-05T12:27:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},"spec":{"volumes":[{"name":"nginx-ingress-token-rmhf8","secret":{"secretName":"nginx-ingress-token-rmhf8","defaultMode":420}}],"containers":[{"name":"nginx-ingress-controller","image":"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0","args":["/nginx-ingress-controller","--default-backend-service=nginx-ingress/nginx-ingress-default-backend","--publish-service=nginx-ingress/nginx-ingress-controller","--election-id=ingress-controller-leader","--ingress-class=nginx","--configmap=nginx-ingress/nginx-ingr
ess-controller"],"ports":[{"name":"http","containerPort":80,"protocol":"TCP"},{"name":"https","containerPort":443,"protocol":"TCP"}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}}],"resources":{},"volumeMounts":[{"name":"nginx-ingress-token-rmhf8","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"livenessProbe":{"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"readinessProbe":{"httpGet":{"path":"/healthz","port":10254,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1,"periodSeconds":10,"successThreshold":1,"failureThreshold":3},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent","securityContext":{"capabilities":{"add":["NET_BIND_SERVICE"],"drop":["ALL"]},"runAsUser":101,"allowPrivilegeEscalation":true}}],"restartPolicy":"Always","terminationGracePeriodSeconds":60,"dnsPolicy":"ClusterFirst","serviceAccountName":"nginx-ingress","serviceAccount":"nginx-ingress","nodeName":"sedeka81","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-07-05T11:54:56Z"},{"type":"Ready","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-07-05T11:54:56Z","reason":"ContainersNotReady","message":"containers with unready status: 
[nginx-ingress-controller]"},{"type":"ContainersReady","status":"False","lastProbeTime":null,"lastTransitionTime":"2020-07-05T11:54:56Z","reason":"ContainersNotReady","message":"containers with unready status: [nginx-ingress-controller]"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-07-05T11:54:56Z"}],"hostIP":"10.10.10.81","podIP":"10.244.1.2","podIPs":[{"ip":"10.244.1.2"}],"startTime":"2020-07-05T11:54:56Z","containerStatuses":[{"name":"nginx-ingress-controller","state":{"waiting":{"reason":"CrashLoopBackOff","message":"back-off 5m0s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-57f4b84b5-ldkk5_nginx-ingress(778a9c24-9785-462e-9e1e-137a1aa08c87)"}},"lastState":{"terminated":{"exitCode":143,"reason":"Error","startedAt":"2020-07-05T12:27:23Z","finishedAt":"2020-07-05T12:27:53Z","containerID":"docker://4b7d69c47884790031e665801e282dafd8ea5dfaf97d54c6659d894d88af5a7a"}},"ready":false,"restartCount":15,"image":"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0","imageID":"docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287","containerID":"docker://4b7d69c47884790031e665801e282dafd8ea5dfaf97d54c6659d894d88af5a7a","started":false}],"qosClass":"BestEffort"}}
I0705 14:31:41.239523   11692 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.18.5 (linux/amd64) kubernetes/e6503f8" 'https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5/log'
I0705 14:31:41.247040   11692 round_trippers.go:443] GET https://10.10.10.78:6443/api/v1/namespaces/nginx-ingress/pods/nginx-ingress-controller-57f4b84b5-ldkk5/log 200 OK in 7 milliseconds
I0705 14:31:41.247125   11692 round_trippers.go:449] Response Headers:
I0705 14:31:41.247146   11692 round_trippers.go:452]     Content-Type: text/plain
I0705 14:31:41.247164   11692 round_trippers.go:452]     Date: Sun, 05 Jul 2020 12:31:41 GMT
I0705 14:31:41.247182   11692 round_trippers.go:452]     Cache-Control: no-cache, private
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       0.32.0
  Build:         git-446845114
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.17.10

-------------------------------------------------------------------------------

I0705 12:27:23.597622       8 flags.go:204] Watching for Ingress class: nginx
W0705 12:27:23.598540       8 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0705 12:27:23.598663       8 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0705 12:27:23.599666       8 main.go:220] Creating API client for https://10.96.0.1:443

And here the pod description:

root@sedeka78:~# kubectl describe pod nginx-ingress-controller-57f4b84b5-ldkk5 -n nginx-ingress
Name:         nginx-ingress-controller-57f4b84b5-ldkk5
Namespace:    nginx-ingress
Priority:     0
Node:         sedeka81/10.10.10.81
Start Time:   Sun, 05 Jul 2020 13:54:56 +0200
Labels:       app=nginx-ingress
              app.kubernetes.io/component=controller
              component=controller
              pod-template-hash=57f4b84b5
              release=nginx-ingress
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/nginx-ingress-controller-57f4b84b5
Containers:
  nginx-ingress-controller:
    Container ID:  docker://545ed277d1a039cd36b0d18a66d1f58c8b44f3fc5e4cacdcde84cc68e763b0e8
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:251e733bf41cdf726092e079d32eed51791746560fff4d59cf067508ed635287
    Ports:         80/TCP, 443/TCP
    Host Ports:    0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=nginx-ingress/nginx-ingress-default-backend
      --publish-service=nginx-ingress/nginx-ingress-controller
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --configmap=nginx-ingress/nginx-ingress-controller
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Sun, 05 Jul 2020 14:33:33 +0200
      Finished:     Sun, 05 Jul 2020 14:34:03 +0200
    Ready:          False
    Restart Count:  17
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-controller-57f4b84b5-ldkk5 (v1:metadata.name)
      POD_NAMESPACE:  nginx-ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-token-rmhf8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  nginx-ingress-token-rmhf8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-token-rmhf8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  <unknown>            default-scheduler  Successfully assigned nginx-ingress/nginx-ingress-controller-57f4b84b5-ldkk5 to sedeka81
  Normal   Pulling    41m                  kubelet, sedeka81  Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
  Normal   Pulled     41m                  kubelet, sedeka81  Successfully pulled image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0"
  Normal   Created    40m (x3 over 41m)    kubelet, sedeka81  Created container nginx-ingress-controller
  Normal   Started    40m (x3 over 41m)    kubelet, sedeka81  Started container nginx-ingress-controller
  Normal   Killing    40m (x2 over 40m)    kubelet, sedeka81  Container nginx-ingress-controller failed liveness probe, will be restarted
  Normal   Pulled     40m (x2 over 40m)    kubelet, sedeka81  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.32.0" already present on machine
  Warning  Unhealthy  40m (x6 over 41m)    kubelet, sedeka81  Readiness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused
  Warning  Unhealthy  21m (x33 over 41m)   kubelet, sedeka81  Liveness probe failed: Get http://10.244.1.2:10254/healthz: dial tcp 10.244.1.2:10254: connect: connection refused
  Warning  BackOff    97s (x148 over 38m)  kubelet, sedeka81  Back-off restarting failed container

Comments:

    Tags: kubernetes crash kubernetes-ingress kvm proxmox


    Solution 1:

    Solved. I am using Debian 10 (Buster), where arptables was not in legacy mode.

    The fix was the following:

    sudo apt-get install -y iptables arptables ebtables
    
    
    update-alternatives --set iptables /usr/sbin/iptables-nft
    update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
    update-alternatives --set arptables /usr/sbin/arptables-nft
    update-alternatives --set ebtables /usr/sbin/ebtables-nft
    

    See here: update-alternatives: error: alternative /usr/sbin/arptables-legacy for arptables not registered; not setting
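    After switching the alternatives, it can be worth confirming which backend is actually active. A minimal check (assuming iptables is installed; on Buster its version string contains either "nf_tables" or "legacy"):

```shell
# Report whether iptables is running in nft or legacy mode.
# On Debian 10 the version string ends in "(nf_tables)" or "(legacy)".
mode="$(iptables --version 2>/dev/null | grep -o 'nf_tables\|legacy' || echo unknown)"
echo "iptables backend: ${mode}"
```

    The same check works for ip6tables, arptables, and ebtables; all four should report the same backend, since mixing them is exactly what confuses kube-proxy.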

    Discussion:

      Solution 2:

      Regarding the certificate problem:

      curl https://10.96.0.1:443
      curl: (60) SSL certificate problem: unable to get local issuer certificate
      More details here: https://curl.haxx.se/docs/sslcerts.html
      curl failed to verify the legitimacy of the server and therefore could not
      establish a secure connection to it. To learn more about this situation and
      how to fix it, please visit the web page mentioned above.
      

      You have two options to make it work:

      1. Use cURL with the -k option, which allows curl to make insecure connections; that is, cURL does not verify the certificate.

      2. Add the root CA (the CA that signed the server certificate) to /etc/ssl/certs/ca-certificates.crt

      I think you should use option 2, since it ensures you are connecting to the server over a verified, secure connection.
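      As a sketch of option 2 on a kubeadm-built Debian cluster (paths are assumptions, not taken from the question: kubeadm writes the cluster CA to /etc/kubernetes/pki/ca.crt on the control-plane node):

```shell
# Register the cluster CA with the system trust store so curl (and other
# TLS clients) can verify the apiserver's certificate.
sudo cp /etc/kubernetes/pki/ca.crt /usr/local/share/ca-certificates/kubernetes-ca.crt
sudo update-ca-certificates   # regenerates /etc/ssl/certs/ca-certificates.crt
```

      Appending to ca-certificates.crt by hand also works, but goes away the next time update-ca-certificates runs, which is why dropping the file into /usr/local/share/ca-certificates/ is the more durable route.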

      Regarding the failing readiness and liveness probes:

      The nginx-ingress-controller fails as soon as CPU consumption on the node reaches 100%: because it has no CPU requests, it takes too long (1 second, if I remember correctly) to respond to http://.../healthz.

      You should set CPU requests for the nginx-ingress-controller, or never let the pods on a node consume 100% of the CPU, which sounds hard to control.
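      A sketch of setting those requests with the same stable/nginx-ingress chart the question used (the controller.resources value paths come from that chart; the 100m/128Mi numbers are placeholder assumptions to size for your nodes):

```shell
# Give the controller a CPU (and memory) request so the scheduler reserves
# capacity for it and the /healthz probes are not starved under load.
helm upgrade nginx-ingress stable/nginx-ingress \
  --namespace nginx-ingress \
  --set controller.publishService.enabled=true \
  --set controller.resources.requests.cpu=100m \
  --set controller.resources.requests.memory=128Mi
```

      With a request in place the pod moves out of the BestEffort QoS class, so it is also evicted later under node pressure.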

      You can also replace Flannel with Calico. Remove Flannel, then install Calico with the following commands:

      kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/etcd.yaml
      kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/rbac.yaml
      kubectl apply -f https://docs.projectcalico.org/v3.2/getting-started/kubernetes/installation/hosted/calico.yaml
      

      See: limits-requests-nginx-ingress-controller, limit-range-pod, flannel-calico-nginx-ingress-controller, local-issuer

      Discussion:

      • Thanks! But it didn't solve the problem. I do have the root CA in that location: root@sedeka78:~# cd /etc/ssl/certs/ && ls -al | grep ca-cert* shows -rw-r--r-- 1 root root 200408 Jul 5 13:08 ca-certificates.crt. Maybe there is a communication problem between my nodes? I will reinstall the cluster with Calico; maybe that will help.
      • Did you manage to reinstall the cluster with the new network plugin?
      Solution 3:

      I can't pinpoint the exact problem, but the nginx ingress controller is in CrashLoopBackOff because it cannot reach the Kubernetes API server at https://10.96.0.1:443. There is most likely a network or connectivity problem between the nginx ingress controller pod and the Kubernetes API server.

      Try sending curl https://10.96.0.1:443 from another pod.
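      One way to run that check is a throwaway pod (a sketch; curlimages/curl is just a convenient public image, and -k is fine here because we only care whether the pod network can reach the apiserver at all):

```shell
# Launch a one-shot pod and curl the in-cluster apiserver service.
# A JSON (even 401/403) response means pod-to-apiserver networking works;
# a timeout or "connection refused" points at the CNI/overlay network.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -k -m 5 https://10.96.0.1:443/version
```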

      Discussion:

      • root@sedeka79:~# curl 10.96.0.1:443 gives: curl: (60) SSL certificate problem: unable to get local issuer certificate. More details here: curl.haxx.se/docs/sslcerts.html. curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. root@sedeka79:~#