【Title】: Kubernetes fails to pull images from k8s.gcr.io
【Posted】: 2021-01-21 12:50:35
【Question】:

I am trying to install Kubernetes on my CentOS machine, and when I initialize the cluster I get the error below.

I should mention that I am behind a corporate proxy. I have already configured it for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf, and Docker itself works fine.
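For reference, a typical drop-in of that kind looks like the following (the proxy URL is a placeholder, and the NoProxy list is an assumption to adapt to your network):

```ini
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.corp:3128"
Environment="HTTPS_PROXY=http://proxy.example.corp:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```

After editing the drop-in, apply it with `systemctl daemon-reload && systemctl restart docker`.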

No matter where I look, I cannot find a solution to this problem.

Thanks for your help.

# kubeadm init
W1006 14:29:38.432071    7560 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 14:29:38.432147    7560 version.go:103] falling back to the local client version: v1.19.2
W1006 14:29:38.432367    7560 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
        [WARNING HTTPProxy]: Connection to "https://192.168.XXX.XXX" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". If that is not intended, adjust your proxy settings
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1

# kubeadm config images pull
W1006 17:33:41.362395   80605 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 17:33:41.362454   80605 version.go:103] falling back to the local client version: v1.19.2
W1006 17:33:41.362685   80605 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
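For reference, the HTTPProxy/HTTPProxyCIDR warnings in the log above usually mean the local and in-cluster ranges should be excluded from the proxy in the shell that runs kubeadm (kubeadm performs its own HTTPS fetches, independently of Docker's drop-in). A minimal sketch; the proxy URL is a placeholder, and the ranges are taken from the warnings:

```shell
# Placeholder corporate proxy; replace with your real one.
export HTTP_PROXY="http://proxy.example.corp:3128"
export HTTPS_PROXY="$HTTP_PROXY"
# Exclude loopback, the node network, and the service CIDR that the
# preflight check warned about (10.96.0.0/12).
export NO_PROXY="127.0.0.1,localhost,192.168.0.0/16,10.96.0.0/12"
```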

【Question comments】:

    Tags: docker kubernetes proxy kubeadm


    【Solution 1】:

    Perhaps the root certificates on your machine are out of date, so it does not consider the certificate for k8s.gcr.io valid. The message x509: certificate signed by unknown authority hints at this.

    Try updating them: yum update ca-certificates || yum reinstall ca-certificates

    【Discussion】:

    • Perhaps also update-ca-trust extract? I found a similar problem described here
    • Same result. I am looking for another way to get these images
    【Solution 2】:

    I just ran dig on k8s.gcr.io and added the IP from the response to /etc/hosts.

    # dig k8s.gcr.io
    
    ; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.2 <<>> k8s.gcr.io
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44303
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
    
    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 512
    ;; QUESTION SECTION:
    ;k8s.gcr.io.            IN  A
    
    ;; ANSWER SECTION:
    k8s.gcr.io.     21599   IN  CNAME   googlecode.l.googleusercontent.com.
    googlecode.l.googleusercontent.com. 299 IN A    64.233.168.82
    
    ;; Query time: 72 msec
    ;; SERVER: 8.8.8.8#53(8.8.8.8)
    ;; WHEN: Tue Nov 24 11:45:37 CST 2020
    ;; MSG SIZE  rcvd: 103
    
    # cat /etc/hosts
    64.233.168.82   k8s.gcr.io
    
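The dig-then-hosts step above can be scripted; here is a sketch that parses a copy of the ANSWER section from the dig output above (no network access, purely to show the extraction):

```shell
# Sample ANSWER section, copied from the dig output above.
answers='k8s.gcr.io.     21599   IN  CNAME   googlecode.l.googleusercontent.com.
googlecode.l.googleusercontent.com. 299 IN A    64.233.168.82'

# Take the first A record and format an /etc/hosts line. To apply it
# for real, pipe the printf output through: sudo tee -a /etc/hosts
ip=$(printf '%s\n' "$answers" | awk '$4 == "A" {print $5; exit}')
printf '%s\tk8s.gcr.io\n' "$ip"
```

As the discussion notes, this pins an address whose DNS TTL is short (299 s in the answer above), so treat it as a temporary workaround rather than a fix.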

    Now it works!

    # kubeadm config images pull
    W1124 11:46:41.297352   50730 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
    [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.4
    [config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.4
    [config/images] Pulled k8s.gcr.io/pause:3.2
    [config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
    [config/images] Pulled k8s.gcr.io/coredns:1.7.0
    

    【Discussion】:

    • This helps as a temporary workaround, but why doesn't DNS resolution work? Everything in resolv.conf is configured correctly
    【Solution 3】:

    Also on v1.19.2 - I ran into the same error.

    This seems related to the issue mentioned here (and, I think, here as well).

    Reinstalling kubeadm on the node and running the kubeadm init workflow again worked - it now uses v1.19.3 and the error is gone.

    All control-plane images were pulled successfully.

    Also verified with:

    sudo kubeadm config images pull
    

    (*) You can also run kubeadm init --kubernetes-version=X.Y.Z (1.19.3 in our case).

    【Discussion】:
