【Title】: Kubernetes: Unable to mount volumes for pod with timeout
【Posted】: 2016-12-24 03:42:39
【Question】:

I am trying to mount an NFS volume into my pod, without success.

I have a server running an NFS export, and mounting it from another running server works fine:

sudo mount -t nfs -o proto=tcp,port=2049 10.0.0.4:/export /mnt

Another thing worth mentioning: when I remove the volume from the deployment and the pod is running, I can log into it and successfully telnet to 10.0.0.4 on ports 111 and 2049. So there really does not seem to be any communication problem.

Also:

showmount -e 10.0.0.4
Export list for 10.0.0.4:
/export/drive 10.0.0.0/16
/export       10.0.0.0/16

So I can assume there is no network or configuration problem between the server and the client (I am using Amazon, and the server I tested from is in the same security group as the k8s minions).

P.S.: the server is a plain Ubuntu machine with a 50 GB disk.

Kubernetes v1.3.4

So I started by creating my PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.4
    path: "/export"

And my PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
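As a side note (not required for the fix below): without labels/selectors, a claim binds to any PV whose capacity and access modes match, so it could in principle grab a different volume in a cluster with several PVs. A hedged sketch of pinning this claim to the PV named `nfs` via the optional `spec.volumeName` field:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
spec:
  volumeName: nfs        # optional: bind only to the PV named "nfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```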

This is how kubectl describes them:

  Name:       nfs
    Labels:     <none>
    Status:     Bound
    Claim:      default/nfs-claim
    Reclaim Policy: Retain
    Access Modes:   RWX
    Capacity:   50Gi
    Message:
    Source:
        Type:   NFS (an NFS mount that lasts the lifetime of a pod)
        Server: 10.0.0.4
        Path:   /export
        ReadOnly:   false
    No events.

  Name:       nfs-claim
    Namespace:  default
    Status:     Bound
    Volume:     nfs
    Labels:     <none>
    Capacity:   0
    Access Modes:
    No events.

The pod deployment:

    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: mypod
      labels:
        name: mypod
    spec:
      replicas: 1
      strategy:
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
        type: RollingUpdate
      template:
        metadata:
          name: mypod
          labels:
            # Important: these labels need to match the selector above, the api server enforces this constraint
            name: mypod
        spec:
          containers:
          - name: abcd
            image: irrelevant to the question
            ports:
            - containerPort: 80
            env:
            - name: hello
              value: world
            volumeMounts:
            - mountPath: "/mnt"
              name: nfs
          volumes:
            - name: nfs
              persistentVolumeClaim:
                claimName: nfs-claim
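One way to narrow down a failure like this is to take the PV/PVC layer out of the picture and mount the export directly with the in-tree `nfs` volume source. A hedged debugging sketch, replacing only the `volumes:` section of the pod spec above:

```yaml
# Debugging variant: mount the NFS export directly, bypassing PV/PVC,
# to see whether the failure is in binding or in the mount itself.
volumes:
  - name: nfs
    nfs:
      server: 10.0.0.4
      path: /export
```

If this variant fails with the same timeout, the problem is on the node (e.g. missing NFS client tools), not in the PV/PVC binding.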

When I deploy my pod, I get the following:

Volumes:
      nfs:
        Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  nfs-claim
        ReadOnly:   false
      default-token-6pd57:
        Type:   Secret (a volume populated by a Secret)
        SecretName: default-token-6pd57
    QoS Tier:   BestEffort
    Events:
      FirstSeen LastSeen    Count   From                            SubobjectPath   Type        Reason      Message
      --------- --------    -----   ----                            -------------   --------    ------      -------
      13m       13m     1   {default-scheduler }                            Normal      Scheduled   Successfully assigned xxx-2140451452-hjeki to ip-10-0-0-157.us-west-2.compute.internal
      11m       7s      6   {kubelet ip-10-0-0-157.us-west-2.compute.internal}          Warning     FailedMount Unable to mount volumes for pod "xxx-2140451452-hjeki_default(93ca148d-6475-11e6-9c49-065c8a90faf1)": timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]
      11m       7s      6   {kubelet ip-10-0-0-157.us-west-2.compute.internal}          Warning     FailedSync  Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "xxx-2140451452-hjeki"/"default". list of unattached/unmounted volumes=[nfs]

I have tried everything I know and everything I can think of. What am I missing or doing wrong here?

【Comments】:

    Tags: linux amazon-web-services kubernetes


    【Solution 1】:

    我测试了 Kubernetes 的 1.3.4 和 1.3.5 版本,但 NFS 挂载对我不起作用。后来我切换到 1.2.5,那个版本给了我一些更详细的信息( kubectl describe pod ...)。事实证明,hyperkube 映像中缺少“nfs-common”。在我将 nfs-common 添加到基于主节点和工作节点上的 hyperkube 映像的所有容器实例之后,NFS 共享开始正常工作(挂载成功)。所以这里就是这种情况。我在实践中对其进行了测试,它解决了我的问题。
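The fix described in the answer could be sketched as a small Dockerfile layered on top of hyperkube. Note this is a hypothetical sketch: the answer does not name the exact base image or tag, so the `FROM` line below is an assumption.

```dockerfile
# Hypothetical sketch of the answer's fix: extend the hyperkube image
# with nfs-common so the kubelet can invoke mount.nfs on the node.
# Base image name/tag is an assumption, not stated in the answer.
FROM gcr.io/google_containers/hyperkube-amd64:v1.3.4
RUN apt-get update && \
    apt-get install -y --no-install-recommends nfs-common && \
    rm -rf /var/lib/apt/lists/*
```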

    【Discussion】:
