【Question Title】: How can I define a shared persistent volume between two different deployments in k8s?
【Posted】: 2021-02-23 11:38:11
【Question Description】:

I am trying to define a shared persistent volume between two different deployments in k8s, but I am running into some problems:

Each deployment has 2 pods, and across the deployments I am trying to configure a shared volume. That means that if I create a txt file in deployment1/pod1 and then look in deployment1/pod2, I cannot see the file.

The second problem is that the file is not visible in the other deployment (deployment2) either. What currently happens is that each pod creates its own isolated volume instead of sharing the same one.

Ultimately, my goal is a volume that is shared across the pods and across the deployments. It is important to note that I am running on GKE.

Below is my current configuration.

Deployment 1:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: server
          image: app1
          ports:
            - name: grpc
              containerPort: 11111
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim

Deployment 2:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: test
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: server
          image: app2
          ports:
            - name: http
              containerPort: 22222
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim

PersistentVolume, PersistentVolumeClaim and StorageClass:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
  namespace: test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  local:
    path: /etc/test/configs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-nodepool
          operator: In
          values:
          - default-pool
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: test
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: regional-pd

And `kubectl describe` of the PV and PVC:

     $ kubectl describe pvc -n test
        Name:          my-claim
        Namespace:     test
        StorageClass:  fast
        Status:        Bound
        Volume:        test-pv
        Labels:        <none>
        Annotations:   pv.kubernetes.io/bind-completed: yes
                       pv.kubernetes.io/bound-by-controller: yes
                       volume.beta.kubernetes.io/storage-class: fast
        Finalizers:    [kubernetes.io/pvc-protection]
        Capacity:      5Gi
        Access Modes:  RWX
        VolumeMode:    Filesystem
        Mounted By:    <none>
        Events:        <none>
      $ kubectl describe pv -n test
        Name:              test-pv
        Labels:            <none>
        Annotations:       pv.kubernetes.io/bound-by-controller: yes
        Finalizers:        [kubernetes.io/pv-protection]
        StorageClass:      fast
        Status:            Bound
        Claim:             test/my-claim
        Reclaim Policy:    Retain
        Access Modes:      RWX
        VolumeMode:        Filesystem
        Capacity:          5Gi
        Node Affinity:
          Required Terms:
            Term 0:        cloud.google.com/gke-nodepool in [default-pool]
        Message:
        Source:
            Type:  LocalVolume (a persistent volume backed by local storage on a node)
            Path:  /etc/test/configs
        Events:    <none>

【Question Discussion】:

Tags: kubernetes google-cloud-platform google-kubernetes-engine persistent-storage kubernetes-pvc


【Solution 1】:

The GCE-PD CSI storage driver does not support ReadWriteMany; the most it supports is ReadOnlyMany. For ReadWriteMany you need to use an NFS mount instead.

From the docs on how to use a persistent disk with multiple readers:

Create a PersistentVolume and a PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-readonly-pv
spec:
  storageClassName: ""
  capacity:
    storage: 10Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: my-readonly-pvc
  gcePersistentDisk:
    pdName: my-test-disk
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-readonly-pvc
spec:
  # Specify "" as the storageClassName so it matches the PersistentVolume's StorageClass.
  # A nil storageClassName value uses the default StorageClass. For details, see
  # https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
  storageClassName: ""
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 10Gi

Use the PersistentVolumeClaim in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
      - "sleep"
      - "3600"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
      readOnly: true
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-readonly-pvc
      readOnly: true

Now you can have multiple pods on different nodes that all mount this PersistentVolumeClaim in read-only mode. However, you cannot attach a persistent disk in write mode on multiple nodes at the same time.
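If the pods also need to write, one common pattern on GKE is to back the shared PersistentVolume with an NFS export (for example from an in-cluster NFS server or Cloud Filestore), since NFS supports ReadWriteMany across nodes. A minimal sketch, assuming an NFS server is already reachable at `nfs-server.test.svc.cluster.local` and exports `/exports` (both names are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany            # NFS allows read-write mounts from many nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs-server.test.svc.cluster.local   # illustrative server address
    path: /exports                               # illustrative export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the pre-created PV, not a provisioner
  resources:
    requests:
      storage: 5Gi
```

Both deployments can then keep referencing `claimName: my-claim` unchanged, and a file written by any pod on any node becomes visible to all the others.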

【Discussion】:

  • When I try this I run into a problem: `test:pod/app1-69f499c8d-srff2: 0/9 nodes are available: 3 Insufficient cpu, 6 node(s) had volume node affinity conflict`. That is exactly my issue: when the pods come up, the first pod grabs the volume and the other apps' pods cannot attach to it. Can you explain the NFS approach?