[Posted]: 2021-02-23 11:38:11
[Question]:
I'm trying to define a persistent volume shared between two different deployments in k8s, but I'm running into problems:
Each deployment has 2 pods, and between them I'm trying to configure a shared volume. That is, if I create a txt file in deployment1/pod1 and then look in deployment1/pod2, I can't see the file.
The second problem is that I also can't see the file from the other deployment (deployment2). What currently happens is that each pod creates its own independent volume instead of sharing the same one.
Ultimately, my goal is a volume shared across pods and deployments. It's important to note that I'm running on GKE.
Here is my current configuration.
Deployment 1:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: test
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: server
          image: app1
          ports:
            - name: grpc
              containerPort: 11111
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim
Deployment 2:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: test
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: server
          image: app2
          ports:
            - name: http
              containerPort: 22222
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 500m
          volumeMounts:
            - name: test
              mountPath: /etc/test/configs
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: my-claim
PersistentVolume, PersistentVolumeClaim, and StorageClass:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
  namespace: test
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: fast
  local:
    path: /etc/test/configs
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
                - default-pool
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
  namespace: test
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fast
  resources:
    requests:
      storage: 5Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  fstype: ext4
  replication-type: regional-pd
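As an aside on the manifests above: `kubernetes.io/gce-pd` disks only support ReadWriteOnce and ReadOnlyMany, so this StorageClass cannot provide a volume that is writable from multiple nodes. On GKE, ReadWriteMany storage typically comes from Filestore. A minimal sketch, assuming the Filestore CSI driver is enabled on the cluster (the names `filestore-rwx` and `shared-rwx-claim` are made up for illustration):

```yaml
# Illustrative only: requires the GKE Filestore CSI driver add-on.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-rwx        # hypothetical name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  network: default
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-rwx-claim     # hypothetical name
  namespace: test
spec:
  accessModes:
    - ReadWriteMany          # satisfiable here, unlike with gce-pd
  storageClassName: filestore-rwx
  resources:
    requests:
      storage: 1Ti           # Filestore tiers have large minimum sizes
```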
And `kubectl describe` for the PV and PVC:
$ kubectl describe pvc -n test
Name:          my-claim
Namespace:     test
StorageClass:  fast
Status:        Bound
Volume:        test-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: fast
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      5Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>

$ kubectl describe pv -n test
Name:            test-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    fast
Status:          Bound
Claim:           test/my-claim
Reclaim Policy:  Retain
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        5Gi
Node Affinity:
  Required Terms:
    Term 0:  cloud.google.com/gke-nodepool in [default-pool]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /etc/test/configs
Events:  <none>
[Discussion]:
- AFAIK this isn't possible with local volumes. You could set up a container configured as an NFS server and create exports that the other containers can mount. github.com/kubernetes/examples/tree/master/staging/volumes/nfs
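The suggestion above can be sketched as follows: run one in-cluster NFS server and point a single PV at its Service, then have both deployments mount the same claim. The names `nfs-pv`, `nfs-claim`, and the `nfs-server` Service are assumptions following the linked kubernetes/examples setup:

```yaml
# Assumes an NFS server Deployment/Service named "nfs-server" already
# runs in the "test" namespace (as in the linked example).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv               # hypothetical name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # NFS genuinely supports RWX across nodes
  nfs:
    # On some clusters kubelet cannot resolve cluster DNS; if the mount
    # fails, substitute the Service's ClusterIP here.
    server: nfs-server.test.svc.cluster.local
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim            # hypothetical name
  namespace: test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the pre-created PV, skip dynamic provisioning
  resources:
    requests:
      storage: 5Gi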
Tags: kubernetes google-cloud-platform google-kubernetes-engine persistent-storage kubernetes-pvc