[Title]: Restarting a Kubernetes PetSet wipes the persistent volume
[Posted]: 2017-03-03 06:06:23
[Question]:

I am running a 3-node ZooKeeper PetSet whose volumes are backed by GlusterFS persistent volumes. On the first start of the PetSet, everything works fine.

One of my requirements is that if the PetSet is killed, the pods keep using the same persistent volumes after I restart it.

The problem I am facing is that after restarting the PetSet, the original data in the persistent volumes is wiped. How can I fix this, short of manually copying the files out of the volumes beforehand? I have tried both the Retain and Delete reclaimPolicy; in both cases the volumes end up cleaned. Thanks.

The configuration files are below.

PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-0
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-0
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-1
    namespace: default
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: gluster-cluster
    path: zookeeper-vol-2
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    name: glusterfsvol-zookeeper-2
    namespace: default

PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-0
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfsvol-zookeeper-2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
       storage: 1Gi

PetSet

apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: zookeeper
spec:
  serviceName: "zookeeper"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: zookeeper
        securityContext:
          privileged: true
          capabilities:
            add:
              - IPC_LOCK
        image: kuanghaochina/zookeeper-3.5.2-alpine-jdk:latest
        imagePullPolicy: Always
        ports:
          - containerPort: 2888
            name: peer
          - containerPort: 3888
            name: leader-election
          - containerPort: 2181
            name: client
        env:
        - name: ZOOKEEPER_LOG_LEVEL
          value: INFO
        volumeMounts:
        - name: glusterfsvol
          mountPath: /opt/zookeeper/data
          subPath: data
        - name: glusterfsvol
          mountPath: /opt/zookeeper/dataLog
          subPath: dataLog
  volumeClaimTemplates:
  - metadata:
      name: glusterfsvol
    spec:
      accessModes: 
        - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
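For context, the claims the PetSet controller creates from volumeClaimTemplates are named `<template name>-<petset name>-<pod ordinal>` (this naming rule is documented controller behavior, not stated in the original post); this is why the pre-created PVCs and the PV claimRefs above all use the names glusterfsvol-zookeeper-0/1/2. A quick sketch of the rule:

```shell
# Claim naming rule for PetSet/StatefulSet volumeClaimTemplates:
#   <volumeClaimTemplate name>-<petset name>-<pod ordinal>
TEMPLATE="glusterfsvol"
PETSET="zookeeper"
for ORDINAL in 0 1 2; do
  echo "${TEMPLATE}-${PETSET}-${ORDINAL}"
done
# prints glusterfsvol-zookeeper-0 .. glusterfsvol-zookeeper-2
```

Because these generated claim names line up with the pre-created PVCs, each pod rebinds to the same PV across restarts.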

The cause I found is that I was using zkServer-initialize.sh to force ZooKeeper to use a given id, but that script cleans the dataDir.

[Comments]:

  • Welcome to StackOverflow — you should probably also share your configuration so that people can reproduce your setup more easily and answer your question.
  • Thanks for the help. The configuration files have been added.

Tags: kubernetes apache-zookeeper glusterfs


[Solution 1]:

The cause I found is that I was using zkServer-initialize.sh to force ZooKeeper to use a given id, but that script cleans the dataDir.
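If each pod still needs a deterministic id, one workaround is to write the myid file yourself instead of calling zkServer-initialize.sh, which recreates the dataDir. A minimal sketch, assuming pods are named zookeeper-0, zookeeper-1, … and that DATA_DIR points at the mounted volume (both are assumptions for illustration, not from the original post):

```shell
# Sketch: derive the ZooKeeper id from the pod ordinal and write myid
# directly, so existing data under dataDir is never touched.
DATA_DIR="${DATA_DIR:-./zk-data}"      # in the pod this would be /opt/zookeeper/data
POD_NAME="${POD_NAME:-zookeeper-0}"    # in the pod: $(hostname)
ORDINAL="${POD_NAME##*-}"              # zookeeper-0 -> 0
MYID=$((ORDINAL + 1))                  # ZooKeeper server ids start at 1
mkdir -p "$DATA_DIR"
# Only create myid if it is missing; never rewrite or clean the directory
[ -f "$DATA_DIR/myid" ] || echo "$MYID" > "$DATA_DIR/myid"
```

After that, starting the server normally (e.g. zkServer.sh) leaves the data and dataLog directories on the GlusterFS volume intact across restarts.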

[Discussion]:
