【Question Title】: Kubernetes Persistent Volume is not working on GCE
【Posted】: 2017-10-08 04:45:24
【Question】:

I am trying to make my Elasticsearch pod persistent, so that the data survives when the deployment or the pod is recreated. Elasticsearch is part of a Graylog2 setup.

After setting everything up, I sent some logs to Graylog and could see them appear on the dashboard. However, after I deleted the Elasticsearch pod and it was recreated, all the data was gone from the Graylog dashboard.

I am using GCE.

Here is my Persistent Volume configuration:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  labels:
    type: gcePD
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: ext4
    pdName: elastic-pv-disk

The Persistent Volume Claim configuration:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
  labels:
    type: gcePD
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
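One thing worth noting about the claim above: since no storageClassName is set, the PVC binds to any available PV whose capacity and access mode match. With a single PV this works, but the binding can be pinned explicitly with volumeName (the field below is a suggested addition, not part of the original question):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
  labels:
    type: gcePD
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  # Suggested addition: bind this claim to the PV defined above
  # instead of relying on capacity/access-mode matching.
  volumeName: elastic-pv
```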

Here is my Elasticsearch deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elastic-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        type: elasticsearch
    spec:
      containers:
      - name: elastic-container
        image: gcr.io/project/myelasticsearch:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 9300
          name: first-port
          protocol: TCP
        - containerPort: 9200
          name: second-port
          protocol: TCP
        volumeMounts:
        - name: elastic-pd
          mountPath: /data/db
      volumes:
      - name: elastic-pd
        persistentVolumeClaim:
          claimName: elastic-pvc

Output of kubectl describe pod:

Name:       elastic-deployment-1423685295-jt6x5
Namespace:  default
Node:       gke-sd-logger-default-pool-2b3affc0-299k/10.128.0.6
Start Time: Tue, 09 May 2017 22:59:59 +0500
Labels:     pod-template-hash=1423685295
        type=elasticsearch
Status:     Running
IP:     10.12.0.11
Controllers:    ReplicaSet/elastic-deployment-1423685295
Containers:
  elastic-container:
    Container ID:   docker://8774c747e2a56363f657a583bf5c2234ed2cff64dc21b6319fc53fdc5c1a6b2b
    Image:      gcr.io/thematic-flash-786/myelasticsearch:v1
    Image ID:       docker://sha256:7c25be62dbad39c07c413888e275ae419a66070d37e0d98bf5008e15d7720eec
    Ports:      9300/TCP, 9200/TCP
    Requests:
      cpu:      100m
    State:      Running
      Started:      Tue, 09 May 2017 23:02:11 +0500
    Ready:      True
    Restart Count:  0
    Volume Mounts:
      /data/db from elastic-pd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qtdbb (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True
  Ready     True
  PodScheduled  True
Volumes:
  elastic-pd:
    Type:   PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  elastic-pvc
    ReadOnly:   false
  default-token-qtdbb:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-qtdbb
QoS Class:  Burstable
Tolerations:    <none>
No events.

Output of kubectl describe pv:

Name:       elastic-pv
Labels:     type=gcePD
StorageClass:
Status:     Bound
Claim:      default/elastic-pvc
Reclaim Policy: Retain
Access Modes:   RWO
Capacity:   200Gi
Message:
Source:
    Type:   GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
    PDName: elastic-pv-disk
    FSType: ext4
    Partition:  0
    ReadOnly:   false
No events.

Output of kubectl describe pvc:

Name:       elastic-pvc
Namespace:  default
StorageClass:
Status:     Bound
Volume:     elastic-pv
Labels:     type=gcePD
Capacity:   200Gi
Access Modes:   RWO
No events.

Confirmation that the real disk exists (screenshot omitted in this copy):

What could be the reason the Persistent Volume is not persisting data?

【Discussion】:

  • Two things come to mind. Does elastic-pv-disk already exist in GCP? Is there any other volume that might match the claim's request (200Gi)? Showing the output of kubectl describe <pod>, kubectl describe <pv>, and kubectl describe <pvc> might help.
  • @AndyShinn Please see the update. I have been pulling my hair out over this.
  • And there is only one PV.
  • Is /data/db the correct location where ES actually stores its data? Have you tried kubectl exec -it <pod> bash into the container to confirm that the ES data actually lands there?

Tags: elasticsearch docker kubernetes google-cloud-platform persistent-storage
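Following up on the last comment above, the quickest way to see whether the persistent disk is even being written to is to exec into the pod and compare the two candidate data directories (the pod name is taken from the describe output above; this is a diagnostic sketch, not from the original question):

```shell
# List the path currently mounted from the persistent disk
kubectl exec -it elastic-deployment-1423685295-jt6x5 -- ls -la /data/db

# List the path where the official Elasticsearch image writes its data
kubectl exec -it elastic-deployment-1423685295-jt6x5 -- \
  ls -la /usr/share/elasticsearch/data
```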


【Solution 1】:

In the official image, Elasticsearch stores its data in /usr/share/elasticsearch/data, not /data/db. It looks like you need to update the mount path to /usr/share/elasticsearch/data; otherwise the data is not actually being written to the persistent volume.
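Applied to the deployment in the question, only the mountPath changes (assuming the custom image follows the official image's data layout):

```yaml
        volumeMounts:
        - name: elastic-pd
          # Mount the PD where the official image keeps its data
          mountPath: /usr/share/elasticsearch/data
```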

【Discussion】:

  • When I use /usr/share/elasticsearch/data as the mount path, the container fails to run for some reason. This is the error I get from kubectl describe pod: Warning BackOff 4s kubelet, gke-standard-cluster-1-default-pool-8e52b876-r0xq Back-off restarting failed container
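The back-off reported in the comment above is commonly a permissions problem rather than a path problem: a freshly formatted GCE PD is mounted owned by root (and contains a lost+found directory), while the official Elasticsearch image runs as a non-root user. A common workaround, sketched here as an assumption since the container logs are not shown, is an init container that fixes ownership before Elasticsearch starts:

```yaml
    spec:
      # Runs before the main container: chown the PD mount so the
      # non-root Elasticsearch user can write to it. UID/GID 1000 is
      # what the official image uses and is an assumption here.
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        volumeMounts:
        - name: elastic-pd
          mountPath: /usr/share/elasticsearch/data
```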