【Posted】: 2020-10-09 21:47:35
【Problem description】:
I am new to Kubernetes, so I apologize if my question seems vague; I have tried to be as detailed as possible. I have a pod with a GPU on Google Cloud, managed through Kubernetes. The GPU handles a set of tasks, say classifying images. For this I created a service with Kubernetes; the Service section of my yaml file is shown below. The url of this service will be http://model-server-service.default.svc.cluster.local, since the name of the service is model-server-service.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: model-server
  name: model-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: model-server
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: model-server
    spec:
      containers:
      - args:
        - -t
        - "120"
        - -b
        - "0.0.0.0"
        - app:flask_app
        command:
        - gunicorn
        env:
        - name: ENV
          value: staging
        - name: GCP
          value: "2"
        image: gcr.io/my-production/my-model-server:myGitHash
        imagePullPolicy: Always
        name: model-server
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        resources:
          limits:
            nvidia.com/gpu: 1
        ports:
        - containerPort: 8000
          protocol: TCP
        volumeMounts:
        - name: model-files
          mountPath: /model-server/models
      # These containers are run during pod initialization
      initContainers:
      - name: model-download
        image: gcr.io/my-production/my-model-server:myGitHash
        command:
        - gsutil
        - cp
        - -r
        - gs://my-staging-models/*
        - /model-files/
        volumeMounts:
        - name: model-files
          mountPath: "/model-files"
      volumes:
      - name: model-files
        emptyDir: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: model-server
  name: model-server-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app: model-server
  sessionAffinity: None
  type: ClusterIP
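As a quick sanity check, the in-cluster URL quoted above is just the Service name and namespace joined with the cluster's default DNS suffix. A minimal sketch (the throwaway curl pod at the end is an illustrative assumption, not part of the original setup, and needs a live cluster):

```shell
# Build the in-cluster DNS name from the Service manifest fields
# (metadata.name = model-server-service, metadata.namespace = default).
service="model-server-service"
namespace="default"
url="http://${service}.${namespace}.svc.cluster.local"
echo "$url"

# With a running cluster, the service could then be probed from a
# throwaway pod, e.g.:
# kubectl run curl-test --rm -it --restart=Never \
#   --image=curlimages/curl -- curl -s "$url"
```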
My problem starts here. I am creating a new set of tasks. For this new set of tasks I will need a lot of memory, so I do not want to use the previous service; I want to make it part of a separate, new service, something with the url http://model-server-heavy-service.default.svc.cluster.local. I tried to create a new yaml file, model-server-heavy.yaml. In this new yaml file I changed the name of the service from model-server-service to model-server-heavy-service. I also changed the app label and the name from model-server to model-server-heavy, so the final yaml file looks like the one I have put at the end of this post. Unfortunately, the new model server does not work, and I get the following status for it on Kubernetes:
model-server-asdhjs-asd    1/1   Running            0   21m
model-server-heavy-xnshk   0/1   CrashLoopBackOff   8   21m
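When a pod reports CrashLoopBackOff, the usual first step is to inspect its events and previous logs. A sketch of the commands, using the pod and deployment names from the output above (the kubectl lines are commented out because they need a live cluster; the tag-extraction snippet at the end is plain shell and shows one way to compare the deployed tag against the intended one):

```shell
# Inspect why the pod keeps restarting (the events often reveal an
# image-pull problem such as a mistyped tag):
# kubectl describe pod model-server-heavy-xnshk
# kubectl logs model-server-heavy-xnshk --previous

# The image the Deployment actually references can be read back with:
# kubectl get deployment model-server-heavy \
#   -o jsonpath='{.spec.template.spec.containers[0].image}'

# Plain-shell check: extract the tag from an image reference so it can
# be compared against the tag that was actually pushed to the registry.
image="gcr.io/my-production/my-model-server:myGitHash"
tag="${image##*:}"
echo "$tag"
```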
Can someone shed some light on what I am doing wrong, and what the alternatives are? Why do I get CrashLoopBackOff for the second model server? What have I done incorrectly in its configuration?
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: model-server-heavy
  name: model-server-heavy
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: model-server-heavy
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: model-server-heavy
    spec:
      containers:
      - args:
        - -t
        - "120"
        - -b
        - "0.0.0.0"
        - app:flask_app
        command:
        - gunicorn
        env:
        - name: ENV
          value: staging
        - name: GCP
          value: "2"
        image: gcr.io/my-production/my-model-server:mgGitHash
        imagePullPolicy: Always
        name: model-server-heavy
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        resources:
          limits:
            nvidia.com/gpu: 1
        ports:
        - containerPort: 8000
          protocol: TCP
        volumeMounts:
        - name: model-files
          mountPath: /model-server-heavy/models
      # These containers are run during pod initialization
      initContainers:
      - name: model-download
        image: gcr.io/my-production/my-model-server:myGitHash
        command:
        - gsutil
        - cp
        - -r
        - gs://my-staging-models/*
        - /model-files/
        volumeMounts:
        - name: model-files
          mountPath: "/model-files"
      volumes:
      - name: model-files
        emptyDir: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 0
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: model-server-heavy
  name: model-server-heavy-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8000
  selector:
    app: model-server-heavy
  sessionAffinity: None
  type: ClusterIP
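Incidentally, although the motivation for the second service is that it needs a lot of memory, the manifest above only limits the GPU; nothing in the original post sets a memory request. A hedged sketch of what an explicit memory reservation for the model-server-heavy container could look like (the amounts are placeholders, not values from the original setup):

```yaml
# Hypothetical resources block for the model-server-heavy container;
# the memory figures below are illustrative placeholders.
resources:
  requests:
    memory: "8Gi"
  limits:
    memory: "16Gi"
    nvidia.com/gpu: 1
```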
【Discussion】:
-
A new yaml file with a different service name, as you have done, should work. What error are you getting?
-
Here is the error: model-server-platelet 0/1 Init:ImagePullBackOff
-
That error only means it cannot pull the image defined in that deployment.
-
Do you know what is causing it? One service alone is fine, but when I add the new service, both return errors: model-server 0/1 Init:ImagePullBackOff, and for service 2, model-server-platelet 0/1 Init:ImagePullBackOff
-
That is a pod error, not a service error. Can you update your question with the pod spec?
Tags: kubernetes deep-learning google-kubernetes-engine gcloud kubernetes-ingress