【Title】: Creating pod and service for custom kafka connect image with kubernetes
【Posted】: 2022-01-27 00:23:18
【Question】:

I have successfully created a custom Kafka Connect image that includes Confluent Hub connectors.

I am trying to create the pod and service to run it on GCP with Kubernetes.

How should I configure the YAML file? I took the following code from the quick start guide. This is what I tried. Dockerfile:

FROM confluentinc/cp-kafka-connect-base:latest
ENV CONNECT_PLUGIN_PATH="/usr/share/java,/usr/share/confluent-hub-components,/usr/share/java/kafka-connect-jdbc"
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.2.6
RUN confluent-hub install --no-prompt debezium/debezium-connector-mysql:1.7.1
RUN confluent-hub install --no-prompt debezium/debezium-connector-postgresql:1.7.1
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-oracle-cdc:1.5.0
RUN wget -O /usr/share/confluent-hub-components/confluentinc-kafka-connect-jdbc/lib/mysql-connector-java-8.0.26.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar

Modified part of confluent-platform.yaml:

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: maxprimeaery/kafka-connect-jdbc:latest   #confluentinc/cp-server-connect:7.0.1
    init: confluentinc/confluent-init-container:2.2.0-1
  configOverrides:
    server:
      - config.storage.replication.factor=1
      - offset.storage.replication.factor=1
      - status.storage.replication.factor=1
  podTemplate:
    resources:
      requests:
        cpu: 200m
        memory: 512Mi
    probe:
      liveness:
        periodSeconds: 10
        failureThreshold: 5
        timeoutSeconds: 500
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
      runAsNonRoot: true

This is the error I get in the console for the connect-0 pod:

Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  45m                 default-scheduler  Successfully assigned confluent/connect-0 to gke-my-kafka-cluster-default-pool-6ee97fb9-fh9w
  Normal   Pulling    45m                 kubelet            Pulling image "confluentinc/confluent-init-container:2.2.0-1"
  Normal   Pulled     45m                 kubelet            Successfully pulled image "confluentinc/confluent-init-container:2.2.0-1" in 17.447881861s
  Normal   Created    45m                 kubelet            Created container config-init-container
  Normal   Started    45m                 kubelet            Started container config-init-container
  Normal   Pulling    45m                 kubelet            Pulling image "maxprimeaery/kafka-connect-jdbc:latest"
  Normal   Pulled     44m                 kubelet            Successfully pulled image "maxprimeaery/kafka-connect-jdbc:latest" in 23.387676944s
  Normal   Created    44m                 kubelet            Created container connect
  Normal   Started    44m                 kubelet            Started container connect
  Warning  Unhealthy  41m (x5 over 42m)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 404
  Normal   Killing    41m                 kubelet            Container connect failed liveness probe, will be restarted
  Warning  Unhealthy  5m (x111 over 43m)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 404
  Warning  BackOff    17s (x53 over 22m)  kubelet            Back-off restarting failed container

Should I create a separate pod and service for the custom Kafka connector, or do I have to configure the code above?

Update to my question

I have figured out how to configure it in Kubernetes, adding this to the Connect pod spec:

apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: confluentinc/cp-server-connect:7.0.1
    init: confluentinc/confluent-init-container:2.2.0-1
  configOverrides:
    server:
      - config.storage.replication.factor=1
      - offset.storage.replication.factor=1
      - status.storage.replication.factor=1
  build:
    type: onDemand
    onDemand:
      plugins:
        locationType: confluentHub
        confluentHub:
          - name: kafka-connect-jdbc
            owner: confluentinc
            version: 10.2.6
          - name: kafka-connect-oracle-cdc
            owner: confluentinc
            version: 1.5.0
          - name: debezium-connector-mysql
            owner: debezium
            version: 1.7.1
          - name: debezium-connector-postgresql
            owner: debezium
            version: 1.7.1
      storageLimit: 4Gi
  podTemplate:
    resources:
      requests:
        cpu: 200m
        memory: 1024Mi
    probe:
      liveness:
        periodSeconds: 180 #DONT CHANGE THIS
        failureThreshold: 5
        timeoutSeconds: 500
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
      runAsNonRoot: true

But I still can't add the mysql-connector from the Maven repo.

I also tried building a new Docker image, but it didn't work. I also tried this new code section:

        locationType: url #NOT WORKING. NO IDEA HOW TO CONFIGURE THAT
        url:
          - name: mysql-connector-java
            archivePath: https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.26/mysql-connector-java-8.0.26.jar
            checksum: sha512sum #definitely wrong
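Note that the `checksum` field expects the actual SHA-512 hex digest of the downloaded archive, not the literal string `sha512sum`. A minimal sketch of computing it locally, assuming the JAR has already been downloaded to the current directory:

```shell
# Print the SHA-512 hex digest of the downloaded JAR;
# paste the resulting 128-character digest into the `checksum` field.
sha512sum mysql-connector-java-8.0.26.jar | awk '{print $1}'
```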

【Comments】:

  • The container seems to be dying. Can you `docker run` it yourself? Can you get the actual container logs rather than the pod events? Also, the default Connect image wants up to 2G of memory, so you may need to increase the resource requests.
  • I found some information about it in the docs (link). But I can't add the mysql connector separately from the other resources. Example: locationType: confluentHub #works great confluentHub: - name: kafka-connect-jdbc owner: confluentinc version: 10.2.6
  • @OneCricketeer, I answered my own question and added the information.
  • Unfortunately, I'm not familiar with the Connect CRD. You may want to contact Confluent support or the forums to ask whether they allow custom images, or how to download JAR files.

Tags: kubernetes apache-kafka apache-kafka-connect


【Solution 1】:

After a few retries, I found that I just needed to wait a bit longer.

probe:
  liveness:
    periodSeconds: 180 #DONT CHANGE THIS
    failureThreshold: 5
    timeoutSeconds: 500

The periodSeconds: 180 setting gives the pod more time to reach the Running state, and I can use my own image.
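For context, the kubelet only restarts the container after `failureThreshold` consecutive probe failures spaced `periodSeconds` apart, so this configuration gives the Connect worker up to 15 minutes to finish loading plugins and start serving its REST endpoint:

```shell
# Worst-case time before the kubelet restarts the container:
# periodSeconds * failureThreshold = 180s * 5
echo "$(( 180 * 5 )) seconds"   # 900 seconds = 15 minutes
```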

image:
  application: maxprimeaery/kafka-connect-jdbc:5.0
  init: confluentinc/confluent-init-container:2.2.0-1

After these changes, the build section can be removed.
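Putting the answer together with the spec from the question, the full Connect resource would look roughly like this (assembled from the fragments above; a sketch, not verified end-to-end):

```yaml
apiVersion: platform.confluent.io/v1beta1
kind: Connect
metadata:
  name: connect
  namespace: confluent
spec:
  replicas: 1
  image:
    application: maxprimeaery/kafka-connect-jdbc:5.0   # custom image with plugins baked in
    init: confluentinc/confluent-init-container:2.2.0-1
  configOverrides:
    server:
      - config.storage.replication.factor=1
      - offset.storage.replication.factor=1
      - status.storage.replication.factor=1
  podTemplate:
    resources:
      requests:
        cpu: 200m
        memory: 1024Mi
    probe:
      liveness:
        periodSeconds: 180
        failureThreshold: 5
        timeoutSeconds: 500
    podSecurityContext:
      fsGroup: 1000
      runAsUser: 1000
      runAsNonRoot: true
```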

【Discussion】:
