【Question Title】: Unable to stream data from MySQL to Postgres using Kafka
【Posted】: 2020-11-06 07:08:45
【Question】:

I am trying Kafka for the first time and have set up a Kafka cluster using AWS MSK. The goal is to stream data from a MySQL server to PostgreSQL. I am using the Debezium MySQL connector as the source and the Confluent JDBC connector as the sink.

MySQL configuration:

  "connector.class": "io.debezium.connector.mysql.MySqlConnector",
  "database.server.id": "1",
  "tasks.max": "3",
  "internal.key.converter.schemas.enable": "false",
  "transforms.unwrap.add.source.fields": "ts_ms",
  "key.converter.schemas.enable": "false",
  "internal.key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "internal.value.converter.schemas.enable": "false",
  "value.converter.schemas.enable": "false",
  "internal.value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "key.converter": "org.apache.kafka.connect.json.JsonConverter",
  "transforms": "unwrap",
  "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"

After registering the MySQL connector, its status is RUNNING, and it captures changes made to the MySQL table, which show up in the consumer console in the following format:

{"id":5,"created_at":1594910329000,"userid":"asldnl3r234mvnkk","amount":"B6Eg","wallet_type":"CDW"}

My first question: the "amount" column in the table is of type DECIMAL and contains numeric values, so why is it displayed as an alphanumeric value in the consumer console?
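
A likely cause, assuming Debezium defaults: with "decimal.handling.mode" left at its default value "precise", DECIMAL columns are emitted as the Kafka Connect Decimal logical type, which the JSON converter serializes as Base64-encoded unscaled bytes. "B6Eg", for instance, decodes to the unscaled integer 500000 (i.e. 5000.00 if the column scale is 2). If human-readable values are wanted, one option is to set the following on the source connector:

  "decimal.handling.mode": "string"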

For PostgreSQL as the target database, I used the JDBC sink connector with the following configuration:

"name": "postgres-connector-db08",
  "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
  "tasks.max": "1",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "key.converter.schemas.enable": "false",
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false",
  "topics": "mysql-cash.kafka_test.test",
  "connection.url": "jdbc:postgresql://xxxxxx:5432/test?currentSchema=public",
  "connection.user": "xxxxxx",
  "connection.password": "xxxxxx",
  "insert.mode": "upsert",
  "auto.create": "true",
  "auto.evolve": "true"

After registering the JDBC connector, I got an error when checking its status:

{"name":"postgres-connector-db08","connector":{"state":"RUNNING","worker_id":"x.x.x.x:8083"},"tasks":[{"id":0,"state":"FAILED","worker_id":"x.x.x.x:8083","trace":"org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
 org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
 org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
 org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
 org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
 org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
 org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
 java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 java.util.concurrent.FutureTask.run(FutureTask.java:266)
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.ConnectException: Sink connector 'postgres-connector-db08' is configured with 'delete.enabled=false' and 'pk.mode=none' and therefore requires records with a non-null Struct value and non-null Struct schema, but found record at (topic='mysql-cash.kafka_test.test',partition=0,offset=0,timestamp=1594909233389) with a HashMap value and null value schema.
 io.confluent.connect.jdbc.sink.RecordValidator.lambda$requiresValue$2(RecordValidator.java:83)
 io.confluent.connect.jdbc.sink.BufferedRecords.add(BufferedRecords.java:82)
 io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:66)
 io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:74)
 org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
... 10 more
"}],"type":"sink"}

Why does this error occur? Am I missing something in the sink configuration?

【Question Discussion】:

    Tags: postgresql jdbc apache-kafka


    【Solution 1】:

    https://docs.confluent.io/kafka-connect-jdbc/current/sink-connector/index.html#data-mapping

    The sink connector requires knowledge of schemas, so you should use a suitable converter e.g. the Avro converter that comes with Schema Registry, or the JSON converter with schemas enabled.
    

    Since the JSON is plain (schema-less) and the connector is configured with "value.converter.schemas.enable": "false" (the JSON converter with schemas disabled), the Avro converter should be set up with a Schema Registry instead: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/#applying-schema
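
    A minimal sketch of the converter settings, assuming a Schema Registry reachable at the hypothetical address http://schema-registry:8081 (AWS MSK does not bundle one, so it would have to be run separately); the same settings would go into both the Debezium source and the JDBC sink configuration:

      "key.converter": "io.confluent.connect.avro.AvroConverter",
      "key.converter.schema.registry.url": "http://schema-registry:8081",
      "value.converter": "io.confluent.connect.avro.AvroConverter",
      "value.converter.schema.registry.url": "http://schema-registry:8081"

    Alternatively, keeping the JSON converter but setting "value.converter.schemas.enable": "true" on both source and sink embeds the schema in every message, which also satisfies the sink at the cost of larger payloads. Note as well that the sink uses "insert.mode": "upsert" while the error reports pk.mode=none; the JDBC sink only supports upsert with a primary key configured (e.g. "pk.mode": "record_key" plus "pk.fields"), so that will need fixing too.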

    【Discussion】:
