【Title】: How do I configure kafka-connect w/ "securityMechanism=9, encryptionAlgorithm=2" for a db2 database connection in my docker-compose file?
【Posted】: 2020-12-16 04:58:01
【Description】:

Question:
How do I configure "securityMechanism=9, encryptionAlgorithm=2" for a db2 database connection in my docker-compose file?

Note: when running my local Kafka install (kafka_2.13-2.6.0) against a db2 database on our network, I only had to modify the bin/connect-standalone.sh file, changing the existing "EXTRA_ARGS=" line like this:

(...)
EXTRA_ARGS=${EXTRA_ARGS-'-name connectStandalone -Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2'}
(...)

That works fine.
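An equivalent approach that avoids editing the script at all: kafka-run-class.sh (which connect-standalone.sh ultimately invokes) appends $KAFKA_OPTS to the JVM command line, so the same -D flags can simply be exported first. A sketch (the connector properties filename is a placeholder):

```shell
# Pass the IBM JCC system properties via KAFKA_OPTS instead of editing
# bin/connect-standalone.sh; kafka-run-class.sh appends $KAFKA_OPTS to the JVM args.
export KAFKA_OPTS="-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
# then launch as usual, e.g.:
# bin/connect-standalone.sh config/connect-standalone.properties config/my-db2-source.properties
```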

However, when I try the same idea against the containerized kafka/broker "service" (docker-compose.yml),
mounting a volume with the modified "connect-standalone" file contents (to replace the "/usr/bin/connect-standalone" file inside the container), it does not work.

I did verify that the file inside the container was changed.

...and I receive this exception when I try to connect to the database with the kafka-jdbc-source-connector:

Caused by: com.ibm.db2.jcc.am.SqlInvalidAuthorizationSpecException: [jcc][t4][201][11237][4.25.13] Connection authorization failure occurred.  
Reason: Security mechanism not supported. ERRORCODE=-4214, SQLSTATE=28000
  

So, how do I configure the securityMechanism/encryptionAlgorithm settings in docker-compose.yml?

Any help is appreciated

-sairn


Here is the docker-compose.yml - you can see I have tried mounting the volume with the modified "connect-standalone" file in both the broker (kafka) service and the kafka-connect service... neither achieved the desired effect

version: '3.8'
services:
    zookeeper:
        image: confluentinc/cp-zookeeper:6.0.0
        container_name: zookeeper       
        ports:
            - "2181:2181"       
        environment:
            ZOOKEEPER_CLIENT_PORT: 2181
            ZOOKEEPER_TICK_TIME: 2000

            
    kafka:
        image: confluentinc/cp-enterprise-kafka:6.0.0
        container_name: kafka
        depends_on:
            - zookeeper
        ports:
            - "9092:9092"
        environment:
            KAFKA_BROKER_ID: 1
            KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
            KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
            KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
            KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://kafka:9092
            KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
            KAFKA_METRIC_REPORTERS: io.confluent.metrics.reporter.ConfluentMetricsReporter
            KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
            KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 100
            CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: kafka:29092
            CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: zookeeper:2181
            CONFLUENT_METRICS_REPORTER_TOPIC_REPLICAS: 1
            CONFLUENT_METRICS_ENABLE: 'true'
            CONFLUENT_SUPPORT_CUSTOMER_ID: 'anonymous'
            JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"            
        volumes:       
            - ./connect-standalone:/usr/bin/connect-standalone                           

            
    schema-registry:
        image: confluentinc/cp-schema-registry:6.0.0
        container_name: schema-registry
        hostname: schema-registry
        depends_on:
            - zookeeper
            - kafka
        ports:
            - "8081:8081"
        environment:
            SCHEMA_REGISTRY_HOST_NAME: schema-registry
            SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
            SCHEMA_REGISTRY_LISTENERS: http://schema-registry:8081

            
    kafka-connect:
        image: confluentinc/cp-kafka-connect:6.0.0
        container_name: kafka-connect       
        hostname: kafka-connect
        depends_on:
            - kafka
            - schema-registry
        ports:
            - "8083:8083"
        environment:
            CONNECT_BOOTSTRAP_SERVERS: "kafka:29092"
            CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect"
            CONNECT_REST_PORT: 8083
            CONNECT_GROUP_ID: kafka-connect
            CONNECT_CONFIG_STORAGE_TOPIC: kafka-connect-configs
            CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
            CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
            CONNECT_OFFSET_STORAGE_TOPIC: kafka-connect-offsets
            CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
            CONNECT_STATUS_STORAGE_TOPIC: kafka-connect-status
            CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
            CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
            CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
            CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: http://schema-registry:8081
            CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
            CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
            CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
            CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/confluent-hub-components"
            CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
            JVM_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
        volumes:
            - ./kafka-connect-jdbc-10.0.1.jar:/usr/share/java/kafka-connect-jdbc/kafka-connect-jdbc-10.0.1.jar    
            - ./db2jcc-db2jcc4.jar:/usr/share/java/kafka-connect-jdbc/db2jcc-db2jcc4.jar  
            - ./connect-standalone:/usr/bin/connect-standalone              

Fwiw, the connector looks like this...

curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d '{
        "name": "CONNECTOR01",
        "config": {
        "connector.class":"io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url":"jdbc:db2://THEDBURL:50000/XXXXX",
        "connection.user":"myuserid",
        "connection.password":"mypassword",
        "poll.interval.ms":"15000",
        "table.whitelist":"YYYYY.TABLEA",
        "topic.prefix":"tbl-",
        "mode":"timestamp",
        "timestamp.initial":"-1",
        "timestamp.column.name":"TIME_UPD"
        }
    }'
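One alternative worth noting (a hedged suggestion, not something the question confirms): the IBM JCC driver also accepts these settings as connection properties embedded in the JDBC URL itself - appended after the database name with a colon, each key=value pair terminated by a semicolon. The connector's connection.url would then look like:

```
"connection.url":"jdbc:db2://THEDBURL:50000/XXXXX:securityMechanism=9;encryptionAlgorithm=2;",
```

With the properties in the URL, the settings travel with the connector config rather than the worker JVM, so no container or script changes are needed.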

【Discussion】:

  • The error message is very specific. Is the target Db2 server different from the one you use locally?
  • The same connector was used against both the local and the containerized kafka instances - pointing at the same db2 server - which is why it's confusing. I realize there may be an obvious explanation - hence the StackOverflow question
  • Are the db2jcc4.jar / db2jcc.jar files different between the working and the failing case?
  • Thanks for asking - both are the same: db2jcc-db2jcc4.jar

Tags: apache-kafka docker-compose db2 apache-kafka-connect


【Solution 1】:

Try using KAFKA_OPTS instead of JVM_OPTS
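In the compose file above, that would mean replacing the JVM_OPTS line in the kafka-connect service (and the kafka service, if desired) with KAFKA_OPTS, which the run scripts inside the Confluent images actually append to the JVM command line - a sketch of just the changed fragment:

```yaml
    kafka-connect:
        environment:
            # kafka-run-class.sh appends $KAFKA_OPTS to the JVM command line;
            # JVM_OPTS is never consulted, which is why the -D flags had no effect.
            KAFKA_OPTS: "-Ddb2.jcc.securityMechanism=9 -Ddb2.jcc.encryptionAlgorithm=2"
```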

【Discussion】:
