【Question title】: Accepted socket connection from /hostname:55306 (org.apache.zookeeper.server.NIOServerCnxnFactory)
【Posted】: 2018-12-11 10:26:52
【Question】:

I have configured a Kafka cluster, a Storm cluster, and a Hadoop cluster. Each of them runs fine on its own.

When I submit the Storm jar in standalone mode (it reads data from Kafka, processes it, and stores it into HDFS), it works fine.

After configuring the same code with the server properties and running it on the cluster, it fails with the following errors:

[2018-07-03 12:54:00,370] INFO Accepted socket connection from /192.168.3.222:55306 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-07-03 12:54:00,381] INFO Client attempting to establish new session at /192.168.3.222:55306 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:00,383] INFO Established session 0x3645ed69ca40031 with negotiated timeout 20000 for client /192.168.3.222:55306 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:02,429] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)

EndOfStreamException: Unable to read additional data from client sessionid 0x3645ed69ca40031, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)

[2018-07-03 12:54:02,433] INFO Closed socket connection for client /192.168.3.222:55306 which had sessionid 0x3645ed69ca40031 (org.apache.zookeeper.server.NIOServerCnxn)
[2018-07-03 12:54:06,000] INFO Expiring session 0x1645ed69c8c0041, timeout of 20000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:06,000] INFO Processed session termination for sessionid: 0x1645ed69c8c0041 (org.apache.zookeeper.server.PrepRequestProcessor)

Versions I am using:

  • apache-storm-1.0.6
  • kafka_2.11-1.0.1
  • zookeeper-3.4.12
  • hadoop-2.9.1

Nimbus log

2018-07-04 12:28:54.455 o.a.s.d.nimbus timer [INFO] Setting new assignment for topology id test-topology-1-1530686803: #org.apache.storm.daemon.common.Assignment{:master-code-dir "/usr/local/apache-services/data/storm", :node->host {"7c98bf5a-38d5-4a13-95ad-966be3a51c49" "datanode2.sakha.com"}, :executor->node+port {[2 2] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700], [1 1] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700], [3 3] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700]}, :executor->start-time-secs {[1 1] 1530687534, [2 2] 1530687534, [3 3] 1530687534}, :worker->resources {["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700] [0.0 0.0 0.0]}, :owner "hduser"}
2018-07-04 12:28:54.520 o.a.s.d.nimbus pool-14-thread-7 [INFO] Created download session for test-topology-1-1530686803-stormjar.jar with id a9762861-224e-4f40-824b-ae0efa687452

Supervisor log

2018-07-04 12:30:46.461 o.a.s.d.s.Container SLOT_6700 [INFO] Creating symlinks for worker-id: b9c3daa0-4f4d-42d7-9963-e93b6e6179a3 storm-id: test-topology-1-1530686803 for files(0): []
2018-07-04 12:30:46.461 o.a.s.d.s.Container SLOT_6700 [INFO] Topology jar for worker-id: b9c3daa0-4f4d-42d7-9963-e93b6e6179a3 storm-id: test-topology-1-1530686803 does not contain resources directory /usr/local/apache-services/data/storm/supervisor/stormdist/test-topology-1-1530686803/resources.
2018-07-04 12:30:46.461 o.a.s.d.s.BasicContainer SLOT_6700 [INFO] Launching worker with assignment LocalAssignment(topology_id:test-topology-1-1530686803, executors:[ExecutorInfo(task_start:2, task_end:2), ExecutorInfo(task_start:1, task_end:1), ExecutorInfo(task_start:3, task_end:3)], resources:WorkerResources(mem_on_heap:0.0, mem_off_heap:0.0, cpu:0.0), owner:hduser) for this supervisor 7c98bf5a-38d5-4a13-95ad-966be3a51c49 on port 6700 with id b9c3daa0-4f4d-42d7-9963-e93b6e6179a3

【Comments】:

  • I would take a look at the worker logs (the path depends on the Storm version, so please post the version you are using). There may be more useful information there. Consider checking the Nimbus and Supervisor logs as well.
  • @StigRohdeDøssing I have added the details you asked for.
  • The worker logs should be at your-storm-directory/logs/worker-artifacts/your-topology-id/your-worker-port-number/worker.log. Please try looking at those files; some errors may be logged there.
  • As you suggested, looking at the worker log I found: java.lang.NoSuchMethodError: org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosTicket, even after adding the hadoop-auth jar @StigRohdeDøssing

Tags: hadoop apache-kafka apache-zookeeper apache-storm


【Answer 1】:

There is a problem in your dependency tree. You posted java.lang.NoSuchMethodError: org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosTicket from the worker log. This indicates that when you submitted the jar, your classpath contained the wrong versions of the Hadoop jars, or that those jars were missing entirely.

Here is the pom for storm-hdfs: https://github.com/apache/storm/blob/v1.0.6/external/storm-hdfs/pom.xml. By default it compiles against Hadoop 2.6.1. If you want to use another Hadoop version, you need to make sure you replace the Hadoop dependencies listed there with the newer ones in your own pom (i.e. you need to manually declare hadoop-client at version 2.9.1 in your pom).
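As a sketch of what the answer describes (the exclusion pattern and the list of Hadoop artifacts to pin are assumptions for illustration, not taken from the asker's actual pom), forcing the cluster's Hadoop version in the topology's pom might look like:

```xml
<!-- Hypothetical pom.xml fragment: pin Hadoop to the cluster version (2.9.1)
     and keep storm-hdfs from pulling in its default Hadoop 2.6.1 jars. -->
<dependencies>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hdfs</artifactId>
    <version>1.0.6</version>
    <exclusions>
      <!-- Exclude the transitive Hadoop artifacts compiled against 2.6.1 -->
      <exclusion>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>*</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  <!-- Explicitly declare the Hadoop client/auth jars matching the cluster -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.9.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-auth</artifactId>
    <version>2.9.1</version>
  </dependency>
</dependencies>
```

The wildcard exclusion requires Maven 3; alternatively you can exclude each Hadoop artifact by name.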

A good debugging tool is to run mvn dependency:tree in your project; it tells you exactly which versions of which jars are included in your build.
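For example, you can filter the tree output for Hadoop artifacts whose version does not match the cluster. The excerpt below is invented for illustration; in a real project you would pipe the live output of `mvn dependency:tree` instead:

```shell
# Hypothetical excerpt of `mvn dependency:tree` output; in a real project run:
#   mvn dependency:tree -Dincludes='org.apache.hadoop:*'
tree_output='[INFO] +- org.apache.storm:storm-hdfs:jar:1.0.6:compile
[INFO] |  +- org.apache.hadoop:hadoop-client:jar:2.6.1:compile
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:2.6.1:compile'

# Flag any Hadoop artifacts that do not match the cluster version (2.9.1 here):
echo "$tree_output" | grep 'org.apache.hadoop' | grep -v ':2.9.1:'
```

Any line this prints is a jar that will be packaged at the wrong version and can produce a NoSuchMethodError at runtime.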

【Discussion】:

  • What I did was manually add the required jars to the Storm lib path (because when you run a jar on the Storm cluster, it picks up libraries from the Storm lib path). Is there any other option for running it on the Storm cluster? @Stig Rohde Dossing
  • Yes, even after trying the dependencies you suggested, I still get the same error: stackoverflow.com/q/51187185/7527189 @Stig Rohde Dossing