[Posted at]: 2017-07-20 17:37:52
[Problem description]:
I have the following simple code:
import org.apache.hadoop.hbase.client.ConnectionFactory
import org.apache.hadoop.hbase.HBaseConfiguration
val hbaseconfLog = HBaseConfiguration.create()
val connectionLog = ConnectionFactory.createConnection(hbaseconfLog)
I run it in spark-shell and I get the following error:
14:23:42 WARN zookeeper.ClientCnxn: Session 0x0 for server null, unexpected
error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:30)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
There are actually many errors like this, and occasionally a few like this one as well:
14:23:46 WARN client.ZooKeeperRegistry: Can't retrieve clusterId from
Zookeeper org.apache.zookeeper.KeeperException$ConnectionLossException:
KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
On Cloudera's VM I can fix this by simply restarting hbase-master, regionserver and thrift, but at my company I am not allowed to do that. I also solved it once by copying the file hbase-site.xml into the Spark conf directory, but I cannot do that either. Is there a way to set the path to this particular file in the spark-shell parameters?
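For completeness, here is a sketch of the programmatic route I have considered, extending the snippet above; the file path, ZooKeeper host and port below are placeholders, not my real cluster values:

```scala
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

val hbaseconfLog = HBaseConfiguration.create()

// Option 1: load the cluster's hbase-site.xml explicitly by path
// ("/etc/hbase/conf/hbase-site.xml" is a placeholder path)
hbaseconfLog.addResource(new Path("/etc/hbase/conf/hbase-site.xml"))

// Option 2: set the ZooKeeper connection details directly
// ("zk-host" and the port are placeholders for the real quorum)
hbaseconfLog.set("hbase.zookeeper.quorum", "zk-host")
hbaseconfLog.set("hbase.zookeeper.property.clientPort", "2181")

val connectionLog = ConnectionFactory.createConnection(hbaseconfLog)
```

I would still prefer a spark-shell command-line option (e.g. something like putting the directory that contains hbase-site.xml on the driver classpath via `--driver-class-path`, if that works) rather than hard-coding connection details in the shell session.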
[Discussion]:
Tags: apache-spark hbase