【Question Title】: Not able to run datanode in multinode hadoop cluster setup, need suggestion
【Posted】: 2017-05-15 08:46:06
【Description】:

I am trying to set up a multi-node Hadoop cluster, but the datanode fails to start and I need help with it. The details are below; there is no other configuration beyond this. So far the setup is just one namenode and one datanode.

NAMENODE setup -
CORE-SITE.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

HDFS-SITE.XML

<property>
  <name>dfs.name.dir</name>
  <value>/data/namenode</value>
 </property>




DATANODE SETUP:

CORE-SITE.xml
<property>
  <name>fs.defult.name</name>
  <value>hdfs://192.168.1.7:9000</value>
 </property>

HDFS-SITE.XML

<property>
  <name>dfs.data.dir</name>
  <value>/data/datanode</value>
 </property>

The namenode starts fine, but when I try to start the datanode on the other machine, whose IP is 192.168.1.8, it fails and the log says:

2017-05-13 21:26:27,744 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-05-13 21:26:27,862 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-05-13 21:26:32,908 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:34,979 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:36,041 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:37,093 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:38,162 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2017-05-13 21:26:39,238 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/192.168.1.7:9000. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

and then the datanode dies.
Is there anything else to set up?
Let me know if any other details are required. Are there any other files to change? I am using CentOS 7 for the environment. I have also formatted the namenode 2-3 times, and the permissions are correct. It looks like a connectivity issue only, yet when I try scp from master to slave (namenode to datanode) it works fine.
Please suggest any other setup needed to make this succeed!
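Since scp works but the datanode's RPC connection does not, it helps to test the namenode port itself rather than general SSH connectivity. Below is a minimal sketch (not from the original post) of a TCP reachability check; the host and port come from the `fs.defult.name` value above, and `can_connect` is a hypothetical helper name.

```python
import socket


def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, unreachable, DNS failure
        return False


# Run this from the datanode (192.168.1.8). If it prints False while the
# namenode process is up on 192.168.1.7, something between the hosts
# (on CentOS 7, typically firewalld/iptables) or a namenode bound only to
# 127.0.0.1 is blocking the HDFS RPC port.
print(can_connect("192.168.1.7", 9000))
```

Note that scp succeeding only proves port 22 is open; a default CentOS 7 firewall will still drop traffic to 9000.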

【Discussion】:

  • Can you post the datanode's /etc/hosts file?
  • Hey, it works fine now after updating iptables ;)
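One commenter asks for the datanode's /etc/hosts, and the log's `localhost/192.168.1.7:9000` prefix does hint at odd name resolution. A sketch of what to inspect (commands assume a standard CentOS 7 install):

```shell
# Show how the namenode's IP resolves on the datanode
# (entries come from /etc/hosts or DNS, per nsswitch.conf).
getent hosts 192.168.1.7

# List all active /etc/hosts entries. If a line like
# "127.0.0.1  master" maps the namenode's hostname to localhost,
# the datanode ends up dialing itself; replace it with
# "192.168.1.7  master" (hostname "master" is illustrative).
grep -v '^#' /etc/hosts
```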

Tags: hadoop


【Solution 1】:

There is a typo in the property name in your configuration. It is missing an "a": fs.defult.name (vs fs.default.name).
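Even if the typo is only in the post, it is worth double-checking the exact key in core-site.xml on both machines. A minimal sketch of the corrected property; note that `fs.default.name` is the deprecated Hadoop 1.x spelling, and on Hadoop 2.x the preferred key is `fs.defaultFS` (both point at the same setting):

```xml
<configuration>
  <property>
    <!-- fs.defaultFS on Hadoop 2.x; fs.default.name is the
         deprecated 1.x alias for the same setting -->
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.7:9000</value>
  </property>
</configuration>
```

The value must be identical on the namenode and every datanode, since datanodes use it to locate the namenode's RPC endpoint.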

【Comments】:

  • Sorry, that was a typo made while typing it into this forum; it is actually correct on my system. I am using CentOS 7. I have been trying a lot for the past 2 days but still cannot get it right :(...