【Title】: Can't start NameNode daemon and DataNode daemon in Hadoop
【Posted】: 2015-03-15 09:46:17
【Problem Description】:

I am trying to run Hadoop in pseudo-distributed mode. To do that I am following this tutorial: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html

I can ssh to my localhost and format the filesystem. However, I cannot start the NameNode and DataNode daemons with this command:

    sbin/start-dfs.sh

When I execute it with sudo, I get:

    ubuntu@ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sudo sbin/start-dfs.sh 
    Starting namenodes on [localhost]
    localhost: Permission denied (publickey).
    localhost: Permission denied (publickey).
    Starting secondary namenodes [0.0.0.0] 
    0.0.0.0: Permission denied (publickey).

And when it is executed without sudo:

    ubuntu@ip-172-31-42-67:/usr/local/hadoop-2.6.0$ sbin/start-dfs.sh 
    Starting namenodes on [localhost]
    localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
    localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
    localhost: starting namenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
    localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out’ for reading: No such file or directory
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-namenode-ip-172-31-42-67.out: No such file or directory
    localhost: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
    localhost: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
    localhost: starting datanode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
    localhost: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out’ for reading: No such file or directory
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
    localhost: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-datanode-ip-172-31-42-67.out: No such file or directory
    Starting secondary namenodes [0.0.0.0]
    0.0.0.0: mkdir: cannot create directory ‘/usr/local/hadoop-2.6.0/logs’: Permission denied
    0.0.0.0: chown: cannot access ‘/usr/local/hadoop-2.6.0/logs’: No such file or directory
    0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out
    0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 159: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
    0.0.0.0: head: cannot open ‘/usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out’ for reading: No such file or directory
    0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 177: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory
    0.0.0.0: /usr/local/hadoop-2.6.0/sbin/hadoop-daemon.sh: line 178: /usr/local/hadoop-2.6.0/logs/hadoop-ubuntu-secondarynamenode-ip-172-31-42-67.out: No such file or directory

I have also now noticed that running ls to check the contents of an HDFS directory fails:

    ubuntu@ip-172-31-42-67:~/dir$ hdfs dfs -ls output/
    ls: Call From ip-172-31-42-67.us-west-2.compute.internal/172.31.42.67 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

Can anyone tell me what the problem might be?

【Comments】:

Tags: hadoop ssh mapreduce ssh-keys


【Solution 1】:

I had the same problem, and the only solution I found was this:

https://anuragsoni.wordpress.com/2015/07/05/hadoop-start-dfs-sh-localhost-permission-denied-how-to-fix/

which suggests generating a new ssh-rsa key.
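
For reference, a minimal sketch of what generating and authorizing a new key typically looks like on a single-node setup (the key path, the empty passphrase, and appending to authorized_keys are assumptions here, not details quoted from the linked post):

    # generate a new RSA key pair with an empty passphrase
    # (ssh-keygen will prompt before overwriting an existing key at this path)
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

    # authorize the new key for password-less ssh to localhost
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # this should now succeed without a password prompt
    ssh localhost exit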

【Discussion】:

【Solution 2】:

The errors above indicate a permissions problem. You have to make sure that the hadoop user has the proper privileges over /usr/local/hadoop. For that you can try:

         sudo chown -R hadoop /usr/local/hadoop/
      

or

         sudo chmod 777 /usr/local/hadoop/
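
A minimal sketch of the ownership approach applied to the setup in the question (the install path /usr/local/hadoop-2.6.0 and the user ubuntu are taken from the question's output, not from this answer; adjust both to your environment):

    # give the user that runs start-dfs.sh ownership of the whole install tree
    # (path and user taken from the question; substitute your own)
    sudo chown -R ubuntu:ubuntu /usr/local/hadoop-2.6.0

    # the logs directory from the error messages can also be created up front
    mkdir -p /usr/local/hadoop-2.6.0/logs

    # verify: the owner shown should be the user that starts the daemons
    ls -ld /usr/local/hadoop-2.6.0 /usr/local/hadoop-2.6.0/logs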
      

【Discussion】:

【Solution 3】:

Please make sure you do the following "Configuration" steps correctly; you need to edit four .xml files:

Edit the file hadoop-2.6.0/etc/hadoop/core-site.xml and, between the <configuration> tags, put:

         <property>        
            <name>fs.defaultFS</name>        
            <value>hdfs://localhost:9000</value>   
         </property>
        

Edit the file hadoop-2.6.0/etc/hadoop/hdfs-site.xml and likewise put:

            <property>
                <name>dfs.replication</name>
                <value>1</value>
            </property>
        

Edit the file hadoop-2.6.0/etc/hadoop/mapred-site.xml, paste the following, and save (see the note after this snippet if the file does not exist yet):

            <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
            </property> 
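
In a stock Hadoop 2.6.0 unpack, mapred-site.xml usually does not exist yet; it is normally created from the template that ships in the same directory. A sketch, assuming the install path from the question:

    # create mapred-site.xml from the template shipped with Hadoop 2.x
    cp /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml.template \
       /usr/local/hadoop-2.6.0/etc/hadoop/mapred-site.xml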
        

Edit the file hadoop-2.6.0/etc/hadoop/yarn-site.xml, paste the following, and save:

            <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
            </property>
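
Each of the snippets above goes inside the <configuration> element of its file. Once the four files are edited, a typical final sequence is shown below (a sketch, run from the hadoop-2.6.0 directory; note that reformatting erases any data already in HDFS):

    # format HDFS (only needed once; this wipes existing HDFS data)
    bin/hdfs namenode -format

    # start the HDFS daemons: NameNode, DataNode, SecondaryNameNode
    sbin/start-dfs.sh

    # verify that the daemons are running
    jps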
        

【Discussion】: