【Question Title】: No Namenode or Datanode or Secondary NameNode to stop
【Posted】: 2015-11-18 05:32:44
【Question Description】:

I installed Hadoop on my Ubuntu 12.04 machine following the steps in the link below.

http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php

Everything installed successfully, but when I run start-all.sh, only some of the services end up running.

wanderer@wanderer-Lenovo-IdeaPad-S510p:~$ su - hduse
Password:

hduse@wanderer-Lenovo-IdeaPad-S510p:~$ cd /usr/local/hadoop/sbin

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
hduse@localhost's password: 
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
Starting secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
hduse@localhost's password: 
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ jps
7940 Jps
7545 ResourceManager
7885 NodeManager

Then I stop the services by running the stop-all.sh script:

hduse@wanderer-Lenovo-IdeaPad-S510p:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
hduse@localhost's password: 
localhost: no namenode to stop
hduse@localhost's password: 
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
hduse@0.0.0.0's password: 
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
hduse@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop

My configuration files:

  1. Edit the .bashrc file

    vi ~/.bashrc
    
    #HADOOP VARIABLES START
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_INSTALL=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export PATH=$PATH:$HADOOP_INSTALL/sbin
    export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_HOME=$HADOOP_INSTALL
    export HADOOP_HDFS_HOME=$HADOOP_INSTALL
    export YARN_HOME=$HADOOP_INSTALL
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
    #HADOOP VARIABLES END
    
  2. hdfs-site.xml

    vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml
    
    <configuration>
     <property>
      <name>dfs.replication</name>
      <value>1</value>
      <description>Default block replication.
      The actual number of replications can be specified when the file is created.
      The default is used if replication is not specified in create time.
      </description>
     </property>
     <property>
       <name>dfs.namenode.name.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
     </property>
     <property>
       <name>dfs.datanode.data.dir</name>
       <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
     </property>
    </configuration>
    
  3. hadoop-env.sh

    vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
    
    export JAVA_HOME=/usr/lib/jvm/java-8-oracle/
    export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}
    
    for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
      if [ "$HADOOP_CLASSPATH" ]; then
        export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
      else
        export HADOOP_CLASSPATH=$f
      fi
    done
    
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
    export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
    
    export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"
    
    export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
    export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"
    
    # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
    export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
    export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}
    
    export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}
    export HADOOP_PID_DIR=${HADOOP_PID_DIR}
    export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
    
    # A string representing this instance of hadoop. $USER by default.
    export HADOOP_IDENT_STRING=$USER
    
  4. core-site.xml

    vi /usr/local/hadoop/etc/hadoop/core-site.xml
    <configuration>
     <property>
      <name>hadoop.tmp.dir</name>
      <value>/app/hadoop/tmp</value>
      <description>A base for other temporary directories.</description>
     </property>
    
     <property>
      <name>fs.default.name</name>
      <value>hdfs://localhost:54310</value>
      <description>The name of the default file system.  A URI whose
      scheme and authority determine the FileSystem implementation.  The
      uri's scheme determines the config property (fs.SCHEME.impl) naming
      the FileSystem implementation class.  The uri's authority is used to
      determine the host, port, etc. for a filesystem.</description>
     </property>
    </configuration>
    
  5. mapred-site.xml

    vi /usr/local/hadoop/etc/hadoop/mapred-site.xml
    <configuration>
     <property>
      <name>mapred.job.tracker</name>
      <value>localhost:54311</value>
      <description>The host and port that the MapReduce job tracker runs
      at.  If "local", then jobs are run in-process as a single map
      and reduce task.
      </description>
     </property>
    </configuration>
    

    $ javac -version

    javac 1.8.0_66
    

    $ java -version

    java version "1.8.0_66"  
    Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
    Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)
    

I am new to Hadoop and cannot find the problem. Where can I find the log files for the JobTracker and NameNode so I can track the services?

【Comments】:

  • I found the problem. I made a silly mistake: the actual hadoop user is hduse, not hduser. I changed the ownership of /usr/local/hadoop_store/hdfs to hduse, and now it is working!
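
    For reference, a minimal sketch of that ownership fix (the hadoop group name is an assumption; substitute whatever group your hduse user actually belongs to):

        # Hand the HDFS storage directories to the user that runs the daemons.
        # The "hadoop" group is an assumption; adjust it to your setup.
        sudo chown -R hduse:hadoop /usr/local/hadoop_store/hdfs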

Tags: hadoop mapreduce hdfs


【Solution 1】:

If it is not an ssh problem, do the following:

  1. Delete everything from the temporary directory with rm -Rf /app/hadoop/tmp and format the namenode with bin/hadoop namenode -format. Start the namenode and datanode with bin/start-dfs.sh. Type jps on the command line to check whether the nodes are running (see the consolidated sketch after this list).

  2. ls -ld directory

    Check whether hduser has permission to write to the hadoop_store/hdfs/namenode and datanode directories.

    You can change the permissions with sudo chmod 777 /usr/local/hadoop_store/hdfs/namenode/
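
As referenced in step 1, here is a consolidated sketch of the reset sequence, assuming the /app/hadoop/tmp and /usr/local/hadoop_store paths from the question (note that formatting the namenode erases any data already stored in HDFS):

    # Stop HDFS before touching its directories.
    stop-dfs.sh

    # Clear the temporary directory and reformat the namenode.
    # WARNING: formatting wipes all HDFS metadata.
    rm -Rf /app/hadoop/tmp/*
    hadoop namenode -format

    # Confirm the daemon user can write to the storage directories.
    ls -ld /usr/local/hadoop_store/hdfs/namenode /usr/local/hadoop_store/hdfs/datanode

    # Bring HDFS back up and check which daemons are running.
    start-dfs.sh
    jps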

【Discussion】:

  • bin/hadoop namenode -format seemed to do it for me.
【Solution 2】:

If you look closely at the output of the start-all.sh command, you can easily see the paths of the log files. Each service writes to its log after it starts:

localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduse-namenode-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduse-datanode-wanderer-Lenovo-IdeaPad-S510p.out
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduse-secondarynamenode-wanderer-Lenovo-IdeaPad-S510p.out
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduse-resourcemanager-wanderer-Lenovo-IdeaPad-S510p.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduse-nodemanager-wanderer-Lenovo-IdeaPad-S510p.out
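
The .out files above only capture stdout; each daemon's main log is the matching .log file in the same directory. A quick way to check why a daemon died (the exact file names vary with your hostname):

    # List the log directory and tail the namenode log for startup errors.
    ls /usr/local/hadoop/logs/
    tail -n 50 /usr/local/hadoop/logs/hadoop-hduse-namenode-*.log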

【Discussion】:

【Solution 3】:

You must set up passwordless authentication for ssh. The hduse user should be able to ssh into localhost without a password.
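
A minimal sketch of the usual key setup, run as the hduse user:

    # Generate a key pair with an empty passphrase (skip if one already exists).
    ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa

    # Authorize the key for logins to localhost and lock down permissions.
    cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys

    # This should now log in without prompting for a password.
    ssh localhost exit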

【Discussion】:

【Solution 4】:

The namenode is not showing

After running the jps command, the namenode is not listed, although the datanode was created. To solve the problem, we can follow the steps given below.

This works for a Hadoop 2.7.6 configuration.

Step 1: stop Hadoop

      /usr/local/hadoop/sbin$ stop-dfs.sh

Step 2: delete the tmp folder

      /usr/local/hadoop/sbin$ sudo rm -rf /app/hadoop/tmp/

Step 3: create a new tmp folder

      /usr/local/hadoop/sbin$ sudo mkdir -p /app/hadoop/tmp

      /usr/local/hadoop/sbin$ sudo chown hduser:hadoop /app/hadoop/tmp

      /usr/local/hadoop/sbin$ chmod 750 /app/hadoop/tmp

Step 4: format the namenode

      /usr/local/hadoop/sbin$ hdfs namenode -format

Step 5: start dfs

      /usr/local/hadoop/sbin$ start-all.sh

      /usr/local/hadoop/sbin$ jps

      The namenode is now showing
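
For reference, a healthy single-node setup typically shows all five daemons in the jps output; the PIDs below are purely illustrative:

    2742 NameNode
    2945 DataNode
    3201 SecondaryNameNode
    3459 ResourceManager
    3697 NodeManager
    3804 Jps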

【Discussion】:
