Hadoop overview:
----------------------------------------------------------------------------------
Hadoop is an open-source distributed computing platform from the Apache Software Foundation. Built
around the Hadoop Distributed File System (HDFS) and MapReduce (an open-source implementation of
Google's MapReduce), Hadoop gives users a distributed infrastructure whose low-level details stay
transparent.
A Hadoop cluster has two classes of roles: Master and Slave. An HDFS cluster consists of one
NameNode and a number of DataNodes. The NameNode is the master server: it manages the filesystem
namespace and clients' access to the filesystem; the DataNodes manage the data they store. The
MapReduce framework consists of a single JobTracker running on the master node and a TaskTracker
running on each slave node. The master schedules all the tasks that make up a job onto the slaves,
monitors their execution, and re-runs any task that fails; the slaves only execute the tasks the
master assigns. When a job is submitted, the JobTracker receives the job and its configuration,
distributes the configuration to the slaves, and schedules the tasks while monitoring the TaskTrackers.
As this shows, HDFS and MapReduce together form the core of Hadoop's distributed architecture:
HDFS implements the distributed filesystem on the cluster, and MapReduce implements distributed
computation and task processing on top of it. HDFS provides the file storage and I/O that MapReduce
tasks depend on, while MapReduce distributes, tracks, and executes tasks over HDFS and collects the
results; working together, the two carry out the main work of a Hadoop cluster.
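The map/shuffle/reduce flow described above can be sketched with ordinary shell tools (a toy word
count for illustration only; this pipeline is not Hadoop, it just has the same shape):

```shell
# Toy word count: the same map -> shuffle -> reduce shape as a MapReduce job.
#   map     : tr emits one word (the key) per line
#   shuffle : sort groups identical keys together
#   reduce  : uniq -c aggregates (counts) each key group
echo 'hdfs mapreduce hdfs hadoop hdfs' | tr ' ' '\n' | sort | uniq -c | sort -rn
```

In Hadoop the same three phases run in parallel across the DataNodes, with HDFS holding the input
and output files.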
----------------------------------------------------------------------------------
Software versions
----------------------------------------------------------------------------------
OS_VER: CentOS 6.3 x86_64
JDK_VER: JDK 1.6
----------------------------------------------------------------------------------
Host plan
----------------------------------------------------------------------------------
J-720-1-Hadoop-Master | 192.168.0.125 | NameNode JobTracker
J-720-2-Hadoop-Slave1 | 192.168.0.126 | DataNode TaskTracker
J-720-3-Hadoop-Slave3 | 192.168.0.127 | DataNode TaskTracker
J-720-N-Hadoop-SlaveN | 192.168.0.XXX | DataNode TaskTracker
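With this plan, every node's /etc/hosts could carry the same name-to-IP mapping (a sketch; keep it
in sync with the table above as Slaves are added):

```
192.168.0.125   J-720-1-Hadoop-Master
192.168.0.126   J-720-2-Hadoop-Slave1
192.168.0.127   J-720-3-Hadoop-Slave3
```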
----------------------------------------------------------------------------------
Partition layout and package selection
----------------------------------------------------------------------------------
/ 81920M
/boot 200M
swap 4096M
/data all remaining space
Package selection: Minimal
----------------------------------------------------------------------------------
Pre-deployment preparation
----------------------------------------------------------------------------------
1: Disable iptables
2: Disable SELinux
3: Set consistent hostnames
4: Create a uniform hadoop user on every machine
5: Put all machines on the same LAN with Internet access
6: If conditions allow, enable multiple NICs (recommended)
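Steps 1-4 above can be sketched as a root script for CentOS 6 (the hostname and user shown are
examples per this plan; with DRY_RUN=1 the commands are only printed so they can be reviewed first):

```shell
# Sketch of the prep steps; DRY_RUN=1 prints the commands instead of running them.
DRY_RUN=1
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run service iptables stop                                        # 1) stop the firewall now
run chkconfig iptables off                                       #    and on every boot
run setenforce 0                                                 # 2) SELinux permissive now
run sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config #    and disabled after reboot
run hostname J-720-1-Hadoop-Master                               # 3) uniform hostname (per node)
run useradd hadoop                                               # 4) uniform hadoop user
```

Unset DRY_RUN (or set it to 0) on each node once the printed commands look right.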
----------------------------------------------------------------------------------
HDFS and MapReduce concepts:
----------------------------------------------------------------------------------
NameNode :manages the filesystem namespace and client access to the filesystem
DataNode :stores the actual data blocks
SecondaryNamenode :periodically checkpoints and compacts the namespace metadata (not a hot standby)
MapReduce
JobTracker :job scheduling and management node
TaskTracker :task execution node
Configuration files
core-site.xml :common properties
hdfs-site.xml :HDFS properties
mapred-site.xml :MapReduce properties
hadoop-env.sh :Hadoop environment variables
----------------------------------------------------------------------------------
Step 1: set hostnames according to the plan above, for example:
[root@J-720-3-Hadoop-Slave3 ~]# cat /etc/sysconfig/network|grep HOSTNAME
HOSTNAME=J-720-3-Hadoop-Slave3
[root@J-720-3-Hadoop-Slave3 ~]# cat /etc/hosts
127.0.0.1 J-720-3-Hadoop-Slave3 localhost localhost.localdomain localhost4
Step 2: remove the restriction on direct root login (a setting specific to our environment):
sed -i 's/PermitRootLogin/#PermitRootLogin/g' /etc/ssh/sshd_config
/etc/init.d/sshd restart
Step 3: configure passwordless SSH login from the Master to every Slave. For easier batch management I wrote a script for this (put one Slave IP per line into /home/work/ip_all.txt, then run it as root; SSH here listens on port 51022):
[root@J-720-1-Hadoop-Master work]# cat ssh_key.sh
#----------------------------------------------------------------------------------------------
#!/bin/bash
#Date:2013-01-10
#Author:ZhangLuYa
#define var
ip_list="/home/work/ip_all.txt"
#Check user
if [ "$(id -u)" -eq 0 ]
then
echo "Current user is `whoami`! Check OK!"
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
else
echo "Current user is `whoami`! Please run this script as root!!!"
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
exit 1
fi
#Check ssh_key
cd ~
if [ -f .ssh/id_dsa.pub ]
then
echo "ssh_key already exists!!!......"
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
#sleep 5
#/bin/mv .ssh /tmp/.ssh$(date +%Y-%m-%d-%H-%M)
#echo|/usr/bin/ssh-keygen -t dsa
else
echo "ssh_key does not exist, creating one"
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
echo|/usr/bin/ssh-keygen -t dsa
fi
#Send ssh_key to clients
if [ ! -e ${ip_list} ];then
touch ${ip_list}
echo "Please fill ${ip_list} with IP addresses (one per line), then run again......."
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
exit 1
fi
if [ "$(wc -l < ${ip_list})" -eq 0 ]
then
echo "ip_all.txt is empty!"
exit 1
fi
Cur_u=`/usr/bin/whoami`
while read Remote_host
do
ssh-copy-id -i .ssh/id_dsa.pub "-p 51022 ${Cur_u}@${Remote_host}"
echo "ssh_key sent to $Remote_host OK!!!"
echo -e "\033[32;49;1m---------------------------------------------------------------------\033[39;49;0m"
done < ${ip_list}
#----------------------------------------------------------------------------------------------
Step 4: install the JDK, identically on every machine (install on the NameNode and all DataNodes):
wget --http-user=Down-XYWY --http-password=313ttFE2qag%4ia http://219.232.243.215:1999/jdk/jdk-6u45-linux-i586.bin
chmod a+x jdk-6u45-linux-i586.bin
./jdk-6u45-linux-i586.bin
mv jdk1.6.0_45 /usr/local/jdk
yum install glibc.i686 -y   #the i586 (32-bit) JDK needs 32-bit glibc on x86_64
cat << EOF >> /etc/profile
#set Java Environment
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre
export CLASSPATH=.:\$JAVA_HOME/lib:\$CLASSPATH:\$JRE_HOME/lib
export PATH=\$JAVA_HOME/bin:\$PATH:\$JRE_HOME/bin
EOF
source /etc/profile
[root@J-720-1-Hadoop-Master tools]# java -version
java version "1.6.0_45"
Java(TM) SE Runtime Environment (build 1.6.0_45-b06)
Java HotSpot(TM) Client VM (build 20.45-b01, mixed mode, sharing)
The JDK installation is now complete!
Step 5: install Hadoop (on the NameNode and all DataNodes):
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
tar zxvf hadoop-1.2.1.tar.gz
mv hadoop-1.2.1 /usr/local/hadoop
cat << EOF >> /etc/profile
#Set Hadoop Path
export HADOOP_HOME=/usr/local/hadoop
export PATH=\$PATH:\$HADOOP_HOME/bin
EOF
source /etc/profile
mkdir -p /usr/local/hadoop/tmp
Configure Hadoop
cat << EOF >> /usr/local/hadoop/conf/hadoop-env.sh
#set java environment
export JAVA_HOME=/usr/local/jdk
EOF
Configure core-site.xml; this sets the HDFS address and port:
vi core-site.xml
-------------------------------------------------------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<!-- file system properties -->
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.0.125:9000</value>
</property>
</configuration>
-------------------------------------------------------------------------------
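To double-check which value a node will actually use, a property can be pulled back out of the
file. A rough sketch with grep/sed (real XML parsing would be more robust); it writes a sample to a
temp file so it can be tried anywhere, but on a node CONF would point at
/usr/local/hadoop/conf/core-site.xml:

```shell
# Extract the <value> that follows a given <name> in a *-site.xml file.
CONF=$(mktemp)
cat > "$CONF" << 'EOF'
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.0.125:9000</value>
</property>
</configuration>
EOF
grep -A1 '<name>fs.default.name</name>' "$CONF" | sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
```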
Configure hdfs-site.xml (dfs.replication sets how many copies of each block are kept; 2 here, matching the two DataNodes):
vi hdfs-site.xml
-------------------------------------------------------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
-------------------------------------------------------------------------------
Configure mapred-site.xml
Edit Hadoop's MapReduce configuration file; this sets the JobTracker address and port:
vi mapred-site.xml
-------------------------------------------------------------------------------
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>192.168.0.125:9001</value>
</property>
</configuration>
-------------------------------------------------------------------------------
Configure the masters file
Remove localhost and add the Master machine's IP address:
echo "192.168.0.125" > masters
Configure the slaves file
Remove "localhost" and add the IPs of all Slave machines in the cluster, one per line:
[root@J-720-1-Hadoop-Master conf]# cat slaves
192.168.0.126
192.168.0.127
The Master configuration is now complete!
Configure the Hadoop Slaves by copying the installation from the Master:
-------------------------------------------------------------------------------
scp -P 51022 -r hadoop 192.168.0.126:/usr/local/
scp -P 51022 -r hadoop 192.168.0.127:/usr/local/
Run on each Slave:
cat << EOF >> /etc/profile
#Set Hadoop Path
export HADOOP_HOME=/usr/local/hadoop
export PATH=\$PATH:\$HADOOP_HOME/bin
export HADOOP_HOME_WARN_SUPPRESS=1
EOF
source /etc/profile
-------------------------------------------------------------------------------
Start and verify (note: formatting is needed only once; for later startups skip the format and just run start-all.sh):
[root@J-720-1-Hadoop-Master local]# hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.
#To suppress this warning:
echo "export HADOOP_HOME_WARN_SUPPRESS=1" >> /etc/profile
source /etc/profile
13/08/08 11:26:01 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = J-720-1-Hadoop-Master/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG: java = 1.6.0_45
************************************************************/
13/08/08 11:26:02 INFO util.GSet: Computing capacity for map BlocksMap
13/08/08 11:26:02 INFO util.GSet: VM type = 32-bit
13/08/08 11:26:02 INFO util.GSet: 2.0% max memory = 1013645312
13/08/08 11:26:02 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/08/08 11:26:02 INFO util.GSet: recommended=4194304, actual=4194304
13/08/08 11:26:02 INFO namenode.FSNamesystem: fsOwner=root
13/08/08 11:26:02 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/08 11:26:02 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/08 11:26:02 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/08/08 11:26:02 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/08/08 11:26:02 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/08/08 11:26:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/08/08 11:26:02 INFO common.Storage: Image file /usr/local/hadoop/tmp/dfs/name/current/fsimage of size 110 bytes saved in 0 seconds.
13/08/08 11:26:02 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/usr/local/hadoop/tmp/dfs/name/current/edits
13/08/08 11:26:02 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/usr/local/hadoop/tmp/dfs/name/current/edits
13/08/08 11:26:02 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
13/08/08 11:26:02 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at J-720-1-Hadoop-Master/127.0.0.1
************************************************************/
Start Hadoop:
[root@J-720-1-Hadoop-Master ~]# start-all.sh
Warning: $HADOOP_HOME is deprecated.
starting namenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-namenode-J-720-1-Hadoop-Master.out
192.168.0.127: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-J-720-3-Hadoop-Slave3.out
192.168.0.126: starting datanode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-datanode-J-720-2-Hadoop-Slave1.out
192.168.0.125: starting secondarynamenode, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-secondarynamenode-J-720-1-Hadoop-Master.out
starting jobtracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-jobtracker-J-720-1-Hadoop-Master.out
192.168.0.127: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-J-720-3-Hadoop-Slave3.out
192.168.0.126: starting tasktracker, logging to /usr/local/hadoop/libexec/../logs/hadoop-root-tasktracker-J-720-2-Hadoop-Slave1.out
Check the contents under /usr/local/hadoop/tmp on the Master and on each Slave:
[root@J-720-1-Hadoop-Master ~]# ll /usr/local/hadoop/tmp/
total 4
drwxr-xr-x. 4 root root 4096 Aug 8 11:54 dfs
[root@J-720-2-Hadoop-Slave1 .ssh]# ll /usr/local/hadoop/tmp/
total 8
drwxr-xr-x. 3 root root 4096 Aug 8 11:41 dfs
drwxr-xr-x. 3 root root 4096 Aug 8 11:45 mapred
[root@J-720-3-Hadoop-Slave3 ~]# ll /usr/local/hadoop/tmp/
total 8
drwxr-xr-x. 3 root root 4096 Aug 8 11:41 dfs
drwxr-xr-x. 3 root root 4096 Aug 8 11:45 mapred
Verification method 1: on the Master, inspect the Java processes with jps, the JDK's small process-listing tool:
[root@J-720-1-Hadoop-Master ~]# jps
4524 JobTracker
4442 SecondaryNameNode
4279 NameNode
4649 Jps
On the Slaves, check the processes with jps:
[root@J-720-2-Hadoop-Slave1 .ssh]# jps
3556 Jps
3477 TaskTracker
3376 DataNode
[root@J-720-3-Hadoop-Slave3 ~]# jps
3593 Jps
3414 DataNode
3515 TaskTracker
Verification method 2:
Status as seen from the Master:
[root@J-720-1-Hadoop-Master ~]# hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.
Configured Capacity: 169103163392 (157.49 GB)
Present Capacity: 154191224832 (143.6 GB)
DFS Remaining: 154191151104 (143.6 GB)
DFS Used: 73728 (72 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Name: 192.168.0.127:50010
Decommission Status : Normal
Configured Capacity: 84551581696 (78.74 GB)
DFS Used: 36864 (36 KB)
Non DFS Used: 7390949376 (6.88 GB)
DFS Remaining: 77160595456(71.86 GB)
DFS Used%: 0%
DFS Remaining%: 91.26%
Last contact: Thu Aug 08 12:03:05 CST 2013
Name: 192.168.0.126:50010
Decommission Status : Normal
Configured Capacity: 84551581696 (78.74 GB)
DFS Used: 36864 (36 KB)
Non DFS Used: 7520989184 (7 GB)
DFS Remaining: 77030555648(71.74 GB)
DFS Used%: 0%
DFS Remaining%: 91.1%
Last contact: Thu Aug 08 12:03:05 CST 2013
Status as seen from a Slave:
[root@J-720-2-Hadoop-Slave1 .ssh]# hadoop dfsadmin -report
Warning: $HADOOP_HOME is deprecated.
Configured Capacity: 169103163392 (157.49 GB)
Present Capacity: 154191233024 (143.6 GB)
DFS Remaining: 154191151104 (143.6 GB)
DFS Used: 81920 (80 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)
Name: 192.168.0.127:50010
Decommission Status : Normal
Configured Capacity: 84551581696 (78.74 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 7390945280 (6.88 GB)
DFS Remaining: 77160595456(71.86 GB)
DFS Used%: 0%
DFS Remaining%: 91.26%
Last contact: Thu Aug 08 12:04:41 CST 2013
Name: 192.168.0.126:50010
Decommission Status : Normal
Configured Capacity: 84551581696 (78.74 GB)
DFS Used: 40960 (40 KB)
Non DFS Used: 7520985088 (7 GB)
DFS Remaining: 77030555648(71.74 GB)
DFS Used%: 0%
DFS Remaining%: 91.1%
Last contact: Thu Aug 08 12:04:42 CST 2013
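For scripting a quick health check, the live-DataNode count can be parsed out of that report. A
sketch, with REPORT standing in for the real output so the snippet runs anywhere; in practice you
would pipe `hadoop dfsadmin -report` instead:

```shell
# Count live DataNodes from dfsadmin -report output (sample text stands in
# for the real command).
REPORT='Datanodes available: 2 (2 total, 0 dead)'
live=$(echo "$REPORT" | awk -F'[: (]+' '/Datanodes available/{print $3}')
echo "$live"
```

Comparing $live against the expected Slave count makes a simple cron-driven alarm.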
Check status via the web UIs:
http://192.168.0.125:50030/jobtracker.jsp
http://192.168.0.125:50070/dfshealth.jsp
FAQ1: make Hadoop use a non-default SSH port:
vi hadoop-env.sh
-----------------------------------
export HADOOP_SSH_OPTS="-p 51022"
-----------------------------------
FAQ2: passwordless SSH to the local machine fails
Copy the authorized_keys file from a machine where passwordless login already works to the key-distributing host; mind the permissions (chmod 600).
FAQ3: "no datanode to stop"
Cause: every namenode format creates a new namespaceID, but tmp/dfs/data still holds the ID from
the previous format. Formatting clears the NameNode's data without clearing the DataNodes', so
startup fails. Either clear everything under tmp before each format, or:
1) change each Slave's namespaceID to match the Master's, or
2) change the Master's namespaceID to match the Slaves'.
[root@J-720-1-Hadoop-Master ~]# more /usr/local/hadoop/tmp/dfs/name/current/VERSION
#Thu Aug 08 11:59:35 CST 2013
namespaceID=756717920
cTime=0
storageType=NAME_NODE
layoutVersion=-41
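Option 1) above is a one-line sed on each Slave. Sketched here against a temp copy so it is safe to
try; on a real Slave the file is /usr/local/hadoop/tmp/dfs/data/current/VERSION, and MASTER_ID
comes from the Master's VERSION file shown above:

```shell
# Overwrite a Slave's namespaceID with the Master's (run while the DataNode is stopped).
VERSION=$(mktemp)                      # stands in for tmp/dfs/data/current/VERSION
printf 'namespaceID=123456789\nstorageType=DATA_NODE\n' > "$VERSION"
MASTER_ID=756717920                    # value read from the Master's VERSION file
sed -i "s/^namespaceID=.*/namespaceID=${MASTER_ID}/" "$VERSION"
grep '^namespaceID=' "$VERSION"
```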
FAQ4: processing is extremely slow
The maps finish quickly, but reduce is very slow and repeatedly shows "reduce=0%".
Solution:
Apply solution 5.7 as well, then set "export HADOOP_HEAPSIZE=4000" in "conf/hadoop-env.sh".
FAQ5: hadoop OutOfMemoryError
This exception clearly means the JVM heap is too small.
Solution: increase the JVM heap size on every DataNode, e.g.
java -Xms1024m -Xmx4096m
As a rule of thumb the JVM's maximum heap should be about half of total RAM; our machines have 8 GB, hence 4096m.
FAQ6: NameNode stuck in safe mode
hadoop dfsadmin -safemode leave
FAQ7: I/O write failures
Set dfs.datanode.socket.write.timeout=0 in the site configuration (hdfs-site.xml in this 1.x layout; older docs refer to hadoop-site.xml).
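Expressed as a property element, in the same style as the config files above (a sketch of the setting just named):

```
<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>0</value>
</property>
```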
FAQ8: "status of 255" error
Solution for a single DataNode:
If one DataNode fails, after fixing it you can rejoin it to the cluster without restarting the whole cluster. On the repaired node run:
hadoop-daemon.sh start datanode
hadoop-daemon.sh start tasktracker
FAQ9: Warning: $HADOOP_HOME is deprecated.
#To suppress the warning:
echo "export HADOOP_HOME_WARN_SUPPRESS=1" >> /etc/profile
source /etc/profile