
hadoop-2.2.0 Installation and Configuration

Date: 2014-03-01 06:09:22

18 machines: 1 namenode + 17 datanodes

!!! When configuring Hadoop, deploy and configure the namenode first, then use rsync to push the hadoop directory to all of the datanodes !!!


1. Install the JDK:

  • mkdir -p /usr/local/java;
  • wget http://100.100.144.187/jdk-7u51-linux-x64.gz;
  • tar xzvf jdk-7u51-linux-x64.gz -C /usr/local/java;
  • echo 'export JAVA_HOME=/usr/local/java/jdk1.7.0_51' >> /etc/profile;
  • echo 'export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/rt.jar:.' >> /etc/profile;
  • echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile;
  • rm -f jdk-7u51-linux-x64.gz;
  • source /etc/profile;
  • java -version;

(Note the single quotes: with double quotes the shell would expand $JAVA_HOME at echo time, before it is defined, and write an empty value into /etc/profile. The jar name is tools.jar, not tool.jar.)
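The environment lines above can also be kept in a single snippet. A minimal sketch, assuming the same JDK path as the steps above; the single-quoted heredoc delimiter keeps $JAVA_HOME literal until login time:

```shell
# Emit the Java environment block (an alternative to appending echo-by-echo;
# could be written to /etc/profile.d/java.sh instead of /etc/profile).
java_profile() {
  cat <<'EOF'
export JAVA_HOME=/usr/local/java/jdk1.7.0_51
export CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/rt.jar:.
export PATH=$JAVA_HOME/bin:$PATH
EOF
}
java_profile   # e.g.: java_profile > /etc/profile.d/java.sh && source /etc/profile.d/java.sh
```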
2. Set up passwordless SSH:

  • ssh-keygen -t rsa, then press Enter through the prompts (generates the key pair)
  • Append id_rsa.pub to the authorized keys: cat id_rsa.pub >> authorized_keys
  • Restart the SSH service to apply the change: service sshd restart
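With 17 datanodes, distributing the namenode's public key is easier in a loop. A sketch, assuming hostnames datanode-0 through datanode-16 (as set up in the hosts step below); the loop is guarded so it only runs when explicitly enabled:

```shell
# List every datanode hostname (assumed naming scheme; adjust to your cluster).
datanode_hosts() {
  for i in $(seq 0 16); do
    echo "datanode-$i"
  done
}

# Push the local public key to each datanode; prompts for the password once
# per host. Guarded: set PUSH_KEYS=1 to actually run it.
if [ "${PUSH_KEYS:-0}" = 1 ]; then
  for host in $(datanode_hosts); do
    ssh-copy-id "$host"
  done
fi
```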

3. Set hosts: edit the /etc/hosts file
ip1 namenode-0
ip2 datanode-0
...
ip18 datanode-16
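The 18 entries can be generated rather than typed by hand. A sketch; the 192.168.1.x addresses are made-up examples, so substitute the cluster's real IPs:

```shell
# Emit one /etc/hosts line per machine: the namenode first, then 17 datanodes.
# Example addresses only -- replace with your actual IPs.
gen_hosts() {
  echo "192.168.1.100 namenode-0"
  for i in $(seq 0 16); do
    echo "192.168.1.$((101 + i)) datanode-$i"
  done
}
gen_hosts   # e.g.: gen_hosts >> /etc/hosts, then copy /etc/hosts to every node
```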
4. Disable the firewall:
  • service iptables stop;
  • chkconfig iptables off;
5. Edit the Hadoop configuration files (/data/hadoop/hadoop-2.2.0/etc/hadoop):
5.1 hadoop-env.sh: export JAVA_HOME=/usr/local/java/jdk1.7.0_51
5.2 yarn-env.sh: export JAVA_HOME=/usr/local/java/jdk1.7.0_51
5.3 slaves: add datanode-0 through datanode-16, one hostname per line
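The slaves file is just the datanode hostnames, one per line, so it can be generated the same way. A sketch, assuming the hadoop directory from step 5:

```shell
# Emit the contents of the slaves file: one datanode hostname per line.
gen_slaves() {
  for i in $(seq 0 16); do
    echo "datanode-$i"
  done
}
gen_slaves   # e.g.: gen_slaves > /data/hadoop/hadoop-2.2.0/etc/hadoop/slaves
```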
5.4 Edit core-site.xml
<configuration>
	<property>
         <name>fs.defaultFS</name>
         <value>hdfs://namenode-0:9000</value>
    </property>

    <property>
         <name>io.file.buffer.size</name>
         <value>131072</value>
    </property>

    <property>
         <name>hadoop.tmp.dir</name>
         <value>file:/home/hadoop/temp</value>
         <description>Abase for other temporary directories.</description>
    </property>

    <property>
         <name>hadoop.proxyuser.hadoop.hosts</name>
         <value>*</value>
    </property>

    <property>
         <name>hadoop.proxyuser.hadoop.groups</name>
         <value>*</value>
    </property>
</configuration>
5.5 Edit hdfs-site.xml
<configuration>
	<property>
	<name>dfs.namenode.secondary.http-address</name>
         <value>namenode-0:9001</value>
	</property>

	<property>
	<name>dfs.namenode.name.dir</name>
	<value>file:/home/hadoop/dfs/name</value>
    </property>

	<property>
	    <name>dfs.datanode.data.dir</name>
	    <value>file:/home/hadoop/dfs/data</value>
    </property>

    <property>
	    <name>dfs.replication</name>
	    <value>3</value>
    </property>

     <property>
	    <name>dfs.webhdfs.enabled</name>
	    <value>true</value>
	</property>
</configuration>
5.6 Edit mapred-site.xml (in 2.2.0 this file does not exist by default; copy mapred-site.xml.template to mapred-site.xml first)
<configuration>
	<property>
		<name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>namenode-0:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>namenode-0:19888</value>
    </property>
</configuration>
5.7 Edit yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
	<property>
		<name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

    <property>
        <name>yarn.resourcemanager.address</name>
        <value>namenode-0:8032</value>
    </property>

    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>namenode-0:8030</value>
    </property>

    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>namenode-0:8031</value>
    </property>

    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>namenode-0:8033</value>
    </property>

    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>namenode-0:8088</value>
    </property> 
</configuration>
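With all configuration done on the namenode, the note at the top of this post applies: push the hadoop directory to every datanode with rsync. A sketch, assuming the directory from step 5 and the passwordless SSH from step 2; guarded so it only runs when explicitly enabled:

```shell
HADOOP_DIR=/data/hadoop/hadoop-2.2.0

# Hostnames to sync to (assumed naming scheme datanode-0 .. datanode-16).
sync_targets() {
  for i in $(seq 0 16); do
    echo "datanode-$i"
  done
}

# Mirror the configured hadoop directory to each datanode.
# Guarded: set SYNC_NODES=1 to actually run the transfer.
if [ "${SYNC_NODES:-0}" = 1 ]; then
  for host in $(sync_targets); do
    rsync -az --delete "$HADOOP_DIR/" "$host:$HADOOP_DIR/"
  done
fi
```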

6. Start and verify:
  • Format the namenode: hdfs namenode -format (the older hadoop namenode -format still works in 2.2.0 but is deprecated)
  • Start the cluster from the namenode: start-all.sh
  • Run jps on the namenode; you should see NameNode, SecondaryNameNode, and ResourceManager (the original post's screenshot is not preserved)
  • Run jps on a datanode; you should see DataNode and NodeManager
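The per-role checks can be scripted. A sketch: the daemon lists follow from the configs above (the secondary namenode and ResourceManager both live on namenode-0), and the commented loop shows how they might be checked against jps output:

```shell
# Daemons expected on each role after start-all.sh, given this cluster layout.
namenode_daemons() { printf '%s\n' NameNode SecondaryNameNode ResourceManager; }
datanode_daemons() { printf '%s\n' DataNode NodeManager; }

# e.g., on the namenode:
#   for d in $(namenode_daemons); do
#     jps | grep -q "$d" || echo "missing: $d"
#   done
namenode_daemons
datanode_daemons
```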


Original: http://blog.csdn.net/iloveyin/article/details/20143963
