On Mac OS X. These notes record some problems I met during the installation. First, the environment variables:
export JAVA_HOME="$(/usr/libexec/java_home)"
export HADOOP_HOME=/Users/admin/work/hadoop/hadoop.tar.2.2.0/hadoop-2.2.0/
export HADOOP_MAPRED_HOME=/Users/admin/work/hadoop/hadoop.tar.2.2.0/hadoop-2.2.0/
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_DATA=/Users/admin/work/hadoop/hadoop.tar.2.2.0/yarn_data/hdfs
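A quick sanity check that the exports took effect (assuming they are in a shell profile such as ~/.bash_profile, or have been pasted into the current shell):

$ echo $JAVA_HOME
$ $HADOOP_HOME/bin/hadoop version    # should print "Hadoop 2.2.0"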
(1) ssh localhost: you need to enable "Remote Login" in the Sharing pane of OS X System Preferences.
(1.1) Passwordless login
admin$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
admin$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
admin$ ssh localhost
Last login: Wed Feb 26 19:39:11 2014 from localhost
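If ssh localhost still prompts for a password after this, a common cause (a general ssh note, not specific to Hadoop) is the permissions on ~/.ssh:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys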
(1.2) Create the config files
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>127.0.0.1:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>127.0.0.1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>127.0.0.1:8032</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value></value>
    <description>No description</description>
    <final>true</final>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>127.0.0.1:9001</value>
  </property>
</configuration>
core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/Users/admin/work/hadoop/hadoop.tar.2.2.0/yarn_data/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/Users/admin/work/hadoop/hadoop.tar.2.2.0/yarn_data/hdfs/datanode</value>
  </property>
</configuration>
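Before the first start, the directories referenced in hdfs-site.xml have to exist and the namenode has to be formatted. Roughly, using $YARN_DATA from the exports above:

$ mkdir -p $YARN_DATA/namenode $YARN_DATA/datanode
$ $HADOOP_HOME/bin/hdfs namenode -format    # one-time format of the new namenode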
(2) jps. Note that there is no TaskTracker or JobTracker as in Hadoop 1.x; YARN replaces them with the ResourceManager and NodeManager.
bash-3.2$ jps
46675 Jps
45000 SecondaryNameNode
45179 NodeManager
45102 ResourceManager
37206 sbt-launch-0.11.3-2.jar
350
44911 DataNode
44840 NameNode
14682 sbt-launch.jar
(3) After the HDFS directories have been created and the namenode formatted (see the sketch after hdfs-site.xml above), just run start-dfs.sh and start-yarn.sh and that's it.
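For reference, this is roughly what that looks like from $HADOOP_HOME; the web UI addresses below are the Hadoop 2.2 defaults:

$ sbin/start-dfs.sh
$ sbin/start-yarn.sh
# NameNode web UI:        http://localhost:50070
# ResourceManager web UI: http://localhost:8088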
(4) We can test whether the installation works by running the bundled wordcount example:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
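Note that /input must already exist in HDFS with at least one file in it, and /output must not exist before the job runs. A quick way to prepare the input and inspect the result (the input files here are just an example):

# before running the job:
$ bin/hdfs dfs -mkdir /input
$ bin/hdfs dfs -put etc/hadoop/*.xml /input
# after the job finishes:
$ bin/hdfs dfs -cat /output/part-r-00000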
Original: http://www.cnblogs.com/enyun/p/3570076.html