Reference: http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/SingleCluster.html
Followed the guide as-is and the installation succeeded:
Hadoop 2.6 Installation
1. Unpack to: /usr/local/hadoop-2.6.0
2. vi etc/hadoop/hadoop-env.sh
line 25: export JAVA_HOME=/usr/local/java/jdk7
line 26: export HADOOP_PREFIX=/usr/local/hadoop-2.6.0
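A quick sanity check after editing hadoop-env.sh (run from /usr/local/hadoop-2.6.0; this assumes the JDK really is installed at /usr/local/java/jdk7):
$ bin/hadoop version    # should print "Hadoop 2.6.0" if JAVA_HOME is picked up correctly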
3. Pseudo-Distributed Operation
Configuration:
(1) vi etc/hadoop/core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:9000</value>
</property>
(2) vi etc/hadoop/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
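For reference, each of these <property> blocks goes inside the file's top-level <configuration> element (as in the Apache guide linked above), so core-site.xml ends up looking like this; hdfs-site.xml is analogous, with the dfs.replication property:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://namenode:9000</value>
    </property>
</configuration>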
4. Passwordless SSH setup: ssh localhost
(Details omitted.)
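For completeness, the referenced Apache guide sets this up roughly as follows (generate a key with no passphrase and authorize it for the current user):
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost    # should now log in without prompting for a password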
5. Run the following commands:
(1) Format the filesystem: $ bin/hdfs namenode -format
(2) Start the NameNode and DataNode daemons: $ sbin/start-dfs.sh
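One way to confirm the daemons actually started (jps ships with the JDK and lists running Java processes):
$ jps    # expect to see NameNode, DataNode and SecondaryNameNode entries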
6. Visit the web interface at http://namenode:50070/
At this point the page simply would not open. After thinking it over, I suspected my host machine could not reach the virtual machine by name; pinging the IP 192.168.1.100 worked fine.
So I opened C:\Windows\System32\drivers\etc\hosts on Windows
and added: 192.168.1.100 namenode
Then opened http://namenode:50070/ again.
OK
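Note (an assumption about this setup, not from the original post): the hostname namenode also has to resolve inside the VM itself, since fs.defaultFS points at hdfs://namenode:9000, so the Linux /etc/hosts typically carries the same mapping:
192.168.1.100   namenode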
7. Create HDFS directories in preparation for running a MapReduce job (hadoop is the system user name):
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/hadoop
8. Copy local files (the etc/hadoop directory here) into the distributed filesystem as input:
bin/hdfs dfs -put etc/hadoop input
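A hypothetical check (not in the original post) to confirm the upload landed under /user/hadoop/input:
$ bin/hdfs dfs -ls input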
9. Run the example MapReduce program from the share directory to verify the setup:
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z]+'
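The grep example counts matches of the regular expression in the input files and writes the counts to output. The referenced guide also shows how to inspect the result directly on HDFS, without copying it out first:
$ bin/hdfs dfs -cat output/*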
10. Copy the results from HDFS to a local directory:
$ bin/hdfs dfs -get output output
View the results: cat output/*
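When finished, the referenced guide stops the daemons with:
$ sbin/stop-dfs.sh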
Original post: http://niewj.iteye.com/blog/2214953