I fumbled around and hit plenty of dead ends, but to cut to the chase...
Doing this on Windows is best skipped, for your own sanity.
I installed Ubuntu in VirtualBox and started from there.

Since this was only ever meant to be a quick test, I didn't bother with a distributed setup; this is simply what ended up working.
There were some SSH-related settings along the way too, which I don't remember in detail, so treat this as a rough history log (the usual SSH setup is sketched right below).
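
For reference, the SSH part is the standard single-node recipe: the start/stop scripts ssh into localhost, so the hadoop account needs passphraseless login to itself. A minimal sketch using stock openssh commands (skip the keygen if keys already exist):

# as the hadoop user
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa       # empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                   # should log in without prompting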


1. Extract Hadoop
Create a hadoop group and a hadoop account, then extract the tarball
with tar under that account (sketched below).
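
Roughly, the commands look like this (a sketch; the tarball name matches the 0.20.203.0 release used throughout, and the group/user names follow the text above):

# as root (or via sudo)
groupadd hadoop
useradd -m -g hadoop hadoop
# then, as the hadoop user, in /home/hadoop
tar xzf hadoop-0.20.203.0.tar.gz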

2. System settings
Set the Java and Hadoop environment variables (e.g. in the hadoop user's ~/.bashrc):
export JAVA_HOME=/jdk1.6.0_24
export HADOOP_HOME=/home/hadoop/hadoop-0.20.203.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
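
To confirm the variables took effect after re-sourcing the profile (both commands exist in this Hadoop/JDK generation):

source ~/.bashrc    # or log out and back in
java -version       # should report 1.6.0_24
hadoop version      # should report 0.20.203.0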


3. Hadoop configuration
Edit the Hadoop configuration files (in the conf/ folder).
The machine-specific values below (paths and host:port pairs) should be adapted to your own environment.

core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-datastore/hadoop-${user.name}</value>
  </property>

  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:6431</value>
  </property>
</configuration>


hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:6432</value>
  </property>
</configuration>

hadoop-env.sh
...snip...

# The java implementation to use.  Required.
export JAVA_HOME=/jdk1.6.0_24

...snip...



4. Format the NameNode
hadoop-0.20.203.0/bin$      hadoop namenode -format
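
For reference, this initializes the NameNode metadata under hadoop.tmp.dir (with the core-site.xml above, the path below); the command asks for confirmation if metadata already exists, and re-formatting wipes HDFS:

# sanity check after formatting -- path follows hadoop.tmp.dir above
ls /home/hadoop/hadoop-datastore/hadoop-hadoop/dfs/name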

5. Start it up
Start with start-all.sh, stop with stop-all.sh.
jps lists the running Java processes, for reference:
hadoop@konan:~/hadoop-0.20.203.0/bin$ start-all.sh
starting namenode, logging to /home/konan/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-namenode-konan.out
localhost: starting datanode, logging to /home/konan/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-datanode-konan.out
localhost: starting secondarynamenode, logging to /home/konan/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-secondarynamenode-konan.out
starting jobtracker, logging to /home/konan/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-jobtracker-konan.out
localhost: starting tasktracker, logging to /home/konan/hadoop/hadoop-0.20.203.0/bin/../logs/hadoop-hadoop-tasktracker-konan.out

hadoop@konan:~/hadoop-0.20.203.0/bin$ jps
23778 SecondaryNameNode
23675 DataNode
23991 Jps
23844 JobTracker
23949 TaskTracker
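
A couple of standard sanity checks once the daemons are up (port 50030 is confirmed by the tracking URL in the job log further down; 50070 is the stock NameNode UI port in 0.20):

hadoop dfsadmin -report   # HDFS capacity and live datanode count
# web UIs: http://localhost:50070 (NameNode), http://localhost:50030 (JobTracker)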


6. A few tests
Folders can be created in HDFS like this:
hadoop@konan:~/hadoop-0.20.203.0/bin$ hadoop dfs -ls
Found 3 items
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:36 /user/hadoop/input
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:34 /user/hadoop/ls
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:40 /user/hadoop/output
hadoop@konan:~/hadoop-0.20.203.0/bin$ hadoop dfs -mkdir sample
hadoop@konan:~/hadoop-0.20.203.0/bin$ hadoop dfs -ls
Found 4 items
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:36 /user/hadoop/input
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:34 /user/hadoop/ls
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:40 /user/hadoop/output
drwxr-xr-x   - hadoop supergroup          0 2012-01-05 18:59 /user/hadoop/sample
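
A few more dfs subcommands that are handy at this point (all standard in 0.20; the file name is just a placeholder):

hadoop dfs -put /home/hadoop/somefile.txt input/   # copy local -> HDFS
hadoop dfs -cat input/somefile.txt                 # print an HDFS file
hadoop dfs -rmr sample                             # remove a directory recursively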



7. MapReduce test
http://www.ibm.com/developerworks/kr/library/l-hadoop-3/index.html
This article explains in detail how to run a streaming job with Ruby scripts; the map.rb/reduce.rb used below come from there (a shell equivalent is sketched after this paragraph).
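
The Ruby scripts themselves aren't reproduced here. For the record, streaming only cares about stdin/stdout, so an equivalent word-count pair can be sketched in plain shell (my own sketch, not the article's code; any executable pair works the same way with -mapper/-reducer):

map.sh -- emit "word<TAB>1" for each whitespace-separated token:
#!/bin/sh
tr -s '[:space:]' '\n' | awk 'NF { print $0 "\t1" }'

reduce.sh -- streaming hands the reducer key-sorted lines, so per-word sums work in one pass:
#!/bin/sh
awk -F'\t' '{ sum[$1] += $2 } END { for (w in sum) print w "\t" sum[w] }'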

hadoop@konan:~/hadoop-0.20.203.0/bin$ hadoop jar /home/hadoop/hadoop-0.20.203.0/contrib/streaming/hadoop-streaming-0.20.203.0.jar -file /home/hadoop/map.rb -mapper /home/hadoop/map.rb -file /home/hadoop/reduce.rb -reducer /home/hadoop/reduce.rb -input input/* -output output
packageJobJar: [/home/hadoop/map.rb, /home/hadoop/reduce.rb, /home/hadoop/hadoop-datastore/hadoop-hadoop/hadoop-unjar1030284165018671461/] [] /tmp/streamjob305038160658224287.jar tmpDir=null
12/01/05 19:04:29 INFO mapred.FileInputFormat: Total input paths to process : 1
12/01/05 19:04:29 INFO streaming.StreamJob: getLocalDirs(): [/home/hadoop/hadoop-datastore/hadoop-hadoop/mapred/local]
12/01/05 19:04:29 INFO streaming.StreamJob: Running job: job_201201051858_0004
12/01/05 19:04:29 INFO streaming.StreamJob: To kill this job, run:
12/01/05 19:04:29 INFO streaming.StreamJob: /home/hadoop/hadoop-0.20.203.0/bin/../bin/hadoop job  -Dmapred.job.tracker=localhost:6432 -kill job_201201051858_0004
12/01/05 19:04:29 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201201051858_0004
12/01/05 19:04:30 INFO streaming.StreamJob:  map 0%  reduce 0%
12/01/05 19:04:43 INFO streaming.StreamJob:  map 100%  reduce 0%
12/01/05 19:04:55 INFO streaming.StreamJob:  map 100%  reduce 100%
12/01/05 19:05:01 INFO streaming.StreamJob: Job complete: job_201201051858_0004
12/01/05 19:05:01 INFO streaming.StreamJob: Output: output
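
The result lands in HDFS under output/ as part-* files (with a single reducer, typically part-00000):

hadoop dfs -ls output
hadoop dfs -cat output/part-00000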




P.S. Configuration/errors on an older version (0.18)
Version 0.20 splits the configuration across several files, but older versions manage everything in a single hadoop-site.xml.

>> hadoop-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>

  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>

  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoop-datastore2/hadoop-${user.name}</value>
  </property>
</configuration>


The odd thing is that even though I was working in the same environment, I still hit an error... It turned out to be because I had pointed hadoop.tmp.dir at the same folder the 0.20 install was using (hence the hadoop-datastore2 path above).
