MapReduce Job [hadoop]

Running our first MapReduce job
 
We will use the WordCount example job which reads text files and counts how often words occur.
The input is text files and the output is text files, each line of which contains a word and the count of how often it occurred, separated by a tab.
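To make the map/reduce data flow concrete, here is a minimal Python sketch of the same logic the WordCount job performs. This is plain Python, not Hadoop code: the map phase emits a (word, 1) pair per token, and the shuffle/reduce phase groups by word and sums the counts.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) for every whitespace-separated token.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by word and sum their counts.
    counts = defaultdict(int)
    for word, one in pairs:
        counts[word] += one
    return dict(counts)

lines = [
    "MyFist MapReduce Program A",
    "MyFist MapReduce Program B",
]
result = reduce_phase(map_phase(lines))
for word in sorted(result):
    print(f"{word}\t{result[word]}")  # word and count, tab-separated
```

The printed lines have the same word-TAB-count shape as the part-r-00000 file the real job writes.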
 
[root@myhostname hadoop]# hadoop dfs -mkdir /home/hadoop/MapReduce
[root@myhostname hadoop]#
[root@myhostname hadoop]# echo " MyFist MapReduce Program A" >> /tmp/mapreduce1.txt
[root@myhostname hadoop]# echo " MyFist MapReduce Program B" >> /tmp/mapreduce1.txt
[root@myhostname hadoop]# echo " MyFist MapReduce Program C" >> /tmp/mapreduce1.txt
[root@myhostname hadoop]#
[root@myhostname hadoop]# echo " MyFist MapReduce Program D" >> /tmp/mapreduce2.txt
[root@myhostname hadoop]# echo " MyFist MapReduce Program E" >> /tmp/mapreduce2.txt
[root@myhostname hadoop]# echo " MyFist MapReduce Program F" >> /tmp/mapreduce2.txt
[root@myhostname hadoop]#
[root@myhostname hadoop]# cat /tmp/mapreduce1.txt
MyFist MapReduce Program A
MyFist MapReduce Program B
MyFist MapReduce Program C
[root@myhostname hadoop]# cat /tmp/mapreduce2.txt
MyFist MapReduce Program D
MyFist MapReduce Program E
MyFist MapReduce Program F
[root@myhostname hadoop]#
 
Before we run the actual MapReduce job, we first have to copy the files from our local file system to Hadoop’s HDFS.
 
[root@myhostname hadoop]# hadoop dfs -copyFromLocal /tmp/mapreduce*.txt /home/hadoop/MapReduce
 
[root@myhostname hadoop]#
[root@myhostname hadoop]# hadoop dfs -ls /home/hadoop/MapReduce
 
Found 2 items
-rw-r--r-- 3 root supergroup 84 2014-07-08 06:16 /home/hadoop/MapReduce/mapreduce1.txt
-rw-r--r-- 3 root supergroup 84 2014-07-08 06:16 /home/hadoop/MapReduce/mapreduce2.txt
[root@myhostname hadoop]#
 
Now, we actually run the WordCount example job.
 
This command will read all the files in the HDFS directory /home/hadoop/MapReduce, process them, and store the result in the HDFS directory /home/hadoop/MapReduce-output.
If the command below throws Java exceptions, you may need to replace the * wildcard with the complete jar file name, i.e., include the Hadoop version (for example, hadoop-examples-1.2.1.jar).
 
[root@myhostname hadoop]# hadoop jar hadoop-examples-*.jar wordcount /home/hadoop/MapReduce /home/hadoop/MapReduce-output
 
14/07/08 06:19:56 INFO input.FileInputFormat: Total input paths to process : 2
14/07/08 06:19:56 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/08 06:19:57 INFO mapred.JobClient: Running job: job_201407070835_0008
14/07/08 06:19:58 INFO mapred.JobClient: map 0% reduce 0%
14/07/08 06:20:04 INFO mapred.JobClient: map 100% reduce 0%
14/07/08 06:20:12 INFO mapred.JobClient: map 100% reduce 33%
14/07/08 06:20:14 INFO mapred.JobClient: map 100% reduce 100%
14/07/08 06:20:15 INFO mapred.JobClient: Job complete: job_201407070835_0008
14/07/08 06:20:15 INFO mapred.JobClient: Counters: 29
14/07/08 06:20:15 INFO mapred.JobClient: Map-Reduce Framework
14/07/08 06:20:15 INFO mapred.JobClient: Spilled Records=24
14/07/08 06:20:15 INFO mapred.JobClient: Map output materialized bytes=146
14/07/08 06:20:15 INFO mapred.JobClient: Reduce input records=12
14/07/08 06:20:15 INFO mapred.JobClient: Virtual memory (bytes) snapshot=5960839168
14/07/08 06:20:15 INFO mapred.JobClient: Map input records=6
14/07/08 06:20:15 INFO mapred.JobClient: SPLIT_RAW_BYTES=254
14/07/08 06:20:15 INFO mapred.JobClient: Map output bytes=258
14/07/08 06:20:15 INFO mapred.JobClient: Reduce shuffle bytes=146
14/07/08 06:20:15 INFO mapred.JobClient: Physical memory (bytes) snapshot=472035328
14/07/08 06:20:15 INFO mapred.JobClient: Reduce input groups=9
14/07/08 06:20:15 INFO mapred.JobClient: Combine output records=12
14/07/08 06:20:15 INFO mapred.JobClient: Reduce output records=9
14/07/08 06:20:15 INFO mapred.JobClient: Map output records=24
14/07/08 06:20:15 INFO mapred.JobClient: Combine input records=24
14/07/08 06:20:15 INFO mapred.JobClient: CPU time spent (ms)=4740
14/07/08 06:20:15 INFO mapred.JobClient: Total committed heap usage (bytes)=468189184
14/07/08 06:20:15 INFO mapred.JobClient: File Input Format Counters
14/07/08 06:20:15 INFO mapred.JobClient: Bytes Read=168
14/07/08 06:20:15 INFO mapred.JobClient: FileSystemCounters
14/07/08 06:20:15 INFO mapred.JobClient: HDFS_BYTES_READ=422
14/07/08 06:20:15 INFO mapred.JobClient: FILE_BYTES_WRITTEN=164331
14/07/08 06:20:15 INFO mapred.JobClient: FILE_BYTES_READ=140
14/07/08 06:20:15 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=55
14/07/08 06:20:15 INFO mapred.JobClient: Job Counters
14/07/08 06:20:15 INFO mapred.JobClient: Launched map tasks=2
14/07/08 06:20:15 INFO mapred.JobClient: Launched reduce tasks=1
14/07/08 06:20:15 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=9787
14/07/08 06:20:15 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/08 06:20:15 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=9326
14/07/08 06:20:15 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/07/08 06:20:15 INFO mapred.JobClient: Data-local map tasks=2
14/07/08 06:20:15 INFO mapred.JobClient: File Output Format Counters
14/07/08 06:20:15 INFO mapred.JobClient: Bytes Written=55
[root@myhostname hadoop]#
 
 
 
Check whether the result was successfully stored in the HDFS directory /home/hadoop/MapReduce-output:
 
 
[root@myhostname hadoop]# hadoop dfs -ls /home/hadoop/MapReduce-output
 
Found 3 items
-rw-r--r-- 3 root supergroup 0 2014-07-08 06:20 /home/hadoop/MapReduce-output/_SUCCESS
drwxr-xr-x - root supergroup 0 2014-07-08 06:19 /home/hadoop/MapReduce-output/_logs
-rw-r--r-- 3 root supergroup 55 2014-07-08 06:20 /home/hadoop/MapReduce-output/part-r-00000
[root@myhostname hadoop]#
[root@myhostname hadoop]# hadoop dfs -cat /home/hadoop/MapReduce-output/part-r-00000
A 1
B 1
C 1
D 1
E 1
F 1
MapReduce 6
MyFist 6
Program 6
[root@myhostname hadoop]#
 
 
If you want to modify some Hadoop settings on the fly, such as increasing the number of Reduce tasks, you can use the "-D" option as below.
 
hadoop jar hadoop-examples-*.jar wordcount -D mapred.reduce.tasks=10 /home/hadoop/MapReduce /home/hadoop/MapReduce-output
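With 10 reduce tasks the job writes ten output files (part-r-00000 through part-r-00009), and each key is routed to exactly one reducer by a hash partitioner. A rough Python sketch of that routing follows; the real HashPartitioner is Java (key.hashCode() % numReduceTasks), so the hash function here is only illustrative:

```python
def partition(key: str, num_reduce_tasks: int) -> int:
    # Illustrative stand-in for Hadoop's HashPartitioner:
    # the same key always lands on the same reducer, so all
    # counts for one word end up in one output file.
    return hash(key) % num_reduce_tasks

p = partition("MapReduce", 10)
print(f"'MapReduce' goes to reducer {p} (file part-r-{p:05d})")
```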

We will now use a sample Hadoop program to estimate the value of Pi.
Assuming the HADOOP_HOME/bin directory is in your PATH, type the following command:
[root@myhostname tmp]# hadoop jar /root/hadoop/hadoop-examples-1.2.1.jar pi 4 1000
Number of Maps = 4
Samples per Map = 1000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Starting Job
14/07/23 06:16:14 INFO mapred.FileInputFormat: Total input paths to process : 4
14/07/23 06:16:15 INFO mapred.JobClient: Running job: job_201407230605_0001
14/07/23 06:16:16 INFO mapred.JobClient: map 0% reduce 0%
14/07/23 06:16:23 INFO mapred.JobClient: map 75% reduce 0%
14/07/23 06:16:24 INFO mapred.JobClient: map 100% reduce 0%
14/07/23 06:16:31 INFO mapred.JobClient: map 100% reduce 33%
14/07/23 06:16:33 INFO mapred.JobClient: map 100% reduce 100%
14/07/23 06:16:34 INFO mapred.JobClient: Job complete: job_201407230605_0001
14/07/23 06:16:34 INFO mapred.JobClient: Counters: 31
14/07/23 06:16:34 INFO mapred.JobClient: Map-Reduce Framework
14/07/23 06:16:34 INFO mapred.JobClient: Spilled Records=16
14/07/23 06:16:34 INFO mapred.JobClient: Map output materialized bytes=112
14/07/23 06:16:34 INFO mapred.JobClient: Reduce input records=8
14/07/23 06:16:34 INFO mapred.JobClient: Virtual memory (bytes) snapshot=9933852672
14/07/23 06:16:34 INFO mapred.JobClient: Map input records=4
14/07/23 06:16:34 INFO mapred.JobClient: SPLIT_RAW_BYTES=472
14/07/23 06:16:34 INFO mapred.JobClient: Map output bytes=72
14/07/23 06:16:34 INFO mapred.JobClient: Reduce shuffle bytes=112
14/07/23 06:16:34 INFO mapred.JobClient: Physical memory (bytes) snapshot=843845632
14/07/23 06:16:34 INFO mapred.JobClient: Map input bytes=96
14/07/23 06:16:34 INFO mapred.JobClient: Reduce input groups=8
14/07/23 06:16:34 INFO mapred.JobClient: Combine output records=0
14/07/23 06:16:34 INFO mapred.JobClient: Reduce output records=0
14/07/23 06:16:34 INFO mapred.JobClient: Map output records=8
14/07/23 06:16:34 INFO mapred.JobClient: Combine input records=0
14/07/23 06:16:34 INFO mapred.JobClient: CPU time spent (ms)=7270
14/07/23 06:16:34 INFO mapred.JobClient: Total committed heap usage (bytes)=773849088
14/07/23 06:16:34 INFO mapred.JobClient: File Input Format Counters
14/07/23 06:16:34 INFO mapred.JobClient: Bytes Read=472
14/07/23 06:16:34 INFO mapred.JobClient: FileSystemCounters
14/07/23 06:16:34 INFO mapred.JobClient: HDFS_BYTES_READ=944
14/07/23 06:16:34 INFO mapred.JobClient: FILE_BYTES_WRITTEN=276005
14/07/23 06:16:34 INFO mapred.JobClient: FILE_BYTES_READ=94
14/07/23 06:16:34 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=215
14/07/23 06:16:34 INFO mapred.JobClient: File Output Format Counters
14/07/23 06:16:34 INFO mapred.JobClient: Bytes Written=97
14/07/23 06:16:34 INFO mapred.JobClient: Job Counters
14/07/23 06:16:34 INFO mapred.JobClient: Launched map tasks=4
14/07/23 06:16:34 INFO mapred.JobClient: Launched reduce tasks=1
14/07/23 06:16:34 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=9725
14/07/23 06:16:34 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/23 06:16:34 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14099
14/07/23 06:16:34 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/07/23 06:16:34 INFO mapred.JobClient: Rack-local map tasks=2
14/07/23 06:16:34 INFO mapred.JobClient: Data-local map tasks=2
Job Finished in 20.062 seconds
Estimated value of Pi is 3.14000000000000000000
[root@myhostname tmp]#
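The idea behind the pi job can be sketched in a few lines of Python. Note this is a hedged simplification: Hadoop's pi example actually uses a quasi-Monte Carlo method (a Halton sequence), while the sketch below uses plain pseudo-random sampling. Each "map" samples points in the unit square and counts how many fall inside the quarter circle; the "reduce" sums the counts and scales by 4.

```python
import random

def estimate_pi(num_maps: int, samples_per_map: int, seed: int = 0) -> float:
    # Monte Carlo estimate: fraction of random points in the unit
    # square that land inside the quarter circle approximates pi/4.
    rng = random.Random(seed)
    total = num_maps * samples_per_map
    inside = 0
    for _ in range(total):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / total

print(estimate_pi(4, 1000))  # same 4 maps x 1000 samples as the job above
```

With only 4000 samples the estimate is rough, which is why the job above reports 3.14 rather than more digits of precision.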
MapReduce Job on a Multi-Node Hadoop Cluster:
---------------------------------------------
[root@master-host bin]# hadoop dfs -mkdir /home/hadoop/MapReduce
[root@master-host bin]# echo " MapReduce job on Hadoop Multi-node Cluster " > /tmp/mapreduce1.txt
[root@master-host bin]# echo " MapReduce job on Hadoop Multi-node Cluster " >> /tmp/mapreduce1.txt
[root@master-host bin]# echo " MapReduce job on Hadoop Multi-node Cluster " >> /tmp/mapreduce1.txt
[root@master-host bin]#
[root@master-host bin]# cat /tmp/mapreduce1.txt
MapReduce job on Hadoop Multi-node Cluster
MapReduce job on Hadoop Multi-node Cluster
MapReduce job on Hadoop Multi-node Cluster
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop dfs -copyFromLocal /tmp/mapreduce*.txt /home/hadoop/MapReduce
[root@master-host bin]# hadoop dfs -ls /home/hadoop/MapReduce
 
Found 3 items
-rw-r--r-- 2 root supergroup 28 2014-07-09 09:16 /home/hadoop/MapReduce/mapreduce.txt
-rw-r--r-- 2 root supergroup 135 2014-07-09 09:16 /home/hadoop/MapReduce/mapreduce1.txt
-rw-r--r-- 2 root supergroup 84 2014-07-09 09:16 /home/hadoop/MapReduce/mapreduce2.txt
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop dfs -rm /home/hadoop/MapReduce/*2.txt
 
Deleted hdfs://master:9000/home/hadoop/MapReduce/mapreduce2.txt
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop dfs -rm /home/hadoop/MapReduce/*ce.txt
 
Deleted hdfs://master:9000/home/hadoop/MapReduce/mapreduce.txt
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop dfs -ls /home/hadoop/MapReduce
 
Found 1 items
-rw-r--r-- 2 root supergroup 135 2014-07-09 09:16 /home/hadoop/MapReduce/mapreduce1.txt
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop dfs -cat /home/hadoop/MapReduce/mapreduce1.txt
 
MapReduce job on Hadoop Multi-node Cluster
MapReduce job on Hadoop Multi-node Cluster
MapReduce job on Hadoop Multi-node Cluster
[root@master-host bin]#
[root@master-host bin]#
[root@master-host bin]# hadoop jar hadoop-examples-*.jar wordcount /home/hadoop/MapReduce /home/hadoop/Output
 
Not a valid JAR: /root/hadoop-1.2.1/bin/hadoop-examples-*.jar
[root@master-host bin]#
[root@master-host bin]# cd ..
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]# hadoop jar hadoop-examples-*.jar wordcount /home/hadoop/MapReduce /home/hadoop/Output
 
14/07/09 09:18:44 INFO input.FileInputFormat: Total input paths to process : 1
14/07/09 09:18:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/07/09 09:18:44 WARN snappy.LoadSnappy: Snappy native library not loaded
14/07/09 09:18:45 INFO mapred.JobClient: Running job: job_201407090310_0001
14/07/09 09:18:46 INFO mapred.JobClient: map 0% reduce 0%
14/07/09 09:18:53 INFO mapred.JobClient: map 100% reduce 0%
14/07/09 09:19:02 INFO mapred.JobClient: map 100% reduce 33%
14/07/09 09:19:03 INFO mapred.JobClient: map 100% reduce 100%
14/07/09 09:19:04 INFO mapred.JobClient: Job complete: job_201407090310_0001
14/07/09 09:19:04 INFO mapred.JobClient: Counters: 29
14/07/09 09:19:04 INFO mapred.JobClient: Map-Reduce Framework
14/07/09 09:19:04 INFO mapred.JobClient: Spilled Records=12
14/07/09 09:19:04 INFO mapred.JobClient: Map output materialized bytes=85
14/07/09 09:19:04 INFO mapred.JobClient: Reduce input records=6
14/07/09 09:19:04 INFO mapred.JobClient: Virtual memory (bytes) snapshot=3977613312
14/07/09 09:19:04 INFO mapred.JobClient: Map input records=3
14/07/09 09:19:04 INFO mapred.JobClient: SPLIT_RAW_BYTES=120
14/07/09 09:19:04 INFO mapred.JobClient: Map output bytes=201
14/07/09 09:19:04 INFO mapred.JobClient: Reduce shuffle bytes=85
14/07/09 09:19:04 INFO mapred.JobClient: Physical memory (bytes) snapshot=282304512
14/07/09 09:19:04 INFO mapred.JobClient: Reduce input groups=6
14/07/09 09:19:04 INFO mapred.JobClient: Combine output records=6
14/07/09 09:19:04 INFO mapred.JobClient: Reduce output records=6
14/07/09 09:19:04 INFO mapred.JobClient: Map output records=18
14/07/09 09:19:04 INFO mapred.JobClient: Combine input records=18
14/07/09 09:19:04 INFO mapred.JobClient: CPU time spent (ms)=4410
14/07/09 09:19:04 INFO mapred.JobClient: Total committed heap usage (bytes)=312999936
14/07/09 09:19:04 INFO mapred.JobClient: File Input Format Counters
14/07/09 09:19:04 INFO mapred.JobClient: Bytes Read=135
14/07/09 09:19:04 INFO mapred.JobClient: FileSystemCounters
14/07/09 09:19:04 INFO mapred.JobClient: HDFS_BYTES_READ=255
14/07/09 09:19:04 INFO mapred.JobClient: FILE_BYTES_WRITTEN=109407
14/07/09 09:19:04 INFO mapred.JobClient: FILE_BYTES_READ=85
14/07/09 09:19:04 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=55
14/07/09 09:19:04 INFO mapred.JobClient: Job Counters
14/07/09 09:19:04 INFO mapred.JobClient: Launched map tasks=1
14/07/09 09:19:04 INFO mapred.JobClient: Launched reduce tasks=1
14/07/09 09:19:04 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=9713
14/07/09 09:19:04 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/07/09 09:19:04 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=7167
14/07/09 09:19:04 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/07/09 09:19:04 INFO mapred.JobClient: Data-local map tasks=1
14/07/09 09:19:04 INFO mapred.JobClient: File Output Format Counters
14/07/09 09:19:04 INFO mapred.JobClient: Bytes Written=55
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]# hadoop dfs -ls /home/hadoop/
 
Found 2 items
drwxr-xr-x - root supergroup 0 2014-07-09 09:17 /home/hadoop/MapReduce
drwxr-xr-x - root supergroup 0 2014-07-09 09:19 /home/hadoop/Output
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]# hadoop dfs -ls /home/hadoop/Output
 
Found 3 items
-rw-r--r-- 2 root supergroup 0 2014-07-09 09:19 /home/hadoop/Output/_SUCCESS
drwxr-xr-x - root supergroup 0 2014-07-09 09:18 /home/hadoop/Output/_logs
-rw-r--r-- 2 root supergroup 55 2014-07-09 09:19 /home/hadoop/Output/part-r-00000
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]#
[root@master-host hadoop-1.2.1]# hadoop dfs -cat /home/hadoop/Output/part-r-00000
 
Cluster 3
Hadoop 3
MapReduce 3
Multi-node 3
job 3
on 3
[root@master-host hadoop-1.2.1]#
[root@master-host logs]# ll -lhtr
total 56K
drwxr-xr-x 8 root root 4.0K Jul 8 06:19 userlogs
-rw-r--r-- 1 root root 47K Jul 9 09:18 job_201407090310_0001_conf.xml
drwxr-xr-x 3 root root 4.0K Jul 9 09:19 history
[root@master-host logs]#
[root@master-host logs]# head job_201407090310_0001_conf.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>mapreduce.job.counters.max</name><value>120</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>mapred.task.cache.levels</name><value>2</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>mapreduce.job.restart.recover</name><value>true</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>dfs.client.use.datanode.hostname</name><value>false</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>hadoop.tmp.dir</name><value>/tmp/hadoop-${user.name}</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>hadoop.native.lib</name><value>true</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value></property>
<property><!--Loaded from /tmp/hadoop-root/mapred/local/jobTracker/job_201407090310_0001.xml--><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value></property>
[root@master-host logs]#
 
 
On the slave node:
 
 
 
[root@slave-host logs]# hadoop dfs -cat /home/hadoop/Output/part-r-00000
 
Cluster 3
Hadoop 3
MapReduce 3
Multi-node 3
job 3
on 3
[root@slave-host logs]#
 
 
[root@slave-host logs]# ll -lhtr userlogs/job_201407090310_0001/
total 20K
-rw-r----- 1 root root 499 Jul 9 09:18 job-acls.xml
lrwxrwxrwx 1 root root 97 Jul 9 09:18 attempt_201407090310_0001_m_000002_0 -> /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_m_000002_0
lrwxrwxrwx 1 root root 97 Jul 9 09:18 attempt_201407090310_0001_m_000000_0 -> /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_m_000000_0
lrwxrwxrwx 1 root root 97 Jul 9 09:18 attempt_201407090310_0001_r_000000_0 -> /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_r_000000_0
lrwxrwxrwx 1 root root 97 Jul 9 09:19 attempt_201407090310_0001_m_000001_0 -> /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_m_000001_0
[root@slave-host logs]#
 
 
[root@slave-host logs]# cat userlogs/job_201407090310_0001/job-acls.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><!--Loaded from Unknown--><name>user.name</name><value>root</value></property>
<property><!--Loaded from Unknown--><name>mapred.job.queue.name</name><value>default</value></property>
<property><!--Loaded from Unknown--><name>mapreduce.job.acl-view-job</name><value> </value></property>
<property><!--Loaded from Unknown--><name>mapred.queue.default.acl-administer-jobs</name><value>*</value></property>
</configuration>
[root@slave-host logs]#
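The job-acls.xml shown above is ordinary Hadoop configuration XML. As a small illustration, here is one way to pull the property names and values out of it with Python's standard library; the file content is inlined as a string (with the comments trimmed) rather than read from the task directory:

```python
import xml.etree.ElementTree as ET

# The job-acls.xml content shown above, inlined for illustration.
XML = """<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>user.name</name><value>root</value></property>
<property><name>mapred.job.queue.name</name><value>default</value></property>
<property><name>mapred.queue.default.acl-administer-jobs</name><value>*</value></property>
</configuration>"""

# Build a {name: value} dict from the <property> elements.
props = {p.findtext("name"): p.findtext("value")
         for p in ET.fromstring(XML).iter("property")}
print(props)
```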
[root@slave-host logs]#
 
 
[root@slave-host logs]# cat /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_m_000002_0/syslog
2014-07-09 09:18:48,413 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2014-07-09 09:18:49,032 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2014-07-09 09:18:49,043 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@17579e0f
2014-07-09 09:18:49,204 INFO org.apache.hadoop.mapred.Task: Task:attempt_201407090310_0001_m_000002_0 is done. And is in the process of commiting
2014-07-09 09:18:49,280 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201407090310_0001_m_000002_0' done.
2014-07-09 09:18:49,319 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2014-07-09 09:18:49,366 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2014-07-09 09:18:49,366 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName root for UID 0 from the native implementation
[root@slave-host logs]#
 
 
[root@slave-host logs]# cat /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_r_000000_0/
log.index stderr stdout syslog 
[root@slave-host logs]# cat /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_r_000000_0/syslog
2014-07-09 09:18:53,798 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2014-07-09 09:18:54,427 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2014-07-09 09:18:54,434 INFO org.apache.hadoop.mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6acdbdf5
2014-07-09 09:18:54,546 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=130652568, MaxSingleShuffleLimit=32663142
2014-07-09 09:18:54,559 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Thread started: Thread for merging on-disk files
2014-07-09 09:18:54,559 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Thread started: Thread for merging in memory files
2014-07-09 09:18:54,559 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Thread waiting: Thread for merging on-disk files
2014-07-09 09:18:54,560 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Need another 1 map output(s) where 0 is already in progress
2014-07-09 09:18:54,560 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Thread started: Thread for polling Map Completion Events
2014-07-09 09:18:54,561 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Scheduled 0 outputs (0 slow hosts and0 dup hosts)
2014-07-09 09:18:59,563 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201407090310_0001_r_000000_0 Scheduled 1 outputs (0 slow hosts and0 dup hosts)
2014-07-09 09:19:00,627 INFO org.apache.hadoop.mapred.ReduceTask: GetMapEventsThread exiting
2014-07-09 09:19:00,628 INFO org.apache.hadoop.mapred.ReduceTask: getMapsEventsThread joined.
2014-07-09 09:19:00,629 INFO org.apache.hadoop.mapred.ReduceTask: Closed ram manager
2014-07-09 09:19:00,629 INFO org.apache.hadoop.mapred.ReduceTask: Interleaved on-disk merge complete: 0 files left.
2014-07-09 09:19:00,629 INFO org.apache.hadoop.mapred.ReduceTask: In-memory merge complete: 1 files left.
2014-07-09 09:19:00,697 INFO org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2014-07-09 09:19:00,697 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 81 bytes
2014-07-09 09:19:00,723 INFO org.apache.hadoop.mapred.ReduceTask: Merged 1 segments, 81 bytes to disk to satisfy reduce memory limit
2014-07-09 09:19:00,724 INFO org.apache.hadoop.mapred.ReduceTask: Merging 1 files, 85 bytes from disk
2014-07-09 09:19:00,727 INFO org.apache.hadoop.mapred.ReduceTask: Merging 0 segments, 0 bytes from memory into reduce
2014-07-09 09:19:00,728 INFO org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2014-07-09 09:19:00,733 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 81 bytes
2014-07-09 09:19:00,885 INFO org.apache.hadoop.mapred.Task: Task:attempt_201407090310_0001_r_000000_0 is done. And is in the process of commiting
2014-07-09 09:19:02,014 INFO org.apache.hadoop.mapred.Task: Task attempt_201407090310_0001_r_000000_0 is allowed to commit now
2014-07-09 09:19:02,035 INFO org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Saved output of task 'attempt_201407090310_0001_r_000000_0' to /home/hadoop/Output
2014-07-09 09:19:02,045 INFO org.apache.hadoop.mapred.Task: Task 'attempt_201407090310_0001_r_000000_0' done.
2014-07-09 09:19:02,054 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2014-07-09 09:19:02,114 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2014-07-09 09:19:02,115 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName root for UID 0 from the native implementation
[root@slave-host logs]#
 
[root@slave-host logs]# cat /tmp/hadoop-root/mapred/local/userlogs/job_201407090310_0001/attempt_201407090310_0001_r_000000_0/log.index
LOG_DIR:/root/hadoop-1.2.1/libexec/../logs/userlogs/job_201407090310_0001/attempt_201407090310_0001_r_000000_0
stdout:0 -1
stderr:0 -1
syslog:0 -1
[root@slave-host logs]#
