Hadoop: Contoh Program Sederhana
Sources:
- http://www.drdobbs.com/database/hadoop-writing-and-running-your-first-pr/240153197
- https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html
Source Code
Example source code WordCount.java, which counts the number of occurrences of each word in an input set.
cd ~
vi WordCount.java
package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

   // Mapper: emits (word, 1) for every token in each input line.
   public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
      private final static IntWritable one = new IntWritable(1);
      private Text word = new Text();

      public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
         String line = value.toString();
         StringTokenizer tokenizer = new StringTokenizer(line);
         while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            output.collect(word, one);
         }
      }
   }

   // Reducer (also used as combiner): sums the counts collected for each word.
   public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
      public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
         int sum = 0;
         while (values.hasNext()) {
            sum += values.next().get();
         }
         output.collect(key, new IntWritable(sum));
      }
   }

   public static void main(String[] args) throws Exception {
      JobConf conf = new JobConf(WordCount.class);
      conf.setJobName("wordcount");

      conf.setOutputKeyClass(Text.class);
      conf.setOutputValueClass(IntWritable.class);

      conf.setMapperClass(Map.class);
      conf.setCombinerClass(Reduce.class);
      conf.setReducerClass(Reduce.class);

      conf.setInputFormat(TextInputFormat.class);
      conf.setOutputFormat(TextOutputFormat.class);

      // args[0] = input directory in HDFS, args[1] = output directory in HDFS
      FileInputFormat.setInputPaths(conf, new Path(args[0]));
      FileOutputFormat.setOutputPath(conf, new Path(args[1]));

      JobClient.runJob(conf);
   }
}
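To see how the pieces fit together, consider the line "Hello World Bye World" from file01 in the Usage section below. The map emits one (word, 1) pair per token, the combiner sums those pairs locally for each map output, and the reducer sums across all maps. An illustrative trace (not actual program output):

map output for file01:    < Hello, 1> < World, 1> < Bye, 1> < World, 1>
after the combiner:       < Bye, 1> < Hello, 1> < World, 2>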
Compile
Assuming HADOOP_HOME is the installation root and HADOOP_VERSION is the installed Hadoop version, compile WordCount.java and build a jar:
export HADOOP_HOME=/usr/local/hadoop/share/hadoop/common
export HADOOP_VERSION=2.7.1
mkdir wordcount_classes
javac -classpath ${HADOOP_HOME}/hadoop-common-${HADOOP_VERSION}.jar -d wordcount_classes WordCount.java
jar -cvf /usr/joe/wordcount.jar -C wordcount_classes/ .
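Note: in Hadoop 2.x the old org.apache.hadoop.mapred API used by WordCount.java is packaged separately from hadoop-common. If the javac step above fails with missing Mapper/JobConf classes, add the MapReduce client jar to the classpath as well; the path below is a sketch that assumes the standard Hadoop 2.7.1 binary layout under /usr/local/hadoop:

# adjust the jar path if your Hadoop installation uses a different layout
javac -classpath ${HADOOP_HOME}/hadoop-common-${HADOOP_VERSION}.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-${HADOOP_VERSION}.jar -d wordcount_classes WordCount.java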
Usage
Assumptions
/usr/joe/wordcount/input  - input directory in HDFS
/usr/joe/wordcount/output - output directory in HDFS
Sample text files as input:
bin/hadoop dfs -ls /usr/joe/wordcount/input/
/usr/joe/wordcount/input/file01
/usr/joe/wordcount/input/file02
bin/hadoop dfs -cat /usr/joe/wordcount/input/file01
Hello World Bye World
bin/hadoop dfs -cat /usr/joe/wordcount/input/file02
Hello Hadoop Goodbye Hadoop
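If the input files are not in HDFS yet, they can be uploaded first. A minimal sketch, assuming file01 and file02 are local text files with the contents shown above:

# create the HDFS input directory and upload the two local sample files
bin/hadoop dfs -mkdir -p /usr/joe/wordcount/input
bin/hadoop dfs -put file01 file02 /usr/joe/wordcount/input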
Run the application
bin/hadoop jar /usr/joe/wordcount.jar org.myorg.WordCount /usr/joe/wordcount/input /usr/joe/wordcount/output
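Note that the job will fail if the output directory already exists. When re-running the example, remove the previous output first (path as assumed above):

bin/hadoop dfs -rm -r /usr/joe/wordcount/output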
Output:
bin/hadoop dfs -cat /usr/joe/wordcount/output/part-00000
Bye      1
Goodbye  1
Hadoop   2
Hello    2
World    2
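Optionally, the result can be copied out of HDFS for local inspection (the local target directory name wordcount_output is arbitrary):

bin/hadoop dfs -get /usr/joe/wordcount/output wordcount_output
cat wordcount_output/part-00000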