Hadoop development --- WordCount


Reference: http://hadoop.apache.org/docs/r2.7.6/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html


Create a new Maven project in Eclipse.

pom.xml contents:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>

      <groupId>hadoop_mapreduce</groupId>
      <artifactId>WordCount</artifactId>
      <version>0.0.1-SNAPSHOT</version>
      <packaging>jar</packaging>

      <name>WordCount</name>
      <url>http://maven.apache.org</url>

      <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
      </properties>

      <dependencies>
        <dependency>
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-client</artifactId>
          <version>2.8.0</version>
        </dependency>
        <dependency>
          <groupId>jdk.tools</groupId>
          <artifactId>jdk.tools</artifactId>
          <version>1.8</version>
          <scope>system</scope>
          <systemPath>C:\Program Files\Java\jdk1.8.0_151\lib\tools.jar</systemPath>
        </dependency>
      </dependencies>
    </project>

Note: only the hadoop-client dependency is needed. If HBase-related packages are also pulled in, dependency conflicts are very likely and the job will throw exceptions at runtime.


WordCount class code:

    package hadoop_mapreduce.WordCount;

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {

            private final static IntWritable one = new IntWritable(1);
            private Text word = new Text();

            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Split the line on whitespace and emit (word, 1) for each token
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {

            private IntWritable result = new IntWritable();

            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                // Sum all counts emitted for this word
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args)
                throws IOException, ClassNotFoundException, InterruptedException {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            // Input and output are HDFS paths passed on the command line,
            // e.g. hdfs://192.168.50.107:8020/input and hdfs://192.168.50.107:8020/output
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
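The tokenize-then-sum computation the mapper and reducer perform together can be checked locally without a cluster. This sketch uses only plain JDK classes (the sample input string is an assumption, not from the job's data):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

public class LocalWordCount {
    public static void main(String[] args) {
        String input = "www baidu com www";
        // Mapper side: whitespace tokenization, conceptually emitting (word, 1);
        // Reducer side: summing the 1s per word. Here both merge into one map.
        Map<String, Integer> counts = new HashMap<>();
        StringTokenizer itr = new StringTokenizer(input);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        System.out.println(counts.get("www"));   // 2
        System.out.println(counts.get("baidu")); // 1
    }
}
```

StringTokenizer here is the same class the mapper uses, so this mirrors how a line of input is split.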

The location of the configuration files used to connect to Hadoop is shown in the figure below.

[Figure: location of the Hadoop configuration files in the project]

Running the job directly from Eclipse fails with the error: HADOOP_HOME and hadoop.home.dir are unset.
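If you do want to run from the IDE rather than on the cluster, one common workaround is to set hadoop.home.dir before the Job is created. This is only a sketch: the path is an assumption, and on Windows the directory must additionally contain bin\winutils.exe for it to work.

```java
public class HadoopHomeWorkaround {
    public static void main(String[] args) {
        // Hypothetical path to an unpacked Hadoop distribution.
        // Must be set before Job.getInstance(conf, ...) runs.
        System.setProperty("hadoop.home.dir", "/home/hadoop-user/hadoop-2.8.0");
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

In this article the jar is instead packaged and run on the Linux host, which avoids the issue entirely.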

Compile and package, then copy the jar to the Linux system:

mvn clean

mvn compile

mvn package

I put the generated WordCount-0.0.1-SNAPSHOT.jar into the /home/hadoop-user/work directory.

Run on Linux:

hadoop jar WordCount-0.0.1-SNAPSHOT.jar hadoop_mapreduce.WordCount.WordCount hdfs://192.168.50.107:8020/input hdfs://192.168.50.107:8020/output

Note: in my setup, omitting the fully qualified class name causes an error because the WordCount class cannot be found. Put the files to be analyzed into the HDFS input directory; there is no need to create the output directory yourself (in fact, the job fails if it already exists). The results of the analysis end up in the output directory.
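Each reducer writes its results to the output directory as tab-separated "word TAB count" lines (in files such as part-r-00000). A line of that output can be parsed locally like this (a sketch; the sample line is an assumption):

```java
public class OutputLineParser {
    public static void main(String[] args) {
        // A sample line in the format TextOutputFormat writes: key TAB value
        String line = "hadoop\t3";
        String[] parts = line.split("\t");
        System.out.println(parts[0] + " appears " + parts[1] + " times");
    }
}
```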

