Writing a WordCount Program, Part 1: The Fixed Format Explained

Hello everyone, good to see you again. I'm 全栈君.

(Figure: WordCount cause-and-effect diagram)

The map and reduce Function Signatures in MapReduce

In MapReduce, the map and reduce functions follow this general form:
map: (K1, V1) → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
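
For WordCount these generic types specialize as follows (a sketch, assuming the default TextInputFormat, which hands each map call a line's byte offset as the key and the line text as the value):

map: (LongWritable offset, Text line) → list(Text word, IntWritable 1)
reduce: (Text word, list(IntWritable counts)) → list(Text word, IntWritable sum)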
The Mapper base class method:

protected void map(KEY key, VALUE value,
        Context context) throws IOException, InterruptedException {
}

The Reducer base class method:

protected void reduce(KEY key, Iterable<VALUE> values,
        Context context) throws IOException, InterruptedException {
}

Context is the context object: map and reduce hand their output to the framework by calling its write method.
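
As a minimal sketch of how the context is used inside map (the literal word and count here are purely illustrative), a mapper emits one key/value pair like this:

    context.write(new Text("hadoop"), new IntWritable(1));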

Code Template

WordCount code

The basis for the code, i.e., the fixed pattern:

input --> map --> reduce --> output

The Java code below implements the same functionality as this command:

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    private static Log logger = LogFactory.getLog(WordCount.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text mapOutputKey = new Text();
        // a single shared instance: every word is emitted with a count of 1
        private static final IntWritable mapOutputValues = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words on whitespace
            String lineValue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(lineValue);
            while (stringTokenizer.hasMoreTokens()) {
                String wordValue = stringTokenizer.nextToken();
                mapOutputKey.set(wordValue);
                context.write(mapOutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----" + mapOutputValues.get());
            }
        }

    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable reduceOutputValues = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // sum the 1s emitted for this word
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }

    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get configuration (core-site.xml, hdfs-site.xml)
        Configuration configuration = new Configuration();
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true);
        
        return rv ? 0 : 1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        int rv = new WordCount().run(args);
        System.exit(rv);
    }
}
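
To try this version, package the class into a jar and submit it with the hadoop launcher. A sketch, where wc.jar and the two HDFS paths are hypothetical names for illustration:

bin/hadoop jar wc.jar com.lizh.hadoop.mapreduce.WordCount /user/hadoop/input /user/hadoop/output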


Map Class Processing Logic

The map stage's processing logic:

---- input ----
<0, hadoop yarn>
---- processing ----
hadoop yarn --> split --> hadoop, yarn
---- output ----
<hadoop, 1>
<yarn, 1>

The mapper below implements exactly this split-and-emit step.

public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private Text mapOutputKey = new Text();
    // a single shared instance: every word is emitted with a count of 1
    private static final IntWritable mapOutputValues = new IntWritable(1);

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // split the line into words on whitespace
        String lineValue = value.toString();
        StringTokenizer stringTokenizer = new StringTokenizer(lineValue);
        while (stringTokenizer.hasMoreTokens()) {
            String wordValue = stringTokenizer.nextToken();
            mapOutputKey.set(wordValue);
            context.write(mapOutputKey, mapOutputValues);
            logger.info("-----WordCountMapper-----" + mapOutputValues.get());
        }
    }
}

Reduce Class Processing Logic

The reduce stage runs after the shuffle: map --> shuffle --> reduce

---- input (the map stage's output) ----
<hadoop, 1>
<hadoop, 1>
<hadoop, 1>
---- grouping ----
Values sharing the same key are merged into one collection:
<hadoop, 1>
<hadoop, 1>    ->  <hadoop, list(1,1,1)>
<hadoop, 1>
---- output ----
reduce then sums the list: <hadoop, 3>

The reducer below implements this summation.

    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable reduceOutputValues = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // sum the 1s emitted for this word
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }

    }

An Improved MapReduce Pattern

The MapReduce job class extends Configured and implements the Tool interface:
the Tool interface's run method is overridden to hold the driver logic,
while Configured takes care of the configuration initialization.
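
The practical payoff is that ToolRunner passes the arguments through Hadoop's GenericOptionsParser before calling run, so configuration can be overridden on the command line without recompiling. A sketch (mapreduce.job.reduces is a standard Hadoop 2.x property; the jar name is hypothetical):

bin/hadoop jar wc.jar com.lizh.hadoop.mapreduce.WordCountMapReduce -Dmapreduce.job.reduces=2 input output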

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    private static Log logger = LogFactory.getLog(WordCountMapReduce.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private Text mapOutputKey = new Text();
        // a single shared instance: every word is emitted with a count of 1
        private static final IntWritable mapOutputValues = new IntWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words on whitespace
            String lineValue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(lineValue);
            while (stringTokenizer.hasMoreTokens()) {
                String wordValue = stringTokenizer.nextToken();
                mapOutputKey.set(wordValue);
                context.write(mapOutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----" + mapOutputValues.get());
            }
        }

    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable reduceOutputValues = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // sum the 1s emitted for this word
            int sum = 0;
            for (IntWritable iv : values) {
                sum = sum + iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }

    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get configuration (core-site.xml, hdfs-site.xml)
        Configuration configuration = super.getConf();// improvement: reuse the Configuration injected by ToolRunner
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true);// true: print job progress logs

        return rv ? 0 : 1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        //int rv = new WordCountMapReduce().run(args);
        Configuration configuration = new Configuration();
        // run via the ToolRunner utility
        int rv = ToolRunner.run(configuration, new WordCountMapReduce(), args);
        System.exit(rv);
    }
}

Extracting a Template

package com.lizh.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    /**
     * Mapper Class : public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountMapper extends //
            Mapper<LongWritable, Text, Text, LongWritable> {

        private Text mapOutputKey = new Text();
        private LongWritable mapOutputValue = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // business logic goes here: split the line and emit <word, 1>
        }
    }

    /**
     * Reducer Class : public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountReducer extends //
            Reducer<Text, LongWritable, Text, LongWritable> {

        private LongWritable outputValue = new LongWritable();

        @Override
        protected void reduce(Text key, Iterable<LongWritable> values,
                Context context) throws IOException, InterruptedException {
            // business logic goes here: sum the counts for this key and emit <word, sum>
        }
    }

    /**
     * Driver : Create\set\submit Job
     * 
     * @param args
     * @throws Exception
     */
    public int run(String[] args) throws Exception {
        // 1.Get Configuration
        Configuration conf = super.getConf();

        // 2.Create Job
        Job job = Job.getInstance(conf);
        job.setJarByClass(getClass());

        // 3.Set Job
        // Input --> map --> reduce --> output
        // 3.1 Input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // 3.2 Map class
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // 3.3 Reduce class
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 3.4 Output
        Path outPath = new Path(args[1]);

        // delete an existing output directory so the job can be re-run
        FileSystem dfs = FileSystem.get(conf);
        if (dfs.exists(outPath)) {
            dfs.delete(outPath, true);
        }

        FileOutputFormat.setOutputPath(job, outPath);

        // 4.Submit Job
        boolean isSuccess = job.waitForCompletion(true);
        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        // run job
        int status = ToolRunner.run(conf, new WordCountMapReduce(), args);

        // exit program
        System.exit(status);
    }
}
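
One design note on this template (my reading of the code above, not something the original spells out): FileOutputFormat refuses to start a job whose output directory already exists, so deleting outPath up front is what lets the same command be re-run during development without a manual cleanup step.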
