Writing a WordCount Program, Part 1: The Fixed Format Explained

Hello everyone, good to see you again. I'm 全栈君.

(Figure: WordCount data-flow diagram)

The map and reduce function formats in MapReduce

In MapReduce, the map and reduce functions follow this general form:
map: (K1, V1) → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
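For WordCount specifically, with the default TextInputFormat feeding the mapper, the generic types are instantiated as:
map: (LongWritable offset, Text line) → list(Text word, IntWritable 1)
reduce: (Text word, list(IntWritable ones)) → list(Text word, IntWritable sum)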
The Mapper base class declares:
protected void map(KEYIN key, VALUEIN value,
    Context context) throws IOException, InterruptedException {
}
The Reducer base class declares:
protected void reduce(KEYIN key, Iterable<VALUEIN> values,
    Context context) throws IOException, InterruptedException {
}

Context is the job's context object: map and reduce use it to emit output key/value pairs and to reach the job configuration.
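A minimal illustrative sketch of the two most common Context calls (this EchoMapper class and the demo.marker property are made up for illustration, not part of the WordCount program):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class EchoMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // read a value from the job configuration (property name is illustrative)
        String marker = context.getConfiguration().get("demo.marker", "");
        // emit an output key/value pair to the framework
        context.write(key, new Text(marker + value.toString()));
    }
}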

Code template

WordCount code

The code follows a fixed pattern: input --> map --> reduce --> output.
The Java code below implements the same functionality as the command bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    private static Log logger = LogFactory.getLog(WordCount.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        // a single shared instance; its value is always 1
        private static final IntWritable mapOutputValues = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words and emit <word, 1> for each token
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get the configuration (core-site.xml, hdfs-site.xml)
        Configuration configuration = new Configuration();
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true);
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        int rv = new WordCount().run(args);
        System.exit(rv);
    }
}
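Assuming the class above is packaged into a jar (the jar name wc.jar below is illustrative), it can be submitted the same way: bin/yarn jar wc.jar com.lizh.hadoop.mapreduce.WordCount input output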


The map class: processing logic

The map processing logic:
-------- input --------
<0, hadoop yarn>
-------- processing --------
hadoop yarn --> split --> hadoop, yarn
-------- output --------
<hadoop, 1>
<yarn, 1>
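The split step is just java.util.StringTokenizer at work; a minimal standalone sketch (illustrative, not part of the job):

import java.util.StringTokenizer;

public class SplitDemo {
    public static void main(String[] args) {
        // whitespace-delimited tokenizing, exactly as the mapper does it
        StringTokenizer st = new StringTokenizer("hadoop yarn");
        while (st.hasMoreTokens()) {
            System.out.println(st.nextToken()); // prints "hadoop", then "yarn"
        }
    }
}

The mapper below applies this split to every input line: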

public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        // a single shared instance; its value is always 1
        private static final IntWritable mapOutputValues =  new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words and emit <word, 1> for each token
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }

The reduce class: processing flow

The reduce stage runs after the shuffle: map --> shuffle --> reduce

------------ input (the map output) ------------
<hadoop,1>
<hadoop,1>
<hadoop,1>
---------------- grouping ----------------
Values that share the same key are merged into one collection:
<hadoop,1>
<hadoop,1>    ->  <hadoop,list(1,1,1)>
<hadoop,1>
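The framework performs this grouping during the shuffle; purely as an illustration (plain Java, no Hadoop involved), the effect is equivalent to:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupDemo {
    public static void main(String[] args) {
        // simulated map output: <hadoop,1> <yarn,1> <hadoop,1>
        String[][] mapOutput = {{"hadoop", "1"}, {"yarn", "1"}, {"hadoop", "1"}};
        Map<String, List<Integer>> grouped = new HashMap<>();
        for (String[] kv : mapOutput) {
            grouped.computeIfAbsent(kv[0], k -> new ArrayList<>())
                   .add(Integer.parseInt(kv[1]));
        }
        // e.g. {hadoop=[1, 1], yarn=[1]} (HashMap iteration order is not guaranteed)
        System.out.println(grouped);
    }
}

The reducer then only has to sum each collection: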
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }

An improved way to write the MapReduce driver

The driver class extends Configured and implements the Tool interface.
The run method declared by Tool is overridden to build and submit the job.
Configured provides the configuration initialization.
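The practical payoff is that ToolRunner parses generic command-line options such as -D key=value and hands them to the job through getConf(); a minimal sketch (the class name ConfEchoTool is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ConfEchoTool extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // any -D key=value passed on the command line is already in getConf() here
        System.out.println("mapreduce.job.reduces = "
                + getConf().get("mapreduce.job.reduces", "(default)"));
        return 0;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner strips generic options from args before calling run()
        System.exit(ToolRunner.run(new Configuration(), new ConfEchoTool(), args));
    }
}

The full WordCount driver rewritten in this style: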

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    private static Log logger = LogFactory.getLog(WordCountMapReduce.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        // a single shared instance; its value is always 1
        private static final IntWritable mapOutputValues = new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words and emit <word, 1> for each token
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get the configuration; with Configured/Tool it is injected by ToolRunner
        Configuration configuration = super.getConf();
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true); // true: print progress logs
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        //int rv = new WordCountMapReduce().run(args); // the old, direct way
        Configuration configuration = new Configuration();
        // run through the ToolRunner utility so generic options are parsed
        int rv  = ToolRunner.run(configuration, new WordCountMapReduce(), args);
        System.exit(rv);
    }
}

Abstracting a reusable template

package com.lizh.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    /**
     * Mapper Class : public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountMapper extends //
            Mapper<LongWritable, Text, Text, LongWritable> {

        private Text mapOutputKey = new Text();
        private LongWritable mapOutputValue = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // per-job map logic goes here: parse value, call context.write(...)
        }
    }

    /**
     * Reducer Class : public class Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     */
    public static class WordCountReducer extends //
            Reducer<Text, LongWritable, Text, LongWritable> {

        private LongWritable outputValue = new LongWritable();

        @Override
        protected void reduce(Text key, Iterable<LongWritable> values,
                Context context) throws IOException, InterruptedException {
            // per-job reduce logic goes here: e.g. sum the values, write <key, sum>
        }
    }

    /**
     * Driver : create / set / submit the Job
     * 
     * @param args
     * @throws Exception
     */
    public int run(String[] args) throws Exception {
        // 1.Get Configuration
        Configuration conf = super.getConf();

        // 2.Create Job
        Job job = Job.getInstance(conf);
        job.setJarByClass(getClass());

        // 3.Set Job
        // Input --> map --> reduce --> output
        // 3.1 Input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // 3.2 Map class
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // 3.3 Reduce class
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 3.4 Output
        Path outPath = new Path(args[1]);

        // delete the output path if it already exists, otherwise submission fails
        FileSystem dfs = FileSystem.get(conf);
        if (dfs.exists(outPath)) {
            dfs.delete(outPath, true);
        }

        FileOutputFormat.setOutputPath(job, outPath);

        // 4.Submit Job
        boolean isSuccess = job.waitForCompletion(true);
        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {

        Configuration conf = new Configuration();

        // run job
        int status = ToolRunner.run(//
                conf,//
                new WordCountMapReduce(),//
                args);

        // exit program
        System.exit(status);
    }
}
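As a common, optional refinement (not shown in the listings above): because WordCount's reduce is associative and commutative, the same reducer class can also be registered as a combiner alongside step 3.3, cutting shuffle traffic by pre-summing on the map side:

        // optional: run the reducer as a combiner on the map side
        job.setCombinerClass(WordCountReducer.class);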
