Writing a WordCount Program, Part 1: The Fixed Format

Hello everyone, nice to see you again. I'm 全栈君.

[Figure: WordCount data-flow diagram]

The map and reduce function signatures in MapReduce

In MapReduce, the map and reduce functions follow this general form:

map:    (K1, V1)       → list(K2, V2)
reduce: (K2, list(V2)) → list(K3, V3)
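To make the contract concrete, here is a plain-Java simulation of the two signatures for WordCount (no Hadoop required; the class and method names are made up for illustration): map turns one input record into word/1 pairs, the framework groups the pairs by key between the two phases, and reduce folds each group into a count.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.StringTokenizer;

// Plain-Java simulation of the MapReduce contract (hypothetical class, no Hadoop):
// map:    (K1, V1)       -> list(K2, V2)
// reduce: (K2, list(V2)) -> list(K3, V3)
public class MapReduceContract {

    // map: (line offset, line text) -> list of (word, 1)
    static List<Map.Entry<String, Integer>> map(long offset, String line) {
        List<Map.Entry<String, Integer>> out = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line);
        while (st.hasMoreTokens()) {
            out.add(Map.entry(st.nextToken(), 1));
        }
        return out;
    }

    // shuffle: group values by key, as the framework does between map and reduce
    static Map<String, List<Integer>> group(List<Map.Entry<String, Integer>> pairs) {
        Map<String, List<Integer>> grouped = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : pairs) {
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        return grouped;
    }

    // reduce: (word, list of counts) -> total count
    static int reduce(String key, List<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> mapped = map(0, "hadoop yarn hadoop");
        Map<String, List<Integer>> grouped = group(mapped);
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
            System.out.println(e.getKey() + "\t" + reduce(e.getKey(), e.getValue()));
        }
    }
}
```

Running this prints each distinct word with its count; the real job does the same work, only distributed across map and reduce tasks.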
The Mapper base class:

protected void map(KEY key, VALUE value,
    Context context) throws IOException, InterruptedException {
}

The Reducer base class:

protected void reduce(KEY key, Iterable<VALUE> values,
    Context context) throws IOException, InterruptedException {
}

Context is the context object: the framework passes it into map and reduce, and the functions use it (via context.write) to emit their output.

Code template

WordCount code

The code follows a fixed pattern:

input --> map --> reduce --> output

The Java code below reimplements what this built-in example does:

bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    private static Log logger = LogFactory.getLog(WordCount.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        private static final IntWritable mapOutputValues = new IntWritable(1); // single shared instance
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get configuration files: core-site.xml, hdfs-site.xml
        Configuration configuration = new Configuration();
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true);
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        int rv = new WordCount().run(args);
        System.exit(rv);
    }
}


How the map class processes data

Map processing logic:

-------- input --------
<0, hadoop yarn>
-------- processing --------
"hadoop yarn" --split--> hadoop, yarn
-------- output --------
<hadoop,1>
<yarn,1>
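The split step above can be simulated with the same StringTokenizer the mapper uses (plain Java, class name hypothetical): one call over the record <0, "hadoop yarn"> emits <hadoop,1> and <yarn,1>.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

// Simulates one map() call over the record <0, "hadoop yarn">.
public class MapStepDemo {
    static List<String> mapOneLine(String line) {
        List<String> emitted = new ArrayList<>();
        StringTokenizer st = new StringTokenizer(line); // splits on whitespace, like the mapper
        while (st.hasMoreTokens()) {
            emitted.add("<" + st.nextToken() + ",1>");  // each word paired with the constant 1
        }
        return emitted;
    }

    public static void main(String[] args) {
        System.out.println(mapOneLine("hadoop yarn")); // [<hadoop,1>, <yarn,1>]
    }
}
```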

public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        // single shared instance
        private static final IntWritable mapOutputValues =  new IntWritable(1);
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }

How the reduce class processes data

Reduce processing flow: map --> shuffle --> reduce

-------- input (the map output) --------
<hadoop,1>
<hadoop,1>
<hadoop,1>
-------- grouping --------
Values with the same key are merged into one collection:
<hadoop,1>
<hadoop,1>    ->  <hadoop, list(1,1,1)>
<hadoop,1>
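A minimal sketch of the summation reduce performs over <hadoop, list(1,1,1)> (plain Java, class name hypothetical; same loop as the reducer below):

```java
import java.util.List;

// After the shuffle, reduce receives <hadoop, list(1,1,1)> and sums the list.
public class ReduceStepDemo {
    static int reduce(Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) sum += v; // accumulate every grouped count
        return sum;
    }

    public static void main(String[] args) {
        System.out.println("<hadoop," + reduce(List.of(1, 1, 1)) + ">"); // <hadoop,3>
    }
}
```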
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }

An improved way to write the MapReduce job

Have the job class extend Configured and implement the Tool interface:
override Tool's run method with the job setup, and let Configured
provide the configuration initialization.
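Why this split helps: ToolRunner parses generic options (e.g. -D key=value) into the Configuration before the job's run method sees the remaining arguments, so every job gets command-line overrides for free. A plain-Java sketch of that division of labor, using made-up interfaces rather than Hadoop's real API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Plain-Java sketch of the Tool/ToolRunner split (hypothetical types, not
// Hadoop's real API): the runner owns option parsing, the job only implements run().
public class ToolPatternSketch {

    interface Tool {
        int run(Map<String, String> conf, String[] args) throws Exception;
    }

    // Stand-in for ToolRunner.run(): pull -Dkey=value pairs into the config,
    // then hand the remaining arguments to the tool.
    static int runTool(Tool tool, String[] args) throws Exception {
        Map<String, String> conf = new HashMap<>();
        List<String> rest = new ArrayList<>();
        for (String a : args) {
            if (a.startsWith("-D") && a.contains("=")) {
                String[] kv = a.substring(2).split("=", 2);
                conf.put(kv[0], kv[1]);
            } else {
                rest.add(a);
            }
        }
        return tool.run(conf, rest.toArray(new String[0]));
    }

    public static void main(String[] args) throws Exception {
        int rv = runTool((conf, rest) -> {
            System.out.println("reduces=" + conf.getOrDefault("mapreduce.job.reduces", "1"));
            System.out.println("input=" + rest[0] + " output=" + rest[1]);
            return 0;
        }, new String[] { "-Dmapreduce.job.reduces=2", "input", "output" });
        System.out.println("exit=" + rv); // exit=0
    }
}
```

The real WordCountMapReduce below follows the same shape: main delegates to ToolRunner.run, and run only builds and submits the Job.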

package com.lizh.hadoop.mapreduce;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    private static Log logger = LogFactory.getLog(WordCountMapReduce.class);
    //step1 Mapper class
    
    public static class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable>{
        private Text mapoutputKey = new Text();
        private static final IntWritable mapOutputValues = new IntWritable(1); // single shared instance
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String linevalue = value.toString();
            StringTokenizer stringTokenizer = new StringTokenizer(linevalue);
            while(stringTokenizer.hasMoreTokens()){
                String workvalue = stringTokenizer.nextToken();
                mapoutputKey.set(workvalue);
                context.write(mapoutputKey, mapOutputValues);
                logger.info("-----WordCountMapper-----"+mapOutputValues.get());
            }
        }
        
    }
    
    //step2 Reduce class
    public static class WordCountReduces extends Reducer<Text, IntWritable, Text, IntWritable>{

        private IntWritable reduceOutputValues =  new IntWritable();
        
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable iv : values) {
                sum=sum+iv.get();
            }
            reduceOutputValues.set(sum);
            context.write(key, reduceOutputValues);
        }
        
    }
    
    //step3 driver component job 
    
    public int run(String[] args) throws Exception{
        //1 get configuration files: core-site.xml, hdfs-site.xml
        Configuration configuration = super.getConf(); // improvement: provided by Configured
        
        //2 create job
        Job job = Job.getInstance(configuration, this.getClass().getSimpleName());
        //3 run jar
        job.setJarByClass(this.getClass());
        
        //4 set job
        //input-->map--->reduce-->output
        //4.1 input
        Path path = new Path(args[0]);
        FileInputFormat.addInputPath(job, path);
        
        //4.2 map
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        
        //4.3 reduce
        job.setReducerClass(WordCountReduces.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        
        //4.4 output
        Path outputpath = new Path(args[1]);
        FileOutputFormat.setOutputPath(job, outputpath);
        
        //5 submit job
        boolean rv = job.waitForCompletion(true); // true: print progress to the console
        
        return rv ? 0:1;
        
    }
    
    public static void main(String[] args) throws Exception{
        
        //int rv = new WordCountMapReduce().run(args);
        Configuration configuration = new Configuration();
        // run via the ToolRunner utility class
        int rv  = ToolRunner.run(configuration, new WordCountMapReduce(), args);
        System.exit(rv);
    }
}

Abstracting a template

package com.lizh.hadoop.mapreduce;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountMapReduce extends Configured implements Tool {

    /**
     * Mapper Class : public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
     * 
     * @param args
     */
    public static class WordCountMapper extends //
            Mapper<LongWritable, Text, Text, LongWritable> {

        private Text mapOutputKey = new Text();
        private LongWritable mapOutputValue = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // split the line into words and emit <word, 1>
            java.util.StringTokenizer st = new java.util.StringTokenizer(value.toString());
            while (st.hasMoreTokens()) {
                mapOutputKey.set(st.nextToken());
                context.write(mapOutputKey, mapOutputValue);
            }
        }
    }

    /**
     * Reducer Class : public class Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
     * 
     * @param args
     */
    public static class WordCountReducer extends //
            Reducer<Text, LongWritable, Text, LongWritable> {

        private LongWritable outputValue = new LongWritable();

        @Override
        protected void reduce(Text key, Iterable<LongWritable> values,
                Context context) throws IOException, InterruptedException {
            // sum the counts for this key
            long sum = 0;
            for (LongWritable value : values) {
                sum += value.get();
            }
            outputValue.set(sum);
            context.write(key, outputValue);
        }
    }

    /**
     * Driver : Create\set\submit Job
     * 
     * @param args
     * @throws Exception
     */
    public int run(String[] args) throws Exception {
        // 1.Get Configuration
        Configuration conf = super.getConf();

        // 2.Create Job
        Job job = Job.getInstance(conf);
        job.setJarByClass(getClass());

        // 3.Set Job
        // Input --> map --> reduce --> output
        // 3.1 Input
        Path inPath = new Path(args[0]);
        FileInputFormat.addInputPath(job, inPath);

        // 3.2 Map class
        job.setMapperClass(WordCountMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);

        // 3.3 Reduce class
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // 3.4 Output
        Path outPath = new Path(args[1]);

        // delete the output path if it already exists; otherwise submission fails
        FileSystem dfs = FileSystem.get(conf);
        if (dfs.exists(outPath)) {
            dfs.delete(outPath, true);
        }

        FileOutputFormat.setOutputPath(job, outPath);

        // 4.Submit Job
        boolean isSuccess = job.waitForCompletion(true);
        return isSuccess ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // run job
        int status = ToolRunner.run(//
                conf,//
                new WordCountMapReduce(),//
                args);

        // exit program
        System.exit(status);
    }
}


Publisher: 全栈程序员-站长. Please credit the source when reposting: https://javaforall.net/107708.html Original link: https://javaforall.net
