Spark Streaming Join

Approaches to Joining Multiple Data Sources

There are roughly three approaches to joining multiple data sources:

  • Join at the data source: for example, the Android/iOS client fetches basic user profile information and attaches it to user behavior events before reporting them.

  • Join in the compute engine: for example, perform the join in Spark Streaming or Flink.

  • Join at the result store: for example, use HBase/ES, with the join key as the Rowkey/_id and the fields from each source written into separate column families, columns, or fields.

Each approach has its own trade-offs, so weigh them for your use case. The rest of this article focuses on doing the join in the compute engine, Spark Streaming; a minimal sketch of the result-store variant is shown right below for contrast.
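
For the result-store approach, each pipeline writes its own fields into the same row, so the "join" happens implicitly in the store. A minimal sketch, assuming the HBase client is on the classpath and using a hypothetical user_events table with event and profile column families (none of these names come from the original article):

import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}

val conn  = ConnectionFactory.createConnection(HBaseConfiguration.create())
val table = conn.getTable(TableName.valueOf("user_events"))        //hypothetical table

//The behavior pipeline writes event fields under the "event" column family...
val eventPut = new Put(Bytes.toBytes("user_1"))                    //rowkey = join key
eventPut.addColumn(Bytes.toBytes("event"), Bytes.toBytes("language"), Bytes.toBytes("chinese"))
table.put(eventPut)

//...while the profile pipeline writes user fields under the "profile" column family.
val profilePut = new Put(Bytes.toBytes("user_1"))
profilePut.addColumn(Bytes.toBytes("profile"), Bytes.toBytes("name"), Bytes.toBytes("name_1"))
table.put(profilePut)

table.close()
conn.close()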

Stream-Static Join

Joining a Stream with Fully Static Data

There are two ways to join a stream with fully static data: the RDD Join approach and the Broadcast Join (also called Map-Side Join) approach.

RDD Join Approach

Idea: join the batch RDD of the stream against the static RDD (RDD join RDD).

package com.bigData.spark

import com.alibaba.fastjson.{JSON, JSONException, JSONObject}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}

/**
  * Author: Wang Pei
  * License: Copyright(c) Pei.Wang
  * Summary: Stream-Static Join
  * Spark 2.2.2
  */
case class UserInfo(userID:String,userName:String,userAddress:String)
object StreamStaticJoin {

  def main(args: Array[String]): Unit = {

    // Set log level
    Logger.getLogger("org").setLevel(Level.WARN)

    // Kafka parameters
    val kafkaParams= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "testTopic3_consumer_v1")

    // Spark configuration and streaming context
    val sparkConf = new SparkConf().setAppName(this.getClass.getSimpleName.replace("$","")).setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Durations.seconds(10))

    /** 1) Static data: basic user info */
    val userInfo=ssc.sparkContext.parallelize(Array(
      UserInfo("user_1","name_1","address_1"),
      UserInfo("user_2","name_2","address_2"),
      UserInfo("user_3","name_3","address_3"),
      UserInfo("user_4","name_4","address_4"),
      UserInfo("user_5","name_5","address_5")
    )).map(item=>(item.userID,item))


    /** 2) Streaming data: tweets posted by users */
    /**
      * Sample record:
      * eventTime: event time, retweetCount: retweet count, language: language,
      * userID: user ID, favoriteCount: like count, id: event ID
      * {"eventTime": "2018-11-05 10:04:00", "retweetCount": 1, "language": "chinese", "userID": "user_1", "favoriteCount": 1, "id": 4909846540155641457}
      */

    val kafkaDStream=KafkaUtils.createDirectStream[String,String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String,String](Set("testTopic3"),kafkaParams)
    ).map(item=>parseJson(item.value()))
      .filter(_!=null)  //drop records that failed to parse
      .map(item=>{
      val userID = item.getString("userID")
      val eventTime = item.getString("eventTime")
      val language= item.getString("language")
      val favoriteCount = item.getInteger("favoriteCount")
      val retweetCount = item.getInteger("retweetCount")
      (userID,(userID,eventTime,language,favoriteCount,retweetCount))
    })


    /** 3) Join the stream with the static data (RDD Join approach) */
    kafkaDStream.foreachRDD(_.join(userInfo).foreach(println))

    ssc.start()
    ssc.awaitTermination()

  }

  /** Parse JSON; returns null for malformed records */
  def parseJson(log:String):JSONObject={
    var ret:JSONObject=null
    try{
      ret=JSON.parseObject(log)
    }catch {
      //log and skip malformed JSON
      case e:JSONException => println(log)
    }
    ret
  }

}

(Figure: stream_static_rdd_join.png)
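
Note that with this approach the static userInfo RDD takes part in a shuffle join on every batch interval. As a small, assumed tweak (not in the original code), the static RDD can be cached so it is not rebuilt each batch:

    /** 1) Static data: basic user info, cached so it is not recomputed every batch */
    val userInfo=ssc.sparkContext.parallelize(Array(
      UserInfo("user_1","name_1","address_1"),
      UserInfo("user_2","name_2","address_2")   //remaining users as in the listing above
    )).map(item=>(item.userID,item)).cache()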

Broadcast Join Approach

Idea: iterate over each record in the RDD and match it against the values in a broadcast variable.

package com.bigData.spark

import com.alibaba.fastjson.{JSON, JSONException, JSONObject}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}

/**
  * Author: Wang Pei
  * License: Copyright(c) Pei.Wang
  * Summary: Stream-Static Join
  * Spark 2.2.2
  */
case class UserInfo(userID:String,userName:String,userAddress:String)
object StreamStaticJoin2 {

  def main(args: Array[String]): Unit = {

    // Set log level
    Logger.getLogger("org").setLevel(Level.WARN)

    // Kafka parameters
    val kafkaParams= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "testTopic3_consumer_v1")

    // Spark configuration and streaming context
    val sparkConf = new SparkConf().setAppName(this.getClass.getSimpleName.replace("$","")).setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Durations.seconds(10))

    /** 1) Static data: basic user info, broadcast to the executors */
    val broadcastUserInfo=ssc.sparkContext.broadcast(
      Map(
        "user_1"->UserInfo("user_1","name_1","address_1"),
        "user_2"->UserInfo("user_2","name_2","address_2"),
        "user_3"->UserInfo("user_3","name_3","address_3"),
        "user_4"->UserInfo("user_4","name_4","address_4"),
        "user_5"->UserInfo("user_5","name_5","address_5")
      ))


    /** 2) Streaming data: tweets posted by users */
    /**
      * Sample record:
      * eventTime: event time, retweetCount: retweet count, language: language,
      * userID: user ID, favoriteCount: like count, id: event ID
      * {"eventTime": "2018-11-05 10:04:00", "retweetCount": 1, "language": "chinese", "userID": "user_1", "favoriteCount": 1, "id": 4909846540155641457}
      */
    val kafkaDStream=KafkaUtils.createDirectStream[String,String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String,String](List("testTopic3"),kafkaParams)
    ).map(item=>parseJson(item.value()))
      .filter(_!=null)  //drop records that failed to parse
      .map(item=>{
      val userID = item.getString("userID")
      val eventTime = item.getString("eventTime")
      val language= item.getString("language")
      val favoriteCount = item.getInteger("favoriteCount")
      val retweetCount = item.getInteger("retweetCount")
      (userID,(userID,eventTime,language,favoriteCount,retweetCount))
    })


    /** 3) Join the stream with the static data (Broadcast Join approach) */
    val result=kafkaDStream.mapPartitions(part=>{
      val userInfo = broadcastUserInfo.value
      part.map(item=>{
        (item._1,(item._2,userInfo.getOrElse(item._1,null)))
      })
    })

    result.foreachRDD(_.foreach(println))


    ssc.start()
    ssc.awaitTermination()

  }

  /** Parse JSON; returns null for malformed records */
  def parseJson(log:String):JSONObject={
    var ret:JSONObject=null
    try{
      ret=JSON.parseObject(log)
    }catch {
      //log and skip malformed JSON
      case e:JSONException => println(log)
    }
    ret
  }

}

(Figure: stream_static_rdd_join2.png)
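
One limitation of the broadcast approach is that the broadcast value is a snapshot taken when the job starts, so it only suits fully static data. If the data changes occasionally, one option is the Redis approach in the next section; another is to rebuild the broadcast on the driver inside transform. A minimal sketch, assuming a hypothetical loadUserInfo() loader and an illustrative 10-minute refresh interval (not part of the original code):

    //Rebuild the broadcast periodically; transform's body runs on the driver for each batch.
    var userInfoBroadcast = ssc.sparkContext.broadcast(loadUserInfo())   //loadUserInfo() is hypothetical
    var lastRefresh = System.currentTimeMillis()

    val refreshedJoin = kafkaDStream.transform { rdd =>
      if (System.currentTimeMillis() - lastRefresh > 10 * 60 * 1000) {
        userInfoBroadcast.unpersist()
        userInfoBroadcast = ssc.sparkContext.broadcast(loadUserInfo())
        lastRefresh = System.currentTimeMillis()
      }
      val current = userInfoBroadcast                                    //ship only the broadcast handle to executors
      rdd.map(item => (item._1, (item._2, current.value.getOrElse(item._1, null))))
    }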

Joining a Stream with Semi-Static Data

Semi-static data is data kept in a store such as Redis that gets updated over time.

Idea: open one Redis connection per RDD partition, iterate over the records in the partition, and look up the value in Redis by key.

package com.bigData.spark

import com.alibaba.fastjson.{JSON, JSONException, JSONObject}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}
import redis.clients.jedis.Jedis

/**
  * Author: Wang Pei
  * License: Copyright(c) Pei.Wang
  * Summary: Stream-Static Join
  * Spark 2.2.2
  */
object StreamStaticJoin3 {

  def main(args: Array[String]): Unit = {

    // Set log level
    Logger.getLogger("org").setLevel(Level.WARN)

    // Kafka parameters
    val kafkaParams= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "testTopic3_consumer_v1")

    // Spark configuration and streaming context
    val sparkConf = new SparkConf().setAppName(this.getClass.getSimpleName.replace("$","")).setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Durations.seconds(10))

    /** 1) Semi-static data: basic user info, stored in Redis as hashes */
    /** HMSET user_1 userID "user_1" name "name_1" address "address_1" */
    /** HMSET user_2 userID "user_2" name "name_2" address "address_2" */


    /** 2) Streaming data: tweets posted by users */
    /**
      * Sample record:
      * eventTime: event time, retweetCount: retweet count, language: language,
      * userID: user ID, favoriteCount: like count, id: event ID
      * {"eventTime": "2018-11-05 10:04:00", "retweetCount": 1, "language": "chinese", "userID": "user_1", "favoriteCount": 1, "id": 4909846540155641457}
      */

    val kafkaDStream=KafkaUtils.createDirectStream[String,String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String,String](Set("testTopic3"),kafkaParams)
    ).map(item=>parseJson(item.value()))
      .filter(_!=null)  //drop records that failed to parse
      .map(item=>{
      val userID = item.getString("userID")
      val eventTime = item.getString("eventTime")
      val language= item.getString("language")
      val favoriteCount = item.getInteger("favoriteCount")
      val retweetCount = item.getInteger("retweetCount")
      (userID,(userID,eventTime,language,favoriteCount,retweetCount))
    })

    /** 3) Join the stream with the semi-static data (per-partition Redis lookup) */
    val result=kafkaDStream.mapPartitions(part=>{
      //one Redis connection per partition (never explicitly closed here; see the pooled sketch below)
      val redisCli=connToRedis("localhost",6379,3000,10)
      part.map(item=>{
        (item._1,(item._2,redisCli.hmget(item._1,"userID","name","address")))
      })
    })

    result.foreachRDD(_.foreach(println))


    ssc.start()
    ssc.awaitTermination()

  }

  /** Parse JSON; returns null for malformed records */
  def parseJson(log:String):JSONObject={
    var ret:JSONObject=null
    try{
      ret=JSON.parseObject(log)
    }catch {
      //log and skip malformed JSON
      case e:JSONException => println(log)
    }
    ret
  }

  /** Connect to Redis */
  def connToRedis(redisHost:String,redisPort:Int,timeout:Int,dbNum:Int): Jedis ={
    val redisCli=new Jedis(redisHost,redisPort,timeout)
    redisCli.connect()
    redisCli.select(dbNum)
    redisCli
  }

}

(Figure: stream_static_join3.png)
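
In the code above a new Jedis connection is opened for every partition of every batch and is never explicitly closed. A minimal alternative sketch, assuming a hypothetical per-executor connection-pool helper (the object name and pool settings are illustrative, not from the original; database selection is omitted for brevity):

import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

//Hypothetical helper: one lazily initialized connection pool per executor JVM.
object RedisPool {
  lazy val pool = new JedisPool(new JedisPoolConfig(), "localhost", 6379, 3000)
  def withJedis[T](f: Jedis => T): T = {
    val jedis = pool.getResource
    try f(jedis) finally jedis.close()   //close() returns the connection to the pool
  }
}

//Inside main, replacing step 3:
    val result = kafkaDStream.mapPartitions { part =>
      RedisPool.withJedis { jedis =>
        //materialize the partition so the connection is returned before downstream consumption
        part.map(item =>
          (item._1, (item._2, jedis.hmget(item._1, "userID", "name", "address")))
        ).toList.iterator
      }
    }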

Stream-Stream Join

Joining one stream with another stream.

Idea: DStream join DStream; records from the two streams are matched within each batch interval.

package com.bigData.spark

import com.alibaba.fastjson.{JSON, JSONException, JSONObject}
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Durations, StreamingContext}

/**
  * Author: Wang Pei
  * License: Copyright(c) Pei.Wang
  * Summary: Stream-Stream Join
  * Spark 2.2.2
  */
object StreamStreamJoin {

  def main(args: Array[String]): Unit = {

    // Set log level
    Logger.getLogger("org").setLevel(Level.WARN)

    // Kafka parameters (one consumer group per stream)
    val kafkaParams1= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "testTopic3_consumer_v1")

    val kafkaParams2= Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (true: java.lang.Boolean),
      "group.id" -> "testTopic4_consumer_v1")


    // Spark configuration and streaming context
    val sparkConf = new SparkConf().setAppName(this.getClass.getSimpleName.replace("$","")).setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Durations.seconds(10))

    /** 1) Streaming data: tweets posted by users (first topic) */
    /**
      * Sample record:
      * eventTime: event time, retweetCount: retweet count, language: language,
      * userID: user ID, favoriteCount: like count, id: event ID
      * {"eventTime": "2018-11-05 10:04:00", "retweetCount": 1, "language": "chinese", "userID": "user_1", "favoriteCount": 1, "id": 4909846540155641457}
      */

    val kafkaDStream1=KafkaUtils.createDirectStream[String,String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String,String](List("testTopic3"),kafkaParams1)
    ).map(item=>parseJson(item.value()))
      .filter(_!=null)  //drop records that failed to parse
      .map(item=>{
      val userID = item.getString("userID")
      val eventTime = item.getString("eventTime")
      val language= item.getString("language")
      val favoriteCount = item.getInteger("favoriteCount")
      val retweetCount = item.getInteger("retweetCount")
      (userID,(userID,eventTime,language,favoriteCount,retweetCount))
    })

    /** 2) Streaming data: tweets posted by users (second topic) */
    /**
      * Sample record: same schema as the first stream.
      * {"eventTime": "2018-11-05 10:04:00", "retweetCount": 1, "language": "chinese", "userID": "user_1", "favoriteCount": 1, "id": 4909846540155641457}
      */

    val kafkaDStream2=KafkaUtils.createDirectStream[String,String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String,String](List("testTopic4"),kafkaParams2)
    ).map(item=>parseJson(item.value()))
      .filter(_!=null)  //drop records that failed to parse
      .map(item=>{
      val userID = item.getString("userID")
      val eventTime = item.getString("eventTime")
      val language= item.getString("language")
      val favoriteCount = item.getInteger("favoriteCount")
      val retweetCount = item.getInteger("retweetCount")
      (userID,(userID,eventTime,language,favoriteCount,retweetCount))
    })

    /** 3) Stream-Stream Join*/
    val joinedDStream = kafkaDStream1.leftOuterJoin(kafkaDStream2)

    joinedDStream.foreachRDD(_.foreach(println))

    ssc.start()
    ssc.awaitTermination()

  }

  /** Parse JSON; returns null for malformed records */
  def parseJson(log:String):JSONObject={
    var ret:JSONObject=null
    try{
      ret=JSON.parseObject(log)
    }catch {
      //log and skip malformed JSON
      case e:JSONException => println(log)
    }
    ret
  }

}

(Figure: stream_stream_join.png)
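
Note that a plain DStream join only matches records that land in the same batch interval on both streams. If matching records can arrive a few minutes apart, a common option is to join windowed DStreams instead; a minimal sketch with an illustrative 5-minute window (not part of the original code):

    //Join the last 5 minutes of both streams; the window slides with every batch by default.
    val windowedJoin = kafkaDStream1
      .window(Durations.minutes(5))
      .leftOuterJoin(kafkaDStream2.window(Durations.minutes(5)))

    windowedJoin.foreachRDD(_.foreach(println))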
