
65. Spark Streaming: Data Reception — Principles and Source Code Analysis

1. Data reception principles
2. Source code analysis — entry point: the onStart() method of the ReceiverSupervisorImpl class, in the org.apache.spark.streaming.receiver package

1. Data Reception Principles

[Figure: data reception flow (original image not preserved)]

In outline: the Receiver running on an executor hands each received record to a BlockGenerator, which buffers records in currentBuffer. Every spark.streaming.blockInterval, a timer packages the buffer into a block and enqueues it in blocksForPushing; a pushing thread then stores each block through the BlockManager (and, if enabled, a write-ahead log) and reports it to the ReceiverTracker on the driver via an AddBlock message. The source walkthrough below follows exactly this path.

2. Source Code Analysis

The entry point is the onStart() method of the ReceiverSupervisorImpl class, in the org.apache.spark.streaming.receiver package.

###org.apache.spark.streaming.receiver/ReceiverSupervisorImpl.scala
override protected def onStart() {
  // The blockGenerator here is central to data reception. It runs inside the
  // worker's executor, handles buffering and storing of received data, and
  // cooperates with the ReceiverTracker.
  // On the executor, before the Receiver itself is started, the blockGenerator
  // associated with that Receiver is started first; it is a critical component
  // in data reception.
  blockGenerator.start()
}
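The comment above says the blockGenerator handles storing received data; for context, here is how records actually reach it. This is a paraphrased sketch of the relevant pieces of ReceiverSupervisorImpl and BlockGenerator (exact method names and signatures differ slightly across Spark 1.x versions): a Receiver's store(record) call lands in pushSingle(), which appends the record to currentBuffer.

// Paraphrased sketch, not verbatim source (Spark 1.x).
// In ReceiverSupervisorImpl: a Receiver's store(record) call lands here
def pushSingle(data: Any) {
  blockGenerator.addData(data)
}

// In BlockGenerator: append one record to the current buffer
def addData(data: Any): Unit = synchronized {
  waitToPush()          // rate limiting (spark.streaming.receiver.maxRate)
  currentBuffer += data // buffered until the timer packages it into a block
}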

So ReceiverSupervisorImpl's onStart() method simply calls blockGenerator.start(). Let's step into it.

###org.apache.spark.streaming.receiver/BlockGenerator.scala
def start() {
  // BlockGenerator.start() just starts its two key background threads:
  // blockIntervalTimer, which packages the raw data in currentBuffer into
  // blocks, and blockPushingThread, which takes blocks out of blocksForPushing
  // and hands them on via pushArrayBuffer()
  blockIntervalTimer.start()
  blockPushingThread.start()
  logInfo("Started BlockGenerator")
}

blockGenerator.start() calls blockIntervalTimer.start() and blockPushingThread.start().
First, look at how the relevant fields are defined:

###org.apache.spark.streaming.receiver/BlockGenerator.scala
// blockInterval has a default: spark.streaming.blockInterval, 200ms. Every
// 200ms the timer invokes the updateCurrentBuffer function
private val blockInterval = conf.getLong("spark.streaming.blockInterval", 200)
private val blockIntervalTimer =
  new RecurringTimer(clock, blockInterval, updateCurrentBuffer, "BlockGenerator")
// The length of the blocksForPushing queue is tunable via
// spark.streaming.blockQueueSize; the default is 10, and it can be raised or lowered
private val blockQueueSize = conf.getInt("spark.streaming.blockQueueSize", 10)
// The blocksForPushing queue itself
private val blocksForPushing = new ArrayBlockingQueue[Block](blockQueueSize)
// blockPushingThread is a background thread; once started it runs
// keepPushingBlocks(), which periodically takes blocks out of blocksForPushing
private val blockPushingThread = new Thread() { override def run() { keepPushingBlocks() } }
// currentBuffer holds the raw received records
@volatile private var currentBuffer = new ArrayBuffer[Any]
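The timer's callback, updateCurrentBuffer, is not shown in the original article. Here is a paraphrased sketch from the same BlockGenerator.scala (details differ slightly across Spark 1.x versions), showing how currentBuffer is swapped out and packaged into a block:

// Paraphrased sketch of BlockGenerator.updateCurrentBuffer (Spark 1.x).
private def updateCurrentBuffer(time: Long): Unit = synchronized {
  try {
    // Atomically swap in a fresh buffer so the receiver can keep appending
    val newBlockBuffer = currentBuffer
    currentBuffer = new ArrayBuffer[Any]
    if (newBlockBuffer.size > 0) {
      // Package the buffered records into a Block and enqueue it.
      // put() blocks when blocksForPushing is full, which throttles reception.
      val blockId = StreamBlockId(receiverId, time - blockInterval)
      blocksForPushing.put(new Block(blockId, newBlockBuffer))
    }
  } catch {
    case ie: InterruptedException =>
      logInfo("Block updating timer thread was interrupted")
    case e: Exception =>
      reportError("Error in block updating thread", e)
  }
}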

blockIntervalTimer.start() simply starts the timer thread, so we won't trace into it here; for intuition, a rough sketch of what such a recurring timer does follows.
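This is an illustrative sketch only, not the actual org.apache.spark.util.RecurringTimer source; it just shows the shape of a timer loop that fires updateCurrentBuffer every blockInterval milliseconds:

// Illustrative sketch: fire a callback every `period` ms on a daemon thread.
private class SimpleRecurringTimer(period: Long, callback: Long => Unit, name: String) {
  @volatile private var stopped = false
  private val thread = new Thread(name) {
    override def run() {
      var nextTime = System.currentTimeMillis() + period
      while (!stopped) {
        val sleepMs = nextTime - System.currentTimeMillis()
        if (sleepMs > 0) Thread.sleep(sleepMs)
        callback(nextTime) // here: updateCurrentBuffer(nextTime)
        nextTime += period
      }
    }
  }
  thread.setDaemon(true)
  def start() { thread.start() }
  def stop() { stopped = true }
}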
The one to focus on is blockPushingThread.start(): once this thread starts running, it calls the keepPushingBlocks() method.


Here is keepPushingBlocks():

###org.apache.spark.streaming.receiver/BlockGenerator.scala
private def keepPushingBlocks() {
  logInfo("Started block pushing thread")
  try {
    while (!stopped) {
      // Poll the block at the head of the blocksForPushing queue,
      // with a 100ms timeout on the blocking queue
      Option(blocksForPushing.poll(100, TimeUnit.MILLISECONDS)) match {
        // If a block was obtained, push it via pushBlock()
        case Some(block) => pushBlock(block)
        case None =>
      }
    }
    // Push out the blocks that are still left
    logInfo("Pushing out the last " + blocksForPushing.size() + " blocks")
    while (!blocksForPushing.isEmpty) {
      logDebug("Getting block ")
      val block = blocksForPushing.take()
      pushBlock(block)
      logInfo("Blocks left to push " + blocksForPushing.size())
    }
    logInfo("Stopped block pushing thread")
  } catch {
    case ie: InterruptedException =>
      logInfo("Block pushing thread was interrupted")
    case e: Exception =>
      reportError("Error in block pushing thread", e)
  }
}

As keepPushingBlocks() shows, whenever a block is obtained, pushBlock() is called.
Here is pushBlock():

###org.apache.spark.streaming.receiver/BlockGenerator.scala
private def pushBlock(block: Block) {
  listener.onPushBlock(block.id, block.buffer)
  logInfo("Pushed block " + block.id)
}

pushBlock() calls listener.onPushBlock(). This listener is a BlockGeneratorListener, and its onPushBlock() is implemented in ReceiverSupervisorImpl.
Here is ReceiverSupervisorImpl's onPushBlock():

###org.apache.spark.streaming.receiver/ReceiverSupervisorImpl.scala
// onPushBlock calls pushArrayBuffer() to push the block
def onPushBlock(blockId: StreamBlockId, arrayBuffer: ArrayBuffer[_]) {
  pushArrayBuffer(arrayBuffer, None, Some(blockId))
}
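For context, the BlockGeneratorListener trait that ReceiverSupervisorImpl implements looks roughly like this (a paraphrased sketch; the exact set of callbacks varies across Spark 1.x versions):

// Paraphrased sketch of the BlockGeneratorListener callbacks (Spark 1.x).
private[streaming] trait BlockGeneratorListener {
  // Called when a block is ready to be pushed out of the BlockGenerator
  def onPushBlock(blockId: StreamBlockId, arrayBuffer: ArrayBuffer[_]): Unit
  // Called when an error occurs inside the BlockGenerator's threads
  def onError(message: String, throwable: Throwable): Unit
}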


onPushBlock() then calls the pushArrayBuffer() method.
Here is pushArrayBuffer():

###org.apache.spark.streaming.receiver/ReceiverSupervisorImpl.scala
def pushArrayBuffer(
    arrayBuffer: ArrayBuffer[_],
    metadataOption: Option[Any],
    blockIdOption: Option[StreamBlockId]
  ) {
  pushAndReportBlock(ArrayBufferBlock(arrayBuffer), metadataOption, blockIdOption)
}

pushArrayBuffer() wraps the buffer in an ArrayBufferBlock and delegates to pushAndReportBlock():

###org.apache.spark.streaming.receiver/ReceiverSupervisorImpl.scala
def pushAndReportBlock(
    receivedBlock: ReceivedBlock,
    metadataOption: Option[Any],
    blockIdOption: Option[StreamBlockId]
  ) {
  val blockId = blockIdOption.getOrElse(nextBlockId)
  val numRecords = receivedBlock match {
    case ArrayBufferBlock(arrayBuffer) => arrayBuffer.size
    case _ => -1
  }
  val time = System.currentTimeMillis
  // Use receivedBlockHandler to call storeBlock(), which stores the block in
  // the BlockManager; this is also where the write-ahead log mechanism comes in
  val blockStoreResult = receivedBlockHandler.storeBlock(blockId, receivedBlock)
  logDebug(s"Pushed block $blockId in ${(System.currentTimeMillis - time)} ms")
  // Wrap everything in a ReceivedBlockInfo object, which carries the streamId
  val blockInfo = ReceivedBlockInfo(streamId, numRecords, blockStoreResult)
  // Call ask() on the ReceiverTracker's actor, sending it an AddBlock message
  val future = trackerActor.ask(AddBlock(blockInfo))(askTimeout)
  Await.result(future, askTimeout)
  logDebug(s"Reported block $blockId")
}

The two key calls here are receivedBlockHandler.storeBlock() and trackerActor.ask(AddBlock(blockInfo))(askTimeout).
First, receivedBlockHandler.storeBlock() — what exactly is receivedBlockHandler?

###org.apache.spark.streaming.receiver/ReceiverSupervisorImpl.scala
private val receivedBlockHandler: ReceivedBlockHandler = {
  // Depends on whether the write-ahead log is enabled:
  // spark.streaming.receiver.writeAheadLog.enable, default false.
  // If true, receivedBlockHandler is a WriteAheadLogBasedBlockHandler;
  // otherwise it is a BlockManagerBasedBlockHandler.
  if (env.conf.getBoolean("spark.streaming.receiver.writeAheadLog.enable", false)) {
    if (checkpointDirOption.isEmpty) {
      throw new SparkException(
        "Cannot enable receiver write-ahead log without checkpoint directory set. " +
        "Please use streamingContext.checkpoint() to set the checkpoint directory. " +
        "See documentation for more details.")
    }
    new WriteAheadLogBasedBlockHandler(env.blockManager, receiver.streamId,
      receiver.storageLevel, env.conf, hadoopConf, checkpointDirOption.get)
  } else {
    new BlockManagerBasedBlockHandler(env.blockManager, receiver.storageLevel)
  }
}
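As the code above shows, enabling the write-ahead log requires both the config flag and a checkpoint directory. A minimal sketch of the driver-side setup (the app name and checkpoint path below are illustrative):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("wal-demo") // illustrative name
  // Selects WriteAheadLogBasedBlockHandler on the receiver side
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(5))
// Required when the WAL is enabled, otherwise the SparkException above is thrown
ssc.checkpoint("hdfs://namenode:8020/checkpoints/wal-demo") // illustrative path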

Next, look at the storeBlock() methods of BlockManagerBasedBlockHandler and WriteAheadLogBasedBlockHandler in turn.
First, WriteAheadLogBasedBlockHandler:

###org.apache.spark.streaming.receiver/ReceivedBlockHandler.scala
def storeBlock(blockId: StreamBlockId, block: ReceivedBlock): ReceivedBlockStoreResult = {
  // Serialize the block so that it can be inserted into both
  // (first serialize the data with the BlockManager)
  val serializedBlock = block match {
    case ArrayBufferBlock(arrayBuffer) =>
      blockManager.dataSerialize(blockId, arrayBuffer.iterator)
    case IteratorBlock(iterator) =>
      blockManager.dataSerialize(blockId, iterator)
    case ByteBufferBlock(byteBuffer) =>
      byteBuffer
    case _ =>
      throw new Exception(s"Could not push $blockId to block manager, unexpected block type")
  }
  // Store the block in block manager.
  // Save the data to the BlockManager. The receiver's default StorageLevel
  // carries _SER and _2: the data is serialized and a replica is copied to
  // another executor's BlockManager for fault tolerance.
  val storeInBlockManagerFuture = Future {
    val putResult =
      blockManager.putBytes(blockId, serializedBlock, effectiveStorageLevel, tellMaster = true)
    if (!putResult.map { _._1 }.contains(blockId)) {
      throw new SparkException(
        s"Could not store $blockId to block manager with storage level $storageLevel")
    }
  }
  // Store the block in write ahead log, via logManager's writeToLog() method
  val storeInWriteAheadLogFuture = Future {
    logManager.writeToLog(serializedBlock)
  }
  // Combine the futures, wait for both to complete, and return the write ahead log segment
  val combinedFuture = storeInBlockManagerFuture.zip(storeInWriteAheadLogFuture).map(_._2)
  val segment = Await.result(combinedFuture, blockStoreTimeout)
  WriteAheadLogBasedStoreResult(blockId, segment)
}
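Note that the BlockManager write above uses effectiveStorageLevel rather than the raw receiver level. In the same class, the effective level is derived roughly as follows (a paraphrased sketch; when the WAL is enabled, replication is dropped to 1 because the log already provides durability):

// Paraphrased sketch from WriteAheadLogBasedBlockHandler (Spark 1.x):
// with a WAL, deserialized storage and >1 replication are unnecessary.
private val effectiveStorageLevel = {
  if (storageLevel.deserialized) {
    logWarning("Deserialized storage is not supported when the WAL is enabled")
  }
  if (storageLevel.replication > 1) {
    logWarning("Replication > 1 is unnecessary when the WAL is enabled")
  }
  StorageLevel(storageLevel.useDisk, storageLevel.useMemory,
    storageLevel.useOffHeap, deserialized = false, replication = 1)
}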

Now BlockManagerBasedBlockHandler:

###org.apache.spark.streaming.receiver/ReceivedBlockHandler.scala
// Simply saves the data straight into the BlockManager
def storeBlock(blockId: StreamBlockId, block: ReceivedBlock): ReceivedBlockStoreResult = {
  val putResult: Seq[(BlockId, BlockStatus)] = block match {
    case ArrayBufferBlock(arrayBuffer) =>
      blockManager.putIterator(blockId, arrayBuffer.iterator, storageLevel, tellMaster = true)
    case IteratorBlock(iterator) =>
      blockManager.putIterator(blockId, iterator, storageLevel, tellMaster = true)
    case ByteBufferBlock(byteBuffer) =>
      blockManager.putBytes(blockId, byteBuffer, storageLevel, tellMaster = true)
    case o =>
      throw new SparkException(
        s"Could not store $blockId to block manager, unexpected block type ${o.getClass.getName}")
  }
  if (!putResult.map { _._1 }.contains(blockId)) {
    throw new SparkException(
      s"Could not store $blockId to block manager with storage level $storageLevel")
  }
  BlockManagerBasedStoreResult(blockId)
}
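The storageLevel used by both handlers comes from the receiver itself, which gets it when the input stream is created. For example, for a socket stream (host and port here are illustrative, and ssc is a StreamingContext as in the earlier sketch):

import org.apache.spark.storage.StorageLevel

// MEMORY_AND_DISK_SER_2 is the default for socketTextStream: serialized,
// spilled to disk if needed, and replicated to one other executor.
val lines = ssc.socketTextStream("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER_2)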

Next, trackerActor.ask(AddBlock(blockInfo))(askTimeout) sends an AddBlock message to the ReceiverTracker. Stepping into ReceiverTracker:

###org.apache.spark.streaming.scheduler/ReceiverTracker.scala
private def addBlock(receivedBlockInfo: ReceivedBlockInfo): Boolean = {
  receivedBlockTracker.addBlock(receivedBlockInfo)
}

Next comes receivedBlockTracker's addBlock() method; besides the method itself, several important member variables of ReceivedBlockTracker are worth looking at.
The method first:

###org.apache.spark.streaming.scheduler/ReceivedBlockTracker.scala
def addBlock(receivedBlockInfo: ReceivedBlockInfo): Boolean = synchronized {
  try {
    writeToLog(BlockAdditionEvent(receivedBlockInfo))
    getReceivedBlockQueue(receivedBlockInfo.streamId) += receivedBlockInfo
    logDebug(s"Stream ${receivedBlockInfo.streamId} received " +
      s"block ${receivedBlockInfo.blockStoreResult.blockId}")
    true
  } catch {
    case e: Exception =>
      logError(s"Error adding block $receivedBlockInfo", e)
      false
  }
}

Now the variables:

###org.apache.spark.streaming.scheduler/ReceivedBlockTracker.scala
// Maps each streamId to its queue of (not yet allocated) blocks
private val streamIdToUnallocatedBlockQueues = new mutable.HashMap[Int, ReceivedBlockQueue]
// Maps each batch time to the blocks allocated to that batch
private val timeToAllocatedBlocks = new mutable.HashMap[Time, AllocatedBlocks]
// If the write-ahead log mechanism is enabled, there is also a LogManager here:
// when the ReceiverTracker receives block info, it likewise checks the flag and,
// if the WAL is enabled, writes a copy to the write-ahead log
private val logManagerOption = createLogManager()
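These two maps meet in block allocation: when a new batch is generated, unallocated blocks are drained from the per-stream queues into timeToAllocatedBlocks. A paraphrased sketch of ReceivedBlockTracker.allocateBlocksToBatch (details vary across Spark 1.x versions):

// Paraphrased sketch (Spark 1.x): allocate all unallocated blocks to a batch.
def allocateBlocksToBatch(batchTime: Time): Unit = synchronized {
  if (lastAllocatedBatchTime == null || batchTime > lastAllocatedBatchTime) {
    // Drain every stream's unallocated queue into this batch
    val streamIdToBlocks = streamIds.map { streamId =>
      (streamId, getReceivedBlockQueue(streamId).dequeueAll(x => true))
    }.toMap
    val allocatedBlocks = AllocatedBlocks(streamIdToBlocks)
    // Record the allocation in the WAL (if enabled), then in memory
    writeToLog(BatchAllocationEvent(batchTime, allocatedBlocks))
    timeToAllocatedBlocks(batchTime) = allocatedBlocks
    lastAllocatedBatchTime = batchTime
  }
}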

