1 - Problem
I have a Spark program that makes use of Kryo, but not as part of Spark's own mechanics. More specifically, I am using Spark Structured Streaming connected to Kafka.
I read the binary values coming from Kafka and decode them myself.
I face an exception while trying to deserialize the data with Kryo. This issue only occurs, however, when I package my program and run it on a Spark standalone cluster. That is, it does not happen when I run it within IntelliJ, i.e. in Spark local/dev mode.
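For reference, the read side looks roughly like this (a minimal sketch of my assumed setup, not the exact code from the question; it assumes a SparkSession named spark, the broker address and topic name are placeholders, and the binary value column is renamed to the message column that the decoding code in section 5.2 expects):
// Sketch only: a Structured Streaming read from Kafka whose binary `value`
// column carries the Kryo-encoded payload.
val kafkaDf = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder broker
  .option("subscribe", "raw-data")                     // placeholder topic
  .load()
  .selectExpr("value AS message")                      // keep the raw bytes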
The exception I get is the following:
Caused by: com.esotericsoftware.kryo.KryoException: Unable to find class: com.elsevier.entellect.commons.package$RawData
Note that RawData is a case class of my own, located in one of the sub-projects of my multi-project build.
For context, find more details below:
2 - build.sbt:
lazy val commonSettings = Seq(
organization := "com.elsevier.entellect",
version := "0.1.0-SNAPSHOT",
scalaVersion := "2.11.12",
resolvers += Resolver.mavenLocal,
updateOptions := updateOptions.value.withLatestSnapshots(false)
)
lazy val entellectextractors = (project in file("."))
.settings(commonSettings).aggregate(entellectextractorscommon, entellectextractorsfetchers, entellectextractorsmappers, entellectextractorsconsumers)
lazy val entellectextractorscommon = project
.settings(
commonSettings,
libraryDependencies ++= Seq(
"com.esotericsoftware" % "kryo" % "5.0.0-RC1",
"com.github.romix.akka" %% "akka-kryo-serialization" % "0.5.0" excludeAll(excludeJpountz),
"org.apache.kafka" % "kafka-clients" % "1.0.1",
"com.typesafe.akka" %% "akka-stream" % "2.5.16",
"com.typesafe.akka" %% "akka-http-spray-json" % "10.1.4",
"com.typesafe.akka" % "akka-slf4j_2.11" % "2.5.16",
"ch.qos.logback" % "logback-classic" % "1.2.3"
)
)
lazy val entellectextractorsfetchers = project
.settings(
commonSettings,
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-stream-kafka" % "0.22",
"com.typesafe.slick" %% "slick" % "3.2.3",
"com.typesafe.slick" %% "slick-hikaricp" % "3.2.3",
"com.lightbend.akka" %% "akka-stream-alpakka-slick" % "0.20")
)
.dependsOn(entellectextractorscommon)
lazy val entellectextractorsconsumers = project
.settings(
commonSettings,
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-stream-kafka" % "0.22")
)
.dependsOn(entellectextractorscommon)
lazy val entellectextractorsmappers = project
.settings(
commonSettings,
mainClass in assembly := Some("entellect.extractors.mappers.NormalizedDataMapper"),
assemblyMergeStrategy in assembly := {
case PathList("META-INF", "services", "org.apache.spark.sql.sources.DataSourceRegister") => MergeStrategy.concat
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
case x => MergeStrategy.first},
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-core" % "2.9.5",
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.9.5",
dependencyOverrides += "com.fasterxml.jackson.module" % "jackson-module-scala_2.11" % "2.9.5",
dependencyOverrides += "org.apache.jena" % "apache-jena" % "3.8.0",
libraryDependencies ++= Seq(
"org.apache.jena" % "apache-jena" % "3.8.0",
"edu.isi" % "karma-offline" % "0.0.1-SNAPSHOT",
"org.apache.spark" % "spark-core_2.11" % "2.3.1" % "provided",
"org.apache.spark" % "spark-sql_2.11" % "2.3.1" % "provided",
"org.apache.spark" %% "spark-sql-kafka-0-10" % "2.3.1"
//"com.datastax.cassandra" % "cassandra-driver-core" % "3.5.1"
))
.dependsOn(entellectextractorscommon)
lazy val excludeJpountz = ExclusionRule(organization = "net.jpountz.lz4", name = "lz4")
The sub-project that contains the Spark code is entellectextractorsmappers. The sub-project that contains the case class RawData that cannot be found is entellectextractorscommon. entellectextractorsmappers explicitly depends on entellectextractorscommon.
3 - Difference between submitting to a local standalone cluster and running in local dev mode:
When I submit to the cluster, my Spark dependencies are as follows:
"org.apache.spark" % "spark-core_2.11" % "2.3.1" % "provided",
"org.apache.spark" % "spark-sql_2.11" % "2.3.1" % "provided",当我在本地开发模式下运行(没有提交脚本)时,它们会以这样的方式运行
"org.apache.spark" % "spark-core_2.11" % "2.3.1",
"org.apache.spark" % "spark-sql_2.11" % "2.3.1",也就是说,在本地开发中,我需要有依赖项,当以独立模式提交到集群时,它们已经在集群中了,因此我按照提供的方式将它们放在集群中。
4 - How I submit
spark-submit --class entellect.extractors.mappers.DeNormalizedDataMapper --name DeNormalizedDataMapper --master spark://MaatPro.local:7077 --deploy-mode cluster --executor-memory 14G --num-executors 1 --conf spark.sql.shuffle.partitions=7 "/Users/maatari/IdeaProjects/EntellectExtractors/entellectextractorsmappers/target/scala-2.11/entellectextractorsmappers-assembly-0.1.0-SNAPSHOT.jar"
5 - How I use Kryo
5.1 - Declaration and registration
In the entellectextractorscommon project, I have a package object with the following content:
package object commons {
case class RawData(modelName: String,
modelFile: String,
sourceType: String,
deNormalizedVal: String,
normalVal: Map[String, String])
object KryoContext {
lazy val kryoPool = new Pool[Kryo](true, false, 16) {
protected def create(): Kryo = {
val kryo = new Kryo()
kryo.setRegistrationRequired(false)
kryo.addDefaultSerializer(classOf[scala.collection.Map[_,_]], classOf[ScalaImmutableAbstractMapSerializer])
kryo.addDefaultSerializer(classOf[scala.collection.generic.MapFactory[scala.collection.Map]], classOf[ScalaImmutableAbstractMapSerializer])
kryo.addDefaultSerializer(classOf[RawData], classOf[ScalaProductSerializer])
kryo
}
}
lazy val outputPool = new Pool[Output](true, false, 16) {
protected def create: Output = new Output(4096)
}
lazy val inputPool = new Pool[Input](true, false, 16) {
protected def create: Input = new Input(4096)
}
}
object ExecutionContext {
implicit lazy val system = ActorSystem()
implicit lazy val mat = ActorMaterializer()
implicit lazy val ec = system.dispatcher
}
}
5.2 - Usage
In entellectextractorsmappers (where the Spark program lives), I work with mapPartitions. Inside it, I have a method that decodes the data coming from Kafka, and which uses Kryo as follows:
def decodeData(rowOfBinaryList: List[Row], kryoPool: Pool[Kryo], inputPool: Pool[Input]): List[RawData] = {
val kryo = kryoPool.obtain()
val input = inputPool.obtain()
val data = rowOfBinaryList.map(r => r.getAs[Array[Byte]]("message")).map{ binaryMsg =>
input.setInputStream(new ByteArrayInputStream(binaryMsg))
val value = kryo.readClassAndObject(input).asInstanceOf[RawData]
input.close()
value
}
kryoPool.free(kryo)
inputPool.free(input)
data
}
Note: the object KryoContext together with the lazy vals ensures that the kryoPool is instantiated once per JVM. I do not believe the problem comes from that, though.
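The call site is not shown above; a hypothetical wiring of decodeData into the stream (assuming the kafkaDf DataFrame from section 1 and a SparkSession named spark) could look like this:
import org.apache.spark.sql.Dataset
import spark.implicits._ // supplies the Encoder[RawData] that mapPartitions needs

// Hypothetical call site: each partition is materialized to a List and decoded
// in one go; the pools resolve lazily on each executor JVM via KryoContext.
val rawData: Dataset[RawData] = kafkaDf.mapPartitions { rows =>
  decodeData(rows.toList, KryoContext.kryoPool, KryoContext.inputPool).iterator
}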
I picked up a hint somewhere else about an issue with the classloaders used by Spark vs. Kryo, but I am not sure I really understand what is going on.
If someone could give me some pointers, that would help, because I have no idea where to start. Why does it work in local mode but not in cluster mode? Does the provided scope mess up the dependencies and cause some issue with Kryo? Is it the sbt-assembly merge strategy that messes things up?
Many possible leads; if anyone could help me narrow it down, that would be great!
Posted on 2018-10-03 22:18:51
So far,
I have solved this issue by picking up the "enclosing" classloader, which I suppose is the one from Spark. This came after reading a few comments here and there about classloader issues between Kryo and Spark:
lazy val kryoPool = new Pool[Kryo](true, false, 16) {
protected def create(): Kryo = {
val cl = Thread.currentThread().getContextClassLoader()
val kryo = new Kryo()
kryo.setClassLoader(cl)
kryo.setRegistrationRequired(false)
kryo.addDefaultSerializer(classOf[scala.collection.Map[_,_]], classOf[ScalaImmutableAbstractMapSerializer])
kryo.addDefaultSerializer(classOf[scala.collection.generic.MapFactory[scala.collection.Map]], classOf[ScalaImmutableAbstractMapSerializer])
kryo.addDefaultSerializer(classOf[RawData], classOf[ScalaProductSerializer])
kryo
}
}
https://stackoverflow.com/questions/52572757
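My reading of why this works, offered as an assumption rather than something I have verified in the Spark sources: by default Kryo resolves class names with the classloader that loaded the Kryo class itself, and Spark 2.x ships its own copy of Kryo (kryo-shaded) on the cluster classpath, so on the cluster that default loader may be Spark's own and thus unable to see the application classes. The thread context classloader on an executor, in contrast, is Spark's MutableURLClassLoader, which does include the application fat jar and hence RawData. In local dev mode everything sits on one flat classpath, which would explain why the problem never shows up there.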