I'm new to sbt/sbt-assembly. I'm trying to work out some dependency issues, and it seems the only way to resolve them is with a custom merge strategy. However, whenever I try to add a merge strategy, I get a seemingly random MatchError at assembly time:
[error] (*:assembly) scala.MatchError: org/apache/spark/streaming/kafka/KafkaUtilsPythonHelper$$anonfun$13.class (of class java.lang.String)

Here it reports the MatchError against this Kafka library, but if I take that library out entirely, I get a MatchError on a different library. If I take out all of the libraries, I get a MatchError on my own code. If I take out the assemblyMergeStrategy block, none of this happens. I'm obviously missing something extremely basic, but for the life of me I can't find it, and I can't find anyone else who has run into this. I've tried the older mergeStrategy syntax, but as far as I can tell from the documentation, this is the correct way to write it now. Help?
Here is my project/assembly.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")

And my project.sbt file:
name := "Clerk"
version := "1.0"
scalaVersion := "2.11.6"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
  "org.apache.spark" %% "spark-sql" % "1.6.1" % "provided",
  "org.apache.spark" %% "spark-streaming" % "1.6.1" % "provided",
  "org.apache.kafka" %% "kafka" % "0.8.2.1",
  "ch.qos.logback" % "logback-classic" % "1.1.7",
  "net.logstash.logback" % "logstash-logback-encoder" % "4.6",
  "com.typesafe.scala-logging" %% "scala-logging" % "3.1.0",
  "org.apache.spark" %% "spark-streaming-kafka" % "1.6.1",
  ("org.apache.spark" %% "spark-streaming-kafka" % "1.6.1").
    exclude("org.spark-project.spark", "unused")
)
assemblyMergeStrategy in assembly := {
  case PathList("org.slf4j", "impl", xs @ _*) => MergeStrategy.first
}
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)

Answered on 2016-04-08 21:53:16:
You are missing the default case in your merge-strategy pattern match:
assemblyMergeStrategy in assembly := {
  case PathList("org.slf4j", "impl", xs @ _*) => MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}

https://stackoverflow.com/questions/36509450
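For context (my explanation, not part of the original answer): the value assigned to assemblyMergeStrategy in assembly is an ordinary function of type String => MergeStrategy, and a pattern match with no catch-all case yields a function that throws scala.MatchError on the first path in the jars that it does not cover. That is why the error appears to jump between Kafka, other libraries, and your own classes depending on what is on the classpath. A minimal standalone sketch of the same failure mode, using made-up names:

// Illustrative only: a match with no default case compiles to a function
// that throws scala.MatchError on any input the patterns do not cover.
val pickStrategy: String => String = {
  case "org/slf4j/impl" => "first" // only this exact path is handled
}

pickStrategy("org/slf4j/impl")             // returns "first"
pickStrategy("org/apache/spark/Foo.class") // throws scala.MatchError

The accepted fix works because the final case x delegates every unmatched path to the plugin's previous (default) strategy via oldStrategy(x), so standard deduplication still applies to everything the custom case does not handle.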