sbt assembly: deduplicate error

Asked by a Stack Overflow user on 2016-08-09 23:49:25
2 answers · 2.5K views · Score 1

I have the following sbt project layout:

.
-- root
     -- plugins.sbt
-- build.sbt

where plugins.sbt contains:

addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.12.0")

and build.sbt contains:

import sbt.Keys._

resolvers in ThisBuild ++= Seq("Apache Development Snapshot Repository" at "https://repository.apache.org/content/repositories/snapshots/", Resolver.sonatypeRepo("public"))

name := "flink-experiment"

lazy val commonSettings = Seq(
    organization := "my.organisation",
    version := "0.1.0-SNAPSHOT"
)

val flinkVersion = "1.1.0"
val sparkVersion = "2.0.0"
val kafkaVersion = "0.8.2.1"

val hadoopDependencies = Seq(
    "org.apache.avro" % "avro" % "1.7.7" % "provided",
    "org.apache.avro" % "avro-mapred" % "1.7.7" % "provided"
)

val flinkDependencies = Seq(
    "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
    "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
    "org.apache.flink" %% "flink-connector-kafka-0.8" % flinkVersion exclude("org.apache.kafka", "kafka_${scala.binary.version}")
)

val sparkDependencies = Seq(
    "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
    "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided",
    "org.apache.spark" %% "spark-streaming-kafka-0-8" % sparkVersion exclude("org.apache.kafka", "kafka_${scala.binary.version}")
)

val kafkaDependencies = Seq(
    "org.apache.kafka" %% "kafka" % "0.8.2.1"
)

val toolDependencies = Seq(
    "com.github.scopt" %% "scopt" % "3.5.0"
)

val testDependencies = Seq(
    "org.scalactic" %% "scalactic" % "2.2.6",
    "org.scalatest" %% "scalatest" % "2.2.6" % "test"
)

lazy val root = (project in file(".")).
    settings(commonSettings: _*).
    settings(
        libraryDependencies ++= hadoopDependencies,
        libraryDependencies ++= flinkDependencies,
        libraryDependencies ++= sparkDependencies,
        libraryDependencies ++= kafkaDependencies,
        libraryDependencies ++= toolDependencies,
        libraryDependencies ++= testDependencies
    ).
    enablePlugins(AssemblyPlugin)

run in Compile <<= Defaults.runTask(fullClasspath in Compile, mainClass in(Compile, run), runner in(Compile, run))

mainClass in assembly := Some("my.organization.experiment.Experiment")
assemblyOption in assembly := (assemblyOption in assembly).value.copy(includeScala = false)

Now `sbt clean assembly` unfortunately fails with the following error:

[error] (root/*:assembly) deduplicate: different file contents found in the following:
[error] /home/kevin/.ivy2/cache/org.apache.spark/spark-streaming-kafka-0-8_2.10/jars/spark-streaming-kafka-0-8_2.10-2.0.0.jar:org/apache/spark/unused/UnusedStubClass.class
[error] /home/kevin/.ivy2/cache/org.apache.spark/spark-tags_2.10/jars/spark-tags_2.10-2.0.0.jar:org/apache/spark/unused/UnusedStubClass.class
[error] /home/kevin/.ivy2/cache/org.spark-project.spark/unused/jars/unused-1.0.0.jar:org/apache/spark/unused/UnusedStubClass.class

How can I fix this problem?

2 Answers

Answered by a Stack Overflow user on 2016-08-10 03:21:24

https://github.com/sbt/sbt-assembly#excluding-jars-and-files

You can define an assemblyMergeStrategy and simply discard all of the files you listed, since they all live in the "unused" package.
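A minimal sketch of that approach in build.sbt (the discard pattern below is an assumption based on the paths in the error output; adjust it if other duplicates show up):

```scala
// build.sbt -- sketch: discard the duplicated stub class instead of failing.
// The path matches org/apache/spark/unused/UnusedStubClass.class from the
// error message; everything else falls back to sbt-assembly's defaults.
assemblyMergeStrategy in assembly := {
  case PathList("org", "apache", "spark", "unused", "UnusedStubClass.class") =>
    MergeStrategy.discard
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
```

Discarding is safe here because the class is an empty placeholder that Spark ships in several jars; no copy of it is needed at runtime.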

Score 2

Answered by a Stack Overflow user on 2016-08-10 03:49:07

You can override the default strategy for conflicting files:

val defaultMergeStrategy: String => MergeStrategy = {
  case x if Assembly.isConfigFile(x) =>
    MergeStrategy.concat
  case PathList(ps @ _*) if Assembly.isReadme(ps.last) || Assembly.isLicenseFile(ps.last) =>
    MergeStrategy.rename
  case PathList("META-INF", xs @ _*) =>
    (xs map {_.toLowerCase}) match {
      case ("manifest.mf" :: Nil) | ("index.list" :: Nil) | ("dependencies" :: Nil) =>
        MergeStrategy.discard
      case ps @ (x :: xs) if ps.last.endsWith(".sf") || ps.last.endsWith(".dsa") =>
        MergeStrategy.discard
      case "plexus" :: xs =>
        MergeStrategy.discard
      case "services" :: xs =>
        MergeStrategy.filterDistinctLines
      case ("spring.schemas" :: Nil) | ("spring.handlers" :: Nil) =>
        MergeStrategy.filterDistinctLines
      case _ => MergeStrategy.deduplicate
    }
  case _ => MergeStrategy.deduplicate
}

As you can see, the default assembly strategy is MergeStrategy.deduplicate; you can add a new case that maps UnusedStubClass to MergeStrategy.first.
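For example, a sketch of such an override in build.sbt (the pattern on the class name is an assumption derived from the error output):

```scala
// build.sbt -- sketch: keep the first copy of the conflicting stub class
// and defer every other path to sbt-assembly's default strategy.
assemblyMergeStrategy in assembly := {
  case PathList(ps @ _*) if ps.last == "UnusedStubClass.class" =>
    MergeStrategy.first
  case x =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(x)
}
```

MergeStrategy.first works here because all three copies of UnusedStubClass are interchangeable, so it does not matter which jar the surviving copy comes from.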

Score 2
Original content provided by Stack Overflow.
Original question:

https://stackoverflow.com/questions/38855272
