I have written unit tests referring to the DataframeGenerator example, which lets you generate mock data on the fly.
After successfully executing the following commands:
sbt clean
sbt update
sbt compile
these are the errors shown in the output when running either of the following commands:
sbt assembly
sbt test -- -oF
Output:
...
[info] SearchClicksProcessorTest:
17/11/24 14:19:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/11/24 14:19:07 WARN SparkContext: Using an existing SparkContext; some configuration may not take effect.
17/11/24 14:19:18 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
17/11/24 14:19:18 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
17/11/24 14:19:19 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
[info] - testExplodeMap *** FAILED ***
[info] ExceptionInInitializerError was thrown during property evaluation.
[info] Message: "None"
[info] Occurred when passed generated values (
[info]
[info] )
[info] - testFilterByClicks *** FAILED ***
[info] NoClassDefFoundError was thrown during property evaluation.
[info] Message: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
[info] Occurred when passed generated values (
[info]
[info] )
[info] - testGetClicksData *** FAILED ***
[info] NoClassDefFoundError was thrown during property evaluation.
[info] Message: Could not initialize class org.apache.spark.rdd.RDDOperationScope$
[info] Occurred when passed generated values (
[info]
[info] )
...
[info] *** 3 TESTS FAILED ***
[error] Failed: Total 6, Failed 3, Errors 0, Passed 3
[error] Failed tests:
[error] com.company.spark.ml.pipelines.search.SearchClicksProcessorTest
[error] (root/test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 73 s, completed 24 Nov, 2017 2:19:28 PM
Things I have tried that failed:
My question is:
EDIT-1: My unit test class contains several methods, like the following:
class SearchClicksProcessorTest extends FunSuite with Checkers {
  import spark.implicits._

  test("testGetClicksData") {
    val schemaIn = StructType(List(
      StructField("rank", IntegerType),
      StructField("city_id", IntegerType),
      StructField("target", IntegerType)
    ))
    val schemaOut = StructType(List(
      StructField("clicked_res_rank", IntegerType),
      StructField("city_id", IntegerType)
    ))
    val dataFrameGen = DataframeGenerator.arbitraryDataFrame(spark.sqlContext, schemaIn)

    val property = Prop.forAll(dataFrameGen.arbitrary) { dfIn: DataFrame =>
      dfIn.cache()
      val dfOut: DataFrame = dfIn.transform(SearchClicksProcessor.getClicksData)

      dfIn.schema === schemaIn &&
        dfOut.schema === schemaOut &&
        dfIn.filter($"target" === 1).count() === dfOut.count()
    }

    check(property)
  }
}
while build.sbt looks like this:
// core settings
organization := "com.company"
scalaVersion := "2.11.11"
name := "repo-name"
version := "0.0.1"
// cache options
offline := false
updateOptions := updateOptions.value.withCachedResolution(true)
// aggregate options
aggregate in assembly := false
aggregate in update := false
// fork options
fork in Test := true
//common libraryDependencies
libraryDependencies ++= Seq(
  scalaTest,
  typesafeConfig,
  ...
  scalajHttp
)
libraryDependencies ++= allAwsDependencies
libraryDependencies ++= SparkDependencies.allSparkDependencies
assemblyMergeStrategy in assembly := {
  case m if m.toLowerCase.endsWith("manifest.mf") => MergeStrategy.discard
  ...
  case _ => MergeStrategy.first
}
lazy val `module-1` = project in file("directory-1")
lazy val `module-2` = (project in file("directory-2")).
  dependsOn(`module-1`).
  aggregate(`module-1`)
lazy val root = (project in file(".")).
  dependsOn(`module-2`).
  aggregate(`module-2`)
Posted on 2017-11-30 06:08:46
Please read the comments on the original question before reading this answer.
The faster-xml.jackson solution did not work for me either, because more changes were needed (the ExceptionInInitializerError went away, but other errors appeared). What worked for me was creating the DataFrames in an alternative way (as opposed to the DataframeGenerator-with-StructType approach used here). I created them with
spark.sparkContext.parallelize(Seq(MyType)).toDF()
where MyType is a case class matching the DataFrame's schema. The data types of the schema generated from the case class are correct, but the nullability of the fields often does not match; the way to fix that can be found here. Here I openly admit that I am not sure which is the correct fix: the faster-xml.jackson dependency or the alternative way of creating the DataFrame, so please feel free to fill in these gaps as you understand / investigate this problem.
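For context, the "faster-xml.jackson solution" referred to above is typically a dependencyOverrides entry in build.sbt that pins Jackson to the version Spark 2.x was built against; "Could not initialize class org.apache.spark.rdd.RDDOperationScope$" is a common symptom of a jackson-databind version clash on the test classpath. This is a sketch only: the 2.6.5 version is an assumption that matches Spark 2.1/2.2, so align it with whatever your Spark release actually pulls in.

```scala
// build.sbt (sbt 0.13 syntax, matching the build file above) -- sketch only;
// the 2.6.5 version is an assumption, check it against your Spark release
dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core"   %  "jackson-core"         % "2.6.5",
  "com.fasterxml.jackson.core"   %  "jackson-databind"     % "2.6.5",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.6.5"
)
```

If the conflict persists, `sbt "show test:dependencyClasspath"` (or the sbt-dependency-graph plugin) can show which transitive dependency drags in the newer Jackson.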
Posted on 2021-10-17 15:41:58
I had a similar case of this problem, and after investigating I found that adding a lazy before a val solved my issue. My estimation is that running a Scala program with Scalatest invokes a somewhat different initialization sequence: while normal Scala execution initializes vals top-down by line number (nested object { ... } blocks are initialized the same way), with the same code under Scalatest the execution initializes the vals inside nested object { ... } blocks before vals on higher line numbers.
I know this is very vague, but deferring initialization by prefixing the val with lazy solved the test problem here.
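The lazy-val behavior the answer relies on can be shown with a minimal, self-contained sketch (names are hypothetical, unrelated to the question's code): a plain val runs its initializer as soon as the enclosing object is initialized, while a lazy val defers it until first access, which sidesteps ordering-sensitive initialization.

```scala
import scala.collection.mutable.ArrayBuffer

// Records the order in which initializers actually run.
object InitOrderDemo {
  val order = ArrayBuffer.empty[String]

  val eager: String = { order += "eager"; "eager" }               // runs at object initialization
  lazy val deferred: String = { order += "deferred"; "deferred" } // runs only on first access

  def main(args: Array[String]): Unit = {
    order += "main"
    println(order.mkString(", ")) // eager, main  -- `deferred` not yet initialized
    println(deferred)             // first access triggers the lazy initializer
    println(order.mkString(", ")) // eager, main, deferred
  }
}
```

This is only an illustration of the mechanism; whether the test-runner really changes the initialization sequence, as the answer estimates, is not something the sketch can prove.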
The key thing here is that it does not happen in normal execution, only in test execution, and in my case it only happened when using taps of this form:
...
.tap(x =>
  hook_feld_erweiterungen_hook(
    abc = theProblematicVal
  )
)
...
https://stackoverflow.com/questions/47469833