I want to write a Spark 1.6 UDF that consumes the following map:
```scala
case class MyRow(mapping: Map[(Int, Int), Double])

val data = Seq(
  MyRow(Map((1, 1) -> 1.0))
)
val df = sc.parallelize(data).toDF()
df.printSchema()
```
```
root
 |-- mapping: map (nullable = true)
 |    |-- key: struct
 |    |-- value: double (valueContainsNull = false)
 |    |    |-- _1: integer (nullable = false)
 |    |    |-- _2: integer (nullable = false)
```

(As an aside: I find the output above strange, because the key's struct fields are printed below the value's type. Why is that?)
Now I define my UDF as:
```scala
val myUDF = udf((inputMapping: Map[(Int, Int), Double]) =>
  inputMapping.map { case ((i1, i2), value) => (i1 + i2, value) }
)
```
```scala
df
  .withColumn("udfResult", myUDF($"mapping"))
  .show()
```

But this gives me:

```
java.lang.ClassCastException: org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema cannot be cast to scala.Tuple2
```

I therefore tried replacing the `(Int, Int)` with a custom case class, since this is how I usually pass a struct into a UDF:
```scala
case class MyTuple2(i1: Int, i2: Int)

val myUDF = udf((inputMapping: Map[MyTuple2, Double]) =>
  inputMapping.map { case (MyTuple2(i1, i2), value) => (i1 + i2, value) }
)
```

Strangely, this gives:

```
org.apache.spark.sql.AnalysisException: cannot resolve 'UDF(mapping)' due to data type mismatch: argument 1 requires map<struct<i1:int,i2:int>,double> type, however, 'mapping' is of map<struct<_1:int,_2:int>,double> type.
```

I don't understand the exception above, since the types seem to match.
The only (ugly) solution I have found is to pass an org.apache.spark.sql.Row and then "extract" the elements of the struct:

```scala
val myUDF = udf((inputMapping: Map[Row, Double]) => inputMapping
  .map { case (key, value) => ((key.getInt(0), key.getInt(1)), value) } // extract each Row key into a Tuple2
  .map { case ((i1, i2), value) => (i1 + i2, value) }
)
```

Posted on 2017-01-23 13:08:19
As far as I can tell, using Row is unavoidable in this context: a tuple (or case class, or array) used inside a map (or inside another tuple/case class/array) is a nested structure, and a nested structure is represented as a Row when it is passed into a UDF.

The only improvement I can suggest is using Row.unapply to simplify the code a bit:
```scala
val myUDF = udf((inputMapping: Map[Row, Double]) => inputMapping
  .map { case (Row(i1: Int, i2: Int), value) => (i1 + i2, value) }
)
```

https://stackoverflow.com/questions/41806914
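Independent of Spark, the key-merging transformation inside the UDF can be checked in plain Scala. Below is a minimal sketch (the helper name `mergeKeys` is mine, not from the answer) of what the lambda computes on an ordinary Scala map. Note that merging keys can cause collisions: two keys such as (1, 2) and (0, 3) both map to 3, and only one entry survives.

```scala
// Plain-Scala sketch of the transformation performed inside the UDF:
// merge each (i1, i2) key into the single key i1 + i2, keeping the value.
def mergeKeys(m: Map[(Int, Int), Double]): Map[Int, Double] =
  m.map { case ((i1, i2), v) => (i1 + i2, v) }

val merged = mergeKeys(Map((1, 1) -> 1.0, (2, 3) -> 4.0))
println(merged) // Map(2 -> 1.0, 5 -> 4.0)
```

In the Spark version the only difference is that each key arrives as a `Row` rather than a `(Int, Int)`, which is why the accepted answer pattern-matches with `Row(i1: Int, i2: Int)` instead of `((i1, i2))`.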