How do we explode multiple array columns in Spark? I have a dataframe with 5 stringified array columns and I want to explode all 5 of them. For simplicity, the example below uses 3 columns.
If I have the following input row:
col1 col2 col3
["b_val1","b_val2"] ["at_val1","at_val2","at_val3"] ["male","female"]我想对所有3个数组列进行分解,因此输出应该如下所示:
b_val1 at_val1 male
b_val1 at_val1 female
b_val2 at_val1 male
b_val2 at_val1 female
b_val1 at_val2 male
b_val1 at_val2 female
b_val2 at_val2 male
b_val2 at_val2 female
b_val1 at_val3 male
b_val1 at_val3 female
b_val2 at_val3 male
b_val2 at_val3 female

I tried the following:
SELECT
timestamp,
explode(from_json(brandList, 'array<string>')) AS brand,
explode(from_json(articleTypeList, 'array<string>')) AS articleTypeList,
explode(from_json(gender, 'array<string>')) AS gender,
explode(from_json(masterCategoryList, 'array<string>')) AS masterCategoryList,
explode(from_json(subCategoryList, 'array<string>')) AS subCategoryList,
isLandingPage,
...
from table

But this is not allowed, and I get the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Only one generator allowed per select clause but found 5: explode(jsontostructs(brandList)), explode(jsontostructs(articleTypeList)), explode(jsontostructs(gender)), explode(jsontostructs(masterCategoryList)), explode(jsontostructs(subCategoryList));
Posted on 2020-09-08 14:23:16
Use withColumn to get the desired output.
Let's create a sample dataframe with 3 ArrayType columns and run the explode:
import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
// Build a one-row dataframe with three array columns
val rdd = spark.sparkContext.makeRDD(List(Row(Array(1, 2, 3), Array("a", "b", "c"), Array("1a", "1b", "1c"))))
val schema = new StructType()
  .add("arraycolumn1", ArrayType(IntegerType))
  .add("arraycolumn2", ArrayType(StringType))
  .add("arraycolumn3", ArrayType(StringType))
val df = spark.createDataFrame(rdd, schema)
df.show(5, false)
+------------+------------+------------+
|arraycolumn1|arraycolumn2|arraycolumn3|
+------------+------------+------------+
|[1, 2, 3] |[a, b, c] |[1a, 1b, 1c]|
+------------+------------+------------+
// Each withColumn hosts a single generator, so chaining explode calls is allowed
val explodedDF = df
  .withColumn("column1", explode('arraycolumn1))
  .withColumn("column2", explode('arraycolumn2))
  .withColumn("column3", explode('arraycolumn3))
explodedDF.select('column1, 'column2, 'column3).show(5, false)
+-------+-------+-------+
|column1|column2|column3|
+-------+-------+-------+
|1 |a |1a |
|1 |a |1b |
|1 |a |1c |
|1 |b |1a |
|1 |b |1b |
+-------+-------+-------+
only showing top 5 rows
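The full result is the 3 × 3 × 3 = 27-row cross product; show(5) just truncates it. Carried back to the question's stringified columns, a minimal sketch (assuming the table and column names from the question, and that each column holds a JSON-encoded array of strings) could be:

// Sketch only: "table", brandList, articleTypeList and gender are the
// (assumed) names from the question; adjust them to the real schema.
val parsed = spark.table("table")
  .withColumn("brand", explode(from_json(col("brandList"), ArrayType(StringType))))
  .withColumn("articleType", explode(from_json(col("articleTypeList"), ArrayType(StringType))))
  .withColumn("gender", explode(from_json(col("gender"), ArrayType(StringType))))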
// Let's do the above steps with fewer lines of code
val exploded = df.columns.foldLeft(df)((acc, column) => acc.withColumn(column, explode(col(column))))
exploded.select(df.columns.map(col(_)): _*).show(false)
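The fold above assumes every column in df is an array. A variant sketch that explodes only the columns whose type is actually ArrayType, letting any scalar columns (timestamp, isLandingPage, ...) pass through untouched:

// Collect just the ArrayType column names, then fold over those
val arrayCols = df.schema.fields.collect { case f if f.dataType.isInstanceOf[ArrayType] => f.name }
val explodedArrays = arrayCols.foldLeft(df)((acc, c) => acc.withColumn(c, explode(col(c))))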
// Using spark-sql
df.createOrReplaceTempView("arrayTable")
spark.sql("""
select column1,column2,column3 from arraytable
LATERAL VIEW explode(arraycolumn1) as column1
LATERAL VIEW explode(arraycolumn2) as column2
LATERAL VIEW explode(arraycolumn3) as column3""").show
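One caveat worth noting: plain explode drops any row whose array is empty or null. If those rows must be kept, Spark's explode_outer (or LATERAL VIEW OUTER in SQL) emits the row with a null in the generated column instead, e.g.:

spark.sql("""
select column1,column2,column3 from arraytable
LATERAL VIEW OUTER explode(arraycolumn1) as column1
LATERAL VIEW OUTER explode(arraycolumn2) as column2
LATERAL VIEW OUTER explode(arraycolumn3) as column3""").show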
https://stackoverflow.com/questions/63787515