
Explode multiple columns in Spark SQL

Stack Overflow user
Asked on 2020-09-08 13:18:09
1 answer · 1.1K views · 0 votes

How do we explode multiple array columns in Spark? I have a dataframe with five stringified array columns, and I want to explode all five. For simplicity, the example below uses three columns.

If I have the following input row:

col1                  col2                              col3
["b_val1","b_val2"]   ["at_val1","at_val2","at_val3"]   ["male","female"]

I want to explode all three array columns, so the output should look like this:

b_val1   at_val1   male
b_val1   at_val1   female
b_val2   at_val1   male
b_val2   at_val1   female
b_val1   at_val2   male
b_val1   at_val2   female
b_val2   at_val2   male
b_val2   at_val2   female
b_val1   at_val3   male
b_val1   at_val3   female
b_val2   at_val3   male
b_val2   at_val3   female
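For reference, the desired output is simply the Cartesian product of the three arrays: 2 × 3 × 2 = 12 rows. A pure-Scala sketch (no Spark involved) that reproduces the row order listed above:

```scala
// Cross product of the three arrays, with col2 varying slowest and
// col3 fastest, matching the listing above.
val col1 = List("b_val1", "b_val2")
val col2 = List("at_val1", "at_val2", "at_val3")
val col3 = List("male", "female")

val rows = for { a <- col2; b <- col1; g <- col3 } yield (b, a, g)

assert(rows.size == 12) // 2 * 3 * 2
assert(rows.head == ("b_val1", "at_val1", "male"))
assert(rows.last == ("b_val2", "at_val3", "female"))
```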

I tried the following:

SELECT
  timestamp,
  explode(from_json(brandList, 'array<string>')) AS brand,
  explode(from_json(articleTypeList, 'array<string>')) AS articleTypeList,
  explode(from_json(gender, 'array<string>')) AS gender,
  explode(from_json(masterCategoryList, 'array<string>')) AS masterCategoryList,
  explode(from_json(subCategoryList, 'array<string>')) AS subCategoryList,
  isLandingPage,
  ...
FROM table

But this is not allowed, and I get the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: Only one generator allowed per select clause but found 5: explode(jsontostructs(brandList)), explode(jsontostructs(articleTypeList)), explode(jsontostructs(gender)), explode(jsontostructs(masterCategoryList)), explode(jsontostructs(subCategoryList));
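One way to see why Spark restricts a SELECT clause to a single generator: with two arrays in the same clause, it is not obvious whether they should be combined element-wise or cross-joined. A pure-Scala sketch with made-up values (not Spark API) contrasting the two readings:

```scala
// Two plausible meanings of "explode two arrays in one clause":
val brands  = List("b_val1", "b_val2")
val genders = List("male", "female")

// 1) zip element-wise: 2 rows
val zipped = brands.zip(genders)

// 2) cross product, which is what applying one explode at a time
//    (chained withColumn or LATERAL VIEW) produces: 4 rows
val crossed = for { b <- brands; g <- genders } yield (b, g)

assert(zipped.size == 2)
assert(crossed.size == 4)
```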


1 Answer

Stack Overflow user

Accepted answer

Answered on 2020-09-08 14:23:16

Use chained withColumn calls to get the required output.

Let's create a sample dataframe with three ArrayType columns and perform the explode:

import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._

val rdd = spark.sparkContext.makeRDD(
  List(Row(Array(1, 2, 3), Array("a", "b", "c"), Array("1a", "1b", "1c"))))

val schema = new StructType()
  .add("arraycolumn1", ArrayType(IntegerType))
  .add("arraycolumn2", ArrayType(StringType))
  .add("arraycolumn3", ArrayType(StringType))

val df = spark.createDataFrame(rdd, schema)

df.show(5, false)

   +------------+------------+------------+                                        
   |arraycolumn1|arraycolumn2|arraycolumn3|
   +------------+------------+------------+
   |[1, 2, 3]   |[a, b, c]   |[1a, 1b, 1c]|
   +------------+------------+------------+

val explodedDF = df
  .withColumn("column1", explode('arraycolumn1))
  .withColumn("column2", explode('arraycolumn2))
  .withColumn("column3", explode('arraycolumn3))

explodedDF.select('column1, 'column2, 'column3).show(5, false)

      +-------+-------+-------+
      |column1|column2|column3| 
      +-------+-------+-------+
      |1      |a      |1a     |
      |1      |a      |1b     |
      |1      |a      |1c     |
      |1      |b      |1a     |
      |1      |b      |1b     |
      +-------+-------+-------+
      only showing top 5 rows

 
// The same steps with less code: fold over the column names,
// replacing each array column with its exploded values
val exploded = df.columns.foldLeft(df)((acc, column) =>
  acc.withColumn(column, explode(col(column))))

exploded.select(df.columns.map(col(_)): _*).show(false)


// Using Spark SQL: one LATERAL VIEW clause per generator

df.createOrReplaceTempView("arrayTable")

spark.sql("""
  select column1, column2, column3 from arrayTable
  LATERAL VIEW explode(arraycolumn1) as column1
  LATERAL VIEW explode(arraycolumn2) as column2
  LATERAL VIEW explode(arraycolumn3) as column3""").show
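The foldLeft trick above can be mirrored on plain Scala collections to see what it does: starting from a single empty row, each fold step plays the role of withColumn(column, explode(col(column))) and multiplies the row count. A hypothetical sketch, not Spark code:

```scala
// Pure-Scala mirror of the foldLeft pattern (no Spark): each step
// "explodes" one array column, expanding every row once per value.
val columns = Seq(
  "arraycolumn1" -> List("1", "2", "3"),
  "arraycolumn2" -> List("a", "b", "c"),
  "arraycolumn3" -> List("1a", "1b", "1c")
)

// Start from one empty row; each step multiplies the row count.
val rows = columns.foldLeft(List(Map.empty[String, String])) {
  case (acc, (name, values)) =>
    for { row <- acc; v <- values } yield row + (name -> v)
}

assert(rows.size == 27) // 3 * 3 * 3, same as the Spark cross product
```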
Votes: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/63787515