I have a Spark DataFrame whose schema looks like this:
root
|-- 0000154d-7585-5eb283ff985c: struct (nullable = true)
| |-- collaborative_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- content_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- curated_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- discovery_score: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- original_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- recipe_id: array (nullable = true)
| | |-- element: long (containsNull = true)
|-- 00005426-2675-68085cd359c7: struct (nullable = true)
| |-- collaborative_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- content_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- curated_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- discovery_score: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- original_rank: array (nullable = true)
| | |-- element: long (containsNull = true)
| |-- recipe_id: array (nullable = true)
| | |-- element: long (containsNull = true)
Each top-level column is a user id, e.g. 0000154d-7585-5eb283ff985c, and each row consists of 15,000 users (the rows come from .json files that each contain 15,000 users).
I want to transpose it so that each user id becomes a row and each sub-field (collaborative_rank, content_rank, curated_rank, discovery_score, original_rank and recipe_id) becomes a column, with the arrays as the values. I'm new to Spark; is there a painless way to do this?
Edit:
For reference, the input .json files I'm reading look like this:
{"0000154d-7585-4096-a71a-5eb283ff985c": {"recipe_id": [1, 2, 3], "collaborative_rank": [1, 2, 3], "curated_rank": [1, 2, 3], "discovery_score": [1]}, "00005426-2675-4940-8394-e8085cd359c7": {"recipe_id": [] ... } etc.
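For concreteness, the reshape being asked for is easy to prototype in plain Python before reaching for Spark; a minimal sketch, using a trimmed copy of the sample file above as the input:

```python
import json

# Trimmed sample shaped like one input .json file: user id -> struct of arrays.
sample = json.loads(
    '{"0000154d-7585-4096-a71a-5eb283ff985c":'
    ' {"recipe_id": [1, 2, 3], "collaborative_rank": [1, 2, 3],'
    ' "curated_rank": [1, 2, 3], "discovery_score": [1]}}'
)

# Turn each top-level key (a user id) into a row; the sub-keys become columns.
rows = [{"user_id": uid, **cols} for uid, cols in sample.items()]

print(rows[0]["user_id"])    # 0000154d-7585-4096-a71a-5eb283ff985c
print(rows[0]["recipe_id"])  # [1, 2, 3]
```

Both answers below implement this same pivot, but distributed across the cluster.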
Posted on 2020-07-31 05:26:00
If you don't want to convert the DataFrame to an RDD and run a UDF, you can consider stacking the data.
from pyspark.sql.functions import expr

df = spark.read.json(r'C:\stackoverflow\samples\inp.json')
# stack() arguments: the column count, then a ('user id', `user id`) pair per
# column; the backticks protect ids that contain dashes
stack_characteristics = str(len(df.columns)) + ',' + ','.join([f"'{v}',`{v}`" for v in df.columns])
df.select(expr(f'stack({stack_characteristics})').alias('userId', 'vals'))\
    .select('userId', 'vals.*').show()
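The argument string the f-string builds can be checked without a Spark session; a sketch with two hypothetical column names standing in for df.columns:

```python
# Two hypothetical user-id columns standing in for df.columns.
columns = ["user_a", "user_b"]

# Same construction as stack_characteristics above: the count, then a
# ('label', `column`) pair per column.
stack_characteristics = str(len(columns)) + ',' + ','.join(
    f"'{v}',`{v}`" for v in columns
)

print(stack_characteristics)  # 2,'user_a',`user_a`,'user_b',`user_b`
```

Passed to stack(), this tells Spark to emit 2 rows, each carrying the label string and the corresponding column's struct value.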
+--------------------+------------------+------------+---------------+---------+
| userId|collaborative_rank|curated_rank|discovery_score|recipe_id|
+--------------------+------------------+------------+---------------+---------+
|0000154d-7585-409...| [1, 2, 3]| [1, 2, 3]| [1]|[1, 2, 3]|
|00005426-2675-494...| [1, 2, 3]| [1, 2, 3]| [1]|[1, 2, 3]|
+--------------------+------------------+------------+---------------+---------+

Posted on 2020-07-31 04:19:32
AFAIK, the code below solves your problem. Given the input json:
{"0000154d-7585-4096-a71a-5eb283ff985c": {"recipe_id": [1, 2, 3], "collaborative_rank": [1, 2, 3], "curated_rank": [1, 2, 3], "discovery_score": [1] }}

from pyspark.sql import Row
# read the input data
df = spark.read.json("/home/sathya/Desktop/stackoverflo/input.json")

# method to extract the top-level keys (user ids) into rows; the struct
# fields are accessed by name so their order cannot silently shift
def extract_json(row):
    out_array = []
    data_dict = row.asDict()
    for k in data_dict.keys():
        struct = data_dict[k]
        out_array.append(Row(k, struct['recipe_id'], struct['collaborative_rank'],
                             struct['curated_rank'], struct['discovery_score']))
    return Row(*out_array)

# flatMap over the rows, emitting one output row per user id
rdd = df.rdd.flatMap(extract_json)

# DataFrame creation; rename the positional columns
df1 = spark.createDataFrame(rdd)
df1.selectExpr("_1 as user_id", "_2 as recipe_id", "_3 as collaborative_rank",
               "_4 as curated_rank", "_5 as discovery_score").show(truncate=False)
/*
+------------------------------------+---------+------------------+------------+---------------+
|user_id                             |recipe_id|collaborative_rank|curated_rank|discovery_score|
+------------------------------------+---------+------------------+------------+---------------+
|0000154d-7585-4096-a71a-5eb283ff985c|[1, 2, 3]|[1, 2, 3]         |[1, 2, 3]   |[1]            |
+------------------------------------+---------+------------------+------------+---------------+
*/

https://stackoverflow.com/questions/63178179
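The per-user logic of extract_json can be sanity-checked without a cluster; a sketch using a plain dict in place of the Spark Row, with the same single-user sample assumed as input:

```python
# Plain-dict analogue of extract_json: one output tuple per user id, ordered
# (user_id, recipe_id, collaborative_rank, curated_rank, discovery_score).
def extract_rows(row_dict):
    out = []
    for user_id, struct in row_dict.items():
        out.append((user_id,
                    struct["recipe_id"],
                    struct["collaborative_rank"],
                    struct["curated_rank"],
                    struct["discovery_score"]))
    return out

row = {"0000154d-7585-4096-a71a-5eb283ff985c": {
    "recipe_id": [1, 2, 3], "collaborative_rank": [1, 2, 3],
    "curated_rank": [1, 2, 3], "discovery_score": [1]}}
print(extract_rows(row)[0][0])  # 0000154d-7585-4096-a71a-5eb283ff985c
```

Accessing the struct fields by name (rather than by position) is what keeps the values aligned with the column names assigned in selectExpr.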