Suppose I have the following Spark dataframe:
+-------------------+--------+
|timestamp |UserName|
+-------------------+--------+
|2021-08-11 04:05:06|A |
|2021-08-11 04:15:06|B |
|2021-08-11 09:15:26|A |
|2021-08-11 11:04:06|B |
|2021-08-11 14:55:16|A |
|2021-08-13 04:12:11|B |
+-------------------+--------+
I want to build time-series data from the per-user event counts, at a desired time-frame resolution.
After grouping on UserName, the counts need to be taken at the desired time-frame resolution, and the time frames need to stay in a Spark dataframe (probably using the event-time aggregation and watermarking from Apache Spark Structured Streaming). I am not interested in UDFs or in hacking it via toPandas(). So, assuming a 24-hour (daily) time frame, the expected result after the groupBy should look like:
+------------------------------------------+-------------+-------------+
|window_frame_24_Hours | username A | username B |
+------------------------------------------+-------------+-------------+
|{2021-08-11 00:00:00, 2021-08-11 23:59:59}|3 |2 |
|{2021-08-12 00:00:00, 2021-08-12 23:59:59}|0 |0 |
|{2021-08-13 00:00:00, 2021-08-13 23:59:59}|0 |1 |
+------------------------------------------+-------------+-------------+
Edit 1: at a 12-hour time-frame resolution:
+------------------------------------------+-------------+-------------+
|window_frame_12_Hours | username A | username B |
+------------------------------------------+-------------+-------------+
|{2021-08-11 00:00:00, 2021-08-11 11:59:59}|2 |2 |
|{2021-08-11 12:00:00, 2021-08-11 23:59:59}|1 |0 |
|{2021-08-12 00:00:00, 2021-08-12 11:59:59}|0 |0 |
|{2021-08-12 12:00:00, 2021-08-12 23:59:59}|0 |0 |
|{2021-08-13 00:00:00, 2021-08-13 11:59:59}|0 |1 |
|{2021-08-13 12:00:00, 2021-08-13 23:59:59}|0 |0 |
+------------------------------------------+-------------+-------------+

Posted on 2022-01-31 11:19:02
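For intuition, the bucketing being asked for is plain epoch-aligned tumbling-window arithmetic, which is what Spark's F.window computes; a stdlib-only Python sketch of the 24-hour case (outside of Spark, just to make the expected counts concrete):

```python
from collections import Counter
from datetime import datetime, timedelta

# Sample events from the question.
events = [
    ("2021-08-11 04:05:06", "A"), ("2021-08-11 04:15:06", "B"),
    ("2021-08-11 09:15:26", "A"), ("2021-08-11 11:04:06", "B"),
    ("2021-08-11 14:55:16", "A"), ("2021-08-13 04:12:11", "B"),
]

def window_start(ts, resolution):
    # Tumbling windows aligned to the Unix epoch (Spark's default alignment).
    epoch = datetime(1970, 1, 1)
    return epoch + (ts - epoch) // resolution * resolution

# Count events per (24-hour frame, user).
counts = Counter()
for raw, user in events:
    ts = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
    counts[(window_start(ts, timedelta(hours=24)), user)] += 1
```

This reproduces the expected daily table: 3 events for A and 2 for B on 2021-08-11, and 1 for B on 2021-08-13 (empty frames such as 2021-08-12 simply do not appear, which is the gap the second part of the answer below has to fill).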
Group by the time window '1 day' plus UserName to get the counts, then group by the window frame and pivot on UserName:
from pyspark.sql import functions as F

result = df.groupBy(
    F.window("timestamp", "1 day").alias("window_frame_24_Hours"),
    "UserName"
).count().groupBy("window_frame_24_Hours").pivot("UserName").agg(
    F.first("count")
).na.fill(0)
result.show(truncate=False)
#+------------------------------------------+---+---+
#|window_frame_24_Hours |A |B |
#+------------------------------------------+---+---+
#|{2021-08-13 00:00:00, 2021-08-14 00:00:00}|0 |1 |
#|{2021-08-11 00:00:00, 2021-08-12 00:00:00}|3 |2 |
#+------------------------------------------+---+---+
If you need the missing dates, you have to generate all the dates using sequence on the min and max timestamps, then join them with the original data:
intervals_df = df.withColumn(
    "timestamp",
    F.date_trunc("day", "timestamp")
).selectExpr(
    "sequence(min(timestamp), max(timestamp + interval 1 day), interval 1 day) as dates"
).select(
    F.explode(
        F.expr("transform(dates, (x, i) -> IF(i!=0, struct(date_trunc('dd', dates[i-1]) as start, dates[i] as end), null))")
    ).alias("frame")
).filter("frame is not null").crossJoin(
    df.select("UserName").distinct()
)

result = intervals_df.alias("a").join(
    df.alias("b"),
    F.col("timestamp").between(F.col("frame.start"), F.col("frame.end"))
    & (F.col("a.UserName") == F.col("b.UserName")),
    "left"
).groupBy(
    F.col("frame").alias("window_frame_24_Hours")
).pivot("a.UserName").agg(
    F.count("b.UserName")
)
result.show(truncate=False)
#+------------------------------------------+----------+----------+
#|window_frame_24_Hours |username_A|username_B|
#+------------------------------------------+----------+----------+
#|{2021-08-13 00:00:00, 2021-08-14 00:00:00}|0 |1 |
#|{2021-08-11 00:00:00, 2021-08-12 00:00:00}|3 |2 |
#|{2021-08-12 00:00:00, 2021-08-13 00:00:00}|0 |0 |
#+------------------------------------------+----------+----------+

https://stackoverflow.com/questions/70924818