I am reading data from HDFS into a Spark DataFrame. Based on the Status value, I need to insert 1/0/-1 into a quality column for Passed/Failed/Aborted, or alternatively, is it possible to compute the Pass %?
df = sparkSession.read.json(hdfsPath)
+-----------+---------+
|         ID|   Status|
+-----------+---------+
|Tsz3650419c|   Passed|
|Tsz3650420c|   Failed|
|Tsz3650421c|   Passed|
|Tsz3650422c|   Passed|
|Tsz3650423c|  Aborted|
+-----------+---------+

Posted on 2018-01-29 17:39:09
If the data looks like this:
from pyspark.sql.functions import avg, col, when

df = spark.createDataFrame([
    ("Tsz3650419c", "Passed"), ("Tsz3650420c", "Failed"),
    ("Tsz3650421c", "Passed"), ("Tsz3650422c", "Passed"),
    ("Tsz3650423c", "Aborted")
]).toDF("ID", "Status")

Define the levels:
levels = ["Passed", "Failed", "Aborted"]

exprs = [
    avg((col("Status") == level).cast("double")).alias(level)
    for level in levels]
df.groupBy("ID").agg(*exprs).show()
# +-----------+------+------+-------+
# | ID|Passed|Failed|Aborted|
# +-----------+------+------+-------+
# |Tsz3650422c| 1.0| 0.0| 0.0|
# |Tsz3650419c| 1.0| 0.0| 0.0|
# |Tsz3650423c| 0.0| 0.0| 1.0|
# |Tsz3650420c| 0.0| 1.0| 0.0|
# |Tsz3650421c| 1.0| 0.0| 0.0|
# +-----------+------+------+-------+

where avg((col("Status") == level).cast("double")) is the fraction of records in which the column has the given value. You can find further details in Count number of non-NaN entries in each column of Spark dataframe with Pyspark.
You can also pivot and compute counts, as shown in percentage count per group and pivot with pyspark.
https://stackoverflow.com/questions/48506775