I need some help converting a txt file into another txt file using the Spark shell in Scala. Below is the array I produced from my DataFrame:
scala> tempDF.collect()
res6: Array[org.apache.spark.sql.Row] = Array([bid|gender|department], [1|M|Informatics], [2|M|Low], [3|M|BusinessAdministration], [5|M|Mathematics], [6|M|Low], [7|M|Economics], [8|M|Economics], [9|M|Economics], [10|M|Economics], [11|M|Informatics], [13|M|Physics], [14|M|Informatics], [15|M|Informatics], [16|M|Economics], [17|M|Informatics], [18|M|Economics], [19|M|BusinessAdministration], [20|M|Mathematics], [21|M|Mathematics], [22|M|Economics], [23|M|Economics], [24|M|BusinessAdministration], [25|M|Informatics], [26|M|Statistics], [27|M|BusinessAdministration], [28|M|Economics], [29|M|Physics], [30|M|Physics], [31|M|Informatics], [32|M|Mathematics], [33|M|Economics], [34|M|BusinessAdministration], [35|M|Economics], [36|M|BusinessAdministration], [37|M|Mathema...
Now, how can I convert these rows (the values of each column) into tuples like "1-M-Informatics"? In the txt file, line 0 should be the header "bid gender department".
Posted on 2020-11-30 06:48:47
Write the DataFrame in CSV format, specifying that you want a header:
tempDF.write.format("csv").option("header", "true").save("file.txt")
Posted on 2020-11-30 02:06:41
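Note that save("file.txt") actually creates a directory of part files, not a single file. To keep the original pipe-separated layout shown in the question, one sketch (the "delimiter" option and the "output_dir" path here are assumptions for illustration):

```scala
// Sketch: write the DataFrame back out pipe-delimited, with a header row.
// "output_dir" is a hypothetical path; Spark writes a directory of part files there.
tempDF.write.
  format("csv").
  option("header", "true").
  option("delimiter", "|").
  save("output_dir")
```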
Spark supports reading delimited files with the .csv method.
Try reading the csv file with the delimiter specified, as shown below:
spark.
read.
option("header","true").
option("delimiter","|").
csv("<txt file path>").
show()
//+---+------+-----------+
//|bid|gender| department|
//+---+------+-----------+
//| 1| M|Informatics|
//| 2| M| Low|
//+---+------+-----------+
https://stackoverflow.com/questions/65065701
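To then produce the dash-joined tuples the question asks for ("1-M-Informatics"), one sketch using Spark's concat_ws function (assuming the DataFrame read above is bound to a val named df; "output_txt" is a hypothetical path):

```scala
import org.apache.spark.sql.functions.{col, concat_ws}

// Join each row's column values with "-" into a single string column,
// then write one line per row as plain text.
val lines = df.select(concat_ws("-", col("bid"), col("gender"), col("department")))
lines.write.text("output_txt")  // hypothetical output directory
```

The text writer requires a single string column, which is why the three columns are first collapsed with concat_ws.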