I am trying to use the crealytics/spark-excel library to write several Java Datasets into a single Excel file with multiple worksheets.
<dependency>
    <groupId>com.crealytics</groupId>
    <artifactId>spark-excel_2.11</artifactId>
    <version>0.13.0</version>
</dependency>

How can I give each of these separate Excel worksheets its own name?
Here is what I am trying to do:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder().appName("LineQuery").getOrCreate();
Dataset<Row> df1 = spark.sql("SELECT * FROM my_table1");
Dataset<Row> df2 = spark.sql("SELECT * FROM my_table2");
df1.write().format("com.crealytics.spark.excel").option("sheetName", "My Sheet 1").option("header", "true").save("hdfs://127.0.0.1:9000/var/www/" + outFile + ".xls");
df2.write().format("com.crealytics.spark.excel").option("sheetName", "My Sheet 2").option("header", "true").mode(SaveMode.Append).save("hdfs://127.0.0.1:9000/var/www/" + outFile + ".xls");

Posted on 2020-03-03 15:42:22
Use the dataAddress option instead.
Example:
>>> df = spark.createDataFrame([(11, 12), (21, 22)])
>>> df.show()
+---+---+
| _1| _2|
+---+---+
| 11| 12|
| 21| 22|
+---+---+
>>> df.where("_1 == 11").write.format("com.crealytics.spark.excel").option("dataAddress", "my sheet 1[#All]").option("header", "true").mode("append").save("/tmp/excel-df.xlsx")
>>> df.where("_1 == 21").write.format("com.crealytics.spark.excel").option("dataAddress", "my sheet 2[#All]").option("header", "true").mode("append").save("/tmp/excel-df.xlsx")

https://stackoverflow.com/questions/60500266
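Applied back to the Java code from the question, the same fix might look like the sketch below. This is an unverified sketch, not a tested implementation: it assumes the question's tables and HDFS path, replaces the question's outFile variable with a placeholder name, and uses the .xlsx extension to match the answer's example. The key changes are swapping sheetName for dataAddress and putting both writes in append mode so the second sheet is added to the same workbook.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class MultiSheetWriter {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("LineQuery").getOrCreate();
        Dataset<Row> df1 = spark.sql("SELECT * FROM my_table1");
        Dataset<Row> df2 = spark.sql("SELECT * FROM my_table2");

        // Placeholder output name; the question builds this from an outFile variable.
        String path = "hdfs://127.0.0.1:9000/var/www/out.xlsx";

        // dataAddress selects the target sheet; [#All] addresses the whole sheet.
        df1.write().format("com.crealytics.spark.excel")
                .option("dataAddress", "My Sheet 1[#All]")
                .option("header", "true")
                .mode(SaveMode.Append)
                .save(path);

        // Append mode adds this sheet to the existing workbook
        // instead of overwriting the file written above.
        df2.write().format("com.crealytics.spark.excel")
                .option("dataAddress", "My Sheet 2[#All]")
                .option("header", "true")
                .mode(SaveMode.Append)
                .save(path);

        spark.stop();
    }
}
```

Note that the sketch requires a running Spark installation with the spark-excel jar on the classpath, so it is meant as a starting point rather than a drop-in program.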