I want to override the spark.sql.shuffle.partitions parameter directly in code:
val sparkSession = SparkSession
.builder()
.appName("SPARK")
.getOrCreate()
sparkSession.conf.set("spark.sql.shuffle.partitions", 2)但此设置不会生效,因为在日志中我收到以下警告消息:
WARN OffsetSeqMetadata:66 - Updating the value of conf 'spark.sql.shuffle.partitions' in current session from '2' to '200'.
The same parameter passed in the spark-submit shell script works fine, though:
#!/bin/bash
/app/spark-2/bin/spark-submit \
--queue root.dev \
--master yarn \
--deploy-mode cluster \
--driver-memory 5G \
--executor-memory 4G \
--executor-cores 2 \
--num-executors 4 \
--conf spark.app.name=SPARK \
--conf spark.executor.memoryOverhead=2048 \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.sql.shuffle.partitions=2 \
--class com.dev.MainClass
Any ideas?
Posted on 2021-04-30 19:49:54
In the checkpoint files of your Spark Structured Streaming job, some SparkSession configurations are stored.
For example, in the "offsets" folder, the file for the latest batch might look like this:
v1
{"batchWatermarkMs":0,"batchTimestampMs":1619782960476,"conf":{"spark.sql.streaming.stateStore.providerClass":"org.apache.spark.sql.execution.streaming.state.HDFSBackedStateStoreProvider","spark.sql.streaming.join.stateFormatVersion":"2","spark.sql.streaming.stateStore.compression.codec":"lz4","spark.sql.streaming.flatMapGroupsWithState.stateFormatVersion":"2","spark.sql.streaming.multipleWatermarkPolicy":"min","spark.sql.streaming.aggregation.stateFormatVersion":"2","spark.sql.shuffle.partitions":"200"}}
4
Among other things, it stores the value of the configuration spark.sql.shuffle.partitions, which in my example is set to the default value of 200.
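If you want to inspect what your own checkpoint has recorded, a small sketch like the following prints the latest offsets file. The checkpoint path /tmp/checkpoint is an assumption here; substitute your query's checkpointLocation.
import java.nio.file.{Files, Paths}
import scala.collection.JavaConverters._

// Hypothetical checkpoint location; replace with your query's checkpointLocation.
val offsetsDir = Paths.get("/tmp/checkpoint/offsets")

// Offset files are named after their batch id; pick the highest-numbered one,
// skipping any files whose names are not plain digits.
val latest = Files.list(offsetsDir).iterator().asScala
  .filter(_.getFileName.toString.forall(_.isDigit))
  .maxBy(_.getFileName.toString.toLong)

// The second line of the file is the JSON metadata that contains
// "spark.sql.shuffle.partitions".
Files.readAllLines(latest).asScala.foreach(println)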
In the Spark source code you will see that this configuration value is replaced whenever it is available in the metadata of the checkpoint files.
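Conceptually, the restore step behaves like the simplified sketch below; this is an illustration of the idea, not Spark's actual implementation.
import org.apache.spark.sql.SparkSession

// Simplified sketch, not Spark's real code: on restart, any conf recorded
// in the checkpoint metadata wins over what the current session has set.
def restoreCheckpointedConf(spark: SparkSession,
                            checkpointed: Map[String, String]): Unit =
  checkpointed.foreach { case (key, value) =>
    // Here "2" is replaced by the stored "200", which is exactly what
    // the WARN message from OffsetSeqMetadata reports.
    spark.conf.set(key, value)
  }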
If you really need to change the number of partitions, either delete all checkpoint files or manually change the value in the last checkpoint file to 2.
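For a new job, the safest pattern is to set the value before the query's very first start, against a fresh checkpoint location. Below is a minimal sketch; the rate source and the path /tmp/checkpoint-v2 are placeholders for illustration.
import org.apache.spark.sql.SparkSession

// Set the value at session build time and point the query at a *new*
// checkpoint directory, so no previously stored metadata can override it.
val spark = SparkSession.builder()
  .appName("SPARK")
  .config("spark.sql.shuffle.partitions", "2")
  .getOrCreate()

val query = spark.readStream
  .format("rate")                // placeholder source for illustration
  .load()
  .groupBy("value").count()      // stateful aggregation -> uses the shuffle partitions
  .writeStream
  .outputMode("complete")
  .format("console")
  .option("checkpointLocation", "/tmp/checkpoint-v2")  // fresh, hypothetical path
  .start()

query.awaitTermination()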
https://stackoverflow.com/questions/67332724