
Spark FileAlreadyExistsException on stage failure

Stack Overflow user
Asked on 2019-08-13 13:47:55
1 answer · 7.5K views · 0 followers · 6 votes

I am repartitioning a DataFrame and writing it to an S3 location. Whenever the write stage fails and Spark retries it, the retry throws a FileAlreadyExistsException.

If I resubmit the job and Spark completes the stage in a single attempt, it works fine.

Here is my code:

df.repartition(<some-value>).write.format("orc").option("compression", "zlib").mode("Overwrite").save(path)

I would expect Spark to delete the leftover files from the failed stage before retrying. I know the problem would go away if retries were set to zero, but stages are expected to fail occasionally, so that is not a proper solution.
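For reference, "setting retries to zero" corresponds to spark.task.maxFailures=1, since that setting counts allowed failures: with a value of 1, the first failed attempt aborts the job and no retry ever collides with the leftover file. A minimal Scala sketch (the app name is hypothetical) of this workaround, which trades away fault tolerance:

import org.apache.spark.sql.SparkSession

// With spark.task.maxFailures=1, one task failure aborts the job,
// so a failed attempt's leftover part file is never revisited.
// This silences the FileAlreadyExistsException but removes retries,
// which is exactly why it is not a proper fix.
val spark = SparkSession.builder()
  .appName("no-retry-demo") // hypothetical app name
  .config("spark.task.maxFailures", "1")
  .getOrCreate()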

Here is the error:

Job aborted due to stage failure: Task 0 in stage 6.1 failed 4 times, most recent failure: Lost task 0.3 in stage 6.1 (TID 740, ip-address, executor 170): org.apache.hadoop.fs.FileAlreadyExistsException: File already exists:s3://<bucket-name>/<path-to-object>/part-00000-c3c40a57-7a50-41da-9ce2-555753cab63a-c000.zlib.orc
    at com.amazon.ws.emr.hadoop.fs.s3.upload.plan.RegularUploadPlanner.checkExistenceIfNotOverwriting(RegularUploadPlanner.java:36)
    at com.amazon.ws.emr.hadoop.fs.s3.upload.plan.RegularUploadPlanner.plan(RegularUploadPlanner.java:30)
    at com.amazon.ws.emr.hadoop.fs.s3.upload.plan.UploadPlannerChain.plan(UploadPlannerChain.java:37)
    at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.create(S3NativeFileSystem.java:601)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:932)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
    at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.create(EmrFileSystem.java:242)
    at org.apache.orc.impl.PhysicalFsWriter.<init>(PhysicalFsWriter.java:95)
    at org.apache.orc.impl.WriterImpl.<init>(WriterImpl.java:170)
    at org.apache.orc.OrcFile.createWriter(OrcFile.java:843)
    at org.apache.orc.mapreduce.OrcOutputFormat.getRecordWriter(OrcOutputFormat.java:50)
    at org.apache.spark.sql.execution.datasources.orc.OrcOutputWriter.<init>(OrcOutputWriter.scala:43)
    at org.apache.spark.sql.execution.datasources.orc.OrcFileFormat$$anon$1.newInstance(OrcFileFormat.scala:121)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:120)
    at org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:108)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:233)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:168)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:

I am using Spark 2.4 on EMR. Please suggest a solution.

Edit 1: Note that the issue is not with overwrite mode; I am already using it. As the question title suggests, the problem is the files left over from a failed stage. The Spark UI may make this clearer.


1 Answer

Stack Overflow user
Accepted answer
Answered on 2019-08-13 14:17:34

Set spark.hadoop.orc.overwrite.output.file=true in your Spark configuration.

You can find more details about this configuration in OrcConf.java.
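The answer does not show where the setting goes; a minimal Scala sketch, assuming it is applied when the SparkSession is built (the app name, partition count, and placeholder paths are illustrative):

import org.apache.spark.sql.SparkSession

// Properties prefixed with spark.hadoop. are forwarded to the Hadoop
// Configuration, so this sets orc.overwrite.output.file=true for the
// ORC writer, letting a retried task overwrite the part file left
// behind by a failed attempt instead of throwing
// FileAlreadyExistsException.
val spark = SparkSession.builder()
  .appName("orc-overwrite-on-retry") // hypothetical app name
  .config("spark.hadoop.orc.overwrite.output.file", "true")
  .getOrCreate()

val df = spark.read.parquet("s3://<bucket-name>/<input-path>") // placeholder input

df.repartition(200) // illustrative partition count
  .write.format("orc")
  .option("compression", "zlib")
  .mode("overwrite")
  .save("s3://<bucket-name>/<output-path>") // placeholder output

The same property can also be passed without a code change via spark-submit --conf spark.hadoop.orc.overwrite.output.file=true.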

5 votes
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/57471781
