I am creating a collection of documents in Spark as an RDD, using the Spark read/write library from Elasticsearch. The cluster that builds the collection is large, so when it writes to ES I get the errors below indicating that ES is overloaded, which doesn't surprise me. This doesn't seem to fail the job: the tasks are presumably retried and eventually succeed, and Spark reports the job as completed successfully.
Here is one of the many reported task-failure errors; again, no job failure was reported:
2017-03-20 10:48:27,745 WARN org.apache.spark.scheduler.TaskSetManager [task-result-getter-2] - Lost task 568.1 in stage 81.0 (TID 18982, ip-172-16-2-76.ec2.internal): org.apache.spark.util.TaskCompletionListenerException: Could not write all entries [41/87360] (maybe ES was overloaded?). Bailing out...
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:112)
at org.apache.spark.scheduler.Task.run(Task.scala:102)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

The library I'm using is:
org.elasticsearch" % "elasticsearch-spark_2.10" % "2.1.2"发布于 2017-05-15 18:40:42
Could you follow this link: https://www.elastic.co/guide/en/elasticsearch/hadoop/current/spark.html. In your Spark properties or your Elasticsearch properties you need to increase the maximum number of records that can be dumped in a single bulk request; that should resolve your issue.
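A minimal sketch of where those settings go, using the bulk-request keys documented in the elasticsearch-hadoop configuration reference; the values here are illustrative, not recommendations:

    import org.apache.spark.SparkConf

    // Bulk-request tuning knobs from the elasticsearch-hadoop docs;
    // pick values that match what your ES cluster can absorb.
    val conf = new SparkConf()
      .set("es.batch.size.entries", "500")     // docs per bulk request (default 1000)
      .set("es.batch.size.bytes", "1mb")       // bytes per bulk request (default 1mb)
      .set("es.batch.write.retry.count", "6")  // retries before bailing out (default 3)
      .set("es.batch.write.retry.wait", "30s") // wait between retries (default 10s)

The same keys can also be passed per-write through the cfg map overload of saveToEs.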
https://stackoverflow.com/questions/42909860