Spark broadcast fails

Stack Overflow user
Asked on 2018-09-11 16:52:15
1 answer · 2.2K views · 0 followers · 2 votes

I am very new to Spark and am trying to filter one RDD based on another, as described here.

My filter data is in a CSV file in S3. The CSV file is 1.7 GB and has 100M rows. Each row has a unique 10-character-long id. My plan is to extract these ids from the CSV file into an in-memory set, then broadcast that set and use it to filter another RDD.

My code looks like this:

val sparkContext: SparkContext = new SparkContext()

val filterSet = sparkContext
  .textFile("s3://.../filter.csv") // this is the 1.7GB csv file
  .map(_.split(",")(0)) // each string here has exactly 10 chars (A-Z|0-9)
  .collect()
  .toSet // ~100M 10 char long strings in set.

val filterSetBC = sparkContext.broadcast(filterSet) // THIS LINE IS FAILING

val otherRDD = ...

otherRDD
  .filter(item => filterSetBC.value.contains(item.id))
  .saveAsTextFile("s3://...")

I am running this on AWS on 10 m4.2xlarge (16 vCore, 32 GB memory) EC2 instances and get the following error:

18/09/06 17:15:33 INFO UnifiedMemoryManager: Will not store broadcast_2 as the required space (16572507620 bytes) exceeds our memory limit (13555256524 bytes)
18/09/06 17:15:33 WARN MemoryStore: Not enough space to cache broadcast_2 in memory! (computed 10.3 GB so far)
18/09/06 17:15:33 INFO MemoryStore: Memory use = 258.6 KB (blocks) + 1024.0 KB (scratch space shared across 1 tasks(s)) = 1282.6 KB. Storage limit = 12.6 GB.
18/09/06 17:15:33 WARN BlockManager: Persisting block broadcast_2 to disk instead.
18/09/06 17:18:54 WARN BlockManager: Putting block broadcast_2 failed due to exception java.lang.ArrayIndexOutOfBoundsException: 1073741865.
18/09/06 17:18:54 WARN BlockManager: Block broadcast_2 could not be removed as it was not found on disk or in memory
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1073741865
    at com.esotericsoftware.kryo.util.IdentityObjectIntMap.clear(IdentityObjectIntMap.java:382)
    at com.esotericsoftware.kryo.util.MapReferenceResolver.reset(MapReferenceResolver.java:65)
    at com.esotericsoftware.kryo.Kryo.reset(Kryo.java:865)
    at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:630)
    at org.apache.spark.serializer.KryoSerializationStream.writeObject(KryoSerializer.scala:241)
    at org.apache.spark.serializer.SerializationStream.writeAll(Serializer.scala:140)
    at org.apache.spark.serializer.SerializerManager.dataSerializeStream(SerializerManager.scala:174)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1$$anonfun$apply$7.apply(BlockManager.scala:1101)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1$$anonfun$apply$7.apply(BlockManager.scala:1099)
    at org.apache.spark.storage.DiskStore.put(DiskStore.scala:68)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1099)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1018)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1083)
    at org.apache.spark.storage.BlockManager.putIterator(BlockManager.scala:841)
    at org.apache.spark.storage.BlockManager.putSingle(BlockManager.scala:1404)
    at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:123)
    at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
    at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
    at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
    at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1482)

As far as I can tell from the logs, the set I am trying to broadcast is around 15 GB. Naively, 100M × 10 characters is ~1 GB, and with some Java overhead I expected it to be around 5-6 GB.

Question 1: Why is my set so huge in memory? How can I keep it to a minimum?
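For rough context on question 1: a JVM String costs far more than its 10 characters of payload once object headers, the backing char array, and hash-set entries are counted. The byte counts below are assumptions for a 64-bit JVM with compressed oops (not measurements from this job), but they land close to the "computed 10.3 GB so far" line in the log; the idToLong helper is likewise a hypothetical sketch, not from the original post.

// Back-of-envelope per-element cost of holding a 10-char id in a Set[String]
// (assumed sizes for a 64-bit JVM with compressed oops; exact numbers vary by JVM):
val charArrayBytes    = 16 + 2 * 10 + 4  // array header + 10 UTF-16 chars + padding ≈ 40
val stringHeaderBytes = 24               // String object header + value ref + cached hash
val hashEntryBytes    = 40               // hash-table node + slot in the bucket array
val perId   = charArrayBytes + stringHeaderBytes + hashEntryBytes  // ≈ 100 bytes per id
val totalGb = 100e6 * perId / 1e9                                  // ≈ 10 GB before serialization

// One way to shrink it (a sketch, not from the original post): the ids use only
// A-Z|0-9, i.e. 36 symbols, so 36^10 ≈ 3.7e15 fits comfortably in a Long.
def idToLong(id: String): Long =
  id.foldLeft(0L) { (acc, c) =>
    acc * 36 + (if (c.isDigit) c - '0' else c - 'A' + 10)
  }
// A sorted Array[Long] of 100M ids is ~800 MB and can be probed with
// java.util.Arrays.binarySearch, which broadcasts far more comfortably than a Set[String].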

Still, I have configured my executors to use 22 GB (executor-memory) + 2 GB (spark.executor.memoryOverhead) of memory.

Question 2: Why does it say this exceeds the memory limit (12.6 GB)? Where does this 12.6 GB limit come from?
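For question 2, a plausible sketch of where the 12.6 GB comes from: sparkContext.broadcast runs on the driver (the stack trace is in the main thread), so it is the driver's storage memory that matters, and Spark's UnifiedMemoryManager caps unified (storage + execution) memory at roughly (usable heap − 300 MB reserved) × spark.memory.fraction (default 0.6). The usable heap the JVM reports is somewhat below -Xmx, so the ~21.3 GiB figure below is an assumption for illustration rather than the exact computation on this cluster:

// Sketch of the UnifiedMemoryManager limit with default settings (Spark 2.x):
val reservedMb     = 300L    // fixed reserved system memory
val memoryFraction = 0.6     // spark.memory.fraction default
val usableHeapMb   = 21800L  // assumption: Runtime.getRuntime.maxMemory for -Xmx24g,
                             // which excludes a survivor space and so is below 24 GiB
val unifiedLimitMb = (usableHeapMb - reservedMb) * memoryFraction
println(f"${unifiedLimitMb / 1024}%.1f GiB")  // ≈ 12.6 GiB, matching the storage limit in the log

If this is indeed the cap being hit, raising the driver heap or spark.memory.fraction only postpones the problem; shrinking the broadcast itself (see the sketch under question 1) attacks the root cause.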

I think I have messed up the spark-submit parameters. Here they are:

--deploy-mode cluster 
--class com.example.MySparkJob
--master yarn
--driver-memory 24G
--executor-cores 15
--executor-memory 22G
--num-executors 9
--deploy-mode client
--conf spark.default.parallelism=1200
--conf spark.speculation=true
--conf spark.rdd.compress=true
--conf spark.files.fetchTimeout=180s
--conf spark.network.timeout=300s
--conf spark.yarn.max.executor.failures=5000
--conf spark.dynamicAllocation.enabled=true   // also tried without this parameter, no changes
--conf spark.driver.maxResultSize=0
--conf spark.executor.memoryOverhead=2G
--conf spark.serializer=org.apache.spark.serializer.KryoSerializer
--conf spark.kryo.registrator=com.example.MyKryoRegistrator
--driver-java-options -XX:+UseCompressedOops

1 Answer

Stack Overflow user

Answered on 2019-02-12 22:25:45

First, please do not allocate such a huge driver memory; 4 GB is enough. Second, reduce executor-cores from 15 (fewer cores per executor gives you more executors, not just a few). Third, if you have more memory, increase the number of executors from 9 to 45 (if not, use 18 executors with executor-memory of 16 GB).
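Expressed as spark-submit flags, the suggestion might look roughly like this; the executor-cores value of 5 is an assumed example (the answer only says to use fewer cores per executor), and none of these values were tested on this workload:

--driver-memory 4G
--executor-cores 5       // assumed smaller value; fewer cores per executor yields more executors
--num-executors 45       // if the cluster has the memory for it
// otherwise, with less memory available:
--num-executors 18
--executor-memory 16G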

Votes: 0
Original question on Stack Overflow:

https://stackoverflow.com/questions/52280738
