
Apache Spark on YARN: startup time too long (10+ secs)

Stack Overflow user
Asked on 2015-05-07 01:07:10
4 answers · 8.4K views · 0 following · 16 votes

I'm running a Spark application in yarn-client or yarn-cluster mode.

But it seems to take too long to start up.

It takes 10+ seconds to initialize the Spark context.

Is this normal? Or can it be optimized?

The environment is as follows:

  • Hadoop: Hortonworks HDP 2.2 (Hadoop 2.6) (a small test cluster with 3 data nodes)
  • Spark: 1.3.1
  • Client: Windows 7, but similar results were observed on CentOS 6.6

Below is the startup part of the application log. (Some private information has been edited out.)

'Main: Initializing context' on the first line and 'MainProcessor: Deleting previous output files' on the last line are logged by the application; everything in between is from Spark itself. The application logic runs after this log is printed.

15/05/07 09:18:31 INFO Main: Initializing context
15/05/07 09:18:31 INFO SparkContext: Running Spark version 1.3.1
15/05/07 09:18:31 INFO SecurityManager: Changing view acls to: myuser,myapp
15/05/07 09:18:31 INFO SecurityManager: Changing modify acls to: myuser,myapp
15/05/07 09:18:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(myuser, myapp); users with modify permissions: Set(myuser, myapp)
15/05/07 09:18:31 INFO Slf4jLogger: Slf4jLogger started
15/05/07 09:18:31 INFO Remoting: Starting remoting
15/05/07 09:18:31 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@mymachine:54449]
15/05/07 09:18:31 INFO Utils: Successfully started service 'sparkDriver' on port 54449.
15/05/07 09:18:31 INFO SparkEnv: Registering MapOutputTracker
15/05/07 09:18:32 INFO SparkEnv: Registering BlockManagerMaster
15/05/07 09:18:32 INFO DiskBlockManager: Created local directory at C:\Users\myuser\AppData\Local\Temp\spark-2d3db9d6-ea78-438e-956f-be9c1dcf3a9d\blockmgr-e9ade223-a4b8-4d9f-b038-efd66adf9772
15/05/07 09:18:32 INFO MemoryStore: MemoryStore started with capacity 1956.7 MB
15/05/07 09:18:32 INFO HttpFileServer: HTTP File server directory is C:\Users\myuser\AppData\Local\Temp\spark-ff40d73b-e8ab-433e-88c4-35da27fb6278\httpd-def9220f-ac3a-4dd2-9ac1-2c593b94b2d9
15/05/07 09:18:32 INFO HttpServer: Starting HTTP Server
15/05/07 09:18:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/07 09:18:32 INFO AbstractConnector: Started SocketConnector@0.0.0.0:54450
15/05/07 09:18:32 INFO Utils: Successfully started service 'HTTP file server' on port 54450.
15/05/07 09:18:32 INFO SparkEnv: Registering OutputCommitCoordinator
15/05/07 09:18:32 INFO Server: jetty-8.y.z-SNAPSHOT
15/05/07 09:18:32 INFO AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/05/07 09:18:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/05/07 09:18:32 INFO SparkUI: Started SparkUI at http://mymachine:4040
15/05/07 09:18:32 INFO SparkContext: Added JAR file:/D:/Projects/MyApp/MyApp.jar at http://10.111.111.199:54450/jars/MyApp.jar with timestamp 1430957912240
15/05/07 09:18:32 INFO RMProxy: Connecting to ResourceManager at cluster01/10.111.111.11:8050
15/05/07 09:18:32 INFO Client: Requesting a new application from cluster with 3 NodeManagers
15/05/07 09:18:32 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (23040 MB per container)
15/05/07 09:18:32 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/05/07 09:18:32 INFO Client: Setting up container launch context for our AM
15/05/07 09:18:32 INFO Client: Preparing resources for our AM container
15/05/07 09:18:32 INFO Client: Source and destination file systems are the same. Not copying hdfs://cluster01/apps/spark/spark-assembly-1.3.1-hadoop2.6.0.jar
15/05/07 09:18:32 INFO Client: Setting up the launch environment for our AM container
15/05/07 09:18:33 INFO SecurityManager: Changing view acls to: myuser,myapp
15/05/07 09:18:33 INFO SecurityManager: Changing modify acls to: myuser,myapp
15/05/07 09:18:33 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(myuser, myapp); users with modify permissions: Set(myuser, myapp)
15/05/07 09:18:33 INFO Client: Submitting application 2 to ResourceManager
15/05/07 09:18:33 INFO YarnClientImpl: Submitted application application_1430956687773_0002
15/05/07 09:18:34 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:34 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: default
     start time: 1430957906540
     final status: UNDEFINED
     tracking URL: http://cluster01:8088/proxy/application_1430956687773_0002/
     user: myapp
15/05/07 09:18:35 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:36 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:37 INFO Client: Application report for application_1430956687773_0002 (state: ACCEPTED)
15/05/07 09:18:37 INFO YarnClientSchedulerBackend: ApplicationMaster registered as Actor[akka.tcp://sparkYarnAM@cluster02:39698/user/YarnAM#-1579648782]
15/05/07 09:18:37 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> cluster01, PROXY_URI_BASES -> http://cluster01:8088/proxy/application_1430956687773_0002), /proxy/application_1430956687773_0002
15/05/07 09:18:37 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
15/05/07 09:18:38 INFO Client: Application report for application_1430956687773_0002 (state: RUNNING)
15/05/07 09:18:38 INFO Client: 
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: cluster02
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1430957906540
     final status: UNDEFINED
     tracking URL: http://cluster01:8088/proxy/application_1430956687773_0002/
     user: myapp
15/05/07 09:18:38 INFO YarnClientSchedulerBackend: Application application_1430956687773_0002 has started running.
15/05/07 09:18:38 INFO NettyBlockTransferService: Server created on 54491
15/05/07 09:18:38 INFO BlockManagerMaster: Trying to register BlockManager
15/05/07 09:18:38 INFO BlockManagerMasterActor: Registering block manager mymachine:54491 with 1956.7 MB RAM, BlockManagerId(<driver>, mymachine, 54491)
15/05/07 09:18:38 INFO BlockManagerMaster: Registered BlockManager
15/05/07 09:18:43 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@cluster02:44996/user/Executor#-786778979] with ID 1
15/05/07 09:18:43 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
15/05/07 09:18:43 INFO MainProcessor: Deleting previous output files

Thanks.

Update

I think I've found the (maybe partial, but major) reason.

It's between these lines:

15/05/08 11:36:32 INFO BlockManagerMaster: Registered BlockManager
15/05/08 11:36:38 INFO YarnClientSchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@cluster04:55237/user/Executor#-149550753] with ID 1

When I read the logs on the cluster side, I found the following lines: (the exact times differ from the lines above, but that's just the clock difference between machines)

15/05/08 11:36:23 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 5000
15/05/08 11:36:28 INFO impl.AMRMClientImpl: Received new token for : cluster04:45454

It seems that Spark deliberately sleeps for 5 seconds.

I've read the Spark source code. In org.apache.spark.deploy.yarn.ApplicationMaster.scala, launchReporterThread() has the corresponding code. It loops, calling allocator.allocateResources() and Thread.sleep(). For the sleep, it reads the configuration variable spark.yarn.scheduler.heartbeat.interval-ms (the default value is 5000, which is 5 seconds). According to the comment, "we want to be reasonably responsive without causing too many requests to RM". So, unless YARN immediately fulfills the allocation request, 5 seconds will be wasted.
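As an illustration only (this is not Spark's actual code, and the fake allocator below is invented for the example), the shape of that loop and the effect of the heartbeat interval can be sketched like this:

```python
import time

def wait_for_allocation(allocate, heartbeat_interval_ms, clock=time.monotonic):
    """Poll allocate() until it reports success; return elapsed seconds.

    Mirrors the reporter-thread pattern: request resources, then sleep a
    fixed heartbeat interval before asking again.
    """
    start = clock()
    while not allocate():
        time.sleep(heartbeat_interval_ms / 1000.0)
    return clock() - start

def make_allocator():
    """Fake allocator that succeeds only on the second poll, mimicking YARN
    granting the request shortly after the first heartbeat."""
    calls = {"n": 0}
    def allocate():
        calls["n"] += 1
        return calls["n"] >= 2
    return allocate

# With a long interval, one full heartbeat is wasted waiting; a shorter
# interval shrinks the wasted time proportionally (scaled down here from
# 5000 ms / 1000 ms to keep the example fast).
slow = wait_for_allocation(make_allocator(), heartbeat_interval_ms=50)
fast = wait_for_allocation(make_allocator(), heartbeat_interval_ms=10)
assert slow > fast
```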

When I changed the configuration variable to 1000, it waited only 1 second.

Here are the log lines after the change:

15/05/08 11:47:21 INFO yarn.ApplicationMaster: Started progress reporter thread - sleep time : 1000
15/05/08 11:47:22 INFO impl.AMRMClientImpl: Received new token for : cluster04:45454

4 seconds saved.

So, when you don't want to wait 5 seconds, you can change spark.yarn.scheduler.heartbeat.interval-ms.
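For example, a spark-submit invocation passing the shorter interval might look like this (the master, class, and jar names are placeholders for your own application; only the --conf flag is the point):

```shell
spark-submit \
  --master yarn-client \
  --conf spark.yarn.scheduler.heartbeat.interval-ms=1000 \
  --class com.example.MainProcessor \
  MyApp.jar
```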

I hope the additional overhead it incurs is negligible.

Update

A related JIRA issue has been opened and resolved. See https://issues.apache.org/jira/browse/SPARK-7533


4 Answers

Stack Overflow user

Accepted answer

Posted on 2015-05-07 11:16:05

That's pretty typical. On my system it takes about 20 seconds from running spark-submit until a SparkContext is obtained.

As mentioned in the docs, the solution is to turn your driver into an RPC server. That way you initialize it once, and other applications can then use the driver's context as a service.

I am doing this with my application. I'm using http4s and turning my driver into a web server.
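As a minimal sketch of that pattern (a Python analogue using the standard library, not the answerer's http4s code; init_context() is a hypothetical stand-in for the expensive SparkContext initialization):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def init_context():
    """Hypothetical stand-in for the expensive one-time initialization;
    a real driver would build its SparkContext here, once, at startup."""
    return {"app": "MyApp"}

CONTEXT = init_context()  # the 10+ second cost is paid exactly once

class ContextHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every request reuses the long-lived context instead of paying
        # the startup cost again.
        body = ("context ready: " + CONTEXT["app"]).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def start_server():
    """Serve the shared context on a free local port; return the port."""
    server = HTTPServer(("127.0.0.1", 0), ContextHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server.server_address[1]

if __name__ == "__main__":
    port = start_server()
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/") as resp:
        print(resp.read().decode())
```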

3 votes

Stack Overflow user

Posted on 2018-07-28 06:22:28

For faster Spark context creation.

Tested on EMR:

  1. cd /usr/lib/spark/jars/; zip /tmp/yarn-archive.zip *.jar
  2. cd path/to/folder/of/someOtherDependancy/jarFolder/; zip /tmp/yarn-archive.zip jar-file.jar
  3. zip -Tv /tmp/yarn-archive.zip to test integrity and for verbose debugging
  4. If yarn-archive.zip already exists on HDFS: hdfs dfs -rm -r -f -skipTrash /user/hadoop/yarn-archive.zip, then hdfs dfs -put /tmp/yarn-archive.zip /user/hadoop/
  5. Use --conf spark.yarn.archive="hdfs:///user/hadoop/yarn-archive.zip" in spark-submit

The reason this works is that the master does not have to distribute all the jars to the slaves. They are available to it from a common HDFS path, here hdfs:///user/hadoop/yarn-archive.zip.

I realized it can save you 3-5 seconds; the exact saving also depends on the number of nodes in the cluster. The more nodes, the more time you save.

4 votes

Stack Overflow user

Posted on 2017-05-25 22:35:43

You can check out Apache Livy, which is a REST interface in front of Spark.

You can have one session and send multiple requests against that one Spark/Livy session.
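As a sketch of how that looks against Livy's REST API (assuming a Livy server running on its default port 8998; the Scala snippet in the statement body is just an example):

```shell
# Create one long-lived session, paying the Spark startup cost once.
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"kind": "spark"}' \
  http://localhost:8998/sessions

# Then submit many statements against that same session (id 0 here),
# each of which reuses the already-initialized context.
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"code": "sc.parallelize(1 to 10).count()"}' \
  http://localhost:8998/sessions/0/statements
```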

2 votes
The original content of this page was provided by Stack Overflow.

Original link: https://stackoverflow.com/questions/30090226
