
How do I re-run Spark (pyspark) code in a Jupyter cell?

Stack Overflow user
Asked on 2017-01-19 07:12:08
1 answer · 536 views · 0 followers · score 0

I created a SparkSession in Jupyter (using pyspark) and then read a .csv file.

The code runs fine the first time, but when I try to re-run the block that reads the .csv file a second time, I get the following error and I don't understand why:

   ---------------------------------------------------------------------------
    Py4JJavaError                             Traceback (most recent call last)
    <ipython-input-14-f65a29e5e6d3> in <module>()
    ----> 1 ccRaw.take(3)

    C:\Spark\spark-2.0.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py in take(self, num)
       1308 
       1309             p = range(partsScanned, min(partsScanned + numPartsToTry, totalParts))
    -> 1310             res = self.context.runJob(self, takeUpToNumLeft, p)
       1311 
       1312             items += res

    C:\Spark\spark-2.0.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\context.py in runJob(self, rdd, partitionFunc, partitions, allowLocal)
        939         # SparkContext#runJob.
        940         mappedRDD = rdd.mapPartitions(partitionFunc)
    --> 941         port = self._jvm.PythonRDD.runJob(self._jsc.sc(), mappedRDD._jrdd, partitions)
        942         return list(_load_from_socket(port, mappedRDD._jrdd_deserializer))
        943 

    C:\Spark\spark-2.0.0-bin-hadoop2.7\python\lib\py4j-0.10.1-src.zip\py4j\java_gateway.py in __call__(self, *args)
        931         answer = self.gateway_client.send_command(command)
        932         return_value = get_return_value(
    --> 933             answer, self.gateway_client, self.target_id, self.name)
        934 
        935         for temp_arg in temp_args:

    C:\Spark\spark-2.0.0-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\utils.py in deco(*a, **kw)
         61     def deco(*a, **kw):
         62         try:
    ---> 63             return f(*a, **kw)
         64         except py4j.protocol.Py4JJavaError as e:
         65             s = e.java_exception.toString()

    C:\Spark\spark-2.0.0-bin-hadoop2.7\python\lib\py4j-0.10.1-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
        310                 raise Py4JJavaError(
        311                     "An error occurred while calling {0}{1}{2}.\n".
    --> 312                     format(target_id, ".", name), value)
        313             else:
        314                 raise Py4JError(

    Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 8, localhost): java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(Unknown Source)
        at java.net.SocketOutputStream.write(Unknown Source)
        at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
        at java.io.BufferedOutputStream.flush(Unknown Source)
        at java.io.DataOutputStream.flush(Unknown Source)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:331)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
        at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1659)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1618)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1607)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:632)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1871)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1884)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1897)
        at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:441)
        at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:280)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:128)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:211)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.net.SocketException: Connection reset by peer: socket write error
        at java.net.SocketOutputStream.socketWrite0(Native Method)
        at java.net.SocketOutputStream.socketWrite(Unknown Source)
        at java.net.SocketOutputStream.write(Unknown Source)
        at java.io.BufferedOutputStream.flushBuffer(Unknown Source)
        at java.io.BufferedOutputStream.flush(Unknown Source)
        at java.io.DataOutputStream.flush(Unknown Source)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:331)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1857)
        at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)

Here is the code I use to read the .csv file:

# =========================  Setup Spark ============================#

import os
import sys

# NOTE: Please change the folder paths to your current setup.
#Windows
if sys.platform.startswith('win'):
    #Where you downloaded the resource bundle
    os.chdir("C:/Users/home")
    #Where you installed spark.    
    os.environ['SPARK_HOME'] = 'C:/Spark/spark-2.0.0-bin-hadoop2.7'

os.curdir

# Create a variable for our root path
SPARK_HOME = os.environ['SPARK_HOME']

#Add the following paths to the system path. Please check your installation
#to make sure that these zip files actually exist. The names might change
#as versions change.
sys.path.insert(0,os.path.join(SPARK_HOME,"python"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib","pyspark.zip"))
sys.path.insert(0,os.path.join(SPARK_HOME,"python","lib","py4j-0.10.1-src.zip"))

#Initialize SparkSession and SparkContext
from pyspark.sql import SparkSession
from pyspark import SparkContext

#Create a Spark Session
SpSession2 = SparkSession \
    .builder \
    .master("local") \
    .appName("SparkPrjt1") \
    .config("spark.executor.memory", "1g") \
    .config("spark.driver.allowMultipleContexts","true")\
    .config("spark.cores.max","2") \
    .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")\
    .getOrCreate()

#Get the Spark Context from Spark Session    
SpContext = SpSession2.sparkContext
from pyspark import SparkConf


# testData = SpContext.parallelize([3,6,4,2])
# testData.count()

#---------------------------------------------------------------------
#   Load Data from the data file
#---------------------------------------------------------------------
ccRaw = SpContext.textFile("C:\\Users\\home\\credit-card-default-1000.csv")

ccRaw.take(3)

1 Answer

Stack Overflow user

Answer accepted

Answered on 2017-01-19 07:52:46

I would put the initialization in one cell and everything else in a second cell. Whenever you want to re-run, just skip the initialization cell.
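A minimal sketch of that split, reusing the session settings and file path from your post (the cell markers are just comments; adapt them to your notebook):

# --- Cell 1: initialization, run once per kernel session -----------------
from pyspark.sql import SparkSession

SpSession2 = SparkSession \
    .builder \
    .master("local") \
    .appName("SparkPrjt1") \
    .config("spark.executor.memory", "1g") \
    .getOrCreate()
SpContext = SpSession2.sparkContext

# --- Cell 2: the read; re-run this one as often as you like --------------
ccRaw = SpContext.textFile("C:\\Users\\home\\credit-card-default-1000.csv")
ccRaw.take(3)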

Now, looking at the error itself:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
    : org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8.0 failed 1 times, most recent failure: Lost task 0.0 in stage 8.0 (TID 8, localhost): java.net.SocketException: Connection reset by peer: socket write error

That stage most likely failed because it ran out of memory. Does the file contain a lot of data?
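If memory does turn out to be the culprit, one thing worth trying (an assumption on my side, not something your trace proves) is giving the driver more heap, since in local mode the driver JVM does the actual work. The setting only takes effect before the JVM starts, so restart the kernel first:

# Sketch: larger driver heap for a local-mode session. "4g" is a guess;
# size it to your machine and to the .csv you are loading.
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .master("local") \
    .appName("SparkPrjt1") \
    .config("spark.driver.memory", "4g") \
    .getOrCreate()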

It looks like someone else ran into the same problem as you:

Apache Spark: pyspark crash for large dataset

Score: 3
Original question:

https://stackoverflow.com/questions/41735827
