
Error when exporting Spark SQL data to CSV

Asked by a Stack Overflow user on 2016-12-01 09:26:29 · 1 answer · 2.7K views · 2 votes

To understand how to do the export in Python, I referred to the following link.

My code:

```python
df = sqlContext.createDataFrame(routeRDD, ['Consigner', 'AverageScore', 'Trips'])
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
```

I submit the job with spark-submit, passing the following jars along with the master URL:

```
spark-csv_2.11-1.5.0.jar, commons-csv-1.4.jar
```

I get the following error:

```
df.select('Consigner', 'AverageScore', 'Trips').write.format('com.databricks.spark.csv').options(header='true').save('file:///opt/BIG-DATA/VisualCargo/output/top_consigner.csv')
      File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 332, in save
      File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
      File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 36, in deco
      File "/opt/cloudera/parcels/CDH-5.5.1-1.cdh5.5.1.p0.11/lib/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o156.save.
    : java.lang.NoSuchMethodError: scala.Predef$.$conforms()Lscala/Predef$$less$colon$less;
        at com.databricks.spark.csv.util.CompressionCodecs$.<init>(CompressionCodecs.scala:29)
        at com.databricks.spark.csv.util.CompressionCodecs$.<clinit>(CompressionCodecs.scala)
        at com.databricks.spark.csv.DefaultSource.createRelation(DefaultSource.scala:198)
        at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:170)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:146)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:137)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:745)
```
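A `NoSuchMethodError` on `scala.Predef$.$conforms()` is the classic symptom of a Scala binary-version mismatch: `spark-csv_2.11-1.5.0.jar` is built for Scala 2.11, while the CDH 5.5 paths in the traceback suggest a Spark 1.5 distribution, which is typically built against Scala 2.10. A likely fix (an assumption, not confirmed in the thread) is to submit the `_2.10` build of the same artifact. A small sketch of reading the Scala binary version off the artifact name and the corresponding submit command (the job filename and master are placeholders):

```shell
# Sketch: the suffix after the underscore in an sbt-style artifact name
# (spark-csv_<scala-binary-version>-<artifact-version>.jar) tells you which
# Scala the jar was compiled for; it must match the cluster's Spark build.
jar="spark-csv_2.10-1.5.0.jar"   # hypothetical matching Scala 2.10 build
scala_bin="${jar#spark-csv_}"    # strip the artifact prefix
scala_bin="${scala_bin%%-*}"     # keep the part before the artifact version
echo "spark-csv targets Scala ${scala_bin}"

# Submission would then look like (placeholders for master and script):
# spark-submit --master yarn \
#   --jars spark-csv_2.10-1.5.0.jar,commons-csv-1.4.jar \
#   top_consigner_job.py
```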

1 Answer

Answered by a Stack Overflow user on 2019-08-13 20:15:39

I was running Spark on Windows and faced a similar problem: I could not write out files (CSV or Parquet). After reading further on the Spark site and checking the Spark logs, I found the error was caused by the winutils version I was using. I changed it to the 64-bit version and it worked. Hope this helps.
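The winutils fix usually comes down to pointing Spark at a Hadoop directory that contains the correct `winutils.exe` before the session starts. A minimal sketch, assuming a hypothetical `C:\hadoop` install location (the path is an illustration; the answer only says the 64-bit binary was needed, not where it lives):

```python
import os

# Hypothetical install location: winutils.exe is expected under C:\hadoop\bin.
hadoop_home = r"C:\hadoop"

# Spark on Windows reads HADOOP_HOME (or hadoop.home.dir) to locate winutils.
os.environ["HADOOP_HOME"] = hadoop_home
os.environ["PATH"] = (
    os.environ.get("PATH", "") + os.pathsep + os.path.join(hadoop_home, "bin")
)
```

These variables must be set before the `SparkSession`/`SparkContext` is created, otherwise the JVM has already started without them.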

Votes: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/40906815
