As far as I know, the newest and best S3 implementation for Hadoop + Spark is the one invoked via the "s3a://" URL scheme. This works great on pre-configured Amazon EMR.
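For illustration, here is a minimal Scala sketch of what "using the s3a:// scheme" means in practice; the bucket and object names are placeholders, and credentials are assumed to come from the standard fs.s3a.access.key / fs.s3a.secret.key settings or the usual AWS environment variables:
// Minimal sketch: reading a text file through the s3a:// scheme with Spark 2.x.
// Bucket and key names are placeholders; credentials are assumed to be supplied
// via the standard fs.s3a.* configuration keys or AWS environment variables.
import org.apache.spark.sql.SparkSession

object S3AReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("s3a-read-sketch")
      .getOrCreate()

    // The "s3a://" scheme is what makes Hadoop load org.apache.hadoop.fs.s3a.S3AFileSystem.
    val lines = spark.read.textFile("s3a://some-bucket/some/prefix/data.txt")
    println(s"line count: ${lines.count()}")

    spark.stop()
  }
}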
However, when running on a local development machine with the pre-built spark-2.0.0-bin-hadoop2.7.tgz, I get:
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2101)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
... 99 more
Next, I tried launching my Spark job with the hadoop-aws package specified:
$SPARK_HOME/bin/spark-submit --master local \
--packages org.apache.hadoop:hadoop-aws:2.7.3 \
my_spark_program.py
and got:
::::::::::::::::::::::::::::::::::::::::::::::
:: FAILED DOWNLOADS ::
:: ^ see resolution messages for details ^ ::
::::::::::::::::::::::::::::::::::::::::::::::
:: com.google.code.findbugs#jsr305;3.0.0!jsr305.jar
:: org.apache.avro#avro;1.7.4!avro.jar
:: org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.jar(bundle)
::::::::::::::::::::::::::::::::::::::::::::::
I created a dummy build.sbt project with just those three dependencies in a scratch directory, to check whether a plain sbt build could download them successfully (a sketch of that build.sbt is included at the end of this question), and got:
[error] (*:update) sbt.ResolveException: unresolved dependency: org.apache.avro#avro;1.7.4: several problems occurred while resolving dependency: org.apache.avro#avro;1.7.4 {compile=[default(compile)]}:
[error] org.apache.avro#avro;1.7.4!avro.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.pom
[error] org.apache.avro#avro;1.7.4!avro.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/apache/avro/avro/1.7.4/avro-1.7.4.pom
[error]
[error] unresolved dependency: com.google.code.findbugs#jsr305;3.0.0: several problems occurred while resolving dependency: com.google.code.findbugs#jsr305;3.0.0 {compile=[default(compile)]}:
[error] com.google.code.findbugs#jsr305;3.0.0!jsr305.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.pom
[error] com.google.code.findbugs#jsr305;3.0.0!jsr305.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/com/google/code/findbugs/jsr305/3.0.0/jsr305-3.0.0.pom
[error]
[error] unresolved dependency: org.xerial.snappy#snappy-java;1.0.4.1: several problems occurred while resolving dependency: org.xerial.snappy#snappy-java;1.0.4.1 {compile=[default(compile)]}:
[error] org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.pom
[error] org.xerial.snappy#snappy-java;1.0.4.1!snappy-java.pom(pom.original) origin location must be absolute: file:/Users/username/.m2/repository/org/xerial/snappy/snappy-java/1.0.4.1/snappy-java-1.0.4.1.pom
[error] Total time: 2 s, completed Sep 2, 2016 6:47:17 PM
Do you have any suggestions? How should I proceed?
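For reference, the throw-away build.sbt was roughly the following (project name and Scala version are placeholders; the three coordinates are taken from the failed-download output above):
// Throw-away build.sbt used only to test whether sbt can resolve the three
// artifacts that spark-submit failed to download.
name := "dep-download-test"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "com.google.code.findbugs" % "jsr305"      % "3.0.0",
  "org.apache.avro"          % "avro"        % "1.7.4",
  "org.xerial.snappy"        % "snappy-java" % "1.0.4.1"
)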
Posted on 2016-12-13 04:43:35
It looks like you need to add extra JARs via the submit flags. The Maven repository has a number of AWS packages for Java that you can use to fix the current error: https://mvnrepository.com/search?q=aws
I kept running into maddening S3A filesystem errors myself, but the aws-java-sdk:1.7.4 JAR does work with Spark 2.0.
Further discussion of this issue can be found at the link below, even though there is in fact an actual package in the Maven AWS EC2 repository.
https://sparkour.urizone.net/recipes/using-s3/
Try this:
spark-submit --packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3 my_spark_program.py
Posted on 2016-12-13 20:15:45
If you are using Apache Spark itself (that is, ignoring the build Amazon ships in EMR), you need to add a dependency on org.apache.hadoop:hadoop-aws for exactly the same version of Hadoop that the rest of Spark uses. This pulls in the S3a filesystem and its transitive dependencies. The AWS SDK version must be the same one the hadoop-aws library was built against, as it is a bit of a moving target.
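As a sketch, for the spark-2.0.0-bin-hadoop2.7 build from the question that means pinning hadoop-aws to the Hadoop 2.7.x line and the AWS SDK to the 1.7.4 release that hadoop-aws 2.7.x was built against (the exact versions here are an assumption based on that distribution):
// build.sbt sketch: hadoop-aws matches the Hadoop 2.7.x bundled with the Spark
// distribution, and aws-java-sdk matches the version hadoop-aws 2.7.x was built against.
libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-sql"    % "2.0.0" % "provided",
  "org.apache.hadoop" %  "hadoop-aws"   % "2.7.3",
  "com.amazonaws"     %  "aws-java-sdk" % "1.7.4"
)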
See: Apache Spark and Object Stores
https://stackoverflow.com/questions/39301997