
生成datastax spark-cassandra连接器失败
Stack Overflow user
Asked on 2015-11-04 19:51:21
1 answer · 497 views · 0 followers · 0 votes

I am trying to build the spark-cassandra connector, following this link:

http://www.planetcassandra.org/blog/kindling-an-introduction-to-spark-with-cassandra/

The next step in the link asks you to download the connector from git and build it with sbt. However, when I try to run the command ./sbt/sbt assembly, it throws the following exception:

Launching sbt from sbt/sbt-launch-0.13.8.jar
[info] Loading project definition from /home/naresh/Desktop/spark-cassandra-connector/project
Using releases: https://oss.sonatype.org/service/local/staging/deploy/maven2 for releases
Using snapshots: https://oss.sonatype.org/content/repositories/snapshots for snapshots

  Scala: 2.10.5 [To build against Scala 2.11 use '-Dscala-2.11=true']
  Scala Binary: 2.10
  Java: target=1.7 user=1.7.0_79

[info] Set current project to root (in build file:/home/naresh/Desktop/spark-cassandra-connector/)
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist
    [info] Compiling 140 Scala sources and 1 Java source to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/classes...
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:48: not found: value processTableIdentifier
    [error]     val id = processTableIdentifier(tableIdentifier).reverse.lift
    [error]              ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:134: value toSeq is not a member of org.apache.spark.sql.catalyst.TableIdentifier
    [error]     cachedDataSourceTables.refresh(tableIdent.toSeq)
    [error]                                               ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraSQLContext.scala:94: not found: value BroadcastNestedLoopJoin
    [error]       BroadcastNestedLoopJoin
    [error]       ^
    [error] three errors found
    [info] Compiling 11 Scala sources to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/target/scala-2.10/classes...
    [warn] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/src/main/scala/com/datastax/spark/connector/embedded/SparkTemplate.scala:69: value actorSystem in class SparkEnv is deprecated: Actor system is no longer supported as of 1.4.0
    [warn]   def actorSystem: ActorSystem = SparkEnv.get.actorSystem
    [warn]                                               ^
    [warn] one warning found
    [error] (spark-cassandra-connector/compile:compileIncremental) Compilation failed
    [error] Total time: 27 s, completed 4 Nov, 2015 12:34:33 PM
1 Answer

Stack Overflow user

Answered on 2015-11-04 21:54:22

This worked for me: run mvn -DskipTests clean package

  • You can find the Spark build command in Spark's README.md file.
  • Before running that command, you need to configure Maven to use more memory than usual by setting MAVEN_OPTS: export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

Votes: 0
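The two steps in the answer can be combined into a single shell sequence. This is a sketch only: the memory values are the ones quoted in the answer, the MaxPermSize flag applies to the JDK 7 era this question dates from, and the commands assume you are in the root of a Spark source checkout.

```shell
# Give Maven more memory than usual (values quoted in the answer;
# -XX:MaxPermSize is a JDK 7 flag and was removed in JDK 8+)
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

# Build Spark from its source root, skipping the test suite
mvn -DskipTests clean package
```

If the build still fails with an OutOfMemoryError, raising -Xmx further is the usual next step.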
The original page content was provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/33521151
