
Spark Scala 2.11.8 Spark HBaseConnector error

Stack Overflow user
Asked on 2017-09-26 09:51:09
1 answer · 102 views · 0 followers · 0 votes

I am trying to use the Spark Scala 2.11.8 HBase connector to save data from a Kafka stream, but when I try to save I get the error below. I am using the SHC connector from Hortonworks. My SBT settings are as follows.

Is this connector still supported?

libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.0.1" % "provided",
  "org.apache.spark" % "spark-sql_2.11" % "2.0.1" % "provided",
  "org.apache.spark" % "spark-streaming_2.11" % "2.0.1" % "provided",
  ("org.apache.spark" % "spark-streaming-kafka-0-8_2.11" % "2.0.1").exclude("org.spark-project.spark", "unused"),
  "org.json4s" % "json4s-native_2.11" % "3.2.10",
  "joda-time" % "joda-time" % "2.9.9",
   "com.hortonworks" % "shc" % "1.1.1-2.1-s_2.11"
)
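
For reference, SHC artifacts were published to the Hortonworks repository rather than Maven Central, so a resolver is usually needed as well. A minimal build.sbt sketch, where the shc-core artifact name and the repository URL are assumptions, not details taken from this question:

// Assumed Hortonworks repository and shc-core artifact; verify both
// against the SHC release you actually use.
resolvers += "Hortonworks Repo" at "http://repo.hortonworks.com/content/groups/public/"

libraryDependencies += "com.hortonworks" % "shc-core" % "1.1.1-2.1-s_2.11"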

The error is as follows:

Exception in thread "streaming-job-executor-1" java.lang.NoClassDefFoundError: org/apache/spark/Logging
    at java.lang.ClassLoader.defineClass1(Native Method)
    at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
    at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
    at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.getDeclaredConstructors0(Native Method)
    at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
    at java.lang.Class.getConstructor0(Class.java:3075)
    at java.lang.Class.newInstance(Class.java:412)
    at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:427)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:211)
    at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:194)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$.saveHbase$1(<console>:193)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$.runBusinessLogicAndProduceOutput(<console>:295)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$$anonfun$run$1.apply(<console>:312)
    at $line20.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw$CTUMhbaseingest$$anonfun$run$1.apply(<console>:311)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:627)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
    at scala.util.Try$.apply(Try.scala:192)
    at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:245)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:245)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:244)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Caused by: java.lang.ClassNotFoundException: org.apache.spark.Logging
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)


1 Answer

Stack Overflow user

Posted on 2017-09-26 20:32:04

There is a problem with your executor or driver classpath. org.apache.spark.Logging is available in Spark 1.5.2 and lower, but I can see from your libraryDependencies that you are using Spark 2.0.1, where that class no longer exists; a library on your classpath (most likely the SHC jar, which was built against an older Spark) still references it. You can check the Environment tab in the Spark application UI to see the driver and executor classpaths.
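
A quick way to confirm this from the driver itself, a minimal sketch you can paste into spark-shell (nothing here is SHC-specific):

// Ask the classloader where org.apache.spark.Logging would be loaded from;
// null means the class is not on the classpath at all.
val loggingUrl = getClass.getClassLoader
  .getResource("org/apache/spark/Logging.class")
println(s"org.apache.spark.Logging -> $loggingUrl")

// The raw driver classpath, the same data the Environment tab shows.
println(sys.props("java.class.path"))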

Votes: 0
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/46416633