Can't write to Elasticsearch with Spark

Stack Overflow user
Asked on 2017-08-09 12:41:39
1 answer · 1.7K views · 0 followers · 0 votes

The Elasticsearch server (version 5.4.1) is running on a Linux machine.

The Spark cluster in use is spark-2.2.0-bin-hadoop2.7. I added spark.jars.packages org.elasticsearch:elasticsearch-spark-20_2.11:5.4.1 to spark-defaults.conf. The master and the worker start successfully via ./start-master.sh and ./start-slave.sh spark://ApacheFlink:7077, and the Spark UI is reachable at localhost:8080.

I am using IntelliJ and sbt. The Scala version is 2.11.8.

Here is the Scala code:

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.elasticsearch.spark._

object TestInput {
  def main(args: Array[String]): Unit = {
    println("Hello, world")

    val conf = new SparkConf().setAppName("TestInput").setMaster("spark://ApacheFlink:7077")
    conf.set("es.nodes", "elasticserver")
    conf.set("es.port", "9200")
    conf.set("es.index.auto.create", "true")

    val sc = new SparkContext(conf)

    val numbers = Map("one" -> 1, "two" -> 2, "three" -> 3)
    val airports = Map("arrival" -> "Otopeni", "SFO" -> "San Fran")

    sc.makeRDD(Seq(numbers, airports)).saveToEs("test/TestInput")
  }
}
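Because the driver here is launched straight from IntelliJ rather than through spark-submit, the connector classes also have to reach the executors somehow. A minimal sketch of one way to ship them explicitly with SparkContext#addJar, assuming the connector jar is already available locally (the path below is only a placeholder):

// Sketch: ship the Elasticsearch connector jar to the executors explicitly.
// The path is a placeholder; point it at wherever the jar actually lives
// (for example the local Ivy cache that spark.jars.packages downloads into).
val sc = new SparkContext(conf)
sc.addJar("/path/to/elasticsearch-spark-20_2.11-5.4.1.jar")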

I have been fiddling with the sbt dependencies quite a bit. These are my findings so far.

All of my attempts use scalaVersion := "2.11.8".

First attempt:
libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0" % "provided"

 [error] /home/foo/Desktop/AnotherOne/src/main/scala/TestInput.scala:4: object elasticsearch is not a member of package org
    [error] import org.elasticsearch.spark._
    [error]            ^
    [error] /home/foo/Desktop/AnotherOne/src/main/scala/TestInput.scala:20: value saveToEs is not a member of org.apache.spark.rdd.RDD[scala.collection.immutable.Map[String,Any]]
    [error]     sc.makeRDD(Seq(numbers, airports)).saveToEs("test/TestInput")
    [error]                                        ^
    [error] two errors found
    [error] (compile:compileIncremental) Compilation failed

Second attempt:

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0" % "provided"
libraryDependencies += "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.4.1"

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/FSDataInputStream
    at org.apache.spark.SparkConf.loadFromSystemProperties(SparkConf.scala:73)
    at org.apache.spark.SparkConf.<init>(SparkConf.scala:68)
    at org.apache.spark.SparkConf.<init>(SparkConf.scala:55)
    at TestInput$.main(TestInput.scala:11)
    at TestInput.main(TestInput.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.FSDataInputStream
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 5 more
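The NoClassDefFoundError for org.apache.hadoop.fs.FSDataInputStream is what typically happens when spark-core is marked "provided" but the program is launched directly from sbt or the IDE, because "provided" dependencies are kept off the runtime classpath. A common sbt workaround, shown here only as a sketch against this build, is to let the run task use the compile classpath, which does include "provided" dependencies:

// build.sbt (sketch): make `sbt run` see "provided" dependencies,
// which are on the compile classpath but not on the runtime classpath.
run in Compile := Defaults.runTask(
  fullClasspath in Compile,
  mainClass in (Compile, run),
  runner in (Compile, run)
).evaluated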

Third attempt:

libraryDependencies += "org.apache.spark" % "spark-core_2.11" % "2.2.0"
libraryDependencies += "org.elasticsearch" % "elasticsearch-spark-20_2.11" % "5.4.1"

17/08/09 13:43:09 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.1.111, executor 0): java.lang.ClassNotFoundException: org.elasticsearch.spark.rdd.EsSpark$$anonfun$doSaveToEs$1
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:348)
    at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:67)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1826)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1713)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2000)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2245)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2169)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2027)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:422)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:80)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)
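Unlike the previous attempts, this exception is raised on an executor (note the worker address 192.168.1.111 in the log), so the application compiles and the driver starts, but the elasticsearch-spark classes never reach the workers. One common remedy is to bundle the application and the connector into a single fat jar (with Spark itself back on "provided") and hand that jar to spark-submit; a sketch assuming the sbt-assembly plugin:

// project/plugins.sbt (sketch, assuming sbt-assembly is acceptable here)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

// build.sbt
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0" % "provided"
libraryDependencies += "org.elasticsearch" %% "elasticsearch-spark-20" % "5.4.1"

Running sbt assembly then produces one jar under target/scala-2.11/ that contains both the application and the connector, so the connector classes travel to the executors with the job instead of having to be installed on every worker.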

I even tried importing elasticsearch-hadoop, which produced yet other error messages. My question is now simply: what am I doing wrong? I am out of ideas. Is something missing from my Spark cluster?


1 Answer

Stack Overflow user

Posted on 2017-08-09 12:46:54

You probably have a conflict between the Spark jars pulled in as dependencies of the elasticsearch-spark module and the ones declared in your build.sbt.

I got it working with the following in my build.sbt:

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % "2.1.1" % "compile",
    "org.apache.spark" %% "spark-sql" % "2.1.1" % "compile",
    "org.apache.spark" %% "spark-mllib" % "2.1.1" % "compile",

    "org.elasticsearch" %% "elasticsearch-spark-20" % "5.0.2" excludeAll ExclusionRule(organization = "org.apache.spark")
)
Votes: 0
The original page content was provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/45590998
