This is my first real attempt at Spark/Scala, so be gentle.
I have a file on HDFS named test.json that I'm trying to read and index using Spark. I can read the file via SQLContext.jsonFile(), but when I try to use SchemaRDD.saveToEs() I get an "invalid JSON fragment received" error. I suspect the saveToEs() function isn't actually serializing the output as JSON, but is instead just sending the RDD's value field.
What am I doing wrong?
Spark 1.2.0
Elasticsearch-hadoop 2.1.0.BUILD-20150217
test.json:
{"key":"value"}

spark-shell:
import org.apache.spark.SparkContext._
import org.elasticsearch.spark._
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._
val input = sqlContext.jsonFile("hdfs://nameservice1/user/mshirley/test.json")
input.saveToEs("mshirley_spark_test/test")

Error:
<snip>
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: Found unrecoverable error [Bad Request(400) - Invalid JSON fragment received[["value"]][MapperParsingException[failed to parse]; nested: ElasticsearchParseException[Failed to derive xcontent from (offset=13, length=9): [123, 34, 105, 110, 100, 101, 120, 34, 58, 123, 125, 125, 10, 91, 34, 118, 97, 108, 117, 101, 34, 93, 10]]; ]]; Bailing out..
<snip>

input:
res2: org.apache.spark.sql.SchemaRDD =
SchemaRDD[6] at RDD at SchemaRDD.scala:108
== Query Plan ==
== Physical Plan ==
PhysicalRDD [key#0], MappedRDD[5] at map at JsonRDD.scala:47

input.printSchema():
root
 |-- key: string (nullable = true)

Posted on 2015-07-31 07:36:53
https://github.com/elastic/elasticsearch-hadoop/issues/382
Changed:

import org.elasticsearch.spark._

to:

import org.elasticsearch.spark.sql._

https://stackoverflow.com/questions/28617829
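For reference, a minimal sketch of the corrected spark-shell session (the HDFS path and index name are taken from the question; this assumes the elasticsearch-hadoop jar is on the shell's classpath and that Elasticsearch connection settings are already configured):

import org.apache.spark.sql.SQLContext
// org.elasticsearch.spark.sql._ provides a saveToEs() that understands the
// SchemaRDD's schema and serializes each row as a JSON document, instead of
// the plain-RDD variant from org.elasticsearch.spark._, which sends raw values.
import org.elasticsearch.spark.sql._

val sqlContext = new SQLContext(sc)
val input = sqlContext.jsonFile("hdfs://nameservice1/user/mshirley/test.json")
input.saveToEs("mshirley_spark_test/test")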