Elasticsearch/Spark serialization does not appear to handle nested types well.
For example:
public class Foo implements Serializable {
    private List<Bar> bars = new ArrayList<Bar>();
    // getters and setters
    public static class Bar implements Serializable {
    }
}
List<Foo> foos = new ArrayList<Foo>();
foos.add(new Foo());
// Note: the Foo object does not contain nested Bar instances
SparkConf sc = new SparkConf();
sc.setMaster("local");
sc.setAppName("spark.app.name");
sc.set("spark.serializer", KryoSerializer.class.getName());
JavaSparkContext jsc = new JavaSparkContext(sc);
JavaRDD javaRDD = jsc.parallelize(ImmutableList.copyOf(foos));
JavaEsSpark.saveToEs(javaRDD, INDEX_NAME + "/" + TYPE_NAME);

The code above works, and documents of type Foo are indexed in Elasticsearch.
The problem arises when the bars list inside a Foo object is not empty, for example:

Foo foo = new Foo();
Foo.Bar bar = new Foo.Bar();
foo.getBars().add(bar);

Then, when indexing into Elasticsearch, the following exception is thrown:
org.elasticsearch.hadoop.serialization.EsHadoopSerializationException:
Cannot handle type [Bar] within type [class Foo], instance [Bar ...]]
within instance [Foo@1cf628a]
using writer [org.elasticsearch.spark.serialization.ScalaValueWriter@4e635d]
at org.elasticsearch.hadoop.serialization.builder.ContentBuilder.value(ContentBuilder.java:63)
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.doWriteObject(TemplatedBulk.java:71)
at org.elasticsearch.hadoop.serialization.bulk.TemplatedBulk.write(TemplatedBulk.java:58)
at org.elasticsearch.hadoop.rest.RestRepository.writeToIndex(RestRepository.java:148)
at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:47)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:68)
at org.elasticsearch.spark.rdd.EsSpark$$anonfun$saveToEs$1.apply(EsSpark.scala:68)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
at org.apache.spark.scheduler.Task.run(Task.scala:64)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

These are the relevant Maven dependencies:
<dependency>
<groupId>com.sksamuel.elastic4s</groupId>
<artifactId>elastic4s_2.11</artifactId>
<version>1.5.5</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>1.3.1</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-hadoop-cascading</artifactId>
<version>2.1.0.Beta4</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch-spark_2.10</artifactId>
<version>2.1.0.Beta4</version>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-xml</artifactId>
<version>2.11.0-M4</version>
</dependency>

What is the correct way to index nested types with Elasticsearch and Spark?
Thanks.

Posted on 2015-06-20 01:13:14
One solution is to build a JSON string from the object you are trying to save, for example with Json4s. In that case, the RDD you pass to JavaEsSpark becomes an RDD of strings. Then you simply call

JavaEsSpark.saveJsonToEs...

instead of

JavaEsSpark.saveToEs...

This workaround saved me countless hours of trying to figure out a way to serialize nested maps.
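To illustrate the idea, here is a minimal sketch of the JSON-building step. The original Bar class shows no fields, so a hypothetical "name" field is assumed, and the JSON is rendered by hand here purely for illustration; in practice a library such as Json4s or Jackson would produce the strings, and the resulting RDD of strings would then be passed to JavaEsSpark.saveJsonToEs.

```java
import java.util.ArrayList;
import java.util.List;

public class FooJson {
    // Hand-rolled JSON rendering of a Foo whose bars each carry a
    // (hypothetical) name field. A real implementation would delegate
    // to Json4s or Jackson instead of concatenating strings.
    public static String toJson(List<String> barNames) {
        StringBuilder sb = new StringBuilder("{\"bars\":[");
        for (int i = 0; i < barNames.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"name\":\"").append(barNames.get(i)).append("\"}");
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        List<String> bars = new ArrayList<>();
        bars.add("first");
        bars.add("second");
        System.out.println(toJson(bars));
        // Each Foo becomes one JSON string; the RDD of such strings is
        // then saved with:
        //   JavaEsSpark.saveJsonToEs(jsonRdd, INDEX_NAME + "/" + TYPE_NAME);
    }
}
```

Because the documents are already serialized JSON, the connector never has to inspect the nested Bar type, which is what sidesteps the EsHadoopSerializationException.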
Posted on 2015-06-04 03:57:20

Looking at the ScalaValueWriter and JdkValueWriter code, we can see that only certain types are supported directly. The inner class is most likely not a JavaBean or another supported type.
Posted on 2015-06-09 18:18:43

Some day, ScalaValueWriter and JdkValueWriter may support user-defined types (like Bar in our example), not just types such as String and int.

In the meantime, there is the following workaround. Instead of having Foo expose a list of Bar objects, convert that list internally into a List<Map<String, Object>> and expose that.

Like this:
private List<Map<String, Object>> bars = new ArrayList<Map<String, Object>>();

public List<Map<String, Object>> getBars() {
    return bars;
}

public void setBars(List<Bar> bars) {
    for (Bar bar : bars) {
        this.bars.add(bar.getAsMap());
    }
}

https://stackoverflow.com/questions/30594367
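The workaround above calls a getAsMap() method on Bar that is not shown in the question. A minimal sketch of such a method, assuming hypothetical name and count fields on Bar, could look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class Bar implements java.io.Serializable {
    // Hypothetical fields; the original Bar class shows none.
    private String name;
    private int count;

    public Bar(String name, int count) {
        this.name = name;
        this.count = count;
    }

    // Flatten this Bar into a Map of simple values (String, int, ...)
    // that ScalaValueWriter/JdkValueWriter already know how to serialize.
    public Map<String, Object> getAsMap() {
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("name", name);
        map.put("count", count);
        return map;
    }
}
```

The resulting maps are serialized by the connector as nested JSON objects, so the indexed documents keep the same shape they would have had with a supported nested type.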