The example Java code from the spark-csv README is as follows:

import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.*;
SQLContext sqlContext = new SQLContext(sc);
StructType customSchema = new StructType(
new StructField("year", IntegerType, true),
new StructField("make", StringType, true),
new StructField("model", StringType, true),
new StructField("comment", StringType, true),
new StructField("blank", StringType, true));
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.option("inferSchema", "true")
.option("header", "true")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv");它没有开箱即用地编译,所以经过一番争论,我将不正确的FooType语法改为DataTypes.FooType并将StructFields作为new StructField[]进行了编译;编译器在StructField的构造函数中为metadata请求了第四个参数,但我很难找到关于它含义的文档(javadoc描述了它的用例,但并没有真正决定如何在StructField构造期间传入什么)。使用以下代码,它将一直运行,直到出现任何副作用方法,如collect()
JavaSparkContext sc = new JavaSparkContext(conf);
SQLContext sqlContext = new SQLContext(sc);
// Read features.
System.out.println("Reading features from " + args[0]);
StructType featuresSchema = new StructType(new StructField[] {
new StructField("case_id", DataTypes.StringType, false, null),
new StructField("foo", DataTypes.DoubleType, false, null)
});
DataFrame features = sqlContext.read()
.format("com.databricks.spark.csv")
.schema(featuresSchema)
.load(args[0]);
for (Row r : features.collect()) {
System.out.println("Row: " + r);
}

I got the following exception:
Exception in thread "main" java.lang.NullPointerException
at org.apache.spark.sql.catalyst.expressions.AttributeReference.hashCode(namedExpressions.scala:202)
at scala.runtime.ScalaRunTime$.hash(ScalaRunTime.scala:210)
at scala.collection.immutable.HashSet.elemHashCode(HashSet.scala:65)
at scala.collection.immutable.HashSet.computeHash(HashSet.scala:74)
at scala.collection.immutable.HashSet.$plus(HashSet.scala:56)
at scala.collection.immutable.HashSet.$plus(HashSet.scala:59)
at scala.collection.immutable.Set$Set4.$plus(Set.scala:127)
at scala.collection.immutable.Set$Set4.$plus(Set.scala:121)
at scala.collection.mutable.SetBuilder.$plus$eq(SetBuilder.scala:24)
at scala.collection.mutable.SetBuilder.$plus$eq(SetBuilder.scala:22)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:93)
at scala.collection.AbstractSet.map(Set.scala:47)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.foreach(AttributeSet.scala:114)
at scala.collection.TraversableOnce$class.size(TraversableOnce.scala:105)
at org.apache.spark.sql.catalyst.expressions.AttributeSet.size(AttributeSet.scala:56)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProjectRaw(DataSourceStrategy.scala:307)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.pruneFilterProject(DataSourceStrategy.scala:282)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$.apply(DataSourceStrategy.scala:56)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:926)
at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:924)
at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:930)
at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:930)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1903)
at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1384)
...

Any idea what's going wrong?
Posted on 2015-12-21 11:56:32
The README seems to be quite out of date and needs some non-trivial edits to its Java example. I tracked down the actual JIRA which added the metadata field; it mentions Map.empty as the default used on the Scala side, and whoever wrote the documentation must have translated the Scala example straight into Java, even though the Java argument has no such default.
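For context, the Scala declaration of StructField (as it reads in the Spark 1.x source; quoted here from memory, so treat the exact signature as an approximation) defaults the last two parameters, which Java call sites cannot take advantage of:

// Scala (org.apache.spark.sql.types.StructField), approximate signature:
//   case class StructField(
//       name: String,
//       dataType: DataType,
//       nullable: Boolean = true,
//       metadata: Metadata = Metadata.empty)
//
// Java has no default arguments, so every field must spell out all four values:
StructField yearField = new StructField("year", DataTypes.IntegerType, true, Metadata.empty());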
In the 1.5 branch of SparkSQL's code we can see that it dereferences metadata.hashCode() without any check, which is what causes the NullPointerException. The existence of the Metadata.empty() method, together with the discussion about using an empty map as the default in Scala, suggests that the right thing to do is simply pass Metadata.empty() if you don't care about it. The revised example should be:
SQLContext sqlContext = new SQLContext(sc);
StructType customSchema = new StructType(new StructField[] {
new StructField("year", DataTypes.IntegerType, true, Metadata.empty()),
new StructField("make", DataTypes.StringType, true, Metadata.empty()),
new StructField("model", DataTypes.StringType, true, Metadata.empty()),
new StructField("comment", DataTypes.StringType, true, Metadata.empty()),
new StructField("blank", DataTypes.StringType, true, Metadata.empty())
});
DataFrame df = sqlContext.read()
.format("com.databricks.spark.csv")
.schema(customSchema)
.option("header", "true")
.load("cars.csv");
df.select("year", "model").write()
.format("com.databricks.spark.csv")
.option("header", "true")
.save("newcars.csv");发布于 2017-09-25 00:04:04
Posted on 2017-09-25 00:04:04

I was getting the same exception as well. I fixed it by providing the metadata.
So, modify the code as follows:
StructType customSchema = new StructType(new StructField[] {
new StructField("year", DataTypes.IntegerType, true, Metadata.empty()),
new StructField("make", DataTypes.StringType, true, Metadata.empty()),
new StructField("model", DataTypes.StringType, true, Metadata.empty()),
new StructField("comment", DataTypes.StringType, true, Metadata.empty()),
new StructField("blank", DataTypes.StringType, true, Metadata.empty())
});

This will resolve the issue.
https://stackoverflow.com/questions/34388705