I'm new to Spark and GraphFrames.
When I wanted to learn about the shortestPaths method of GraphFrame, the GraphFrames documentation gave me sample code written in Scala, but not in Java.
In their documentation, they provide the following Scala code:

```scala
import org.graphframes.{examples, GraphFrame}
val g: GraphFrame = examples.Graphs.friends  // get example graph
val results = g.shortestPaths.landmarks(Seq("a", "d")).run()
results.select("id", "distances").show()
```

In Java, I tried:
```java
import org.graphframes.GraphFrame;
import scala.collection.Seq;
import scala.collection.JavaConverters;

GraphFrame g = new GraphFrame(..., ...);
Seq<Object> landmarkSeq = JavaConverters.collectionAsScalaIterableConverter(
        Arrays.asList((Object) "a", (Object) "d")).asScala().toSeq();
g.shortestPaths().landmarks(landmarkSeq).run().show();
```

or

```java
g.shortestPaths().landmarks(new ArrayList<Object>(List.of((Object) "a", (Object) "d"))).run().show();
```

The casts to java.lang.Object are necessary because the API expects a Seq or ArrayList of Object, and I couldn't get it to compile when passing them without the casts.
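A side note on those casts: since the ArrayList overload takes element type `Object`, the per-element casts can be avoided by building the `ArrayList<Object>` in one step from the strings. A minimal sketch (the `landmarkList` helper is hypothetical, not part of the GraphFrames API):

```java
import java.util.ArrayList;
import java.util.Arrays;

public class Landmarks {
    // Hypothetical helper: collect vertex ids into the ArrayList<Object>
    // that landmarks(...) accepts, without casting each element to Object.
    static ArrayList<Object> landmarkList(String... ids) {
        // ArrayList<Object> accepts any Collection<? extends Object>,
        // so a List<String> can be copied in directly.
        return new ArrayList<Object>(Arrays.asList(ids));
    }

    public static void main(String[] args) {
        ArrayList<Object> landmarks = landmarkList("a", "d");
        System.out.println(landmarks);
        // usage sketch: g.shortestPaths().landmarks(landmarks).run().show();
    }
}
```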
After running the code, I saw the following message:

```
Exception in thread "main" org.apache.spark.sql.AnalysisException: You're using untyped Scala UDF, which does not have the input type information. Spark may blindly pass null to the Scala closure with primitive-type argument, and the closure will see the default value of the Java type for the null argument, e.g. `udf((x: Int) => x, IntegerType)`, the result is 0 for null input. To get rid of this error, you could:
1. use typed Scala UDF APIs(without return type parameter), e.g. `udf((x: Int) => x)`
2. use Java UDF APIs, e.g. `udf(new UDF1[String, Integer] { override def call(s: String): Integer = s.length() }, IntegerType)`, if input types are all non primitive
3. set spark.sql.legacy.allowUntypedScalaUDF to true and use this API with caution;
```

To follow option 3, I added the following code:
```java
System.setProperty("spark.sql.legacy.allowUntypedScalaUDF", "true");
```

but nothing changed.
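For what it's worth, a `System.setProperty` call does not feed into the Spark SQL configuration of an already-running session; the flag normally has to go through the Spark config itself. A sketch of setting it that way (assuming you build the SparkSession yourself; setting the flag alone may still not resolve the underlying problem):

```java
import org.apache.spark.sql.SparkSession;

// Set the legacy flag on the Spark SQL conf before any query runs.
SparkSession spark = SparkSession.builder()
        .appName("graphframes-shortest-paths")
        .master("local[*]")
        .config("spark.sql.legacy.allowUntypedScalaUDF", "true")
        .getOrCreate();

// Equivalent at submit time:
//   spark-submit --conf spark.sql.legacy.allowUntypedScalaUDF=true ...
```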
Since there is only a limited amount of sample code or Stack Overflow questions about GraphFrames in Java, I couldn't find anything helpful while searching around.
Could someone with experience in this area help me solve this problem?
Posted on 2020-08-27 15:15:55
This appears to be a bug in GraphFrames 0.8.0.
See Issue #367 on github.com.
https://stackoverflow.com/questions/63609595