I want to compute PageRank from a CSV file whose edges have the following format:
12,13,1.0
12,14,1.0
12,15,1.0
12,16,1.0
12,17,1.0
...My code:
val filename = "<filename>.csv"
val graph = Graph.fromCsvReader[Long,Double,Double](
env = env,
pathEdges = filename,
readVertices = false,
hasEdgeValues = true,
vertexValueInitializer = new MapFunction[Long, Double] {
def map(id: Long): Double = 0.0 } )
val ranks = new PageRank[Long](0.85, 20).run(graph)

From the Flink Scala shell I get the following error:
error: type mismatch;
found : org.apache.flink.graph.scala.Graph[Long,_23,_24] where type _24 >: Double with _22, type _23 >: Double with _21
required: org.apache.flink.graph.Graph[Long,Double,Double]
val ranks = new PageRank[Long](0.85, 20).run(graph)
^

What am I doing wrong?
(Is it correct that every vertex gets the initial value 0.0 and every edge the initial value 1.0?)
Posted on 2015-11-16 11:30:44
The problem is that you are handing a Scala org.apache.flink.graph.scala.Graph to PageRank.run, which expects a Java org.apache.flink.graph.Graph.
To run a GraphAlgorithm on a Scala Graph object, you have to call the Scala Graph's run method with the algorithm instead:
graph.run(new PageRank[Long](0.85, 20))

Update
Note that the PageRank algorithm expects an instance of type Graph[K, java.lang.Double, java.lang.Double]. Since Java's Double is, as far as type checking is concerned, a different type from Scala's Double, this has to be taken into account.
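The type distinction driving the error can be seen in plain Scala, without Flink (the object name DoubleTypes is just for illustration):

```scala
// Minimal sketch of why the compiler treats scala.Double and
// java.lang.Double as different types, even though values convert.
object DoubleTypes {
  def main(args: Array[String]): Unit = {
    val s: Double = 1.0            // Scala's Double, erased to the primitive double
    val j: java.lang.Double = 1.0  // Java's boxed Double, via an implicit conversion
    println(classOf[Double])               // the primitive class: double
    println(classOf[java.lang.Double])     // the boxed class: class java.lang.Double
    // Because the two classes differ, a Graph[Long, Double, Double] does not
    // satisfy an API requiring Graph[Long, java.lang.Double, java.lang.Double].
    println(classOf[Double] == classOf[java.lang.Double]) // false
  }
}
```

This is why the fix below declares the vertex and edge value types explicitly as java.lang.Double when reading the graph.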
For the example code, this means:
val graph = Graph.fromCsvReader[Long,java.lang.Double,java.lang.Double](
env = env,
pathEdges = filename,
readVertices = false,
hasEdgeValues = true,
vertexValueInitializer = new MapFunction[Long, java.lang.Double] {
def map(id: Long): java.lang.Double = 0.0 } )
.asInstanceOf[Graph[Long, java.lang.Double, java.lang.Double]]

Source: https://stackoverflow.com/questions/33733793