I'm using Spark MLlib in Scala for the first time, and I'm running into a problem instantiating the BinaryClassificationMetrics class. It gives a "Cannot resolve constructor" error even though I formatted its input as an RDD of tuples, as required. Any idea what might be wrong?
def modelEvaluation(model: PipelineModel, test: DataFrame): Unit = {
// Make a prediction on the test set
val predictionAndLabels = model.transform(test)
.select("prediction","label")
.rdd
.map(r => (r(0),r(1)))
/*.collect()
.foreach(r => println(r))*/
// Instantiate metrics object
val metrics = new BinaryClassificationMetrics(predictionAndLabels)
// Precision-Recall Curve
//val PRC = metrics.pr
}
Posted on 2019-07-03 17:05:12
BinaryClassificationMetrics requires an RDD[(Double, Double)]; see the API docs for details: https://spark.apache.org/docs/2.4.0/api/scala/index.html#org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
Your map produces an RDD[(Any, Any)], because Row's apply method (r(0), r(1)) returns Any, so the constructor cannot be resolved. You could change it like this:
def modelEvaluation(model: PipelineModel, test: DataFrame): Unit = {
// Make a prediction on the test set
val predictionAndLabels = model.transform(test)
.select("prediction","label")
.rdd
.map(r => (r(0).toString.toDouble,r(1).toString.toDouble))
// Instantiate metrics object
val metrics = new BinaryClassificationMetrics(predictionAndLabels)
// Precision-Recall Curve
//val PRC = metrics.pr
}
https://stackoverflow.com/questions/53859402
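An alternative worth considering is to skip the String round-trip (toString.toDouble) and extract typed values directly with Row's getAs[Double] accessor. The sketch below assumes the "prediction" and "label" columns are already Double, as produced by typical Spark ML classifiers; the areaUnderPR call at the end is just one example of what the metrics object offers.

```scala
import org.apache.spark.ml.PipelineModel
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.sql.DataFrame

def modelEvaluation(model: PipelineModel, test: DataFrame): Unit = {
  // Make a prediction on the test set and extract typed Doubles directly,
  // avoiding the toString.toDouble round-trip.
  val predictionAndLabels = model.transform(test)
    .select("prediction", "label")
    .rdd
    .map(r => (r.getAs[Double]("prediction"), r.getAs[Double]("label")))

  // Constructor resolves because the RDD is now RDD[(Double, Double)]
  val metrics = new BinaryClassificationMetrics(predictionAndLabels)

  // Example metric: area under the Precision-Recall curve
  println(s"Area under PR curve: ${metrics.areaUnderPR}")
}
```

getAs[Double] will throw a ClassCastException if the columns are not actually Double (for example, if the label was read in as an Integer), in which case the toString.toDouble version above is the more forgiving option.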