
Training a random forest classifier in Spark

Stack Overflow user
Asked 2016-01-06 07:41:25
2 answers · 2.2K views · 0 followers · Upvotes: 1

Basically, I have cleaned up my dataset: removed the header, bad values, and so on. I'm now trying to train a random forest classifier so that it can make predictions. So far I have:

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest

object Churn {
  def main(args: Array[String]): Unit = {
    // set up the Spark context
    val conf = new SparkConf().setAppName("Churn")
    val sc = new SparkContext(conf)
    // load the CSV and map each line into an RDD of LabeledPoints
    val csv = sc.textFile("file://filename.csv")
    val data = csv.map { line =>
      val parts = line.split(",").map(_.trim)
      val stringvec = Array(parts(1)) ++ parts.slice(4, 20)
      val label = parts(20).toDouble
      val vec = stringvec.map(_.toDouble)
      LabeledPoint(label, Vectors.dense(vec))
    }
    // 70/30 train/test split
    val splits = data.randomSplit(Array(0.7, 0.3))
    val (training, testing) = (splits(0), splits(1))
    // this is the line that fails to compile
    val model = RandomForest.trainClassifier(training)
  }
}

But I get the following error:

error: overloaded method value trainClassifier with alternatives:

  (input: org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint],strategy: org.apache.spark.mllib.tree.configuration.Strategy,numTrees: Int,featureSubsetStrategy: String,seed: Int)org.apache.spark.mllib.tree.model.RandomForestModel
 cannot be applied to (org.apache.spark.rdd.RDD[org.apache.spark.mllib.regression.LabeledPoint])
   val model = RandomForest.trainClassifier(training)

Googling it has gotten me nowhere. I would appreciate it if you could explain what this error is and why I'm getting it; then I can work out a fix myself.
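(Editor's note: the per-line mapping step in the question can be exercised without Spark. Below is a minimal plain-Scala sketch of the same parsing logic; note the label index must be written `parts(20).toDouble`, not `parts(20.toDouble)` as in the original post.)

```scala
// Mimics the map step in the question: field 1 plus fields 4-19 become the
// feature vector, field 20 is the label. Pure Scala, no Spark required.
def parseLine(line: String): (Double, Array[Double]) = {
  val parts = line.split(",").map(_.trim)
  val features = (Array(parts(1)) ++ parts.slice(4, 20)).map(_.toDouble)
  val label = parts(20).toDouble // label index, then convert to Double
  (label, features)
}
```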


2 Answers

Stack Overflow user

Accepted answer

Posted 2016-01-06 07:57:34

You aren't passing enough arguments to RandomForest.trainClassifier(); there is no method trainClassifier(RDD[LabeledPoint]). There are several overloaded versions; the simplest one is trainClassifier(input, strategy, numTrees, featureSubsetStrategy, seed).

So along with the labeled points you also have to send a Strategy, the number of trees, a featureSubsetStrategy, and a seed (Int).

An example looks like this:

import org.apache.spark.mllib.tree.configuration.Strategy

RandomForest.trainClassifier(training,
  Strategy.defaultStrategy("Classification"),
  3,
  "auto",
  12345)

In practice you would use more than 3 trees and a different seed.

Upvotes: 1

Stack Overflow user

Posted 2017-12-18 16:24:58

Full answer on GitHub; original dataset.

These steps are used one after another, with a two-line .csv file as the test data: the first line is the header, and the second line is the test data.
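For instance, a two-line testData.csv of the shape described (hypothetical values, matching the column order the code below assigns with toDF) might look like:

```
mpg,cylinders,displacement,hp,weight,acceleration,model_year,origin,car_name
18.0,8,307.0,130.0,3504,12.0,70,1,chevrolet chevelle malibu
```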

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.feature.{LabeledPoint, VectorIndexer}
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.ml.regression.{RandomForestRegressionModel, RandomForestRegressor}

object RandomForest {
  def main(args: Array[String]): Unit = {
    val sparkSess = org.apache.spark.sql.SparkSession.builder().master("local[*]").appName("car_mpg").getOrCreate()
    import sparkSess.implicits._

    // load the training CSV (the option is "inferSchema", not "InterScema")
    // and map each row to a LabeledPoint
    val carData = sparkSess.read.format("csv").option("header", "true").option("inferSchema", "true")
      .csv("D:\\testData\\mpg.csv").toDF("mpg", "cylinders", "displacement", "hp", "weight", "acceleration", "model_year", "origin", "car_name")
      .map(data => LabeledPoint(data(0).toString.toDouble, Vectors.dense(Array(data(1).toString.toDouble,
        data(2).toString.toDouble, data(3).toString.toDouble, data(4).toString.toDouble, data(5).toString.toDouble))))

    val carData_df = carData.toDF("label", "features")

    val featureIndexer = new VectorIndexer()
      .setInputCol("features").setOutputCol("indexedFeatures").fit(carData)

    val Array(training) = carData_df.randomSplit(Array(0.7))

    val randomReg = new RandomForestRegressor()
      .setLabelCol("label").setFeaturesCol("features")

    val model = new Pipeline()
      .setStages(Array(featureIndexer, randomReg)).fit(training)

    // load the two-line test CSV the same way
    val testData = sparkSess.read.format("csv").option("header", "true").option("inferSchema", "true")
      .csv("D:\\testData\\testData.csv")
      .toDF("mpg", "cylinders", "displacement", "hp", "weight", "acceleration", "model_year", "origin", "car_name")
      .map(data => LabeledPoint(data(0).toString.toDouble,
        Vectors.dense(data(1).toString.toDouble, data(2).toString.toDouble,
          data(3).toString.toDouble, data(4).toString.toDouble, data(5).toString.toDouble)))

    val predictions = model.transform(testData)
    // column names are lowercase: "label" and "features"
    predictions.select("prediction", "label", "features").show()

    val rmse = new RegressionEvaluator().setLabelCol("label")
      .setPredictionCol("prediction").setMetricName("rmse").evaluate(predictions)
    println("Root Mean Squared Error:\n" + rmse)

    val treeModels = model.stages(1).asInstanceOf[RandomForestRegressionModel]
    println("Learned regression tree model:\n" + treeModels.toDebugString)

    sparkSess.stop()
  }
}
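(Editor's note: the "rmse" metric reported by RegressionEvaluator above is just the root mean squared error between label and prediction. A plain-Scala sketch of the same quantity, no Spark needed:)

```scala
// Root mean squared error: sqrt(mean((label - prediction)^2)).
// Same quantity RegressionEvaluator reports for metric "rmse".
def rmse(labels: Seq[Double], predictions: Seq[Double]): Double = {
  require(labels.length == predictions.length, "need one prediction per label")
  val mse = labels.zip(predictions)
    .map { case (y, yHat) => (y - yHat) * (y - yHat) }
    .sum / labels.length
  math.sqrt(mse)
}
```

For example, rmse(Seq(18.0, 15.0), Seq(17.0, 16.0)) is 1.0, since both predictions are off by exactly one unit.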


Upvotes: 0
Original content provided by Stack Overflow; translation originally supplied by Tencent Cloud's IT translation engine.
Original link: https://stackoverflow.com/questions/34627941