
PySpark: pyspark.rdd.PipelinedRDD not working with model
Stack Overflow user
Asked 2017-06-07 06:59:40
1 answer · 1.3K views · 0 followers · Score 2

I can't pass an RDD object to PySpark's logistic regression model. I'm using Spark 2.0.1. Any help would be appreciated.

>>> from pyspark import SparkContext, HiveContext
>>> from pyspark.mllib.regression import LabeledPoint
>>> from pyspark.mllib.classification import LogisticRegressionWithLBFGS
>>> from pyspark.mllib.util import MLUtils
>>>
>>> table_name = "api_model"
>>> target_col = "dv"
>>>
>>>
>>> hc = HiveContext(sc)
>>>
>>> # get the table from the hive context
... df = hc.table(table_name)
>>> df = df.select(target_col, *[col for col in df.columns if col != target_col])
>>>
>>> # map through the data to produce an rdd of labeled points
... rdd_of_labeled_points = df.rdd.map(lambda row: LabeledPoint(row[0], row[1:]))
>>> print (rdd_of_labeled_points.take(3))
[LabeledPoint(1.0, [0.0,2.520784472,0.0,0.0,0.0,2.004684436,2.000347299,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0]), LabeledPoint(0.0, [2.857738033,0.0,0.0,2.619965104,0.0,2.004684436,2.000347299,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0]), LabeledPoint(0.0, [2.857738033,0.0,2.061393767,0.0,0.0,2.004684436,0.0,0.0,2.228387043,2.228387043,0.0,0.0,0.0,0.0,0.0,0.0])]
>>>
>>> from pyspark.ml.classification import LogisticRegression
>>> lr = LogisticRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
>>> lrModel = lr.fit(sc.parallelize(rdd_of_labeled_points))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/current/spark2-client/python/pyspark/context.py", line 432, in parallelize
    c = list(c)    # Make it a list so we can compute its length
TypeError: 'PipelinedRDD' object is not iterable

1 Answer

Stack Overflow user

Answered 2017-06-07 07:09:50

That's because you're calling sc.parallelize on something that is already an RDD. The following is wrong:

sc.parallelize(rdd_of_labeled_points)

You are also mixing spark-ml and spark-mllib:

from pyspark.mllib.classification import LogisticRegressionWithLBFGS

from pyspark.ml.classification import LogisticRegression

lrModel = lr.fit(sc.parallelize(rdd_of_labeled_points))

In the first case (spark-mllib), you need to train the model on the RDD via the classifier's class-level train method, for example:

model = LinearRegressionWithSGD.train(rdd_of_labeled_points, iterations=100, step=0.00000001)

In the second case (spark-ml), you need to convert your RDD into a DataFrame before feeding it to your model.

I strongly advise you to read the official documentation; it has plenty of examples to get you started.

Remember:

  • spark-mllib works with RDDs.
  • spark-ml works with DataFrames.
Score 2
Original content provided by Stack Overflow; translation supported by Tencent Cloud Xiaowei's IT-domain engine.
Original link: https://stackoverflow.com/questions/44405675