
mleap AttributeError: 'Pipeline' object has no attribute 'serializeToBundle'
Stack Overflow user
Asked on 2017-09-18 20:36:15
3 answers · 1.7K views
I am having trouble executing the example code from the mleap repository. I want to run the code in a script rather than a jupyter notebook (which is how the example is set up). My script is as follows:

```python
##################################################################################
# start a local spark session
# https://spark.apache.org/docs/0.9.0/python-programming-guide.html
##################################################################################
from pyspark import SparkContext, SparkConf
conf = SparkConf()

#set app name
conf.set("spark.app.name", "train classifier")
#Run Spark locally with as many worker threads as logical cores on your machine (cores X threads).
conf.set("spark.master", "local[*]")
#number of cores to use for the driver process (only in cluster mode)
conf.set("spark.driver.cores", "1")
#Limit of total size of serialized results of all partitions for each Spark action (e.g. collect)
conf.set("spark.driver.maxResultSize", "1g")
#Amount of memory to use for the driver process
conf.set("spark.driver.memory", "1g")
#Amount of memory to use per executor process (e.g. 2g, 8g).
conf.set("spark.executor.memory", "2g")

#pass configuration to the spark context object along with code dependencies
sc = SparkContext(conf=conf)
from pyspark.sql.session import SparkSession
spark = SparkSession(sc)
##################################################################################


import mleap.pyspark

# Imports MLeap serialization functionality for PySpark
from mleap.pyspark.spark_support import SimpleSparkSerializer

# Import standard PySpark Transformers and packages
from pyspark.ml.feature import VectorAssembler, StandardScaler, OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import Row

# Create a test data frame
l = [('Alice', 1), ('Bob', 2)]
rdd = sc.parallelize(l)
Person = Row('name', 'age')
person = rdd.map(lambda r: Person(*r))
df2 = spark.createDataFrame(person)
df2.collect()

# Build a very simple pipeline using two transformers
string_indexer = StringIndexer(inputCol='name', outputCol='name_string_index')

feature_assembler = VectorAssembler(
    inputCols=[string_indexer.getOutputCol()], outputCol="features")

feature_pipeline = [string_indexer, feature_assembler]

featurePipeline = Pipeline(stages=feature_pipeline)

featurePipeline.fit(df2)

featurePipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip")
```

When executing spark-submit script.py, I get the following error:

```
17/09/18 13:26:43 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Traceback (most recent call last):
  File "/Users/opringle/Documents/Repos/finn/Magellan/src/no_spark_predict.py", line 58, in <module>
    featurePipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip")
AttributeError: 'Pipeline' object has no attribute 'serializeToBundle'
```
Any help would be much appreciated! I have installed mleap from PyPI.


3 Answers

Stack Overflow user

Accepted answer

Posted on 2018-05-15 19:56:26

See here.

It seems MLeap is not ready for Spark 2.3 yet. If you happen to be running Spark 2.3, try downgrading to 2.2 and retrying. Hope this helps!

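To check whether you are affected before downgrading, a small pure-Python sketch of the version comparison (the version string is whatever `sc.version` reports in your session; `mleap_supported` is a hypothetical helper, not part of any library):

```python
def mleap_supported(spark_version: str) -> bool:
    """True if this Spark release predates 2.3 (which MLeap did not yet support)."""
    major, minor = (int(part) for part in spark_version.split(".")[:2])
    return (major, minor) < (2, 3)

print(mleap_supported("2.2.1"))  # True: fine to use
print(mleap_supported("2.3.0"))  # False: downgrade to 2.2 first
```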

Stack Overflow user

Posted on 2017-10-17 16:49:58

I have solved this issue by attaching the following jars at runtime:

```shell
spark-submit --packages ml.combust.mleap:mleap-spark_2.11:0.8.1 script.py
```

Stack Overflow user

Posted on 2017-09-19 07:37:51

You don't seem to be following the steps correctly; see http://mleap-docs.combust.ml/getting-started/py-spark.html

Note: importing mleap.pyspark needs to happen before importing any other PySpark libraries.

So try importing mleap before creating the SparkContext.

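The import-order advice matters because `serializeToBundle` is not defined on `Pipeline` itself: importing `mleap.pyspark.spark_support` monkey-patches the method onto the PySpark classes at import time. A minimal, PySpark-free sketch of that pattern (the `Pipeline` class and `serialize_to_bundle` function here are stand-ins for illustration, not mleap's actual code):

```python
class Pipeline:
    """Stand-in for pyspark.ml.Pipeline: no serialization support of its own."""
    pass

def serialize_to_bundle(self, path):
    """Stand-in for the method mleap attaches; just reports what it would do."""
    return f"would serialize {type(self).__name__} to {path}"

# Before patching, the attribute is missing -- the exact error in the question.
assert not hasattr(Pipeline, "serializeToBundle")

# What the mleap import effectively does: attach the method to the class.
Pipeline.serializeToBundle = serialize_to_bundle

p = Pipeline()
print(p.serializeToBundle("jar:file:/tmp/pyspark.example.zip"))
```

If the import never runs, fails silently, or the supporting jars are absent, the patch is never applied and the attribute is simply missing, which is what the traceback above shows.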
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/46287811
