
'JavaPackage' object is not callable - MLeap
Stack Overflow user
Asked on 2018-08-27 15:36:09
2 answers · 2K views · 0 followers · Score 3

When I try to serialize a model with MLeap using the following code:

import mleap.pyspark
from mleap.pyspark.spark_support import SimpleSparkSerializer

# Import standard PySpark Transformers and packages
from pyspark.ml.feature import VectorAssembler, StandardScaler, OneHotEncoder, StringIndexer
from pyspark.ml import Pipeline, PipelineModel
from pyspark.sql import Row

# Create a test data frame
l = [('Alice', 1), ('Bob', 2)]
rdd = sc.parallelize(l)
Person = Row('name', 'age')
person = rdd.map(lambda r: Person(*r))
df2 = spark.createDataFrame(person)
df2.collect()

# Build a very simple pipeline using two transformers
string_indexer = StringIndexer(inputCol='name', outputCol='name_string_index')

feature_assembler = VectorAssembler(inputCols=[string_indexer.getOutputCol()], outputCol="features")

feature_pipeline = [string_indexer, feature_assembler]

featurePipeline = Pipeline(stages=feature_pipeline)

fittedPipeline = featurePipeline.fit(df2)


# serialize the model:
fittedPipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip", fittedPipeline.transform(df2))

However, I get the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-98a49e4cd023> in <module>()
----> 1 fittedPipeline.serializeToBundle("jar:file:/tmp/pyspark.example.zip", fittedPipeline.transform(df2))

/opt/anaconda2/envs/py345/lib/python3.4/site-packages/mleap/pyspark/spark_support.py in serializeToBundle(self, path, dataset)
     22 
     23 def serializeToBundle(self, path, dataset=None):
---> 24     serializer = SimpleSparkSerializer()
     25     serializer.serializeToBundle(self, path, dataset=dataset)
     26 

/opt/anaconda2/envs/py345/lib/python3.4/site-packages/mleap/pyspark/spark_support.py in __init__(self)
     37     def __init__(self):
     38         super(SimpleSparkSerializer, self).__init__()
---> 39         self._java_obj = _jvm().ml.combust.mleap.spark.SimpleSparkSerializer()
     40 
     41     def serializeToBundle(self, transformer, path, dataset):

TypeError: 'JavaPackage' object is not callable

Can anyone help?
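For context: in Py4J (which PySpark uses to talk to the JVM), any dotted path that does not resolve to a loadable Java class comes back as a `JavaPackage` placeholder, so this error almost always means the MLeap Scala classes are not on the driver's classpath. A minimal pure-Python sketch of the mechanism (the `JavaPackage` class below is a simplified stand-in for py4j's, not MLeap code):

```python
class JavaPackage:
    """Simplified stand-in for py4j.java_gateway.JavaPackage."""

    def __init__(self, name):
        self._name = name

    def __getattr__(self, item):
        # An unknown attribute resolves to a deeper "package",
        # never to a callable class.
        return JavaPackage(f"{self._name}.{item}")


jvm = JavaPackage("jvm")
try:
    # Same shape as _jvm().ml.combust.mleap.spark.SimpleSparkSerializer()
    jvm.ml.combust.mleap.spark.SimpleSparkSerializer()
except TypeError as e:
    print(e)  # 'JavaPackage' object is not callable
```

Calling the placeholder fails because `JavaPackage` defines no `__call__`, which is exactly the `TypeError` in the traceback above.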


2 Answers

Stack Overflow user

Accepted answer

Posted on 2018-08-28 11:51:54

I managed to solve this by downloading the missing jar files and pointing to them in the spark-submit script. In my case I had installed MLeap 0.8.1 and was using Spark 2 built against Scala 2.11, so I downloaded the following jar files from MvnRepository:

  • metrics-core-2.2.0
  • mleap-base_2.11-0.8.1
  • mleap-core_2.11-0.8.1
  • mleap-runtime_2.11-0.8.1
  • mleap-spark_2.11-0.8.1
  • mleap-spark-base_2.11-0.8.1
  • mleap-tensor_2.11-0.8.1

I then pointed to these jar files with the --jars flag in the spark-submit arguments, as shown below (I also used the --repositories flag to point to a Maven repository):

export PYSPARK_SUBMIT_ARGS='--master yarn --deploy-mode client --driver-memory 40g --num-executors 15 --executor-memory 30g --executor-cores 5 --packages ml.combust.mleap:mleap-runtime_2.11:0.8.1 --repositories http://YOUR MAVEN REPO/ --jars arpack_combined_all-0.1.jar,mleap-base_2.11-0.8.1.jar,mleap-core_2.11-0.8.1.jar,mleap-runtime_2.11-0.8.1.jar,mleap-spark_2.11-0.8.1.jar,mleap-spark-base_2.11-0.8.1.jar,mleap-tensor_2.11-0.8.1.jar pyspark-shell'
jupyter notebook --no-browser --ip=$(hostname -f)

— Source
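The same submit arguments can also be assembled in Python before pyspark is imported. A hedged sketch, where `jar_dir` is a placeholder for wherever you downloaded the jars, the file list is taken from the bullet points above, and the repo URL and driver/executor sizing from the answer are left out:

```python
import os

# Assumption: the MLeap 0.8.1 jars listed in the answer were downloaded here.
jar_dir = "/tmp/jars"
jar_files = [
    "mleap-base_2.11-0.8.1.jar",
    "mleap-core_2.11-0.8.1.jar",
    "mleap-runtime_2.11-0.8.1.jar",
    "mleap-spark_2.11-0.8.1.jar",
    "mleap-spark-base_2.11-0.8.1.jar",
    "mleap-tensor_2.11-0.8.1.jar",
]
jars = ",".join(os.path.join(jar_dir, f) for f in jar_files)

# Must be set before `import pyspark`, since the JVM is launched on first use.
os.environ["PYSPARK_SUBMIT_ARGS"] = (
    "--packages ml.combust.mleap:mleap-runtime_2.11:0.8.1 "
    f"--jars {jars} pyspark-shell"
)
```

Setting the variable from Python is equivalent to the shell `export` above; the key point in either form is that the string must end with `pyspark-shell`.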

Score 4

Stack Overflow user

Posted on 2022-02-21 14:00:14

@Tshilidzi's answer is correct -- what you need to do is add the mleap-spark jar to your Spark classpath.

One option in PySpark is to set the spark.jars.packages configuration when creating the SparkSession:

from pyspark.sql import SparkSession

# the exclude is needed because this lib seems not to be available
# in public Maven repos
spark = SparkSession.builder \
    .config('spark.jars.packages', 'ml.combust.mleap:mleap-spark_2.12:0.19.0') \
    .config("spark.jars.excludes", "net.sourceforge.f2j:arpack_combined_all") \
    .getOrCreate()

I tested this with mleap 0.19.0.
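Note that the artifact's Scala suffix (`_2.12` here) and the MLeap version must match the Spark/Scala build you are running. As a small illustration of how the Maven coordinate for spark.jars.packages is composed (the helper function is mine, not part of MLeap):

```python
def mleap_spark_coordinate(scala_version: str, mleap_version: str) -> str:
    """Build the Maven coordinate for MLeap's Spark module, following the
    usual group:artifact_<scala>:version naming scheme."""
    return f"ml.combust.mleap:mleap-spark_{scala_version}:{mleap_version}"


print(mleap_spark_coordinate("2.12", "0.19.0"))
# ml.combust.mleap:mleap-spark_2.12:0.19.0
```

The same helper yields the `mleap-spark_2.11:0.8.1` coordinate used in the accepted answer when called with ("2.11", "0.8.1").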

Score 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/52042658
