
How to convert type <class '…'> into Vector

Stack Overflow user
Asked on 2017-03-02 04:11:07
1 answer · 5.5K views · 0 followers · 5 votes

I am completely new to Spark, and I am currently trying to write a simple piece of Python code that runs KMeans on a set of data.

Code language: python
from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
import re
from pyspark.mllib.clustering import KMeans, KMeansModel
from pyspark.mllib.linalg import DenseVector
from pyspark.mllib.linalg import SparseVector
from numpy import array
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler

import pandas as pd
import numpy
df = pd.read_csv("/<path>/Wholesale_customers_data.csv")
sql_sc = SQLContext(sc)  # `sc` is predefined in the pyspark shell
cols = ["Channel", "Region", "Fresh", "Milk", "Grocery", "Frozen", "Detergents_Paper", "Delicassen"]
s_df = sql_sc.createDataFrame(df)
vectorAss = VectorAssembler(inputCols=cols, outputCol="feature")
vdf = vectorAss.transform(s_df)
# NOTE: this passes a DataFrame to the RDD-based mllib KMeans -- the source of the error below
km = KMeans.train(vdf, k=2, maxIterations=10, runs=10, initializationMode="k-means||")
model = kmeans.fit(vdf)  # `kmeans` is also undefined here; the trained object above was bound to `km`
cluster = model.clusterCenters()
print(cluster)

I typed this into the pyspark shell, and when it ran model = kmeans.fit(vdf) I got the following error:

TypeError: Cannot convert type ... into Vector
  at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
  at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
  at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
  at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
  at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:69)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:275)
  at org.apache.spark.rdd.ZippedPartitionsRDD2.compute(ZippedPartitionsRDD.scala:88)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:277)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
  at org.apache.spark.scheduler.Task.run(Task.scala)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
17/02/26 23:31:58 ERROR Executor: Exception in task 6.0 in stage 23.0 (TID 113)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/hdp/2.5.0.0-1245/spark/python/lib/pyspark.zip/pyspark/worker.py", in main
    process()
  File "/usr/hdp/2.5.0.0-1245/spark/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/usr/hdp/2.5.0.0-1245/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/usr/hdp/2.5.0.0-1245/spark/python/lib/pyspark.zip/pyspark/mllib/linalg/__init__.py", line 77, in _convert_to_vector
    raise TypeError("Cannot convert type %s into Vector" % type(l))
TypeError: Cannot convert type ... into Vector
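
For context: the RDD-based pyspark.mllib KMeans.train expects an RDD of vectors, not a DataFrame, which is what this TypeError is complaining about. A minimal sketch of how that call could be fed instead, assuming the vdf DataFrame with its assembled "feature" column from the snippet above:

Code language: python
# Hypothetical fix for the mllib route: pull the assembled vectors out of the
# DataFrame as an RDD of plain numpy arrays before calling the RDD-based KMeans.train.
vec_rdd = vdf.select("feature").rdd.map(lambda row: row[0].toArray())
km = KMeans.train(vec_rdd, k=2, maxIterations=10, initializationMode="k-means||")
print(km.clusterCenters)  # mllib exposes clusterCenters as a property, not a method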

The data I am using comes from: https://archive.ics.uci.edu/ml/machine-learning-databases/00292/Wholesale%20customers%20data.csv

Can anyone tell me what is going wrong here? What am I missing? Any help is appreciated.

Thanks!

Update: @Garren, the error I get is:

>>> kmm = kmeans.fit(s_df)
17/03/02 21:58:01 INFO BlockManagerInfo: Removed broadcast_1_piece0 on localhost:56193 in memory (size: 5.8 KB, free: 511.1 MB)
17/03/02 21:58:01 INFO ContextCleaner: Cleaned accumulator 5
17/03/02 21:58:01 INFO BlockManagerInfo: Removed broadcast_0_piece0 on localhost:56193 in memory (size: 5.8 KB, free: 511.1 MB)
17/03/02 21:58:01 INFO ContextCleaner: Cleaned accumulator 4
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/ml/pipeline.py", line 69, in fit
    return self._fit(dataset)
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/ml/wrapper.py", line 133, in _fit
    java_model = self._fit_java(dataset)
  File "/usr/hdp/2.5.0.0-1245/spark/python/pyspark/ml/wrapper.py", line 130, in _fit_java
    return self._java_obj.fit(dataset._jdf)
  File "...", line 813, in __call__
  File "...", line 51, in deco
    raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u"cannot resolve 'features' given input columns: Channel, Grocery, Fresh, Frozen, Detergents_Paper, Region, Delicassen, Milk;"
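
This second failure is a different problem: the ml KMeans estimator reads its input from a vector column named by its featuresCol parameter ("features" by default), and s_df only has the raw CSV columns. A minimal sketch, assuming the assembled vdf from the original snippet (whose output column was named "feature", singular):

Code language: python
from pyspark.ml.clustering import KMeans  # note: shadows the mllib KMeans imported earlier

# Point the estimator at the assembled vector column explicitly,
# since it was named "feature" rather than the default "features".
kmeans = KMeans(k=2, maxIter=10, featuresCol="feature")
model = kmeans.fit(vdf)  # fit on the assembled DataFrame, not on s_df
print(model.clusterCenters())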


1 Answer

Stack Overflow user

Accepted answer

Posted on 2017-03-02 05:41:37

Use the Spark 2.x ML package instead of the soon-to-be-deprecated MLlib package:

Code language: python
from pyspark.ml.clustering import KMeans
from pyspark.ml.feature import VectorAssembler
# `spark` is the SparkSession predefined in the Spark 2.x shell
df = spark.read.option("inferSchema", "true").option("header", "true").csv("whole_customers_data.csv")
cols = df.columns
# "features" matches the default featuresCol expected by the ml KMeans estimator
vectorAss = VectorAssembler(inputCols=cols, outputCol="features")
vdf = vectorAss.transform(df)
kmeans = KMeans(k=2, maxIter=10, seed=1)
kmm = kmeans.fit(vdf)
kmm.clusterCenters()
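
To actually assign rows to clusters, the fitted ml model can transform the assembled DataFrame; a small follow-up sketch using the names from the snippet above:

Code language: python
# The fitted model adds a "prediction" column holding each row's cluster index.
predictions = kmm.transform(vdf)
predictions.select("features", "prediction").show(5)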
3 votes
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/42546720
