Sorry for the newbie question, but I have been stuck on this for hours:
If I type:

df['avg_wind_speed_9am'].head()

it returns:
TypeError                                 Traceback (most recent call last)
<ipython-input-42-c01967246c17> in <module>()
----> 1 df['avg_wind_speed_9am'].head()

TypeError: 'Column' object is not callable

If I type:

df[['avg_wind_speed_9am']].head()

it returns:

Row(avg_wind_speed_9am=2.080354199999768)

I don't understand; normally it should print a column.
Here is how I load the DataFrame:
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.read.load('file:///home/cloudera/Downloads/big-data-4/daily_weather.csv', format='com.databricks.spark.csv', header='true', inferSchema='true')

Here is what my dataset looks like:
number,air_pressure_9am,air_temp_9am,avg_wind_direction_9am,avg_wind_speed_9am,max_wind_direction_9am,max_wind_speed_9am,rain_accumulation_9am,rain_duration_9am,relative_humidity_9am,relative_humidity_3pm
0,918.0600000000087,74.82200000000041,271.1,2.080354199999768,295.39999999999986,2.863283199999908,0.0,0.0,42.42000000000046,36.160000000000494
1,917.3476881177097,71.40384263106537,101.93517935618371,2.4430092157340217,140.47154847112498,3.5333236016106238,0.0,0.0,24.328697291802207,19.4265967985621

Posted on 2020-11-09 02:37:31
Try one of the following:
df.select('avg_wind_speed_9am').head()
df.select('avg_wind_speed_9am').show()
n = 10
df.select('avg_wind_speed_9am').take(n)

In PySpark you normally query a DataFrame, not an individual column, so to query a single column you use:

df.select(<list_of_cols>)

where <list_of_cols> is a single column in your case.
https://stackoverflow.com/questions/64741585