I am benchmarking Parquet file read performance with Dask and Python, and I found that reading the same file with pandas is significantly faster than with Dask. I would like to understand why this is, and whether there is a way to get matching performance.
Versions of all relevant packages:
import pandas as pd
import numpy as np
import dask
import dask.dataframe as dd
import pyarrow
import fastparquet

print(dask.__version__)         # 2.6.0
print(pd.__version__)           # 0.25.2
print(pyarrow.__version__)      # 0.15.1
print(fastparquet.__version__)  # 0.3.2
col = [str(i) for i in range(40)]  # column names "0" .. "39"
df = pd.DataFrame(np.random.randint(0, 100, size=(5_000_000, 40)), columns=col)
df.to_parquet('large1.parquet', engine='pyarrow')
# Wall time: 3.86 s
df.to_parquet('large2.parquet', engine='fastparquet')
# Wall time: 27.1 s
df = dd.read_parquet('large2.parquet', engine='fastparquet').compute()
# Wall time: 5.89 s
df = dd.read_parquet('large1.parquet', engine='pyarrow').compute()
# Wall time: 4.84 s
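One thing worth checking before calling .compute() (not part of the original post): how many partitions Dask actually creates for these files. If pandas wrote the whole file as a single row group, Dask will typically end up with one partition and nothing to parallelize; the variable name ddf is just for illustration.

ddf = dd.read_parquet('large1.parquet', engine='pyarrow')
print(ddf.npartitions)  # likely 1 for a single-row-group file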
df = pd.read_parquet('large1.parquet', engine='pyarrow')
# Wall time: 503 ms
df = pd.read_parquet('large2.parquet', engine='fastparquet')
# Wall time: 4.12 s

The difference is even larger with mixed data types:
# Output of df.info() on the mixed-dtype frame:
# dtypes: category(7), datetime64[ns](2), float64(1), int64(1), object(9)
# memory usage: 973.2+ MB
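The post does not show how this mixed-dtype frame was built. A hypothetical reconstruction that matches the dtype summary above (all column names, value ranges, and string choices are assumptions, not the original data):

n = 8_575_745  # rows, matching df.shape below
df = pd.DataFrame({
    # 7 category columns
    **{f'cat{i}': pd.Series(np.random.choice(['a', 'b', 'c'], n), dtype='category')
       for i in range(7)},
    # 2 datetime64[ns] columns
    **{f'dt{i}': pd.Timestamp('2019-01-01')
               + pd.to_timedelta(np.random.randint(0, 365, n), unit='D')
       for i in range(2)},
    'flt': np.random.random(n),           # 1 float64 column
    'int': np.random.randint(0, 100, n),  # 1 int64 column
    # 9 object (string) columns
    **{f'obj{i}': np.random.choice(['foo', 'bar', 'baz'], n) for i in range(9)},
})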
# df.shape == (8575745, 20)
df.to_parquet('large1.parquet', engine='pyarrow')
# Wall time: 9.67 s
df.to_parquet('large2.parquet', engine='fastparquet')
# Wall time: 33.3 s
# read with Dask
df = dd.read_parquet('large1.parquet', engine='pyarrow').compute()
# Wall time: 34.5 s
df = dd.read_parquet('large2.parquet', engine='fastparquet').compute()
# Wall time: 1min 22s
# read with pandas
df = pd.read_parquet('large1.parquet', engine='pyarrow')
# Wall time: 8.67 s
df = pd.read_parquet('large2.parquet', engine='fastparquet')
# Wall time: 21.8 s

Posted on 2019-11-12 15:48:37
My first guess is that pandas saves Parquet datasets into a single row group, which won't allow a system like Dask to parallelize. That doesn't explain why it's slower, but it does explain why it isn't faster.
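One way to test this guess (a minimal sketch, not from the original answer; the row_group_size value is an arbitrary choice):

import pyarrow.parquet as pq

print(pq.ParquetFile('large1.parquet').num_row_groups)
# 1 → Dask gets a single partition and cannot parallelize the read

# Writing with smaller row groups should give Dask several pieces to read in
# parallel; pandas forwards extra keyword arguments such as row_group_size
# to pyarrow.parquet.write_table.
df.to_parquet('large1.parquet', engine='pyarrow', row_group_size=500_000)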
To learn more, I would recommend profiling. You may be interested in this document:
https://stackoverflow.com/questions/58820760
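On the local threaded scheduler, Dask's built-in diagnostics are one way to do that profiling. A minimal sketch (visualize() requires bokeh to be installed):

from dask.diagnostics import Profiler, ResourceProfiler
import dask.dataframe as dd

with Profiler() as prof, ResourceProfiler() as rprof:
    dd.read_parquet('large1.parquet', engine='pyarrow').compute()

prof.visualize()   # task timeline: where each read/concat task spends time
rprof.visualize()  # CPU and memory usage over the run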