I have a simple loop that processes a large dataset.
for i in range(len(nbi.LONG_017)):
    StoredOCC = []
    for j in range(len(Final.X)):
        r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
        if r < 0.03048:
            SWw = Final.CUM_OCC[j]
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)
len(nbi.LONG_017) is 3,000 and len(Final.X) is 6 million data points.
I would like to know whether there is an efficient way to implement this code, or whether parallel computing could make it faster.
For the haversine function I used the code from: Haversine Formula in Python (Bearing and Distance between two GPS points)
from math import radians, sin, cos, asin, sqrt

def haversine(lon1, lat1, lon2, lat2):
"""
Calculate the great circle distance between two points
on the earth (specified in decimal degrees)
"""
# convert decimal degrees to radians
lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])
# haversine formula
dlon = lon2 - lon1
dlat = lat2 - lat1
a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2
c = 2 * asin(sqrt(a))
r = 6372.8 # Radius of earth in kilometers. Use 3956 for miles
return r * c发布于 2021-03-25 20:49:27
Before talking about parallelization, you can optimize the loops. A first approach is to iterate over the data itself rather than incrementing an index and accessing the data on every iteration:
# toy sample
np.random.seed(1)
size_nbi = 20
size_Final = 100
nbi = pd.DataFrame({'LONG_017': np.random.random(size=size_nbi)/100+73,
                    'LAT_016': np.random.random(size=size_nbi)/100+73,})
Final = pd.DataFrame({'X': np.random.random(size=size_Final)/100+73,
                      'Y': np.random.random(size=size_Final)/100+73,
                      'CUM_OCC': np.random.randint(size_Final, size=size_Final)})
With your method, you get about 75 ms for data of these toy sizes:
%%timeit
for i in range(len(nbi.LONG_017)):
    StoredOCC = []
    for j in range(len(Final.X)):
        r = haversine(nbi.LONG_017[i], nbi.LAT_016[i], Final.X[j], Final.Y[j])
        if r < 0.03048:
            SWw = Final.CUM_OCC[j]
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        nbi.loc[i, 'ADTT_02_2019'] = np.max(StoredOCC)
# 75.6 ms ± 4.05 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Now, if you change the loops slightly to iterate over the data itself, collect the results in a list, and assign the column once outside the loop, then with just this optimization (you may find even better ones) you get down to about 5 ms, so about 15 times faster:
%%timeit
res_ADIT = []
for lon1, lat1 in zip(nbi.LONG_017.to_numpy(),
                      nbi.LAT_016.to_numpy()):
    StoredOCC = []
    for lon2, lat2, SWw in zip(Final.X.to_numpy(),
                               Final.Y.to_numpy(),
                               Final.CUM_OCC.to_numpy()):
        r = haversine(lon1, lat1, lon2, lat2)
        if r < 0.03048:
            StoredOCC.append(SWw)
    if len(StoredOCC) != 0:
        res_ADIT.append(np.max(StoredOCC))
    else:
        res_ADIT.append(np.nan)
nbi['ADIT_v2'] = res_ADIT
# 5.23 ms ± 305 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Now you can go even further and vectorize the second loop with numpy, and also pass the radian values directly instead of mapping them on every iteration:
# empty list for result
res_ADIT = []
# work with arrays in radians
arr_lon2 = np.radians(Final.X.to_numpy())
arr_lat2 = np.radians(Final.Y.to_numpy())
arr_OCC = Final.CUM_OCC.to_numpy()
for lon1, lat1 in zip(np.radians(nbi.LONG_017),  # pass the radians directly
                      np.radians(nbi.LAT_016)):
    # do all the subtractions in a vectorized way
    arr_dlon = arr_lon2 - lon1
    arr_dlat = arr_lat2 - lat1
    # same here, using numpy functions
    arr_dist = np.sin(arr_dlat/2)**2 + np.cos(lat1) * np.cos(arr_lat2) * np.sin(arr_dlon/2)**2
    arr_dist = 2 * np.arcsin(np.sqrt(arr_dist))
    arr_dist *= 6372.8
    # extract the values of CUM_OCC that meet the criterion
    r = arr_OCC[arr_dist < 0.03048]
    # check that there is at least one element
    if r.size > 0:
        res_ADIT.append(max(r))
    else:
        res_ADIT.append(np.nan)
nbi['AUDIT_np'] = res_ADIT
If you timeit this, you are about 90 times faster than the original solution for this small data size, and all the results are the same:
print(nbi)
LONG_017 LAT_016 ADTT_02_2019 ADIT_v2 AUDIT_np
0 73.004170 73.008007 NaN NaN NaN
1 73.007203 73.009683 30.0 30.0 30.0
2 73.000001 73.003134 14.0 14.0 14.0
3 73.003023 73.006923 82.0 82.0 82.0
4 73.001468 73.008764 NaN NaN NaN
5 73.000923 73.008946 NaN NaN NaN
6 73.001863 73.000850 NaN NaN NaN
7 73.003456 73.000391 NaN NaN NaN
8 73.003968 73.001698 21.0 21.0 21.0
9 73.005388 73.008781 NaN NaN NaN
10 73.004192 73.000983 93.0 93.0 93.0
11 73.006852 73.004211 NaN NaN NaN
You can play with the code a bit and increase the size of each toy dataset; you will see that as the sizes grow (especially the size of Final), the gain from vectorization becomes interesting: for example, with size_nbi = 20 and size_Final = 1000, the vectorized solution is about 400 times faster. And so on. For your full data size (3K * 6M) it will still take some time: on my machine I estimate about 25 minutes, versus roughly 100 hours for your original solution. If that is not enough, you can think about parallelization, or use numba.
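As a sketch of the numba route mentioned above (my illustration, not the answer author's code: the function name is made up, coordinates are assumed to be pre-converted to radians, and CUM_OCC values are assumed non-negative so that -1.0 can mark "no match"), the pairwise loop can be written as a single JIT-compiled kernel with a parallel outer loop:

```python
import numpy as np

# numba is optional here; without it the kernel still runs, just slowly
try:
    from numba import njit, prange
except ImportError:
    njit = lambda **kwargs: (lambda f: f)
    prange = range

@njit(parallel=True)
def max_occ_within(lon1, lat1, lon2, lat2, occ, thresh_km):
    """For each point i in (lon1, lat1), return the max of occ over all
    points j in (lon2, lat2) closer than thresh_km, or NaN if none.
    All coordinates are expected in radians; occ must be non-negative."""
    out = np.full(lon1.size, np.nan)
    for i in prange(lon1.size):
        best = -1.0
        for j in range(lon2.size):
            dlon = lon2[j] - lon1[i]
            dlat = lat2[j] - lat1[i]
            a = np.sin(dlat / 2)**2 + np.cos(lat1[i]) * np.cos(lat2[j]) * np.sin(dlon / 2)**2
            if 2 * 6372.8 * np.arcsin(np.sqrt(a)) < thresh_km:
                if occ[j] > best:
                    best = occ[j]
        if best >= 0:
            out[i] = best
    return out
```

Called as max_occ_within(np.radians(nbi.LONG_017.to_numpy()), np.radians(nbi.LAT_016.to_numpy()), np.radians(Final.X.to_numpy()), np.radians(Final.Y.to_numpy()), Final.CUM_OCC.to_numpy().astype(np.float64), 0.03048), it should fill the same column as the loops above.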
Posted on 2021-03-25 21:57:48
This approach using a BallTree takes about a minute on my machine (depending on the radius: the smaller it is, the faster), for the sizes you mention (7000 and 6M).
import numpy as np
import sklearn
import pandas as pd
To generate the data, I used your code:
# toy sample
np.random.seed(1)
size_nbi = 7000
size_Final = 6000000
nbi = pd.DataFrame({'LONG_017': np.random.random(size=size_nbi)/10+73,
                    'LAT_016': np.random.random(size=size_nbi)/10+73,})
Final = pd.DataFrame({'X': np.random.random(size=size_Final)/10+73,
                      'Y': np.random.random(size=size_Final)/10+73,
                      'CUM_OCC': np.random.randint(size_Final, size=size_Final)})
nbi_gps = nbi[["LAT_016", "LONG_017"]].values
final_gps = Final[["Y", "X"]].values
Create a BallTree:
%%time
from sklearn.neighbors import BallTree
import numpy as np
nbi = np.radians(nbi_gps)
final = np.radians(final_gps)
tree = BallTree(final, leaf_size=12, metric='haversine')
This took Wall time: 23.8 s
Query:
%%time
radius = 0.0000003
StoredOCC_indici = tree.query_radius(nbi, r=radius, return_distance=False, count_only=False)
(less than a second)
Get the maximum of the values you are interested in:
StoredOCC = [np.max(Final.CUM_OCC[i]) for i in StoredOCC_indici]
This automatically produces NaN for empty lists, which is nice. It took Wall time: 3.64 s
For this radius, the computation over the 6 million points took less than a minute on my machine.
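One thing worth flagging (my note, not part of the original answer): BallTree's 'haversine' metric works on (lat, lon) in radians and query_radius takes an angular radius, so radius = 0.0000003 above corresponds to only about 2 metres on the ground. To reproduce the question's 0.03048 km cut-off, you would divide by the Earth radius used in the haversine function:

```python
# Convert a linear distance threshold to the angular radius that
# BallTree's 'haversine' metric expects.
EARTH_RADIUS_KM = 6372.8   # same constant as the haversine function above
threshold_km = 0.03048     # the question's cut-off (about 100 ft)
radius = threshold_km / EARTH_RADIUS_KM
print(radius)  # ~4.78e-06
```

Note that a larger radius means more candidate neighbours per query, which is why the answer's timing depends on the radius chosen.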
https://stackoverflow.com/questions/66802110