I'm trying to build one large dataset from the slice produced on each pass of a for loop.
The code I have so far is:
for n in range(4):
    script_dir = os.path.dirname(directory)
    rel_path = files[n]
    abs_file_path = os.path.join(script_dir, rel_path)
    to_open = pd.read_csv(abs_file_path, header=0)
    to_open["Geographic Address"] = to_open["Geographic Address"].astype(str)
    to_open["Geographic Address"] = to_open["Geographic Address"].map(lambda x: x[3:-1])
    to_open = to_open[to_open["Geographic Address"] == ld_up[n]]
    to_open.index = range(len(to_open))
    ind = np.searchsorted(to_open["Time"], time[n])
    ind = int(ind)  # np.asscalar is deprecated (removed in NumPy 1.23); int() is equivalent here
    UpperBound = ind - 30
    data = to_open.iloc[UpperBound:ind, :]
As you can see from the output, when I slice, `data` only holds the result from case 3; I would like one large file that contains cases 0, 1, 2, and 3 together.
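The behaviour described above can be reproduced with a minimal sketch using hypothetical data: `data` is reassigned on every pass, so after the loop it holds only the final iteration's slice.

```python
import pandas as pd

# Minimal sketch with hypothetical data: `data` is reassigned each pass,
# so only the slice from the last iteration (case 3) survives the loop.
for n in range(4):
    data = pd.DataFrame({"case": [n], "value": [n * 10]})

print(data["case"].iloc[0])  # prints 3: earlier cases were overwritten
```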
Posted on 2018-12-11 16:27:32
It looks like you are trying to stack these different cases. In that case, what you should do is append each slice to a list and then concatenate the list.
df_list = []
for n in range(4):
    script_dir = os.path.dirname(directory)
    rel_path = files[n]
    abs_file_path = os.path.join(script_dir, rel_path)
    to_open = pd.read_csv(abs_file_path, header=0)
    to_open["Geographic Address"] = to_open["Geographic Address"].astype(str)
    to_open["Geographic Address"] = to_open["Geographic Address"].map(lambda x: x[3:-1])
    to_open = to_open[to_open["Geographic Address"] == ld_up[n]]
    to_open.index = range(len(to_open))
    ind = np.searchsorted(to_open["Time"], time[n])
    ind = int(ind)  # np.asscalar is deprecated (removed in NumPy 1.23); int() is equivalent here
    UpperBound = ind - 30
    data = to_open.iloc[UpperBound:ind, :]
    df_list.append(data)
df = pd.concat(df_list)
https://stackoverflow.com/questions/53728345
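The append-then-concat pattern can be shown end-to-end with self-contained, hypothetical data standing in for the per-file CSV slices:

```python
import pandas as pd

# Hypothetical stand-in for the per-case slices: collect each one in a
# list inside the loop and concatenate once after the loop finishes.
df_list = []
for n in range(4):
    chunk = pd.DataFrame({"case": [n] * 2, "value": [n, n + 10]})
    df_list.append(chunk)

# ignore_index=True gives the stacked frame a clean 0..N-1 index
# instead of repeating each chunk's original row labels.
df = pd.concat(df_list, ignore_index=True)
print(df["case"].unique())  # all four cases present: [0 1 2 3]
```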