
Slicing a large file with Pandas, dropping duplicates, and merging into the output

Stack Overflow user
Asked on 2021-02-04 00:24:45
1 answer · 62 views · 1 vote

So, I have a geopackage with 1.25 billion features. The file does not actually contain any geometry and has only one attribute, 'id', which is a unique identifier. There are a lot of duplicates, and I want to drop the duplicate 'id' values and keep only the unique ones. Since there is a huge amount of data (the geopackage is 19 GB), I opted for slicing. I tried multiprocessing, but it did not work, and it would be problematic anyway because I have to keep track of the unique 'id' values across chunks, and multiprocessing does not allow that (at least as far as I know).

What I have:

Code language: python
import fiona
import geopandas as gpd
import pandas as pd
# import numpy as np

slice_count = 200
start = 0
end = slice_count
fname = "path/Output.gpkg"

file_gpd = gpd.read_file(fname, rows=slice(start, end))
chunk = pd.DataFrame(file_gpd)
chunks = pd.DataFrame()
only_ids = pd.DataFrame(columns=['id'])
loop = True
while loop:
    try:
        # Dropping duplicates in current dataset
        chunk = chunk.drop_duplicates(subset=['id'])

        # Extract only unique IDS from chunk variable to save memory 
        only_ids_in_chunk = pd.DataFrame()
        only_ids_in_chunk['id'] = chunk['id']

        only_ids = only_ids.append(only_ids_in_chunk)
        only_ids = only_ids.drop_duplicates(subset=['id'])

        # If we want to make another file which have all values unique
        # we must store somewhere what we have in chunk variable, to be able to load new chunk
        # Because we must not have all chunks in memory at the same time

        del chunk

        # Load next chunk

        start += slice_count
        end += slice_count
        file_gpd = gpd.read_file(fname, rows=slice(start, end))
        chunk = pd.DataFrame(file_gpd)
        if len(chunk) == 0:
            print(len(only_ids))
            loop = False
        else:
            pass
    except Exception:
        loop = False
        print("Iteration is stopped")

I end up with an infinite loop. I thought the if statement would catch the moment when the chunk length is 0, i.e. when the slicing has run past the end of the file.
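
For comparison, the goal here (collecting unique 'id' values without holding the whole file in memory) can also be approached by streaming features with fiona and accumulating the ids in a Python set. This is only a minimal sketch under the assumptions stated in the question (a single 'id' attribute, no geometry needed); whether the set fits in memory depends on how many ids are actually unique.

Code language: python

import fiona

fname = "path/Output.gpkg"   # same file as above
unique_ids = set()

# fiona iterates features lazily, so only one feature is held in memory
# at a time (plus the growing set of ids)
with fiona.open(fname) as src:
    for feature in src:
        unique_ids.add(feature["properties"]["id"])

print(len(unique_ids))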


1 Answer

Stack Overflow user

Accepted answer

Posted on 2021-02-06 17:23:25

So, this is the final script. The problem I ran into is that when you slice a geopackage file with geopandas and the slice reaches the end of the file, reading wraps around to the beginning and never stops. So I added the if statement at the end of the code to cover that.

Code language: python
import fiona
import geopandas as gpd
import pandas as pd
import logging
import time

slice_count = 20000000
start = 0
end = slice_count
fname = "/Output.gpkg"

chunk = gpd.read_file(fname, rows=slice(start, end), ignore_geometry=True)

chunks = pd.DataFrame()
only_ids = pd.DataFrame(columns=['id'])
loop = True
chunk_num = 1
while loop:
    start_time = time.time()
    # Dropping duplicates in current dataset
    chunk = chunk.drop_duplicates(subset=['id'])
        
    only_ids = only_ids.append(chunk)
    only_ids = only_ids.drop_duplicates(subset=['id'])

    # delete chunk to save memory
    del chunk

    # Load next chunk
    start += slice_count
    end += slice_count
    chunk = gpd.read_file(fname, rows=slice(start, end), ignore_geometry=True)
    
    FORMAT = '%(asctime)s:%(name)s:%(levelname)s - %(message)s'
    logging.basicConfig(format=FORMAT, level=logging.INFO)
    logging.info(f"Chunk {chunk_num} done")
    print(f"Duration: {time.time() - start_time}")
    chunk_num += 1

    if len(chunk) != slice_count:
        chunk = chunk.drop_duplicates(subset=['id'])
        only_ids = only_ids.append(chunk)
        only_ids = only_ids.drop_duplicates(subset=['id'])
        del chunk
        break

only_ids.to_csv('output.csv')
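
As a side note, DataFrame.append is deprecated since pandas 1.4 and removed in pandas 2.0, so on newer pandas the accumulation step above would need pd.concat instead. A minimal, self-contained sketch of the equivalent step (not taken from the answer above):

Code language: python

import pandas as pd

# Stand-in data for the accumulated ids and a freshly loaded chunk
only_ids = pd.DataFrame({'id': [1, 2, 3]})
chunk = pd.DataFrame({'id': [3, 4, 4, 5]})

# Replacement for only_ids = only_ids.append(chunk) on pandas >= 2.0
only_ids = pd.concat([only_ids, chunk[['id']]], ignore_index=True)
only_ids = only_ids.drop_duplicates(subset=['id'])

print(only_ids)  # each id appears exactly once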
0 votes
Original content from Stack Overflow.
Source: https://stackoverflow.com/questions/66031593
