I have a huge CSV file (about 5-6 GB in size) sitting in Hive. Is there a way to count the number of unique lines in the file?
I have no idea how to approach this.
I need to compare the result against another Hive table that has similar content but only unique values. So, basically, I need to find the number of distinct lines.
Posted on 2019-05-16 16:39:01
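For scale, first note what the naive approach looks like: if the file fit in memory, a set alone would do the job. A minimal baseline sketch (my own addition, not part of the accepted answer; the function name is hypothetical):

```python
import os
import tempfile

def count_unique_lines_in_memory(path):
    # Reads every line into a set: simple, but needs RAM proportional
    # to the number of distinct lines, so unsuitable for a 5-6 GB file
    with open(path) as fp:
        return len(set(fp))

# Tiny demo file: four lines, three of them distinct
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('a\nb\na\nc\n')
    demo = tmp.name
print(count_unique_lines_in_memory(demo))  # 3
os.unlink(demo)
```

The answer below avoids holding the lines themselves by keeping only their hashes.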
The logic below is hash-based. It stores the hash of each line rather than the line itself, which keeps memory use small, and then compares the hashes. Equal strings always have equal hashes, but in rare cases different strings can share a hash, so for any hash that occurs more than once the actual lines are re-read and compared as strings to be sure. The code below also works for large files.
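The collision premise is easy to see with a deliberately weak hash; the `tiny_hash` helper below is mine, purely for illustration:

```python
def tiny_hash(s, buckets=4):
    # Deliberately weak hash: sum of character codes modulo a few buckets,
    # so collisions are guaranteed for anagrams
    return sum(map(ord, s)) % buckets

# 'apple' and 'elppa' are different strings with the same character sum,
# so they collide under tiny_hash; only a string comparison separates them
print(tiny_hash('apple') == tiny_hash('elppa'))  # True
print('apple' == 'elppa')                        # False
```

This is exactly why the answer re-reads and string-compares the lines whose hashes repeat.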
from collections import Counter

input_file = r'input_file.txt'

# Main logic:
# - If two lines have different hashes, the lines are different.
# - If two lines have the same hash, the lines *may* still be different
#   (a collision), so those lines are re-read and compared as strings.

def count_with_index(values):
    '''Returns a dict like {value: (count, [indexes])}.'''
    result = {}
    for i, v in enumerate(values):
        count, indexes = result.get(v, (0, []))
        result[v] = (count + 1, indexes + [i])
    return result

def get_lines(fp, line_numbers):
    '''Yields only the lines of fp whose index is in line_numbers.'''
    wanted = set(line_numbers)  # set membership is O(1); a list scan is not
    return (v for i, v in enumerate(fp) if i in wanted)

# Pass 1: hash every line (one at a time, so memory stays small)
with open(input_file) as fp:
    hash_counter = count_with_index(map(hash, fp))

# Lines whose hash occurs exactly once are certainly unique
total_sum = sum(c for c, _ in hash_counter.values() if c == 1)

# Pass 2: for every hash shared by several lines, re-read those lines
# and count the distinct strings among them. One hash is processed at
# a time, so memory consumption stays low.
for h, (c, indexes) in hash_counter.items():
    if c != 1:
        with open(input_file) as fp:
            total_sum += len(Counter(get_lines(fp, indexes)))

print('Total number of unique lines is:', total_sum)

https://stackoverflow.com/questions/56163678
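The answer's two-pass logic can be wrapped into a function and verified on a small file. A sketch of that idea (`count_distinct_lines` is my name, not from the answer):

```python
import os
import tempfile
from collections import Counter

def count_distinct_lines(path):
    # Pass 1: record per-hash count and line indexes
    hashes = {}
    with open(path) as fp:
        for i, line in enumerate(fp):
            h = hash(line)
            c, idx = hashes.get(h, (0, []))
            hashes[h] = (c + 1, idx + [i])
    # Lines with a unique hash are certainly distinct
    total = sum(1 for c, _ in hashes.values() if c == 1)
    # Pass 2: re-read lines sharing a hash and compare the real strings
    for c, idx in hashes.values():
        if c > 1:
            wanted = set(idx)
            with open(path) as fp:
                total += len(Counter(l for i, l in enumerate(fp) if i in wanted))
    return total

# Demo: five lines, four distinct ('a' appears twice)
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('a\nb\na\nc\nd\n')
    demo = tmp.name
print(count_distinct_lines(demo))  # 4
os.unlink(demo)
```

Note that pass 2 re-opens the file once per duplicated hash, just as the answer does; with few collisions that cost is negligible.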