
Why is reading one byte 20x slower than reading 2, 3, 4, ... bytes from a file?

Stack Overflow user
Asked on 2017-01-13 00:30:37
3 answers · 1.6K views · 0 followers · Score: 33

I have been trying to understand the tradeoff between read and seek. For small "jumps", reading unneeded data is faster than skipping it with seek.

While timing different read/seek chunk sizes to find the tipping point, I came across an odd phenomenon: read(1) is about 20 times slower than read(2), read(3), etc. The effect is the same for different read methods, e.g. read() and readinto().

Why is this the case?
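For a sense of scale, a stripped-down timing loop of the kind involved might look like the sketch below (an illustration only, not the full benchmark further down; it assumes the test.bin file created by the benchmark code already exists and is large enough that the relative seeks never reach its end):

from random import randint
from timeit import timeit

# Sketch: time two consecutive reads of n bytes after a small random
# relative seek, mirroring the "2 x buffered ... readinto" cases below.
with open('test.bin', 'rb') as f:
    for n in (1, 2, 3, 4):
        t = timeit('f.seek(randint(0, 8192), 1); f.read(n); f.read(n)',
                   globals={'f': f, 'n': n, 'randint': randint},
                   number=5000)
        print('read({}): {:.1f} µs per iteration'.format(n, t / 5000 * 1e6))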

Search for the following line in the timing results, about 2/3 of the way down:

2 x buffered 1 byte readinto bytearray

Environment:

Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul  5 2016, 11:45:57) [MSC v.1900 32 bit (Intel)]

Timing results:

Non-cachable binary data ingestion (file object blk_size = 8192):
- 2 x buffered 0 byte readinto bytearray:
      robust mean: 6.01 µs +/- 377 ns
      min: 3.59 µs
- Buffered 0 byte seek followed by 0 byte readinto:
      robust mean: 9.31 µs +/- 506 ns
      min: 6.16 µs
- 2 x buffered 4 byte readinto bytearray:
      robust mean: 14.4 µs +/- 6.82 µs
      min: 2.57 µs
- 2 x buffered 7 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 2 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.77 µs
      min: 3.08 µs
- 2 x buffered 5 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 3 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.73 µs
      min: 2.57 µs
- 2 x buffered 49 byte readinto bytearray:
      robust mean: 14.5 µs +/- 6.72 µs
      min: 2.57 µs
- 2 x buffered 6 byte readinto bytearray:
      robust mean: 14.6 µs +/- 6.76 µs
      min: 3.08 µs
- 2 x buffered 343 byte readinto bytearray:
      robust mean: 15.3 µs +/- 6.43 µs
      min: 3.08 µs
- 2 x buffered 2401 byte readinto bytearray:
      robust mean: 138 µs +/- 247 µs
      min: 4.11 µs
- Buffered 7 byte seek followed by 7 byte readinto:
      robust mean: 278 µs +/- 333 µs
      min: 15.4 µs
- Buffered 3 byte seek followed by 3 byte readinto:
      robust mean: 279 µs +/- 333 µs
      min: 14.9 µs
- Buffered 1 byte seek followed by 1 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 2 byte seek followed by 2 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 4 byte seek followed by 4 byte readinto:
      robust mean: 279 µs +/- 334 µs
      min: 15.4 µs
- Buffered 49 byte seek followed by 49 byte readinto:
      robust mean: 281 µs +/- 336 µs
      min: 14.9 µs
- Buffered 6 byte seek followed by 6 byte readinto:
      robust mean: 281 µs +/- 337 µs
      min: 15.4 µs
- 2 x buffered 1 byte readinto bytearray:
      robust mean: 282 µs +/- 334 µs
      min: 17.5 µs
- Buffered 5 byte seek followed by 5 byte readinto:
      robust mean: 282 µs +/- 338 µs
      min: 15.4 µs
- Buffered 343 byte seek followed by 343 byte readinto:
      robust mean: 283 µs +/- 340 µs
      min: 15.4 µs
- Buffered 2401 byte seek followed by 2401 byte readinto:
      robust mean: 309 µs +/- 373 µs
      min: 15.4 µs
- Buffered 16807 byte seek followed by 16807 byte readinto:
      robust mean: 325 µs +/- 423 µs
      min: 15.4 µs
- 2 x buffered 16807 byte readinto bytearray:
      robust mean: 457 µs +/- 558 µs
      min: 16.9 µs
- Buffered 117649 byte seek followed by 117649 byte readinto:
      robust mean: 851 µs +/- 1.08 ms
      min: 15.9 µs
- 2 x buffered 117649 byte readinto bytearray:
      robust mean: 1.29 ms +/- 1.63 ms
      min: 18 µs

Benchmark code:

from _utils import BenchmarkResults

from timeit import timeit, repeat
import gc
import os
from contextlib import suppress
from math import floor
from random import randint

### Configuration

FILE_NAME = 'test.bin'
r = 5000
n = 100

reps = 1

chunk_sizes = list(range(7)) + [7**x for x in range(1,7)]

results = BenchmarkResults(description = 'Non-cachable binary data ingestion')


### Setup

FILE_SIZE = int(100e6)

# remove left over test file
with suppress(FileNotFoundError):
    os.unlink(FILE_NAME)

# determine how large a file needs to be to not fit in memory
gc.collect()
try:
    while True:
        data = bytearray(FILE_SIZE)
        del data
        FILE_SIZE *= 2
        gc.collect()
except MemoryError:
    FILE_SIZE *= 2
    print('Using file with {} GB'.format(FILE_SIZE / 1024**3))

# check enough data in file
required_size = sum(chunk_sizes)*2*2*reps*r
print('File size used: {} GB'.format(required_size / 1024**3))
assert required_size <= FILE_SIZE


# create test file
with open(FILE_NAME, 'wb') as file:
    buffer_size = int(10e6)
    data = bytearray(buffer_size)
    for i in range(int(FILE_SIZE / buffer_size)):
        file.write(data)

# read file once to try to force it into system cache as much as possible
from io import DEFAULT_BUFFER_SIZE
buffer_size = 10*DEFAULT_BUFFER_SIZE
buffer = bytearray(buffer_size)
with open(FILE_NAME, 'rb') as file:
    bytes_read = True
    while bytes_read:
        bytes_read = file.readinto(buffer)
    blk_size = file.raw._blksize

results.description += ' (file object blk_size = {})'.format(blk_size)

file = open(FILE_NAME, 'rb')

### Benchmarks

setup = \
"""
# random seek to avoid advantageous starting position biasing results
file.seek(randint(0, file.raw._blksize), 1)
"""

read_read = \
"""
file.read(chunk_size)
file.read(chunk_size)
"""

seek_seek = \
"""
file.seek(buffer_size, 1)
file.seek(buffer_size, 1)
"""

seek_read = \
"""
file.seek(buffer_size, 1)
file.read(chunk_size)
"""

read_read_timings = {}
seek_seek_timings = {}
seek_read_timings = {}
for chunk_size in chunk_sizes:
    read_read_timings[chunk_size] = []
    seek_seek_timings[chunk_size] = []
    seek_read_timings[chunk_size] = []

for j in range(r):
    #file.seek(0)
    for chunk_size in chunk_sizes:
        buffer = bytearray(chunk_size)
        read_read_timings[chunk_size].append(timeit(read_read, setup, number=reps, globals=globals()))
        #seek_seek_timings[chunk_size].append(timeit(seek_seek, setup, number=reps, globals=globals()))
        seek_read_timings[chunk_size].append(timeit(seek_read, setup, number=reps, globals=globals()))

for chunk_size in chunk_sizes:
    results['2 x buffered {} byte readinto bytearray'.format(chunk_size)] = read_read_timings[chunk_size]
    #results['2 x buffered {} byte seek'.format(chunk_size)] = seek_seek_timings[chunk_size]
    results['Buffered {} byte seek followed by {} byte readinto'.format(chunk_size, chunk_size)] = seek_read_timings[chunk_size]


### Cleanup
file.close()
os.unlink(FILE_NAME)

results.show()
results.save()

Edit 2020-02-24:

@finefoot asked for the _utils package needed to run the code above:

from collections import OrderedDict
from math import ceil
from statistics import mean, stdev
from contextlib import suppress
import os
import inspect

class BenchmarkResults(OrderedDict):
    def __init__(self, *args, description='Benchmark Description', **kwArgs):
        self.description = description
        return super(BenchmarkResults, self).__init__(*args, **kwArgs)

    def __repr__(self):
        """Shows the results for the benchmarks in order of ascending duration"""
        characteristic_durations = []
        for name, timings in self.items():
            try:
                characteristic_durations.append(_robust_stats(timings)[0])
            except ValueError:
                if len(timings) > 1:
                    characteristic_durations.append(mean(timings))
                else:
                    characteristic_durations.append(timings[0])
        indx = _argsort(characteristic_durations)
        repr = '{}:\n'.format(self.description)
        items = list(self.items())
        for i in indx:
            name, timings = items[i]
            repr += '- {}:\n'.format(name)
            try:
                stats = _robust_stats(timings)
                repr += '      robust mean: {} +/- {}\n'.format(_units(stats[0]), _units(stats[1]))
            except ValueError:
                repr += '      timings: {}\n'.format(', '.join(map(_units, timings)))
            if len(timings) > 1:
                repr += '      min: {}\n'.format(_units(min(timings)))
        return repr

    def show(self):
        print(self)

    def save(self):
        caller = inspect.stack()[1]
        filename = os.path.splitext(caller.filename)[0] + '.log'
        with open(filename, 'w') as logfile:
            logfile.write(repr(self))


def _units(seconds, significant_figures=3):
    # order the thresholds from largest to smallest so every branch is reachable
    fmt = '{:.%sg} {}' % significant_figures
    if seconds > 3600:
        return fmt.format(seconds/3600, 'hrs')
    elif seconds > 60:
        return fmt.format(seconds/60, 'min')
    elif seconds > 1:
        return fmt.format(seconds, 's')
    elif seconds > 1e-3:
        return fmt.format(seconds*1e3, 'ms')
    elif seconds > 1e-6:
        return fmt.format(seconds*1e6, 'µs')
    else:
        return fmt.format(seconds*1e9, 'ns')

def _robust_stats(timings, fraction_to_use=0.8):
    if len(timings) < 5:
        raise ValueError('To calculate a robust mean, you need at least 5 timing results')
    elts_to_prune = int(len(timings) * (1 - fraction_to_use))
    # prune at least the highest and the lowest result
    elts_to_prune = elts_to_prune if elts_to_prune > 2 else 2
    # round to even number --> symmetric pruning
    offset = ceil(elts_to_prune / 2)

    # sort the timings
    timings.sort()
    # prune the required fraction of the elements
    timings = timings[offset:-offset]
    return mean(timings), stdev(timings)

def _argsort(seq):
    # http://stackoverflow.com/questions/3071415/efficient-method-to-calculate-the-rank-vector-of-a-list-in-python
    return sorted(range(len(seq)), key=seq.__getitem__)

if __name__ == '__main__':
    pass

3 Answers

Stack Overflow user

Answered on 2020-02-24 13:10:35

I was able to reproduce the issue with your code. However, I noticed the following: does the issue go away if you replace

file.seek(randint(0, file.raw._blksize), 1)

with

file.seek(randint(0, file.raw._blksize), 0)

in setup? I think you might be running out of data at some point during the 1-byte reads. The 2-byte, 3-byte, etc. reads then have no data left to read at all, which is why they are so much faster.
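One quick way to check this hypothesis (a sketch, not part of the original answer) is to confirm that a buffered read at or beyond end-of-file returns an empty result immediately, which would make any timing whose position has drifted past the end of the file look very fast:

# Sketch: reads at or past EOF return b'' without touching the disk.
with open('test.bin', 'rb') as f:   # test.bin as created by the benchmark
    f.seek(0, 2)                    # whence=2: jump to the end of the file
    print(f.tell())                 # file size in bytes
    print(f.read(3))                # b'' -- nothing left, returns at once
    f.seek(10000, 1)                # relative seeks can move past EOF
    print(f.read(3))                # still b''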

Score: 2

Stack Overflow user

Answered on 2019-08-26 18:33:47

Reading byte-by-byte from a file handle is generally slower than reading in chunks.

In general, every read() call in Python corresponds to a C read() call. The total result involves a system call requesting the next character. For a 2 kB file, this means 2000 calls to the kernel, each requiring a function call, a request to the kernel, and then waiting for the response and passing it back through the return.

The most notable part here is the waiting for a response: the system call blocks until your request is acknowledged in the queue, so you have to wait.

The fewer calls, the better, so reading more bytes at a time is faster; this is why buffered I/O is in such common use.

In Python, buffering is provided by io.BufferedReader, or via the buffering keyword argument to open() for files.
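As a rough illustration (a sketch using a hypothetical data.bin, not a rigorous benchmark), the cost of one system call per byte can be seen by comparing an unbuffered raw file object with the default buffered one:

from timeit import timeit

# buffering=0 gives a raw io.FileIO object: every read() is a system call.
raw = open('data.bin', 'rb', buffering=0)
# The default gives an io.BufferedReader: most read(1) calls are served
# from an in-memory buffer instead of going to the kernel.
buf = open('data.bin', 'rb')

n = 100000  # assumes data.bin holds at least this many bytes
print('raw      :', timeit('raw.read(1)', globals={'raw': raw}, number=n))
print('buffered :', timeit('buf.read(1)', globals={'buf': buf}, number=n))

raw.close()
buf.close()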

Score: 1

Stack Overflow user

Answered on 2019-02-13 16:24:05

I have seen similar behaviour when working with Arduinos interfacing with an EEPROM. Basically, in order to write to or read from a chip or data structure, you have to send a write/read enable command, send a starting address, and then grab the first character. If you grab multiple bytes, however, most chips will auto-increment their target address register. So there is some fixed overhead to starting a read/write operation. It is the difference between:

  • Start communication
  • Send read enable
  • Send read command
  • Send address 1
  • Get data from target 1
  • End communication
  • Start communication
  • Send read enable
  • Send read command
  • Send address 2
  • Get data from target 2
  • End communication

versus:

  • Start communication
  • Send read enable
  • Send read command
  • Send address 1
  • Get data from target 1
  • Get data from target 2
  • End communication

In terms of machine instructions alone, reading multiple bits/bytes at a time clears away a lot of overhead. It is even worse when some chips require you to idle for a few clock cycles after the read/write enable is sent, to let a mechanical process physically move a transistor into place to enable the read or write.
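A back-of-the-envelope model of that overhead (illustrative numbers only, not measurements from any real chip) makes the difference plain:

# Toy model: a fixed per-transaction setup cost (start, enable, address,
# end) plus a small per-byte cost. Fetching each byte in its own
# transaction pays the setup cost every time; one auto-incrementing
# burst pays it only once.
SETUP = 10     # arbitrary time units per transaction
PER_BYTE = 1

def separate_transactions(n):
    return n * (SETUP + PER_BYTE)

def single_burst(n):
    return SETUP + n * PER_BYTE

for n in (1, 2, 4, 64):
    print('{:>3} bytes: {:>4} vs {:>3}'.format(
        n, separate_transactions(n), single_burst(n)))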

Score: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/41625529
