I have a very big file (4 GB), and when I try to read it my computer hangs. So I want to read it piece by piece, and after processing each piece, store the processed piece into another file and read the next piece.
Is there any method to yield these pieces?
I would love to have a lazy method.
Answers:
To write a lazy function, just use yield:
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

with open('really_big_file.dat') as f:
    for piece in read_in_chunks(f):
        process_data(piece)
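To also write each processed piece to another file, as the question describes, a minimal sketch could look like this (assuming process_data() returns the transformed chunk; 'processed_file.dat' is a hypothetical output name):

# Sketch: write each processed chunk to a second, hypothetical output file.
with open('really_big_file.dat') as f, open('processed_file.dat', 'w') as out:
    for piece in read_in_chunks(f):
        out.write(process_data(piece))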
Another option is to use iter and a helper function:
f = open('really_big_file.dat')

def read1k():
    return f.read(1024)

for piece in iter(read1k, ''):
    process_data(piece)
If the file is line-based, the file object is already a lazy generator of lines:
for line in open('really_big_file.dat'):
    process_data(line)
The 'rb' mentioned by @Tal Weiss is missing, and a file.close() statement is missing (using with open('really_big_file.dat', 'rb') as f: would accomplish the same); see here for another concise implementation.
Why is the 'rb' missing a problem?
If the 'b' is missing, whoever reads the data will most likely corrupt it. From the docs: Python on Windows makes a distinction between text and binary files; [...] it'll corrupt binary data like that in JPEG or EXE files. Be very careful to use binary mode when reading and writing such files.
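Putting those two comments together, a short sketch that reads in binary mode and closes the file automatically (reusing read_in_chunks() and process_data() from above):

# Sketch: binary mode plus a with block, so the file is closed automatically.
with open('really_big_file.dat', 'rb') as f:
    for piece in read_in_chunks(f):
        process_data(piece)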
If your computer, OS and Python are 64-bit, you can use the mmap module to map the contents of the file into memory and access it with indices and slices. Here is an example from the documentation:
import mmap

with open("hello.txt", "r+") as f:
    # memory-map the file, size 0 means whole file
    map = mmap.mmap(f.fileno(), 0)
    # read content via standard file methods
    print map.readline()  # prints "Hello Python!"
    # read content via slice notation
    print map[:5]  # prints "Hello"
    # update content using slice notation;
    # note that new content must have same size
    map[6:] = " world!\n"
    # ... and read again using standard file methods
    map.seek(0)
    print map.readline()  # prints "Hello world!"
    # close the map
    map.close()
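The example above uses Python 2 syntax. A sketch of the same idea for Python 3, where mmap needs a binary-mode file and the contents are bytes (it assumes hello.txt holds "Hello Python!\n", as in the docs example):

import mmap

with open("hello.txt", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:  # size 0 means whole file
        print(m.readline())              # first line, as bytes
        print(m[:5])                     # b"Hello"
        m[6:] = b" world!\n"             # new content must have the same size
        m.seek(0)
        print(m.readline())              # the updated line
    # the map is closed automatically by the with block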
If your computer, OS or Python are 32-bit, mmap-ing a large file can reserve large parts of the address space and starve your program of memory.
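If the whole file cannot be mapped at once, one workaround is to map one window at a time. A sketch, assuming process_data() accepts bytes; the offset passed to mmap must be a multiple of mmap.ALLOCATIONGRANULARITY, which a 64 MiB window satisfies:

import mmap

# Sketch: map a fixed-size window of the file at a time instead of the
# whole file; process_data() is assumed from the answers above.
window = 64 * 1024 * 1024

with open('really_big_file.dat', 'rb') as f:
    f.seek(0, 2)                 # seek to the end to find the file size
    size = f.tell()
    offset = 0
    while offset < size:
        length = min(window, size - offset)
        with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                       offset=offset) as m:
            process_data(m[:])   # copy of the mapped window as bytes
        offset += length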
file.readlines() takes an optional sizehint argument: instead of reading up to EOF, it reads whole lines totalling approximately that many bytes.
bigfile = open('bigfilename', 'r')
tmp_lines = bigfile.readlines(BUF_SIZE)
while tmp_lines:
    process([line for line in tmp_lines])
    tmp_lines = bigfile.readlines(BUF_SIZE)
To read the file in chunks you want .read(), not .readlines(). If the file is binary, it is not going to have line breaks.
There are already many good answers, but if your entire file is on a single line and you still want to process "rows" (as opposed to fixed-size blocks), those answers won't help you.
99% of the time it is possible to process files line by line. Then, as suggested in this answer, you can use the file object itself as a lazy generator:
with open('big.csv') as f:
    for line in f:
        process(line)
However, I once ran into a very, very big (almost) single-line file, where the row separator was in fact not '\n' but '|'.
Converting '|' to '\n' before processing was also not an option, because some fields of this csv contained '\n' (free-text user input). For situations like that, I created the following snippet:
def rows(f, chunksize=1024, sep='|'):
    """
    Read a file where the row separator is '|' lazily.

    Usage:
    >>> with open('big.csv') as f:
    >>>     for r in rows(f):
    >>>         process(r)
    """
    curr_row = ''
    while True:
        chunk = f.read(chunksize)
        if chunk == '':  # End of file
            yield curr_row
            break
        while True:
            i = chunk.find(sep)
            if i == -1:
                break
            yield curr_row + chunk[:i]
            curr_row = ''
            chunk = chunk[i+1:]
        curr_row += chunk
I was able to use it successfully to solve my problem. It has been extensively tested with various chunk sizes. Here is the test suite, for those who want to convince themselves:
import os

test_file = 'test_file'

def cleanup(func):
    def wrapper(*args, **kwargs):
        func(*args, **kwargs)
        os.unlink(test_file)
    return wrapper

@cleanup
def test_empty(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1_char_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_1_char(chunksize=1024):
    with open(test_file, 'w') as f:
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1025_chars_1_row(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1

@cleanup
def test_1024_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1023):
            f.write('a')
        f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_1025_chars_1026_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1025):
            f.write('|')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 1026

@cleanup
def test_2048_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

@cleanup
def test_2049_chars_2_rows(chunksize=1024):
    with open(test_file, 'w') as f:
        for i in range(1022):
            f.write('a')
        f.write('|')
        f.write('a')
        # -- end of 1st chunk --
        for i in range(1024):
            f.write('a')
        # -- end of 2nd chunk
        f.write('a')
    with open(test_file) as f:
        assert len(list(rows(f, chunksize=chunksize))) == 2

if __name__ == '__main__':
    for chunksize in [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
        test_empty(chunksize)
        test_1_char_2_rows(chunksize)
        test_1_char(chunksize)
        test_1025_chars_1_row(chunksize)
        test_1024_chars_2_rows(chunksize)
        test_1025_chars_1026_rows(chunksize)
        test_2048_chars_2_rows(chunksize)
        test_2049_chars_2_rows(chunksize)
f = ...  # file-like object, i.e. supporting a read(size) function and
         # returning the empty string '' when there is nothing to read

def chunked(file, chunk_size):
    return iter(lambda: file.read(chunk_size), '')

for data in chunked(f, 65536):
    # process the data
    process_data(data)
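Note that the sentinel has to match the type read() returns; for a file opened in binary mode, a hypothetical variant would use b'' instead:

# Hypothetical binary-mode variant: read() returns bytes, so the sentinel
# must be b'' or the iterator will never stop at end of file.
def chunked_binary(file, chunk_size):
    return iter(lambda: file.read(chunk_size), b'')

with open('really_big_file.dat', 'rb') as f:
    for data in chunked_binary(f, 65536):
        process_data(data)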
Update: this approach is best explained in https://stackoverflow.com/a/4566523/38592
See Python's official documentation for iter: https://docs.python.org/zh-cn/3/library/functions.html?#iter
Maybe this approach is more pythonic:
from functools import partial

# A file object returned by open() is an iterator whose read() method
# lets you specify the block size of the current read.
with open('mydata.db', 'r') as f_in:
    part_read = partial(f_in.read, 1024 * 1024)
    iterator = iter(part_read, '')  # sentinel '' matches text-mode read()
    for index, block in enumerate(iterator, start=1):
        block = process_block(block)  # process block data
        with open(f'{index}.txt', 'w') as f_out:
            f_out.write(block)
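If the data is actually binary (which the b'' sentinel in the original snippet suggests), a sketch opening both files in binary mode might look like this (process_block is still assumed, and the '.bin' output names are hypothetical):

from functools import partial

# Sketch of a binary-mode variant: open with 'rb'/'wb' and use the b''
# sentinel, so the type returned by read() and the sentinel agree.
with open('mydata.db', 'rb') as f_in:
    blocks = iter(partial(f_in.read, 1024 * 1024), b'')
    for index, block in enumerate(blocks, start=1):
        block = process_block(block)  # assumed to accept and return bytes
        with open(f'{index}.bin', 'wb') as f_out:
            f_out.write(block)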
I'm not allowed to comment due to my low reputation, but SilentGhost's solution should be much easier with file.readlines([sizehint]).
Edit: SilentGhost is right, but this should be better than:
s = ""
for i in xrange(100):
s += file.next()
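For reference, a sketch of the readlines(sizehint) approach that comment refers to (BUF_SIZE is an assumed byte count, process() is borrowed from the earlier answers):

# Sketch of the readlines(sizehint) idea mentioned above.
BUF_SIZE = 65536  # hypothetical size hint in bytes

with open('really_big_file.dat') as bigfile:
    while True:
        tmp_lines = bigfile.readlines(BUF_SIZE)
        if not tmp_lines:
            break
        process(tmp_lines)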
I'm in a somewhat similar situation. It's not clear whether you know the chunk size in bytes; I usually don't, but the number of records (lines) required is known:
def get_line():
    with open('4gb_file') as file:
        for i in file:
            yield i

lines_required = 100
gen = get_line()
chunk = [i for i, j in zip(gen, range(lines_required))]
Update: thanks nosklo. Here's what I meant. It almost works; it just loses a line between chunks.
chunk = [next(gen) for i in range(lines_required)]
does the trick without losing any lines, but it doesn't look very nice.
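A tidier way to take a fixed number of lines at a time is itertools.islice; a sketch, assuming process() handles a list of lines:

from itertools import islice

# Sketch: islice pulls exactly lines_required lines from the file per
# iteration, without dropping any lines between chunks.
lines_required = 100

with open('4gb_file') as f:
    while True:
        chunk = list(islice(f, lines_required))
        if not chunk:
            break
        process(chunk)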
Is f = open('really_big_file.dat') just a pointer, without any memory being consumed? (I mean: is the memory consumed the same regardless of the file size?) And how would performance be affected if I used urllib.readline() instead of f.readline()?