【Posted at】: 2018-11-06 01:06:17
【Problem description】:
I am trying to sample a data file containing more than 260 million lines, drawing a uniformly distributed sample of a fixed size of 1000. I did the following:
import random

file = "input.txt"
output = open("output.txt", "w+", encoding="utf-8")
samples = random.sample(range(1, 264000000), 1000)
samples.sort(reverse=False)

with open(file, encoding="utf-8") as fp:
    line = fp.readline()
    count = 0
    while line:
        if count in samples:
            output.write(line)
            samples.remove(count)
        count += 1
        line = fp.readline()
This code fails with a MemoryError and no further description. How can this code run out of memory?
As far as I can tell, it should read my file line by line. The file is 28.4 GB, so it cannot be read in one piece, which is why I went with the readline() approach. How can I fix this so that the entire file is processed, regardless of its size?
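For reference, the same line-by-line idea can be sketched more memory-leanly by keeping the chosen indices in a set (O(1) membership tests instead of scanning a list) and iterating the file object directly; `sample_lines` and its parameter names are illustrative, not from the original post:

```python
import random

def sample_lines(path, total_lines, k, out_path):
    # Pre-pick k distinct 0-based line indices; a set gives O(1) lookups.
    wanted = set(random.sample(range(total_lines), k))
    with open(path, encoding="utf-8") as fp, \
         open(out_path, "w", encoding="utf-8") as out:
        # Iterating the file object reads one line at a time.
        for count, line in enumerate(fp):
            if count in wanted:
                out.write(line)
                wanted.discard(count)
                if not wanted:  # stop early once all samples are written
                    break
```

This keeps at most one line of the input in memory at a time, plus the 1000 sampled indices.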
Edit: my most recent attempt throws this error, which is effectively identical to every previous error message I have received:
MemoryError Traceback (most recent call last)
<ipython-input-1-a772dad1ea5a> in <module>()
12 with open(file, encoding = "utf-8") as fp:
13 count = 0
---> 14 for line in fp:
15 if count in samples:
16 output.write(line)
~\Anaconda3\lib\codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
MemoryError:
【Comments】:
Tags: python memory readline sampling