I am trying to do something fairly simple: read a large csv file into a pandas dataframe.
data = pandas.read_csv(filepath, header=0, sep=DELIMITER, skiprows=2)
The code either fails with a MemoryError, or just never finishes. Memory usage in Task Manager stopped at 506 MB, and after 5 minutes with no change and no CPU activity in the process, I stopped it.
I am using pandas version 0.11.0.
I am aware that there used to be memory problems with the file parser, but according to http://wesmckinney.com/blog/?p=543 that should have been fixed.
The file I am trying to read is 366 MB, and the code above works if I cut the file down to something short (25 MB).
It has also happened that a pop-up appeared telling me that it can't write to address 0x1e0baf93...
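Since the file parses fine in small pieces, one workaround sketch is to let pandas do the slicing itself with the `chunksize` argument and concatenate the pieces afterwards. The sample data below is made up just to make the snippet self-contained; in the real script `filepath` and `DELIMITER` would be used instead:

```python
import io
import pandas as pd

# Hypothetical stand-in for the large file: two junk lines (skiprows=2),
# a header row, then data rows.
csv_text = "junk line 1\njunk line 2\na,b\n" + "\n".join(
    f"{i},{i * 2}" for i in range(10)
)

# Read in chunks instead of one shot; chunksize is rows per chunk.
chunks = pd.read_csv(io.StringIO(csv_text), header=0, sep=",",
                     skiprows=2, chunksize=4)

# Stitch the chunks back into a single dataframe.
df = pd.concat(chunks, ignore_index=True)
```

This keeps the parser's working set bounded by the chunk size rather than the whole file, at the cost of one final concatenation.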
Stack trace:
Traceback (most recent call last):
  File "F:\QA ALM\Python\new WIM data\new WIM data\new_WIM_data.py", line 25, in <module>
    wimdata = pandas.read_csv(filepath, header = 0, sep = DELIMITER,skiprows = 2)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 401, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 216, in _read
    return parser.read()
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\io\parsers.py", line 643, in read
    df = DataFrame(col_dict, columns=columns, index=index)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 394, in __init__
    mgr = self._init_dict(data, index, columns, dtype=dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 525, in _init_dict
    dtype=dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\frame.py", line 5338, in _arrays_to_mgr
    return create_block_manager_from_arrays(arrays, arr_names, axes)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1820, in create_block_manager_from_arrays
    blocks = form_blocks(arrays, names, axes)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1872, in form_blocks
    float_blocks = _multi_blockify(float_items, items)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1930, in _multi_blockify
    block_items, values = _stack_arrays(list(tup_block), ref_items, dtype)
  File "C:\Program Files\Python\Anaconda\lib\site-packages\pandas\core\internals.py", line 1962, in _stack_arrays
    stacked = np.empty(shape, dtype=dtype)
MemoryError
Press any key to continue . . .
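The trace dies in `np.empty` inside `_stack_arrays`, i.e. while pandas consolidates all float columns into one contiguous block, so the peak allocation is roughly one big array covering every float column at once. One memory-reduction sketch, assuming the columns are numeric and that your pandas version honors the `dtype` argument of `read_csv` (the column names and dtypes below are made up for illustration):

```python
import io
import pandas as pd

# Hypothetical two-column sample; the real file's columns are unknown.
csv_text = "x,y\n1,2.5\n3,4.5\n"

# int32/float32 halve the footprint of the default int64/float64 blocks.
df = pd.read_csv(io.StringIO(csv_text),
                 dtype={"x": "int32", "y": "float32"})
```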
A bit of background: I am trying to convince people that Python can do the same things as R. To that end I am trying to replicate an R script that does
data <- read.table(paste(INPUTDIR,config[i,]$TOEXTRACT,sep=""), HASHEADER, DELIMITER,skip=2,fill=TRUE)
R not only manages to read the above file just fine, it even reads several of these files in a for loop (and then does some processing on the data). If Python really does have a problem with files of this size, then I may be fighting a losing battle...