I am trying to read a large csv file (approx. 6 GB) in pandas and I am getting the following memory error:
MemoryError                               Traceback (most recent call last)
in ()
----> 1 data=pd.read_csv('aphro.csv',sep=';')

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in parser_f(filepath_or_buffer, sep, dialect, compression, doublequote, escapechar, quotechar, quoting, skipinitialspace, lineterminator, header, index_col, names, prefix, skiprows, skipfooter, skip_footer, na_values, na_fvalues, true_values, false_values, delimiter, converters, dtype, usecols, engine, delim_whitespace, as_recarray, na_filter, compact_ints, use_unsigned, low_memory, buffer_lines, warn_bad_lines, error_bad_lines, keep_default_na, thousands, comment, decimal, parse_dates, keep_date_col, dayfirst, date_parser, memory_map, nrows, iterator, chunksize, verbose, encoding, squeeze, mangle_dupe_cols, tupleize_cols, infer_datetime_format)
    450                     infer_datetime_format=infer_datetime_format)
    451
--> 452     return _read(filepath_or_buffer, kwds)
    453
    454     parser_f.__name__ = name

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in _read(filepath_or_buffer, kwds)
    242         return parser
    243
--> 244     return parser.read()
    245
    246 _parser_defaults = {

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in read(self, nrows)
    693             raise ValueError('skip_footer not supported for iteration')
    694
--> 695         ret = self._engine.read(nrows)
    696
    697         if self.options.get('as_recarray'):

C:\Python27\lib\site-packages\pandas\io\parsers.pyc in read(self, nrows)
   1137
   1138         try:
-> 1139             data = self._reader.read(nrows)
   1140         except StopIteration:
   1141             if nrows is None:

C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader.read (pandas\parser.c:7145)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._read_low_memory (pandas\parser.c:7369)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._read_rows (pandas\parser.c:8194)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_column_data (pandas\parser.c:9402)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_tokens (pandas\parser.c:10057)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser.TextReader._convert_with_dtype (pandas\parser.c:10361)()
C:\Python27\lib\site-packages\pandas\parser.pyd in pandas.parser._try_int64 (pandas\parser.c:17806)()

MemoryError:
Any help on this?
The error shows that the machine does not have enough memory to read the entire CSV into a DataFrame at one time. Assuming you do not need the entire dataset in memory all at once, one way to avoid the problem is to process the CSV in chunks (by specifying the chunksize parameter):
chunksize = 10 ** 6
for chunk in pd.read_csv(filename, chunksize=chunksize):
    process(chunk)
The chunksize parameter specifies the number of rows per chunk. (The last chunk may, of course, contain fewer than chunksize rows.)
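As a concrete illustration of that pattern, here is a minimal sketch that keeps only the rows needed from each chunk and then concatenates the reduced pieces. The file name and separator come from the question; the 'rf' column and the filter condition are assumptions made purely for illustration:

import pandas as pd

chunksize = 10 ** 6
pieces = []
for chunk in pd.read_csv('aphro.csv', sep=';', chunksize=chunksize):
    # keep only the rows you actually need; 'rf' and the condition are hypothetical
    pieces.append(chunk[chunk['rf'] > 0])
df = pd.concat(pieces, ignore_index=True)  # much smaller than the full 6 GB file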
You can read in the data in chunks and save each chunk as a pickle.
import pandas as pd
import pickle

in_path = ""         # Path where the large file is
out_path = ""        # Path to save the pickle files to
chunk_size = 400000  # Size of chunks relies on your available memory
separator = "~"

reader = pd.read_csv(in_path, sep=separator, chunksize=chunk_size, low_memory=False)

for i, chunk in enumerate(reader):
    out_file = out_path + "/data_{}.pkl".format(i + 1)
    with open(out_file, "wb") as f:
        pickle.dump(chunk, f, pickle.HIGHEST_PROTOCOL)
In the next step you read in the pickles and append each pickle to your desired DataFrame.
import glob

pickle_path = ""  # Same path as out_path, i.e. where the pickle files are

data_p_files = []
for name in glob.glob(pickle_path + "/data_*.pkl"):
    data_p_files.append(name)

df = pd.DataFrame([])
for i in range(len(data_p_files)):
    df = df.append(pd.read_pickle(data_p_files[i]), ignore_index=True)
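Note that DataFrame.append was removed in pandas 2.0, so with a recent pandas the reloading step can be written with pd.concat instead. A minimal sketch, assuming the same pickle_path layout as above:

import glob
import pandas as pd

pickle_path = ""  # same path as out_path above, i.e. where the pickle files are
# read every pickle and concatenate once instead of appending inside a loop
df = pd.concat(
    (pd.read_pickle(name) for name in sorted(glob.glob(pickle_path + "/data_*.pkl"))),
    ignore_index=True,
)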
For large data I recommend that you use the library "dask", e.g.:
# Dataframes implement the Pandas API
import dask.dataframe as dd

df = dd.read_csv('s3://.../2018-*-*.csv')
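Dask evaluates lazily, so after building the dataframe you trigger the actual work explicitly with .compute(). A small hedged sketch of that usage; the column names 'category' and 'amount' are made up for illustration only:

import dask.dataframe as dd

df = dd.read_csv('s3://.../2018-*-*.csv')                     # lazy: nothing is read yet
# 'category' and 'amount' are hypothetical columns, used only to show the API
result = df.groupby('category')['amount'].sum().compute()     # executed chunk by chunk
print(result.head())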
Chunking should not always be the first port of call for this problem.
1. Is the file large due to repeated non-numeric data or unwanted columns? If so, you can sometimes see massive memory savings by reading columns in as categories and selecting only the required columns via the pd.read_csv usecols parameter (see the sketch after this list).
2. Does your workflow require slicing, manipulating, exporting? If so, you can use dask.dataframe to slice, perform your calculations and export iteratively. Chunking is performed silently by dask, which also supports a subset of the pandas API.
3. If all else fails, read row by row in chunks, via pandas or via the csv library as a last resort.
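For point 1, here is a minimal sketch of the usecols + category idea; the file name and columns are made up for illustration only:

import pandas as pd

df = pd.read_csv(
    'big_file.csv',                               # hypothetical file
    usecols=['id', 'state', 'value'],             # load only the columns you need
    dtype={'state': 'category', 'id': 'int32'},   # repeated strings become categories
)
print(df.memory_usage(deep=True))                 # compare against the default dtypes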
I proceeded like this:
chunks = pd.read_table('aphro.csv', chunksize=1000000, sep=';',
                       names=['lat', 'long', 'rf', 'date', 'slno'], index_col='slno',
                       header=None, parse_dates=['date'])

df = pd.DataFrame()
%time df = pd.concat(chunk.groupby(['lat', 'long', chunk['date'].map(lambda x: x.year)])['rf'].agg(['sum']) for chunk in chunks)
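One caveat: since each chunk is aggregated on its own, the concatenated result can contain the same (lat, long, year) key more than once, so a final groupby over the combined frame may be needed to merge the partial sums. A short sketch of that extra step:

# merge partial sums coming from different chunks that share the same group key
df = df.groupby(level=[0, 1, 2]).sum()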
The answers above already cover the topic satisfactorily. Anyway, if you need all the data in memory, have a look at bcolz. It compresses the data in memory. I have had really good experience with it, but it is missing a lot of pandas features.
Edit: I got compression rates of about 1/10 of the original size, I think, though of course this depends on the kind of data. Important features that were missing were aggregates.
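For completeness, a rough sketch of the kind of round trip described here; it assumes a bcolz version compatible with your pandas, and ctable.fromdataframe / todataframe are the calls I believe it exposes for this:

import bcolz
import pandas as pd

df = pd.read_csv('aphro.csv', sep=';')   # the frame still has to fit in memory once
ct = bcolz.ctable.fromdataframe(df)      # compressed, column-oriented copy in memory
print(ct.nbytes, ct.cbytes)              # uncompressed vs. compressed footprint
df2 = ct.todataframe()                   # back to pandas when needed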