I want to use PyMongo's bulk write operation features, which execute write operations in batches to reduce the number of network round trips and increase write throughput.
I also found here that 5000 can be used as the batch size.
However, I do not know the best batch size, or how to combine PyMongo's bulk write operation features with generators in the following code:
from pymongo import MongoClient
from itertools import groupby
import csv


def iter_something(rows):
    key_names = ['type', 'name', 'sub_name', 'pos', 's_type', 'x_type']
    chr_key_names = ['letter', 'no']
    # Group consecutive rows that share their first six fields.
    for keys, group in groupby(rows, lambda row: row[:6]):
        result = dict(zip(key_names, keys))
        # Collect the last two fields of every row in the group.
        result['chr'] = [dict(zip(chr_key_names, row[6:])) for row in group]
        yield result


def main():
    converters = [str, str, str, int, int, int, str, int]
    with open("/home/mic/tmp/test.txt") as c:
        reader = csv.reader(c, skipinitialspace=True)
        converted = ([conv(col) for conv, col in zip(converters, row)]
                     for row in reader)
        for object_ in iter_something(converted):
            print(object_)


if __name__ == '__main__':
    db = MongoClient().test
    sDB = db.snps
    main()
The test.txt file:
Test, A, B01, 828288, 1, 7, C, 5
Test, A, B01, 828288, 1, 7, T, 6
Test, A, B01, 171878, 3, 7, C, 5
Test, A, B01, 171878, 3, 7, T, 6
Test, A, B01, 871963, 3, 9, A, 5
Test, A, B01, 871963, 3, 9, G, 6
Test, A, B01, 1932523, 1, 10, T, 4
Test, A, B01, 1932523, 1, 10, A, 5
Test, A, B01, 1932523, 1, 10, X, 6
Test, A, B01, 667214, 1, 14, T, 4
Test, A, B01, 667214, 1, 14, G, 5
Test, A, B01, 67214, 1, 14, G, 6
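For reference, the first two rows share their first six fields, so iter_something groups them into a single document of this shape (derived by tracing the code above):

{'type': 'Test', 'name': 'A', 'sub_name': 'B01', 'pos': 828288,
 's_type': 1, 'x_type': 7,
 'chr': [{'letter': 'C', 'no': 5}, {'letter': 'T', 'no': 6}]}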
Answer (A. Jesse Jiryu Davis):
You can simply do:
sDB.insert(iter_something(converted))
PyMongo will do the right thing: it iterates your generator until it has produced 1000 documents or 16 MB of data, then pauses the generator while the batch is inserted into MongoDB. Once the batch is inserted, PyMongo resumes the generator to build the next batch, and continues until all the documents have been inserted. insert() then returns the list of inserted document ids.
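Note that Collection.insert() was later deprecated and removed in PyMongo 4.0; its replacement, insert_many(), also accepts a generator and splits it into batches internally. A minimal sketch, reusing the sDB and converted names from the question's code:

# insert_many() consumes the generator lazily and batches it internally;
# result.inserted_ids lists the ids of all inserted documents.
result = sDB.insert_many(iter_something(converted))
print(result.inserted_ids)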
Initial support for generators was added to PyMongo in this commit, and we have maintained support for document generators ever since.
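If you want explicit control over the batch size (for example the 5000 mentioned in the question) rather than relying on PyMongo's internal limits, you can slice the generator yourself. A minimal sketch; the chunked() helper here is our own, not part of PyMongo:

from itertools import islice

def chunked(iterable, size):
    # Yield successive lists of at most `size` items from `iterable`.
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# One insert_many() call per batch of up to 5000 documents.
for batch in chunked(iter_something(converted), 5000):
    sDB.insert_many(batch)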