
Usage of the org.apache.lucene.index.LogMergePolicy.setMergeFactor() method, with code examples


This article collects code examples of the Java method org.apache.lucene.index.LogMergePolicy.setMergeFactor() and shows how it is used in practice. The examples are drawn from selected projects on platforms such as GitHub, Stack Overflow, and Maven, so they should serve as useful references. Details of LogMergePolicy.setMergeFactor() are as follows:
Package path: org.apache.lucene.index.LogMergePolicy
Class name: LogMergePolicy
Method name: setMergeFactor

About LogMergePolicy.setMergeFactor

Determines how often segment indices are merged by addDocument(). With smaller values, less RAM is used while indexing and searches are faster, but indexing speed is slower. With larger values, more RAM is used during indexing and searches are slower, but indexing is faster. Thus larger values (> 10) are best for batch index creation, and smaller values (< 10) for indices that are interactively maintained.
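
To make the trade-off concrete, here is a minimal sketch (not taken from the sourced projects below) that builds two IndexWriterConfig instances around a LogDocMergePolicy: one tuned for batch indexing and one for an interactively maintained index. It assumes a recent Lucene version whose IndexWriterConfig constructor takes only an Analyzer; the analyzer choice and the concrete factor values are illustrative placeholders.

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.LogDocMergePolicy;
import org.apache.lucene.index.LogMergePolicy;

public class MergeFactorSketch {
  public static void main(String[] args) {
    // Batch index creation: a large merge factor (> 10) defers merges,
    // using more RAM while indexing but adding documents faster.
    LogMergePolicy batchPolicy = new LogDocMergePolicy();
    batchPolicy.setMergeFactor(50);
    IndexWriterConfig batchConfig = new IndexWriterConfig(new StandardAnalyzer());
    batchConfig.setMergePolicy(batchPolicy);

    // Interactively maintained index: a small merge factor (< 10) merges
    // eagerly, keeping the segment count low so searches stay fast.
    LogMergePolicy interactivePolicy = new LogDocMergePolicy();
    interactivePolicy.setMergeFactor(4);
    IndexWriterConfig interactiveConfig = new IndexWriterConfig(new StandardAnalyzer());
    interactiveConfig.setMergePolicy(interactivePolicy);
  }
}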

Code examples

Code example source: origin: com.strapdata.elasticsearch.test/framework

public static MergePolicy newLogMergePolicy(int mergeFactor) {
  LogMergePolicy logmp = newLogMergePolicy();
  logmp.setMergeFactor(mergeFactor);
  return logmp;
}

Code example source: origin: org.apache.lucene/lucene-core-jfrog

/** Determines how often segment indices are merged by addDocument(). With
 * smaller values, less RAM is used while indexing, and searches on
 * unoptimized indices are faster, but indexing speed is slower. With larger
 * values, more RAM is used during indexing, and while searches on unoptimized
 * indices are slower, indexing is faster. Thus larger values (> 10) are best
 * for batch index creation, and smaller values (< 10) for indices that are
 * interactively maintained.
 *
 * <p>Note that this method is a convenience method: it just calls
 * mergePolicy.setMergeFactor as long as mergePolicy is an instance of
 * {@link LogMergePolicy}. Otherwise an IllegalArgumentException is thrown.</p>
 *
 * <p>This must never be less than 2. The default value is 10.</p>
 */
public void setMergeFactor(int mergeFactor) {
  getLogMergePolicy().setMergeFactor(mergeFactor);
}
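
As the Javadoc above notes, this legacy IndexWriter.setMergeFactor setter only works while the configured merge policy is a log merge policy. Below is a minimal sketch of guarding that call, assuming the legacy (roughly Lucene 2.9/3.0-era) IndexWriter API that this Javadoc documents; the index path is a hypothetical placeholder.

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LogMergePolicy;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class LegacyMergeFactorSketch {
  public static void main(String[] args) throws Exception {
    FSDirectory dir = FSDirectory.open(new File("/tmp/index"));   // placeholder path
    IndexWriter writer = new IndexWriter(dir,
        new StandardAnalyzer(Version.LUCENE_30),
        IndexWriter.MaxFieldLength.UNLIMITED);
    if (writer.getMergePolicy() instanceof LogMergePolicy) {
      writer.setMergeFactor(20);  // delegates to LogMergePolicy.setMergeFactor(20)
    }
    // With a non-log merge policy, setMergeFactor would throw IllegalArgumentException.
    writer.close();
  }
}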

Code example source: origin: org.apache.lucene/com.springsource.org.apache.lucene

/** Determines how often segment indices are merged by addDocument(). With
 * smaller values, less RAM is used while indexing, and searches on
 * unoptimized indices are faster, but indexing speed is slower. With larger
 * values, more RAM is used during indexing, and while searches on unoptimized
 * indices are slower, indexing is faster. Thus larger values (> 10) are best
 * for batch index creation, and smaller values (< 10) for indices that are
 * interactively maintained.
 *
 * <p>Note that this method is a convenience method: it just calls
 * mergePolicy.setMergeFactor as long as mergePolicy is an instance of
 * {@link LogMergePolicy}. Otherwise an IllegalArgumentException is thrown.</p>
 *
 * <p>This must never be less than 2. The default value is 10.</p>
 */
public void setMergeFactor(int mergeFactor) {
  getLogMergePolicy().setMergeFactor(mergeFactor);
}

Code example source: origin: com.strapdata.elasticsearch.test/framework

public static MergePolicy newLogMergePolicy(boolean useCFS, int mergeFactor) {
  LogMergePolicy logmp = newLogMergePolicy();
  logmp.setNoCFSRatio(useCFS ? 1.0 : 0.0);
  logmp.setMergeFactor(mergeFactor);
  return logmp;
}

Code example source: origin: org.apache.oodt/cas-filemgr

public void doOptimize() {
  IndexWriter writer = null;
  boolean createIndex = false;
  try {
    writer = new IndexWriter(reader.directory(), config);
    LogMergePolicy lmp = new LogDocMergePolicy();
    lmp.setMergeFactor(this.mergeFactor);
    config.setMergePolicy(lmp);
    long timeBefore = System.currentTimeMillis();
    //TODO http://blog.trifork.com/2011/11/21/simon-says-optimize-is-bad-for-you/
    //writer.optimize();
    long timeAfter = System.currentTimeMillis();
    double numSeconds = ((timeAfter - timeBefore) * 1.0) / DOUBLE;
    LOG.log(Level.INFO, "LuceneCatalog: [" + this.catalogPath
        + "] optimized: took: [" + numSeconds + "] seconds");
  } catch (IOException e) {
    LOG.log(Level.WARNING, "Unable to optimize lucene index: ["
        + catalogPath + "]: Message: " + e.getMessage());
  } finally {
    try {
      writer.close();
    } catch (Exception ignore) {
    }
  }
}

Code example source: origin: psidev.psi.mi/psimitab-search

public void index(Directory directory, InputStream is, boolean createIndex, boolean hasHeaderLine) throws IOException, ConverterException, MitabLineException {
  IndexWriterConfig writerConfig = new IndexWriterConfig(Version.LUCENE_30, new StandardAnalyzer(Version.LUCENE_30));
  LogMergePolicy policy = new LogDocMergePolicy();
  policy.setMergeFactor(MERGE_FACTOR);
  policy.setMaxMergeDocs(Integer.MAX_VALUE);
  writerConfig.setMergePolicy(policy);
  IndexWriter indexWriter = new IndexWriter(directory, writerConfig);
  if (createIndex) {
    indexWriter.commit();
    indexWriter.deleteAll();
    indexWriter.commit();
  }
  index(indexWriter, is, hasHeaderLine);
  indexWriter.close();
}

Code example source: origin: apache/oodt

public void doOptimize() {
  IndexWriter writer = null;
  boolean createIndex = false;
  try {
    writer = new IndexWriter(reader.directory(), config);
    LogMergePolicy lmp = new LogDocMergePolicy();
    lmp.setMergeFactor(this.mergeFactor);
    config.setMergePolicy(lmp);
    long timeBefore = System.currentTimeMillis();
    //TODO http://blog.trifork.com/2011/11/21/simon-says-optimize-is-bad-for-you/
    //writer.optimize();
    long timeAfter = System.currentTimeMillis();
    double numSeconds = ((timeAfter - timeBefore) * 1.0) / DOUBLE;
    LOG.log(Level.INFO, "LuceneCatalog: [" + this.catalogPath
        + "] optimized: took: [" + numSeconds + "] seconds");
  } catch (IOException e) {
    LOG.log(Level.WARNING, "Unable to optimize lucene index: ["
        + catalogPath + "]: Message: " + e.getMessage());
  } finally {
    try {
      writer.close();
    } catch (Exception ignore) {
    }
  }
}

Code example source: origin: INL/BlackLab

public static IndexWriterConfig getIndexWriterConfig(Analyzer analyzer, boolean create) {
  IndexWriterConfig config = new IndexWriterConfig(analyzer);
  config.setOpenMode(create ? OpenMode.CREATE : OpenMode.CREATE_OR_APPEND);
  config.setRAMBufferSizeMB(150); // faster indexing
  // Set merge factor (if using LogMergePolicy, which is the default up to version LUCENE_32, so yes)
  MergePolicy mp = config.getMergePolicy();
  if (mp instanceof LogMergePolicy) {
    ((LogMergePolicy) mp).setMergeFactor(40); // faster indexing
  }
  return config;
}

Code example source: origin: com.strapdata.elasticsearch.test/framework

public static LogMergePolicy newLogMergePolicy(Random r) {
  LogMergePolicy logmp = r.nextBoolean() ? new LogDocMergePolicy() : new LogByteSizeMergePolicy();
  logmp.setCalibrateSizeByDeletes(r.nextBoolean());
  if (rarely(r)) {
    logmp.setMergeFactor(TestUtil.nextInt(r, 2, 9));
  } else {
    logmp.setMergeFactor(TestUtil.nextInt(r, 10, 50));
  }
  configureRandom(r, logmp);
  return logmp;
}

Code example source: origin: apache/oodt

config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
LogMergePolicy lmp = new LogDocMergePolicy();
lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: org.apache.oodt/cas-workflow

config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
LogMergePolicy lmp = new LogDocMergePolicy();
lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: apache/oodt

private synchronized void addWorkflowInstanceToCatalog(
    WorkflowInstance wInst) throws InstanceRepositoryException {
  IndexWriter writer = null;
  try {
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
    LogMergePolicy lmp = new LogDocMergePolicy();
    lmp.setMergeFactor(mergeFactor);
    config.setMergePolicy(lmp);
    writer = new IndexWriter(indexDir, config);
    Document doc = toDoc(wInst);
    writer.addDocument(doc);
  } catch (IOException e) {
    LOG.log(Level.WARNING, "Unable to index workflow instance: ["
        + wInst.getId() + "]: Message: " + e.getMessage());
    throw new InstanceRepositoryException(
        "Unable to index workflow instance: [" + wInst.getId()
            + "]: Message: " + e.getMessage());
  } finally {
    try {
      writer.close();
    } catch (Exception e) {
      System.out.println(e);
    }
  }
}

Code example source: origin: org.apache.oodt/cas-workflow

private synchronized void addWorkflowInstanceToCatalog(
    WorkflowInstance wInst) throws InstanceRepositoryException {
  IndexWriter writer = null;
  try {
    IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
    config.setOpenMode(IndexWriterConfig.OpenMode.CREATE_OR_APPEND);
    LogMergePolicy lmp = new LogDocMergePolicy();
    lmp.setMergeFactor(mergeFactor);
    config.setMergePolicy(lmp);
    writer = new IndexWriter(indexDir, config);
    Document doc = toDoc(wInst);
    writer.addDocument(doc);
  } catch (IOException e) {
    LOG.log(Level.WARNING, "Unable to index workflow instance: ["
        + wInst.getId() + "]: Message: " + e.getMessage());
    throw new InstanceRepositoryException(
        "Unable to index workflow instance: [" + wInst.getId()
            + "]: Message: " + e.getMessage());
  } finally {
    try {
      writer.close();
    } catch (Exception e) {
      System.out.println(e);
    }
  }
}

Code example source: origin: apache/oodt

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: org.apache.oodt/cas-workflow

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: org.apache.oodt/cas-filemgr

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: apache/oodt

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: org.apache.oodt/cas-filemgr

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: apache/oodt

lmp.setMergeFactor(mergeFactor);
config.setMergePolicy(lmp);

Code example source: origin: org.apache.lucene/lucene-benchmark

if (iwConf.getMergePolicy() instanceof LogMergePolicy) {
  LogMergePolicy logMergePolicy = (LogMergePolicy) iwConf.getMergePolicy();
  logMergePolicy.setMergeFactor(config.get("merge.factor", OpenIndexTask.DEFAULT_MERGE_PFACTOR));
