I've been struggling with this for two days: I simply cannot delete documents with indexWriter.deleteDocuments(term).
I'll put my test code here and hope someone can point out what I'm doing wrong. Things I have already tried:

- upgrading from 2.x to 5.x
- indexWriter.deleteDocuments() instead of indexReader.deleteDocuments()
- setting the index options to NONE as well as DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS

Here is the code:
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.*;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import java.io.IOException;
import java.nio.file.Paths;
public class TestSearch {
    static SimpleAnalyzer analyzer = new SimpleAnalyzer();

    public static void main(String[] argvs) throws IOException, ParseException {
        generateIndex("5836962b0293a47b09d345f1");
        query("5836962b0293a47b09d345f1");
        delete("5836962b0293a47b09d345f1");
        query("5836962b0293a47b09d345f1");
    }

    public static void generateIndex(String id) throws IOException {
        Directory directory = FSDirectory.open(Paths.get("/tmp/test/lucene"));
        IndexWriterConfig config = new IndexWriterConfig(analyzer);
        IndexWriter iwriter = new IndexWriter(directory, config);
        FieldType fieldType = new FieldType();
        fieldType.setStored(true);
        fieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS);
        Field idField = new Field("_id", id, fieldType);
        Document doc = new Document();
        doc.add(idField);
        iwriter.addDocument(doc);
        iwriter.close();
    }

    public static void query(String id) throws ParseException, IOException {
        Query query = new QueryParser("_id", analyzer).parse(id);
        Directory directory = FSDirectory.open(Paths.get("/tmp/test/lucene"));
        IndexReader ireader = DirectoryReader.open(directory);
        IndexSearcher isearcher = new IndexSearcher(ireader);
        ScoreDoc[] scoreDoc = isearcher.search(query, 100).scoreDocs;
        for (ScoreDoc scdoc : scoreDoc) {
            Document doc = isearcher.doc(scdoc.doc);
            System.out.println(doc.get("_id"));
        }
    }

    public static void delete(String id) {
        try {
            Directory directory = FSDirectory.open(Paths.get("/tmp/test/lucene"));
            IndexWriterConfig config = new IndexWriterConfig(analyzer);
            IndexWriter iwriter = new IndexWriter(directory, config);
            Term term = new Term("_id", id);
            iwriter.deleteDocuments(term);
            iwriter.commit();
            iwriter.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

First, generateIndex() builds the index under /tmp/test/lucene, and query() shows the id can be found. Then delete() is supposed to remove the document, but the second query() proves the delete failed.
Below are the pom dependencies, in case anyone wants to reproduce this:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-core</artifactId>
    <version>5.5.4</version>
    <type>jar</type>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-common</artifactId>
    <version>5.5.4</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-queryparser</artifactId>
    <version>5.5.4</version>
</dependency>
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers-smartcn</artifactId>
    <version>5.5.4</version>
</dependency>

Desperate for an answer.
Posted on 2017-02-17 19:13:24
Your problem is in the analyzer. SimpleAnalyzer defines a token as a maximal string of letters (StandardAnalyzer, or even WhitespaceAnalyzer, would be a more typical choice), so the value you are indexing is split into the tokens "b", "a", "b", "d", "f". The delete method you have defined, however, does not go through the analyzer; it just creates a raw Term. You can see this in action by replacing main with the following:
generateIndex("5836962b0293a47b09d345f1");
query("5836962b0293a47b09d345f1");
delete("b");
query("5836962b0293a47b09d345f1");

As a general rule, Querys, Terms and the like are not analyzed; QueryParser is what performs the analysis.
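To see the tokenization for yourself, here is a small sketch (assuming Lucene 5.5.x; TokenDemo and its tokenize helper are names introduced just for this example) that runs the id through SimpleAnalyzer the same way it is run at index time:

```java
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class TokenDemo {
    // Collect the tokens SimpleAnalyzer produces for a value,
    // i.e. the terms that actually end up in the index for that field.
    static List<String> tokenize(String value) throws IOException {
        List<String> tokens = new ArrayList<>();
        try (SimpleAnalyzer analyzer = new SimpleAnalyzer();
             TokenStream ts = analyzer.tokenStream("_id", value)) {
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            while (ts.incrementToken()) {
                tokens.add(term.toString());
            }
            ts.end();
        }
        return tokens;
    }

    public static void main(String[] args) throws IOException {
        // The letter tokenizer drops the digits, leaving only the letter runs
        System.out.println(tokenize("5836962b0293a47b09d345f1")); // [b, a, b, d, f]
    }
}
```

So the index contains the term "b" for this field, but never the full id, which is why the raw Term in delete() matches nothing.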
For an identifier field, you probably don't want the field analyzed at all. In that case, add this to the FieldType:
fieldType.setTokenized(false);

Then you will have to change the query as well (again, QueryParser analyzes its input) and use a TermQuery instead:
Query query = new TermQuery(new Term("_id", id));

https://stackoverflow.com/questions/42293998
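Putting both changes together, a minimal end-to-end sketch of the fix (assuming Lucene 5.5.x; it uses an in-memory RAMDirectory instead of the /tmp/test/lucene path so each run starts clean, and DeleteByIdDemo with its helpers is a name introduced here):

```java
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.*;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

import java.io.IOException;

public class DeleteByIdDemo {
    // Count matches for the id using a TermQuery, which matches the
    // raw term exactly; no analysis is applied.
    static int countHits(Directory dir, String id) throws IOException {
        try (IndexReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            return searcher.search(new TermQuery(new Term("_id", id)), 10).totalHits;
        }
    }

    // Index one document with an un-tokenized id field, then delete it
    // by raw Term; returns the hit counts before and after the delete.
    static int[] indexThenDelete() throws IOException {
        String id = "5836962b0293a47b09d345f1";
        Directory dir = new RAMDirectory();

        FieldType idType = new FieldType();
        idType.setStored(true);
        idType.setTokenized(false);                // the whole id becomes one term
        idType.setIndexOptions(IndexOptions.DOCS); // term presence is enough for an id
        idType.freeze();
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new SimpleAnalyzer()))) {
            Document doc = new Document();
            doc.add(new Field("_id", id, idType));
            writer.addDocument(doc);
        }
        int before = countHits(dir, id);

        // The raw Term now matches the single indexed term, so the delete works
        try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new SimpleAnalyzer()))) {
            writer.deleteDocuments(new Term("_id", id));
        }
        int after = countHits(dir, id);
        return new int[]{before, after};
    }

    public static void main(String[] args) throws IOException {
        int[] hits = indexThenDelete();
        System.out.println("before delete: " + hits[0]);
        System.out.println("after delete: " + hits[1]);
    }
}
```

Note that closing the IndexWriter commits the pending delete, so the separate commit() call in the original code was not the problem; only the term mismatch was.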