Posted: 2019-02-20 12:50:13
Problem description:
For fun and learning, I am trying to build a part-of-speech (POS) tagger with OpenNLP and Lucene 7.4. The goal is that once text is indexed, I can actually search for a sequence of POS tags and find all sentences matching that sequence. I have the indexing part working, but I am stuck on the querying part. I am aware that SolR may have some functionality for this, and I have checked its code (which was not that self-explanatory after all). But my goal is to understand and implement this in Lucene 7, not in SolR, as I want to stay independent of any search engine on top.
Idea
Input sentence 1: The quick brown fox jumped over the lazy dogs. Applying the Lucene OpenNLP tokenizer results in: [The][quick][brown][fox][jumped][over][the][lazy][dogs][.] Next, applying the Lucene OpenNLP POS tagger results in: [DT][JJ][JJ][NN][VBD][IN][DT][JJ][NNS][.]
Input sentence 2: Give it to me, baby! Applying the Lucene OpenNLP tokenizer results in: [Give][it][to][me][,][baby][!] Next, applying the Lucene OpenNLP POS tagger results in: [VB][PRP][TO][PRP][,][UH][.]
Query: JJ NN VBD matches part of sentence 1, so sentence 1 should be returned. (At this point I am only interested in exact matches; let's leave partial matches, wildcards, and so on aside.)
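Conceptually, the exact match described above is just a contiguous-subsequence check over each sentence's tag stream. A minimal plain-Java sketch of that idea (a toy model, independent of Lucene):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PosSequenceMatch {
    // Returns the start index of the first occurrence of the query tag
    // sequence inside the sentence's tag sequence, or -1 if absent.
    static int indexOfTagSequence(List<String> sentenceTags, List<String> queryTags) {
        return Collections.indexOfSubList(sentenceTags, queryTags);
    }

    public static void main(String[] args) {
        List<String> sentence1 = Arrays.asList("DT","JJ","JJ","NN","VBD","IN","DT","JJ","NNS",".");
        List<String> sentence2 = Arrays.asList("VB","PRP","TO","PRP",",","UH",".");
        List<String> query = Arrays.asList("JJ","NN","VBD");
        System.out.println(indexOfTagSequence(sentence1, query)); // 2: sentence 1 matches
        System.out.println(indexOfTagSequence(sentence2, query)); // -1: sentence 2 does not
    }
}
```

This is the behavior I ultimately want the Lucene query to reproduce over the indexed payloads.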
Indexing
First, I created my own class com.example.OpenNLPAnalyzer:
import java.io.IOException;

import opennlp.tools.postag.POSModel;
import opennlp.tools.sentdetect.SentenceModel;
import opennlp.tools.tokenize.TokenizerModel;
import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.opennlp.*;
import org.apache.lucene.analysis.opennlp.tools.*;
import org.apache.lucene.analysis.payloads.TypeAsPayloadTokenFilter;
import org.apache.lucene.analysis.util.ClasspathResourceLoader;
import org.apache.lucene.analysis.util.ResourceLoader;
import org.apache.lucene.util.AttributeFactory;

public class OpenNLPAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        try {
            ResourceLoader resourceLoader = new ClasspathResourceLoader(ClassLoader.getSystemClassLoader());
            TokenizerModel tokenizerModel = OpenNLPOpsFactory.getTokenizerModel("en-token.bin", resourceLoader);
            NLPTokenizerOp tokenizerOp = new NLPTokenizerOp(tokenizerModel);
            SentenceModel sentenceModel = OpenNLPOpsFactory.getSentenceModel("en-sent.bin", resourceLoader);
            NLPSentenceDetectorOp sentenceDetectorOp = new NLPSentenceDetectorOp(sentenceModel);
            Tokenizer source = new OpenNLPTokenizer(
                    AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, sentenceDetectorOp, tokenizerOp);
            POSModel posModel = OpenNLPOpsFactory.getPOSTaggerModel("en-pos-maxent.bin", resourceLoader);
            NLPPOSTaggerOp posTaggerOp = new NLPPOSTaggerOp(posModel);
            // Perhaps we should also use a lower-case filter here?
            TokenFilter posFilter = new OpenNLPPOSFilter(source, posTaggerOp);
            // Very important: the POS tags are not indexed as terms; we need to store them
            // as payloads, otherwise we cannot search on them
            TypeAsPayloadTokenFilter payloadFilter = new TypeAsPayloadTokenFilter(posFilter);
            return new TokenStreamComponents(source, payloadFilter);
        }
        catch (IOException e) {
            throw new RuntimeException(e.getMessage());
        }
    }
}
Note that we use a TypeAsPayloadTokenFilter wrapped around the OpenNLPPOSFilter. This means our POS tags will be indexed as payloads, and our query, whatever it ends up looking like, will have to search on payloads as well.
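My understanding of what TypeAsPayloadTokenFilter does, sketched as a simplified toy model in plain Java (not the actual Lucene attribute API): it copies each token's type attribute, which OpenNLPPOSFilter has set to the POS tag, into that token's payload as bytes, while the indexed term itself remains the surface word.

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class TypeAsPayloadSketch {
    // Simplified model: for each token, the type attribute (the POS tag
    // written by OpenNLPPOSFilter) is stored as UTF-8 payload bytes,
    // keyed by the surface word, which stays the indexed term.
    static Map<String, byte[]> index(String[] terms, String[] types) {
        Map<String, byte[]> postings = new LinkedHashMap<>();
        for (int i = 0; i < terms.length; i++) {
            postings.put(terms[i], types[i].getBytes(StandardCharsets.UTF_8));
        }
        return postings;
    }

    public static void main(String[] args) {
        Map<String, byte[]> postings = index(
                new String[]{"The", "quick", "brown", "fox"},
                new String[]{"DT", "JJ", "JJ", "NN"});
        // The term dictionary contains words, not tags:
        System.out.println(postings.containsKey("quick")); // true
        System.out.println(postings.containsKey("JJ"));    // false
    }
}
```

If this model is right, the tags never appear in the term dictionary, which matters for how the query has to be built.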
Querying
This is where I am stuck. I have no clue how to query payloads; whatever I try does not work. Note that I am using Lucene 7, and it seems that payload querying has changed several times across older versions. Documentation is extremely scarce. It is not even clear what the correct field name to query is now: is it "word", "type", or something else? For example, I tried the following code, which does not return any search results:
// Step 1: Indexing
final String body = "The quick brown fox jumped over the lazy dogs.";
Directory index = new RAMDirectory();
OpenNLPAnalyzer analyzer = new OpenNLPAnalyzer();
IndexWriterConfig indexWriterConfig = new IndexWriterConfig(analyzer);
IndexWriter writer = new IndexWriter(index, indexWriterConfig);
Document document = new Document();
document.add(new TextField("body", body, Field.Store.YES));
writer.addDocument(document);
writer.close();
// Step 2: Querying
final int topN = 10;
DirectoryReader reader = DirectoryReader.open(index);
IndexSearcher searcher = new IndexSearcher(reader);
final String fieldName = "body"; // What is the correct field name here? "body", or "type", or "word" or anything else?
final String queryText = "JJ";
Term term = new Term(fieldName, queryText);
SpanQuery match = new SpanTermQuery(term);
BytesRef pay = new BytesRef("type"); // Don't understand what to put here as an argument
SpanPayloadCheckQuery query = new SpanPayloadCheckQuery(match, Collections.singletonList(pay));
System.out.println(query.toString());
TopDocs topDocs = searcher.search(query, topN);
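One suspicion about why this returns nothing: if TypeAsPayloadTokenFilter leaves the surface words as the indexed terms and only puts the POS tag into the payload, then a SpanTermQuery on "JJ" matches no term at all, and the BytesRef passed to SpanPayloadCheckQuery is compared byte-for-byte against the stored payload, so new BytesRef("type") would demand the literal bytes of "type". A plain-Java sketch of that check (a toy model of the comparison, not Lucene's actual SpanPayloadCheckQuery internals):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class PayloadCheckSketch {
    // Toy model of an indexed token: the term is the surface word,
    // the payload carries the POS tag as UTF-8 bytes.
    static final class Token {
        final String term;
        final byte[] payload;
        Token(String term, byte[] payload) { this.term = term; this.payload = payload; }
    }

    static byte[] utf8(String s) { return s.getBytes(StandardCharsets.UTF_8); }

    // Sample postings for "The quick brown fox ..." (abbreviated).
    static List<Token> demoIndex() {
        return Arrays.asList(new Token("quick", utf8("JJ")), new Token("fox", utf8("NN")));
    }

    // SpanPayloadCheckQuery-style check: the term must match, and its stored
    // payload must equal the required bytes exactly.
    static boolean matches(List<Token> indexed, String term, byte[] requiredPayload) {
        for (Token t : indexed) {
            if (t.term.equals(term) && Arrays.equals(t.payload, requiredPayload)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(matches(demoIndex(), "quick", utf8("JJ"))); // true: word term + tag payload
        System.out.println(matches(demoIndex(), "JJ", utf8("type")));  // false: "JJ" is not a term
    }
}
```

If that model holds, the combination to try would be a SpanTermQuery on a word term with new BytesRef("JJ") as the payload to check, though I have not verified this against Lucene 7.4.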
Thanks a lot in advance for any help.
Tags: lucene nlp opennlp part-of-speech