[Posted]: 2014-05-05 02:19:17
[Question]:
I have an index with the following mapping and analyzer:
{
  settings: {
    analysis: {
      char_filter: {
        custom_cleaner: {
          # remove - and * (we don't want them here)
          type: "mapping",
          mappings: ["-=>", "*=>"]
        }
      },
      analyzer: {
        custom_ngram: {
          tokenizer: "standard",
          filter: ["lowercase", "custom_ngram_filter"],
          char_filter: ["custom_cleaner"]
        }
      },
      filter: {
        custom_ngram_filter: {
          type: "nGram",
          min_gram: 3,
          max_gram: 20,
          token_chars: ["letter", "digit"]
        }
      }
    }
  },
  mappings: {
    attributes: {
      properties: {
        name: { type: "string" },
        words: { type: "string", similarity: "BM25", analyzer: "custom_ngram" }
      }
    }
  }
}
I have the following 2 documents in the index:
"name": "shirts", "words": [ "shirt"]
和
"name": "t-shirts", "words": ["t-shirt"]
I run a multi_match query:
"query": {
  "multi_match": {
    "query": "t-shirt",
    "fields": ["words", "name"],
    "analyzer": "custom_ngram"
  }
}
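Note that setting `"analyzer": "custom_ngram"` in the query overrides the search analyzer for every listed field, so even `name` (indexed with the standard analyzer) is matched against n-gram tokens of the query string. A minimal variant that drops the override and lets each field use its own index-time analyzer (a sketch only; I have not verified how it changes the ranking for this exact index):

```json
"query": {
  "multi_match": {
    "query": "t-shirt",
    "fields": ["words", "name"]
  }
}
```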
The problem:
shirts scores 1.17, while t-shirt scores 0.8. Why is that, and how can I make t-shirt (the exact match) score higher?
I need the n-grams for another use case, where I have to detect containment matches (shirt is contained in muscle shirt, ...), so I don't think I can drop the n-grams.
Thanks!
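One common pattern is to keep the n-gram field for containment matching and index `words` a second way, so that an exact, non-n-gram match can add score on top. A sketch, assuming a multi-field sub-field named `words.raw` (the name is illustrative) analyzed with the standard analyzer:

```json
"words": {
  "type": "string",
  "similarity": "BM25",
  "analyzer": "custom_ngram",
  "fields": {
    "raw": { "type": "string", "analyzer": "standard" }
  }
}
```

The query then combines both: the `must` clause keeps every n-gram match, and the boosted `should` clause rewards exact matches, so documents like "t-shirt" should outrank partial matches like "shirts":

```json
"query": {
  "bool": {
    "must": {
      "multi_match": { "query": "t-shirt", "fields": ["words", "name"] }
    },
    "should": {
      "match": { "words.raw": { "query": "t-shirt", "boost": 2 } }
    }
  }
}
```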
[Comments]:
Tags: search lucene elasticsearch n-gram