【Posted】: 2017-01-18 02:08:26
【Problem description】:
I have 13,000 web pages whose body text is indexed. The goal is to get the top 200 phrase frequencies for one-word, two-word, three-word, ... up to eight-word phrases.
In total these pages contain over 150 million words to tokenize.
The problem is that the query runs for about 15 minutes and then exhausts heap space without completing.
I am testing on an Ubuntu server with 4 CPU cores, 8 GB of RAM, and an SSD. 6 GB of RAM is allocated to the heap, and swap is disabled.
Right now I can get the results by splitting the work across 8 separate indexes; the query/settings/mapping combination works fine for a single phrase length. That is, I can run it separately for one-word phrases, two-word phrases, and so on, and get the output I expect (although each run still takes about 5 minutes). What I would like to know is whether this combined aggregation can be tuned to run against a single index, in a single query, on my hardware.
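As a sanity check on the numbers involved (this is an illustration, not the setup in the question), the same "top N phrases of length n" computation can be sketched offline with Python's `collections.Counter`; the corpus and function names below are hypothetical:

```python
from collections import Counter
import re

def shingles(text, n):
    """Yield all n-word phrases (shingles) from lowercased text."""
    words = re.findall(r"\w+", text.lower())
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def top_phrases(pages, n, size=200):
    """Return the `size` most frequent n-word phrases across all pages."""
    counts = Counter()
    for page in pages:
        counts.update(shingles(page, n))
    return counts.most_common(size)

# Toy corpus standing in for the 13,000 page bodies.
pages = ["the quick brown fox", "the quick red fox"]
print(top_phrases(pages, 2, size=3))
# → [('the quick', 2), ('quick brown', 1), ('brown fox', 1)]
```

At 150 million words, holding every distinct shingle of every length in memory at once is exactly what blows up the heap, whichever tool does the counting; that is why running one phrase length at a time is so much cheaper.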
Settings and mapping:
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 0,
      "analysis": {
        "analyzer": {
          "analyzer_shingle_2": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_2"]
          },
          "analyzer_shingle_3": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_3"]
          },
          "analyzer_shingle_4": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_4"]
          },
          "analyzer_shingle_5": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_5"]
          },
          "analyzer_shingle_6": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_6"]
          },
          "analyzer_shingle_7": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_7"]
          },
          "analyzer_shingle_8": {
            "tokenizer": "standard",
            "filter": ["standard", "lowercase", "filter_shingle_8"]
          }
        },
        "filter": {
          "filter_shingle_2": {
            "type": "shingle",
            "max_shingle_size": 2,
            "min_shingle_size": 2,
            "output_unigrams": "false"
          },
          "filter_shingle_3": {
            "type": "shingle",
            "max_shingle_size": 3,
            "min_shingle_size": 3,
            "output_unigrams": "false"
          },
          "filter_shingle_4": {
            "type": "shingle",
            "max_shingle_size": 4,
            "min_shingle_size": 4,
            "output_unigrams": "false"
          },
          "filter_shingle_5": {
            "type": "shingle",
            "max_shingle_size": 5,
            "min_shingle_size": 5,
            "output_unigrams": "false"
          },
          "filter_shingle_6": {
            "type": "shingle",
            "max_shingle_size": 6,
            "min_shingle_size": 6,
            "output_unigrams": "false"
          },
          "filter_shingle_7": {
            "type": "shingle",
            "max_shingle_size": 7,
            "min_shingle_size": 7,
            "output_unigrams": "false"
          },
          "filter_shingle_8": {
            "type": "shingle",
            "max_shingle_size": 8,
            "min_shingle_size": 8,
            "output_unigrams": "false"
          }
        }
      }
    }
  },
  "mappings": {
    "items": {
      "properties": {
        "body": {
          "type": "multi_field",
          "fields": {
            "two-word-phrases": {
              "analyzer": "analyzer_shingle_2",
              "type": "string"
            },
            "three-word-phrases": {
              "analyzer": "analyzer_shingle_3",
              "type": "string"
            },
            "four-word-phrases": {
              "analyzer": "analyzer_shingle_4",
              "type": "string"
            },
            "five-word-phrases": {
              "analyzer": "analyzer_shingle_5",
              "type": "string"
            },
            "six-word-phrases": {
              "analyzer": "analyzer_shingle_6",
              "type": "string"
            },
            "seven-word-phrases": {
              "analyzer": "analyzer_shingle_7",
              "type": "string"
            },
            "eight-word-phrases": {
              "analyzer": "analyzer_shingle_8",
              "type": "string"
            }
          }
        }
      }
    }
  }
}
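As an aside, a shingle analyzer like these can be spot-checked before indexing with Elasticsearch's `_analyze` API. A sketch of such a request (the index name `myindex` is a placeholder; the exact request shape varies between Elasticsearch versions):

```
GET /myindex/_analyze
{
  "analyzer": "analyzer_shingle_2",
  "text": "the quick brown fox"
}
```

With `min_shingle_size` and `max_shingle_size` both set to 2 and `output_unigrams` disabled, this should return only the bigram tokens `the quick`, `quick brown`, and `brown fox`.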
Query:
{
  "size": 0,
  "aggs": {
    "one-word-phrases": {
      "terms": {
        "field": "body",
        "size": 200
      }
    },
    "two-word-phrases": {
      "terms": {
        "field": "body.two-word-phrases",
        "size": 200
      }
    },
    "three-word-phrases": {
      "terms": {
        "field": "body.three-word-phrases",
        "size": 200
      }
    },
    "four-word-phrases": {
      "terms": {
        "field": "body.four-word-phrases",
        "size": 200
      }
    },
    "five-word-phrases": {
      "terms": {
        "field": "body.five-word-phrases",
        "size": 200
      }
    },
    "six-word-phrases": {
      "terms": {
        "field": "body.six-word-phrases",
        "size": 200
      }
    },
    "seven-word-phrases": {
      "terms": {
        "field": "body.seven-word-phrases",
        "size": 200
      }
    },
    "eight-word-phrases": {
      "terms": {
        "field": "body.eight-word-phrases",
        "size": 200
      }
    }
  }
}
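Two `terms`-aggregation parameters worth experimenting with to reduce heap pressure (suggestions to try, not guaranteed fixes): `shard_size`, which on a single-shard index can safely be lowered all the way to the requested `size`, and `execution_hint`, where `"map"` skips building global ordinals in exchange for per-bucket hashing. Sketched on one of the aggregations:

```
{
  "size": 0,
  "aggs": {
    "eight-word-phrases": {
      "terms": {
        "field": "body.eight-word-phrases",
        "size": 200,
        "shard_size": 200,
        "execution_hint": "map"
      }
    }
  }
}
```

Note that aggregating on an analyzed `string` field still loads fielddata for every distinct shingle into the heap, which is likely the dominant cost here, so running the phrase lengths as separate requests remains the safest fallback.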
【Problem discussion】:
- I don't think so. Reduce the size, or run the aggregations separately. Or don't run this on your laptop; run it on something with more RAM. Even then, it may not be enough.
Tags: performance elasticsearch lucene aggregate-functions n-gram