This is a typical clustering use case and I basically see two options:
1. Centroid-based clustering
Accessible in Elasticsearch through the geo_centroid aggregation.
2. Density-based clustering
DBSCAN is the better approach here because it's outlier-aware. There's a Python implementation of it; there may well be better ones out there, incl. scikit's very own. I'm not too familiar with them, so that's about all I can say for now, but see the sketch right after this list.
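To give a rough idea of what a DBSCAN run looks like, here's a minimal JS sketch. The density-clustering npm package and the eps/minPts values are my own assumptions, not something from the implementations mentioned above:

const clustering = require('density-clustering');

// [lon, lat] pairs; eps is given in degrees here, which is only a rough
// proxy for real distance (a degree of longitude shrinks towards the poles)
const points = [
  [7.51, 51.18], [7.21, 50.94], [7.73, 51.07],    // group A
  [11.25, 50.85], [11.10, 50.78], [11.51, 50.85], // group B
  [2.35, 48.86]                                   // lone outlier
];

const dbscan = new clustering.DBSCAN();
// run(dataset, neighborhoodRadius, minPointsPerCluster)
const clusters = dbscan.run(points, 0.5, 2);

console.log(clusters);     // e.g. [[0,1,2],[3,4,5]]: index lists, one per cluster
console.log(dbscan.noise); // e.g. [6]: indices of the outliers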
I'm here to talk about Elasticsearch, though, so here's how option #1 would go:
- Set up the index
PUT animals
{
"mappings": {
"properties": {
"location": {
"type": "geo_point"
}
}
}
}
- Add some locations to it
POST _bulk
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[7.5146484375,51.17934297928927]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[7.207031249999999,50.94458443495011]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[7.734374999999999,51.069016659603896]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[7.536621093749999,50.94458443495011]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[8.525390625,51.16556659836182]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[9.55810546875,50.83369767098071]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[9.0087890625,51.138001488062564]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[10.21728515625,50.56928286558243]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[10.87646484375,50.84757295365389]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.25,50.84757295365389]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.09619140625,50.77815527465925]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.513671874999998,50.84757295365389]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.3818359375,50.708634400828224]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.00830078125,50.736455137010665]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.6455078125,51.52241608253253]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[10.78857421875,50.3734961443035]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[10.546875,49.96535590991311]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[10.01953125,49.681846899401286]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[9.29443359375,49.85215166776998]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[8.942871093749998,49.710272582105695]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[9.20654296875,49.5822260446217]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[8.98681640625,49.52520834197442]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[8.6572265625,49.603590524348704]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.546630859375,50.14874640066278]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.865234375,50.0289165635219]}
{"index":{"_index":"animals","_type":"_doc"}}
{"location":[11.42578125,50.52041218671901]}
Following your sketch, I used a handful of random points across Germany.
- Compute the centroids
POST animals/_search
{
"size": 0,
"aggs": {
"weighted": {
"geohash_grid": {
"field": "location",
"precision": 2
},
"aggs": {
"centroid": {
"geo_centroid": {
"field": "location"
}
}
}
}
}
}
This runs over all points, not just the ones that are "clearly bound" as in your sketch, so there will be some outlier buckets containing very few points that need to be skipped.
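For completeness, here's one way to get at those buckets from JS. This is a sketch assuming a cluster on localhost:9200 and the search request above stored in a query variable:

// POST the search above and pull out the geohash buckets
const response = await fetch('http://localhost:9200/animals/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(query)
});
const body = await response.json();

// each bucket carries the geohash cell as key, a doc_count, and the
// centroid sub-aggregation with location.lat / location.lon
const buckets = body.aggregations.weighted.buckets;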
So, take the buckets Elasticsearch returns, filter out all but the larger ones (I'm using JS here rather than Python), and convert them to GeoJSON with TurfJS:
// keep only the buckets with enough points in them, then turn
// each bucket's centroid into a GeoJSON point feature
turf.featureCollection(
  buckets.filter(p => p.doc_count > 3)
    .map(p => turf.point([
      p.centroid.location.lon,
      p.centroid.location.lat
    ])))
This produces the following result:
As you can see, the "centers" are skewed because the concentrations aren't "dense" enough. With more tightly packed groups, this approach works better.
But frankly, DBSCAN would be the way to go here, not weighted centroids.
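If you do go down that road, here's a rough sketch of feeding the indexed points into DBSCAN, with the same density-clustering and eps assumptions as in the sketch at the top:

// fetch the raw points and cluster them client-side instead of
// aggregating in Elasticsearch
const result = await fetch('http://localhost:9200/animals/_search?size=100')
  .then(r => r.json());
const locations = result.hits.hits.map(h => h._source.location); // [lon, lat]

const { DBSCAN } = require('density-clustering');
const dbscan = new DBSCAN();
const clusters = dbscan.run(locations, 0.5, 3); // eps in degrees, min 3 points
// each cluster is a list of indices into locations; dbscan.noise holds
// the outliers, which simply get dropped here
const centers = turf.featureCollection(clusters.map(c =>
  turf.center(turf.featureCollection(c.map(i => turf.point(locations[i]))))));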