First, convert each dict to a set of its key-value items:
>>> data_dict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 1, 'b': 2, 'e': 1}, {'a': 1, 'b': 2, 'c': 3, 'z': 5}, {'a': 1, 'b': 2, 'j': 1}]
>>> L = [set(d.items()) for d in data_dict]
>>> L
[{('c', 3), ('a', 1), ('b', 2), ('d', 4)}, {('a', 1), ('e', 1), ('b', 2)}, {('c', 3), ('a', 1), ('b', 2), ('z', 5)}, {('j', 1), ('a', 1), ('b', 2)}]
Now you want to group the sets into pairs that share the most items. A naive O(N^3) algorithm works: compare every remaining set with every other, take the best pair, remove it, and repeat until all pairs are found:
N = len(L)
left = set(range(N))
while left:
    c = -1
    pair = None
    for i in left:
        for j in left & set(range(i + 1, N)):
            t = len(L[i] & L[j])
            if t > c:
                pair = i, j
                c = t
    left.difference_update(pair)  # remove the indices of the current pair
    print(pair)
# (0, 2)
# (1, 3)
But that's a bit slow (4000^3 = 64 billion iterations).
Another idea is to build a dict item -> list of all sets containing that item, then turn it into a dict item -> pairs of sets containing that item (all 2-combinations of the previous lists). Finally, repeatedly take the pair that occurs most often (see https://stackoverflow.com/a/26028865/6914441 for the idea).
In the worst case this may be slower, since you have to build all the combinations, but if each key-value pair is shared by only a limited number of dicts, it can be faster in practice.
indices_by_item = {}
for i, items in enumerate(L):
    for item in items:
        indices_by_item.setdefault(item, []).append(i)
# {('a', 1): [0, 1, 2, 3], ('d', 4): [0], ('c', 3): [0, 2], ('b', 2): [0, 1, 2, 3], ('e', 1): [1], ('z', 5): [2], ('j', 1): [3]}
# compute the combinations of indices sharing each item
import itertools
pairs_by_item = {item: list(itertools.combinations(indices, 2)) for item, indices in indices_by_item.items()}
# {('a', 1): [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], ('d', 4): [], ('c', 3): [(0, 2)], ('b', 2): [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)], ('e', 1): [], ('z', 5): [], ('j', 1): []}
import collections
c = collections.Counter([pair for pairs in pairs_by_item.values() for pair in pairs])
# Counter({(0, 2): 3, (0, 1): 2, (0, 3): 2, (1, 2): 2, (1, 3): 2, (2, 3): 2})
while c:
    pair, _ = c.most_common(1)[0]
    print(pair)
    # remove all the pairs sharing an element with the best pair
    for other_pair in list(c):
        if pair[0] in other_pair or pair[1] in other_pair:
            del c[other_pair]
# (0, 2)
# (1, 3)
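Putting the second approach together, here's a sketch of the whole pipeline as one reusable function (the name greedy_pairs is my own, and it assumes an even number of dicts so everything can be paired):

```python
import collections
import itertools

def greedy_pairs(dicts):
    """Greedily pair up dicts by number of shared key-value items.

    Returns a list of index pairs, most-similar pair first.
    """
    L = [set(d.items()) for d in dicts]
    # item -> indices of the sets containing that item
    indices_by_item = {}
    for i, items in enumerate(L):
        for item in items:
            indices_by_item.setdefault(item, []).append(i)
    # count how many items each pair of sets shares
    c = collections.Counter(
        pair
        for indices in indices_by_item.values()
        for pair in itertools.combinations(indices, 2)
    )
    result = []
    while c:
        pair, _ = c.most_common(1)[0]
        result.append(pair)
        # drop every pair touching an already-used index
        for other_pair in list(c):
            if pair[0] in other_pair or pair[1] in other_pair:
                del c[other_pair]
    return result

data_dict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 1, 'b': 2, 'e': 1},
             {'a': 1, 'b': 2, 'c': 3, 'z': 5}, {'a': 1, 'b': 2, 'j': 1}]
print(greedy_pairs(data_dict))  # [(0, 2), (1, 3)]
```

Note that a pair of dicts sharing zero items never enters the Counter, so a leftover dict with nothing in common with the others simply won't appear in the result.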