【Question title】: Reduce set size while preserving minimum frequency
【Posted】: 2014-09-12 05:19:20
【Question description】:

Suppose I have the following set:

{(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}

This gives the following frequency for each number:

2: 4, 3: 4, 4: 3, 1: 2

Can you suggest a way to reduce the set so that each number appears in it at least 2 times, while reducing the number of tuples in the set to a minimum?

For example, the tuple (3, 4) could be removed from the set, giving these frequencies:

2: 4, 3: 3, 4: 2, 1: 2
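As a quick sanity check (my own sketch, not part of the question), such a removal can be verified programmatically:

```python
from collections import Counter

a = {(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}

def ok_to_remove(t, tuples, min_freq=2):
    # after removing t, every number must still appear at least min_freq times
    c = Counter(x for s in (tuples - {t}) for x in s)
    return all(c[e] >= min_freq for e in {x for s in tuples for x in s})

print(ok_to_remove((3, 4), a))  # True: frequencies become 2:4, 3:3, 4:2, 1:2
print(ok_to_remove((1, 4), a))  # False: 1 would drop below 2
```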

Here is my very weak attempt at solving this:

from collections import Counter

def reduce(a, limit):
    while True:
        remove = None
        # current frequency of every number across the set
        c = Counter(x for s in a for x in s)
        for t in a:
            # only consider a tuple containing the most common number
            if c.most_common(1)[0][0] in t:
                # removal must not push any of its numbers below the limit
                if min(c[x] for x in t) > limit:
                    remove = t
                    break
        if remove:
            a.remove(remove)
        else:
            break

reduce(a, 2)  # we want at least two of each number

The problem with this solution is that it may reduce the set, but it does not necessarily give me the smallest possible set.

For my particular use case, the set to reduce contains strings, say:

a = [("one","eighty one","three"), ("eighty five","sixty one","three", "eleven"), ...]

where the length of a is 1000, each tuple in a has length 3 to 9, and the tuples are built from 100 unique values ("one" is one such value). After reducing the set, I want each unique value to appear at least 25 times. How long would a PC need to compute the reduced set? Are we talking seconds or minutes?

【Question discussion】:

  • This looks like a variant of en.wikipedia.org/wiki/Set_cover_problem.
  • I think this particular set has two solutions, one of which is {(2,), (1, 4), (1, 2, 3), (3, 4)}. For a set of this size you can do a brute-force search, but I suspect you would prefer something more elegant...
  • Wouldn't the best answer be {(1, 2, 3, 4), (1, 2, 3, 4)}?
  • @ArtemFedosov: No, you can only use tuples from the original set.

Tags: python algorithm


【Solution 1】:

As mentioned in the comments, the NP-hard problem Set Cover is the special case of this problem where the minimum frequency is k = 1, which makes this problem NP-hard as well. I would recommend a library like PuLP with the following integer program.

minimize sum over tuples T of x(T)
subject to
y(e): for all elements e, (sum over tuples T of (count of e in T) * x(T)) >= k
z(T): for all tuples T, x(T) in {0, 1}

One downside of PuLP is that it requires an external solver. I was in the mood for a hack, however, so I wrote a (very lightly tested) pure Python solver. It uses depth-first search with best-first backtracking, together with a simple propagation strategy to determine which tuples must or must not be chosen, and a heuristic function based on a primal-dual approximation of the following dual of the preceding program (so it is a sophisticated toy, but still a toy).

maximize (sum over elements e of k * y(e)) - (sum over tuples T of z(T))
subject to
x(T): for all tuples T, (sum over elements e in T of y(e)) - z(T) <= 1
for all elements e, y(e) >= 0
for all tuples T, z(T) >= 0

The primal-dual strategy is to increase, at the same rate, those values of y whose increase does not require unprofitable corresponding increases in z.

from collections import Counter, defaultdict, namedtuple
from fractions import Fraction
from heapq import heappop, heappush
from math import ceil
from operator import itemgetter


class _BestFirstSearchDepthFirstBacktracking:
    def optimize(self):
        node = self._make_root_node()
        heap = []
        upper_bound = None
        while True:
            lower_bound = ceil(node.lower_bound)
            if upper_bound is None or lower_bound < upper_bound:
                child_nodes = list(self._make_child_nodes(node))
                if child_nodes:
                    i, node = min(enumerate(child_nodes), key=itemgetter(1))
                    del child_nodes[i]
                    for child_node in child_nodes:
                        heappush(heap, child_node)
                    continue
                upper_bound = lower_bound
                solution = node
            if not heap:
                return (upper_bound, solution)
            node = heappop(heap)


Node = namedtuple('Node', ('lower_bound', 'index', 'maybes', 'yeses', 'variable'))


class UnsolvableException(Exception):
    pass


class _Optimizer(_BestFirstSearchDepthFirstBacktracking):
    def __init__(self, tuples, min_freq):
        self._index = 0
        self._tuples = set(tuples)
        self._min_freq = min_freq
        self._elements = set()
        for t in self._tuples:
            self._elements.update(t)

    def _propagate(self, maybes, yeses):
        upper_count = Counter()
        for t in maybes:
            upper_count.update(t)
        for t in yeses:
            upper_count.update(t)
        if any(upper_count[e] < self._min_freq for e in self._elements):
            raise UnsolvableException()
        forced_yeses = {t for t in maybes if any(upper_count[e] - k < self._min_freq for e, k in Counter(t).items())}
        maybes = maybes - forced_yeses
        yeses = yeses | forced_yeses
        lower_count = Counter()
        for t in yeses:
            lower_count.update(t)
        residual = {e for e in self._elements if lower_count[e] < self._min_freq}
        maybes = {t for t in maybes if any(e in residual for e in t)}
        return (maybes, yeses)

    def _compute_heuristic(self, maybes, yeses):
        lower_count = Counter()
        for t in yeses:
            lower_count.update(t)
        residual_count = {e: max(self._min_freq - lower_count[e], 0) for e in self._elements}
        y = defaultdict(int)
        z = defaultdict(int)
        variable = None
        while True:
            slack = {t: 1 + z[t] - sum(y[e] for e in t) for t in maybes}
            assert all(s >= 0 for s in slack.values())
            inactive_maybes = {t for t, s in slack.items() if s > 0}
            if not inactive_maybes:
                break
            active_maybes = {t for t, s in slack.items() if s == 0}
            active_count = Counter()
            for t in active_maybes:
                active_count.update(t)
            dy = {e: 1 for e, k in residual_count.items() if active_count[e] < k}
            if not dy:
                break
            delta_inverse, variable = max(((Fraction(sum(dy.get(e, 0) for e in t), slack[t]), t) for t in inactive_maybes), key=itemgetter(0))
            delta = Fraction(1, delta_inverse)
            for e, dy_e in dy.items():
                y[e] += delta * dy_e
            for t in active_maybes:
                z[t] += delta * sum(dy.get(e, 0) for e in t)
        return (sum(residual_count[e] * y_e for e, y_e in y.items()) - sum(z.values()), variable)

    def _make_node(self, maybes, yeses):
        maybes, yeses = self._propagate(maybes, yeses)
        heuristic, variable = self._compute_heuristic(maybes, yeses)
        node = Node(len(yeses) + heuristic, self._index, maybes, yeses, variable)
        self._index += 1
        return node

    def _make_root_node(self):
        return self._make_node(self._tuples, set())

    def _make_child_nodes(self, node):
        if node.variable is None:
            return
        variable = {node.variable}
        maybes = node.maybes - variable
        yield self._make_node(maybes, node.yeses)
        yield self._make_node(maybes, node.yeses | variable)


def optimize(tuples, min_freq):
    optimizer = _Optimizer(tuples, min_freq)
    node = optimizer.optimize()[1]
    print('Nodes examined:', optimizer._index)
    return node.yeses


print(optimize({(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}, 2))
print(optimize({(1, 2, 3, 4, 5, 6, 7), (8, 9, 10, 11, 12, 13, 14), (1, 2, 3, 4, 8, 9, 10, 11), (5, 6, 12, 13), (7, 14)}, 1))
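As a sanity check on small inputs, the solver's result can be compared against exhaustive search (this harness and the name brute_force are my own, not part of the answer):

```python
from collections import Counter
from itertools import combinations

def brute_force(tuples, min_freq):
    # try subsets in increasing size; the first feasible one is a minimum
    tuples = list(tuples)
    elements = {e for t in tuples for e in t}
    for size in range(len(tuples) + 1):
        for subset in combinations(tuples, size):
            c = Counter(e for t in subset for e in t)
            if all(c[e] >= min_freq for e in elements):
                return set(subset)
    return None  # infeasible even with every tuple kept

best = brute_force({(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)}, 2)
print(len(best))  # 4, the optimum for the question's example
```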

【Discussion】:

    【Solution 2】:

    Here is a quick and dirty approach. Hopefully enough to get you going.

    Unfortunately, it is not guaranteed to produce the exact minimum result set. It gets rid of smaller tuples first, so it may work well for you if there are many small tuples and few long ones.

    It also starts from an ordered set (a list) but does not restore the order. At minimum, some sorting is needed inside the function so that the computed values associate correctly. I wanted to clean it up and refactor it, but it's late.

    from collections import defaultdict

    def reduce(source, min_count=2):
        print("source: {}".format(source))
        # [(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)]
        answer = []

        freq = {}
        lens = []
        for t in source:
            lens.append(len(t))
            for i in t:
                freq[i] = freq.get(i, 0) + 1
        print("freq: {}".format(freq))  # {1: 2, 2: 4, 3: 4, 4: 3}
        print("lens: {}".format(lens))  # [1, 1, 2, 3, 2, 2, 2]

        slens = defaultdict(list)
        for l, t in zip(lens, source):
            slens[l].append(t)
        print("slens: {}".format(slens))
        # {1: [(2,), (3,)], 2: [(1, 4), (2, 3), (3, 4), (2, 4)], 3: [(1, 2, 3)]}

        # visit tuples from shortest to longest; keep a tuple only if dropping
        # it would push one of its members below min_count
        for l in sorted(slens.keys()):
            for t in slens[l]:
                save = False
                for i in t:
                    if freq[i] <= min_count:
                        save = True
                    freq[i] -= 1
                if save:
                    answer.append(t)
        print("answer: {}".format(answer))  # [(1, 4), (1, 2, 3), (3, 4), (2, 4)]

        freq = {}
        for t in answer:
            for i in t:
                freq[i] = freq.get(i, 0) + 1
        print("freq: {}".format(freq))  # {1: 2, 2: 2, 3: 2, 4: 3}

        return answer

    My original thought was to iterate, saving every tuple at or below min_count and shrinking the working set. Then score the remaining tuples, with lower-frequency elements counting for more. Then discard the lowest-scoring tuples whose removal would not reduce any component's frequency below min_count. Then recompute the frequencies and start over.
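    That scoring idea can be sketched roughly as follows (a hypothetical greedy_reduce of my own, assuming no tuple repeats an element; like the answer above, it is not guaranteed to be minimal):

```python
from collections import Counter

def greedy_reduce(source, min_count=2):
    # repeatedly drop the tuple with the most "slack" whose removal
    # keeps every element at min_count or above
    work = list(source)
    freq = Counter(e for t in work for e in t)
    while True:
        candidates = [t for t in work
                      if all(freq[e] - 1 >= min_count for e in set(t))]
        if not candidates:
            return work
        victim = max(candidates,
                     key=lambda t: sum(freq[e] - min_count for e in t))
        work.remove(victim)
        for e in victim:
            freq[e] -= 1

result = greedy_reduce([(2,), (3,), (1, 4), (1, 2, 3), (2, 3), (3, 4), (2, 4)])
print(len(result))  # 4 on this example
```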

    【Discussion】:

    • This solution reduces the set, but not necessarily by the maximum possible amount.

    【Solution 3】:

    The problem is at least NP-hard, which means you will not be able to find an efficient (polynomial-time) algorithm. However, there are ways to reduce the constant factors. Besides using a better algorithm, consider a faster runtime such as PyPy.

    The following code, if run to completion, returns a smallest possible subset. Additionally, it only considers valid inputs, and it can progressively output increasingly small covering subsets.

    from collections import defaultdict
    from itertools import product, combinations
    
    def covering_set(sets, min_freq, print_approximations=False):
    
        # dictionary mapping each unique value to the sets that contain it
        d = defaultdict(list)
        for set_ in sets:
            for elem in set_:
                d[elem].append(set_)
    
        # we need min_freq number of each unique values
        combos = [combinations(x, min_freq) for x in d.values()]
    
        #initial solution
        min_cover = sets
        min_length = len(sets)
    
        #iterate through valid solutions
        #cover is a list of list of sets
        for cover in product(*combos):
    
            #we must flatten and remove the duplicates in the cover
            covering_set = set()
            for elem_cover in cover:
                for set_ in elem_cover:
                    if set_ not in covering_set:
                        covering_set.add(set_)
    
        #now, we check if it is the smallest current solution
            if len(covering_set) < min_length:
                min_cover = covering_set
                min_length = len(covering_set)
                if print_approximations:
                    print(min_length, min_cover)
    
        return min_cover
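    The core enumeration trick, one combinations(...) per element fed into product(...), can be seen on a cut-down instance (the dictionary d below is hand-built by me for just the elements 1 and 4, with min_freq = 2):

```python
from itertools import combinations, product

# tuples containing 1, and tuples containing 4 (hand-built subset of the example)
d = {1: [(1, 4), (1, 2, 3)], 4: [(1, 4), (3, 4), (2, 4)]}

# for each element, every way to pick 2 tuples that contain it
combos = [list(combinations(v, 2)) for v in d.values()]

# each product entry is one candidate cover; flatten and deduplicate it
covers = [{t for group in cover for t in group} for cover in product(*combos)]
min_cover = min(covers, key=len)
print(len(min_cover))  # 3
```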
    

    【Discussion】:

      【Solution 4】:

      So here is my solution. I saw that you use a set of up to 1000 elements, so I decided to implement the algorithm recursively.

      First, let's define the function that gets the frequency of each number across the tuples:

      def get_frequency(tuple_list):
          frequency = {}
          for t in tuple_list:
              for element in t:
                  frequency[element] = frequency.get(element, 0) + 1
          return frequency
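      As an aside of mine (not from the answer), collections.Counter gives the same frequency map in one line:

```python
from collections import Counter

def get_frequency(tuple_list):
    # flatten all tuples and count occurrences
    return Counter(x for t in tuple_list for x in t)

freq = get_frequency([(2,), (3,), (1, 4), (1, 2, 3)])
# freq[1] == 2, freq[2] == 2, freq[3] == 2, freq[4] == 1
```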
      

      That part is relatively easy, so I won't spend much time on it. After that, I decided to implement the main function, called recursive. This function returns a tuple consisting of the list of elements that can be deleted and the maximum depth the algorithm was able to reach.

      Here is the pseudo-algorithm I wrote before implementing it:

      if tuple_list is empty : return ([], iteration)
      best_deletion = None
      for each element :
           if the element can't be deleted : continue
           launch the next recursion without the current element in the list
           if this recursion goes deeper than best_deletion, or best_deletion is None :
               store the result of the recursion in best_deletion
      if best_deletion is None : return ([], iteration)
      return best_deletion with the current element added, and increment the iteration
      
      

      And here is the result:

      def recursive(tuple_list, limit, iteration):
          if tuple_list == []:
              return ([], iteration)
      
          frequency = get_frequency(tuple_list)
      
          value = None
      
          for i in range(len(tuple_list)):
      
              impossible_to_delete = False
              removed = []
              for number in tuple_list[i]:
                  frequency[number] -= 1
                  removed.append(number)
                  if frequency[number] < limit:
                      impossible_to_delete = True
                      break
      
              # restore the counts before trying the next candidate
              for number in removed:
                  frequency[number] += 1
      
              if impossible_to_delete:
                  continue
      
              next_recursion_list = tuple_list[:]
              next_recursion_list.pop(i)
      
              maximum_deletion = recursive(next_recursion_list, limit, iteration + 1)
      
              if value is None or value[1] < maximum_deletion[1]:
                  maximum_deletion[0].insert(0, tuple_list[i])
                  value = (maximum_deletion[0], maximum_deletion[1] + 1)
      
          if value is None:
              return ([], iteration)
          return value
      

      After that, just call the function like this:

      items_to_delete = recursive(list(tuple_set), 2, 0)
      

      Hope it helps. If I have time, I will test which of the preceding algorithms is the fastest.

      【Discussion】:
