【Question Title】: How to efficiently search for list elements in a string in Python
【Posted】: 2019-02-09 18:34:03
【Question】:

I have a list of concepts (concepts) and a list of sentences (sentences), as shown below.

concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]


sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']

In short, I want to find the concepts in sentences. More specifically, given a list in concepts (e.g. ['natural language processing', 'text mining', 'texts', 'nlp']), I want to identify those concepts in each sentence and replace them with the list's first element (i.e. natural language processing).

Example: take the sentence data mining and text mining; the result should be advanced data mining and natural language processing (because the first elements of the lists containing data mining and text mining are advanced data mining and natural language processing, respectively).

The result for the dummy data above should be:

['advanced data mining and natural language processing', 'natural language processing is mainly used by discourse analysis community', 'advanced data mining in python is fun', 'discourse analysis advanced data mining analysis involves natural language processing', 'advanced data mining and advanced data mining are both very interesting']

I am currently doing this with regular expressions, as follows:

import re

concepts_re = []

for terms in concepts:
    terms_re = "|".join(re.escape(term) for term in terms)
    concepts_re.append(terms_re)

sentences_mapping = []

for sentence in sentences:
    for terms in concepts:
        if len(terms) > 1:
            for term in terms:
                if term in sentence:
                    sentence = re.sub(concepts_re[concepts.index(terms)], terms[0], sentence)
    sentences_mapping.append(sentence)

In my real dataset I have about 8 million concepts, so this approach is very inefficient: processing a single sentence takes about 5 minutes. I would like to know whether there is an efficient way to do this in Python.

For anyone who wants to measure timings against a long list of concepts, I have attached a longer one: https://drive.google.com/file/d/1OsggJTDZx67PGH4LupXIkCTObla0gDnX/view?usp=sharing

I am happy to provide more details if needed.

【Question Discussion】:

  • Your code shows that you are using regular expressions, but the explanation does not say why. Could plain string replacement be a bit faster?
  • @SergeBallesta Thanks for your comment. I also tried plain string replacement; unfortunately it is just as slow.

Tags: python list


【Solution 1】:

The solution provided below runs in approximately O(n) time, where n is the number of tokens in each sentence.

For 5 million sentences and your concepts.txt, it performs the required operations in about 30 seconds; see the basic test in section 3.

As for space complexity, you will have to keep a nested dictionary structure (let's simplify it like that for now), say O(c*u), where u is the number of unique tokens among concepts of a given length and c is the length of a concept.

It is hard to pin down the exact complexity, but it is very close to this (for your example data and the concepts.txt you provided it is quite accurate; we will get to the gory details as we go through the implementation).

I assume you can split your concepts and sentences on whitespace; if that is not the case, I suggest taking a look at spaCy, which provides smarter ways to tokenize your data.

1. Introduction

Let's take your example:

concepts = [
    ["natural language processing", "text mining", "texts", "nlp"],
    ["advanced data mining", "data mining", "data"],
    ["discourse analysis", "learning analytics", "mooc"],
]

As you said, each element in concepts has to be mapped to the first one, so, in Pythonish, it would go roughly along these lines:

for concept in concepts:
    concept[1:] = [concept[0]] * (len(concept) - 1)

The task would be easy, and unique, if all the concepts had a token length of 1 (which is not the case here). Let's focus on the second case and one particular (slightly modified) example concept to see my point:

["advanced data mining", "data something", "data"]

Here data would be mapped to advanced data mining, BUT data something, which contains data, should be mapped before it. If I understand you correctly, you would want this sentence:

"Here is data something and another data"

To be mapped to:

"Here is advanced data mining and another advanced data mining"

And not the naive approach:

"Here is advanced data mining something and another advanced data mining"

Note that with the naive approach we would map only data, not data something.

To prioritize data something (and other concepts fitting this pattern), I used an array structure filled with dictionaries, where concepts that are longer token-wise come earlier in the array.

Continuing our example, such an array would look like this:

structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]

Notice that if we go through the tokens in this order (e.g. first going through the first dictionary with consecutive tokens and, if no match is found, moving on to the second dictionary, and so on), we get the longest concepts first.
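To see why that ordering matters, here is a toy lookup over the structure above (the `lookup` helper is my own illustration, not part of the solution) that tries the dictionaries in order and therefore prefers the longest match:

```python
structure = [
    {"data": {"something": "advanced data mining"}},
    {"data": "advanced data mining"},
]

def lookup(tokens):
    # Try each dictionary in order; longer concepts live in earlier
    # dictionaries, so they win over their shorter prefixes.
    for dictionary in structure:
        node = dictionary
        consumed = 0
        while isinstance(node, dict) and consumed < len(tokens):
            node = node.get(tokens[consumed])
            consumed += 1
        if isinstance(node, str):
            return node, consumed  # concept and number of tokens it covers
    return None, 1

print(lookup(["data", "something"]))  # ('advanced data mining', 2)
print(lookup(["data", "and"]))        # ('advanced data mining', 1)
```

With `["data", "something"]` the first (longer) dictionary consumes both tokens; with `["data", "and"]` it fails and the second dictionary matches the single token data.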

2. Code

Okay, I hope you get the basic idea (if not, post a comment below and I will try to explain the unclear parts in more detail).

Disclaimer: I am not particularly proud of this code-wise, but it gets the job done, and I suppose it could be worse.

2.1 Hierarchical dictionaries

First, let's get the longest concept token-wise (excluding the first element, because it is our target and we never have to change it):

from typing import List

def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])
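A quick sanity check of what it returns:

```python
from typing import List

def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])

# "data mining" is the longest alias with 2 tokens; the target
# "advanced data mining" (3 tokens) is deliberately excluded.
print(get_longest([["advanced data mining", "data mining", "data"]]))  # 2
```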

With this information, we can initialize our structure by creating as many dictionaries as there are distinct concept lengths (in the example above it would be 2, so it will work for all of your data; concepts of any length will do):

def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]
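For example, with a longest alias of three tokens we get three empty levels, longest first:

```python
def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]

# Each tuple holds the nesting depth of that level and an (initially
# empty) dictionary; longer concepts come first.
print(init_hierarchical_dictionaries(3))  # [(2, {}), (1, {}), (0, {})]
```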

Notice that I attach each concept's length to the array entry; IMO it makes traversal easier, although after some changes to the implementation you could do without it.

Now that we have these helper functions, we can create the structure from the list of concepts:

def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept.
            for token in tokens[:-1]:
                # setdefault keeps existing branches when two concepts share a prefix
                current_dictionary = current_dictionary.setdefault(token, {})

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries
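As a quick check, building the structure for the modified example concept from section 1 (functions reproduced from above) yields exactly the hierarchy described there, with the target pre-split into tokens:

```python
from typing import List

def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])

def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]

def create_hierarchical_dictionaries(concepts: List[List[str]]):
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)
    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            for token in tokens[:-1]:
                # setdefault so concepts sharing a prefix do not clobber each other
                current_dictionary = current_dictionary.setdefault(token, {})
            current_dictionary[tokens[-1]] = concept[0].split()
    return hierarchical_dictionaries

result = create_hierarchical_dictionaries(
    [["advanced data mining", "data something", "data"]]
)
print(result)
```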

This function creates our hierarchical dictionaries; see the comments in the source code for some explanation. You may want to create a custom class to hold this thing; it should be easier to use that way.

This is exactly the same object as described in 1. Introduction.

2.2 Traversing the dictionaries

This part is much harder, but this time let's take a top-down approach. We'll start easy:

def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)

Given the hierarchical dictionaries, this creates a generator that transforms each sentence according to the concept mapping.

Now the traverse function:

def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its following tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)

Once again, if you are not sure what is going on, post a comment.

With this approach, pessimistically, we will perform O(n*c!) checks, where n is the number of tokens in a sentence and c is the token-wise length of the longest concept (note the factorial). This case is extremely unlikely in practice: every token in the sentence would have to almost perfectly fit the longest concept, and all shorter concepts would have to be prefixes of the longest one (e.g. super data mining, super data and data).

For any practical problem it will be much closer to O(n); as I said before, with the data you provided in the .txt file it is O(3*n) worst case, usually O(2*n).

Traversing each dictionary:

def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept
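For instance, calling it by hand on a single level that holds the two-token concept data something (the level tuple below is hand-built to match the structure from section 1):

```python
def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    length, current_dictionary = hierarchical_dictionary_tuple
    inner_index = index
    for _ in range(length):
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    return inner_index, concept

# One level of depth 1 holding the two-token concept "data something"
level = (1, {"data": {"something": ["advanced", "data", "mining"]}})
tokens = "here is data something".split()

# Starting at token 2 ("data") matches and advances the index to 3
print(traverse_through_dictionary(2, tokens, level))  # (3, ['advanced', 'data', 'mining'])
# Starting at token 0 ("here") finds nothing and leaves the index alone
print(traverse_through_dictionary(0, tokens, level))  # (0, None)
```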

This constitutes the "meat" of my solution.

3. Results

Now, for brevity, the full source code is provided below (concepts.txt is the file you provided):

import ast
import time
from typing import List


def get_longest(concepts: List[List[str]]):
    return max(len(text.split()) for concept in concepts for text in concept[1:])


def init_hierarchical_dictionaries(longest: int):
    return [(length, {}) for length in reversed(range(longest))]


def create_hierarchical_dictionaries(concepts: List[List[str]]):
    # Initialization
    longest = get_longest(concepts)
    hierarchical_dictionaries = init_hierarchical_dictionaries(longest)

    for concept in concepts:
        for text in concept[1:]:
            tokens = text.split()
            # Initialize dictionary; get the one with corresponding length.
            # The longer, the earlier it is in the hierarchy
            current_dictionary = hierarchical_dictionaries[longest - len(tokens)][1]
            # All of the tokens except the last one are another dictionary mapping to
            # the next token in concept.
            for token in tokens[:-1]:
                # setdefault keeps existing branches when two concepts share a prefix
                current_dictionary = current_dictionary.setdefault(token, {})

            # Last token is mapped to the first concept
            current_dictionary[tokens[-1]] = concept[0].split()

    return hierarchical_dictionaries


def traverse_through_dictionary(index, tokens, hierarchical_dictionary_tuple):
    # Get the level of nested dictionaries and initial dictionary
    length, current_dictionary = hierarchical_dictionary_tuple
    # inner_index will loop through tokens until match or no match was found
    inner_index = index
    for _ in range(length):
        # Get next nested dictionary and move inner_index to the next token
        current_dictionary = current_dictionary.get(tokens[inner_index])
        inner_index += 1
        # If no match was found in any level of dictionary
        # Return current index in sentence and None representing lack of concept.
        if current_dictionary is None or inner_index >= len(tokens):
            return index, None

    # If everything went fine through all nested dictionaries, check whether
    # last token corresponds to concept
    concept = current_dictionary.get(tokens[inner_index])
    if concept is None:
        return index, None
    # If so, return inner_index (we have moved length tokens, so we have to update it)
    return inner_index, concept


def traverse(sentence: str, hierarchical_dictionaries):
    # Get all tokens in the sentence
    tokens = sentence.split()
    output_sentence = []
    # Initialize index to the first token
    index = 0
    # Until any tokens left to check for concepts
    while index < len(tokens):
        # Iterate over hierarchical dictionaries (elements of the array)
        for hierarchical_dictionary_tuple in hierarchical_dictionaries:
            # New index is returned based on match and token-wise length of concept
            index, concept = traverse_through_dictionary(
                index, tokens, hierarchical_dictionary_tuple
            )
            # Concept was found in current hierarchical_dictionary_tuple, let's add it
            # to output
            if concept is not None:
                output_sentence.extend(concept)
                # No need to check other hierarchical dictionaries for matching concept
                break
        # Token (and its following tokens) do not match any concept, keep the original
        else:
            output_sentence.append(tokens[index])
        # Increment index in order to move to the next token
        index += 1

    # Join list of tokens into a sentence
    return " ".join(output_sentence)


def embed_sentences(sentences: List[str], hierarchical_dictionaries):
    return (traverse(sentence, hierarchical_dictionaries) for sentence in sentences)


def sanity_check():
    concepts = [
        ["natural language processing", "text mining", "texts", "nlp"],
        ["advanced data mining", "data mining", "data"],
        ["discourse analysis", "learning analytics", "mooc"],
    ]
    sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    targets = [
        "advanced data mining and natural language processing",
        "natural language processing is mainly used by discourse analysis community",
        "advanced data mining in python is fun",
        "discourse analysis advanced data mining analysis involves natural language processing",
        "advanced data mining and advanced data mining are both very interesting",
    ]

    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)

    results = list(embed_sentences(sentences, hierarchical_dictionaries))
    if results == targets:
        print("Correct results")
    else:
        print("Incorrect results")


def speed_check():
    with open("./concepts.txt") as f:
        concepts = ast.literal_eval(f.read())

    initial_sentences = [
        "data mining and text mining",
        "nlp is mainly used by discourse analysis community",
        "data mining in python is fun",
        "mooc data analysis involves texts",
        "data and data mining are both very interesting",
    ]

    sentences = initial_sentences.copy()

    for i in range(1_000_000):
        sentences += initial_sentences

    start = time.time()
    hierarchical_dictionaries = create_hierarchical_dictionaries(concepts)
    middle = time.time()
    letters = []
    for result in embed_sentences(sentences, hierarchical_dictionaries):
        letters.append(result[0].capitalize())
    end = time.time()
    print(f"Time for hierarchical creation {(middle-start) * 1000.0} ms")
    print(f"Time for embedding {(end-middle) * 1000.0} ms")
    print(f"Overall time elapsed {(end-start) * 1000.0} ms")


def main():
    sanity_check()
    speed_check()


if __name__ == "__main__":
    main()

The speed check results are as follows:

Time for hierarchical creation 107.71822929382324 ms
Time for embedding 30460.427284240723 ms
Overall time elapsed 30568.145513534546 ms

So, for 5 million sentences (the 5 sentences you provided, concatenated 1 million times) and the concepts file you provided (1.1 MB), it takes about 30 seconds to perform the concept mapping, which isn't bad, I suppose.

Worst case scenario, the dictionary should take as much memory as the input file (concepts.txt in this case), but it will usually be lower or much lower, as it depends on the combination of concept lengths and unique words.

【Discussion】:

  • Wow, this is impressive. I will run this code on my real dataset and let you know how it performs. Thank you for the wonderful answer.
  • Thanks again for your excellent answer. When I ran your code on my real dataset I got the following error: possible_dictionary = current_dictionary.get(tokens[inner_index]) IndexError: list index out of range. Is there a way to fix this? I am happy to provide more details if needed. Looking forward to hearing from you.
  • I have attached a sample concept list (link: drive.google.com/file/d/1U2gT0umy-iFdP1G5ELkY1veMZJmtLEC8/…). I get the error for the sentence world wide web www; I think the problem is with www. Looking forward to hearing from you. Thank you :)
  • Sorry, I unfortunately haven't had the time, but I have updated the traverse_through_dictionary function in my answer; it should be fixed now. Anyway, how does your case perform speed-wise?
  • Thank you very much for the update, I really appreciate it. The update has solved the problem; I can now run my entire dataset in about 5 minutes (which is very impressive). I would strongly recommend this solution to anyone reading my post. Thanks again :)
【Solution 2】:

Use a suffix array approach.

Skip this step if your data is already sanitized.

First, sanitize your data, replacing all whitespace characters with any character that you know will not be part of any concept or sentence.

Then build suffix arrays for all the sentences. This takes O(n log n) time per sentence. There are a few algorithms that can do this in O(n) time using suffix trees.

Once the suffix arrays are ready for all the sentences, just perform a binary search for each of your concepts.

You can further optimize your search using an LCP array. Refer to Kasai's algorithm.

Using both the LCP and suffix arrays, the time complexity of the search can be brought down to O(n).

Edit: This approach is commonly used in genome sequence alignment and is quite popular there, too. You should easily find an implementation that suits you.
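A minimal sketch of the idea, for illustration only: a naive O(n² log n) suffix-array construction plus a plain lower-bound binary search over the sorted suffixes (the `suffix_array` and `occurs` helper names are my own, not from a library):

```python
def suffix_array(text: str):
    # Naive construction by sorting all suffix start positions; the
    # O(n log n) / O(n) algorithms mentioned above would replace this.
    return sorted(range(len(text)), key=lambda i: text[i:])

def occurs(text: str, sa, pattern: str) -> bool:
    # Lower-bound binary search: find the first suffix whose prefix of
    # len(pattern) characters is >= pattern, then check for equality.
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern

# Whitespace replaced with '_' per the sanitizing step above
sentence = "data_mining_and_text_mining"
sa = suffix_array(sentence)
print(occurs(sentence, sa, "text_mining"))  # True
print(occurs(sentence, sa, "texts"))        # False
```

Each concept (sanitized the same way) then needs only one O(m log n) binary search per sentence, instead of a linear scan.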

【Discussion】:

  • ...build suffix arrays for all the sentences — one array per sentence, or one array over all the sentences combined? And when sanitizing, would the result be a single string that uses one placeholder for spaces within a sentence and a different placeholder for sentence boundaries?
  • One array per sentence. Reason: the OP wants to find every concept in every sentence. As for your second point, I don't really like playing with whitespace; the sanitizing was just for clarity, so yes, it would be a single string with placeholders. I hadn't thought of merging all the sentences with boundary placeholders; since the suffixes stay in sorted order even if we merge all the sentences, we could do a second binary search to find all occurrences in the merged string. So yes, merging may well be the better solution. Thanks for the pointer.
【Solution 3】:
import re
concepts = [['natural language processing', 'text mining', 'texts', 'nlp'], ['advanced data mining', 'data mining', 'data'], ['discourse analysis', 'learning analytics', 'mooc']]
sentences = ['data mining and text mining', 'nlp is mainly used by discourse analysis community', 'data mining in python is fun', 'mooc data analysis involves texts', 'data and data mining are both very interesting']

replacementDict = {concept[0] : concept[1:] for concept in concepts}

finderAndReplacements = [
    (re.compile('(' + '|'.join(replacees) + ')'), replacement)
    for replacement, replacees in replacementDict.items()
]

def sentenceReplaced(findRegEx, replacement, sentence):
    return findRegEx.sub(replacement, sentence, count=0)

def sentencesAllReplaced(sentences, finderAndReplacements=finderAndReplacements):
    for regex, replacement in finderAndReplacements:
        sentences = [sentenceReplaced(regex, replacement, sentence) for sentence in sentences]
    return sentences

print(sentencesAllReplaced(sentences))
  • Setup: I preferred the concepts expressed as a dict whose keys and values are the replacement and the replacees. I stored this in replacementDict.
  • Compile a matching regular expression for each intended replacement group. Store it, together with its intended replacement, in the finderAndReplacements list.
  • The sentenceReplaced function returns the input sentence after the substitutions have been performed. (The order of application here does not matter, so parallelization should be possible if we take care to avoid race conditions.)
  • Finally, we loop through and find/replace for each sentence. (A good deal of parallel structure would offer improved performance.)

I'd love to see some thorough benchmarking/testing/reporting, since I am sure there are plenty of subtleties depending on the nature of this task's inputs (concepts, sentences) and the hardware running it.

If sentences is the dominant input component compared to the concepts replacements, I believe compiling the regular expressions will pay off. When there are few sentences and many concepts, especially if most concepts are not in any sentence, compiling these matchers would be a waste. And if there are very many replacees per replacement, the compiled method used may perform poorly or even error out... (Varying assumptions about the input parameters offer a multitude of trade-off considerations, as is often the case.)
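For reference, running this approach (condensed from the code above) against the dummy data from the question reproduces the expected output; note that it relies on dicts preserving insertion order (Python 3.7+), so the longer replacees in each group come first in the alternation:

```python
import re

concepts = [
    ['natural language processing', 'text mining', 'texts', 'nlp'],
    ['advanced data mining', 'data mining', 'data'],
    ['discourse analysis', 'learning analytics', 'mooc'],
]
sentences = [
    'data mining and text mining',
    'nlp is mainly used by discourse analysis community',
    'data mining in python is fun',
    'mooc data analysis involves texts',
    'data and data mining are both very interesting',
]

replacementDict = {concept[0]: concept[1:] for concept in concepts}
finderAndReplacements = [
    (re.compile('(' + '|'.join(replacees) + ')'), replacement)
    for replacement, replacees in replacementDict.items()
]
# Apply each group's regex to every sentence in turn
for regex, replacement in finderAndReplacements:
    sentences = [regex.sub(replacement, sentence) for sentence in sentences]
print(sentences)
```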

【Discussion】:
