[Posted]: 2017-05-03 21:42:44
[Question]:
I wrote a couple of user-defined functions to remove named entities (using NLTK) from a list of text sentences/paragraphs in Python. The problem I'm running into is that my method is very slow, especially for large amounts of data. Does anyone have suggestions for how to optimize this so it runs faster?
import nltk
import string

# Function to reverse tokenization
def untokenize(tokens):
    return "".join(
        " " + i if not i.startswith("'") and i not in string.punctuation else i
        for i in tokens
    ).strip()

# Remove named entities
def ne_removal(text):
    tokens = nltk.word_tokenize(text)
    chunked = nltk.ne_chunk(nltk.pos_tag(tokens))
    tokens = [leaf[0] for leaf in chunked if type(leaf) != nltk.Tree]
    return untokenize(tokens)
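For reference, the untokenize helper is pure Python and can be exercised on its own, without NLTK or any model downloads (shown here self-contained so it runs anywhere):

```python
import string

def untokenize(tokens):
    # Prepend a space to each token unless it is punctuation or a
    # contraction suffix such as "'s"; then strip the leading space.
    return "".join(
        " " + tok if not tok.startswith("'") and tok not in string.punctuation
        else tok
        for tok in tokens
    ).strip()

print(untokenize(["went", "to", "the", "store", "."]))
# → went to the store.
```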
To use the code, I typically have a list of texts and call the ne_removal function via a list comprehension. An example follows:
text_list = ["Bob Smith went to the store.", "Jane Doe is my friend."]
named_entities_removed = [ne_removal(text) for text in text_list]
print(named_entities_removed)
## OUT: ['went to the store.', 'is my friend.']
Update: I tried switching to a batched version with the code below, but it's only marginally faster. I'll keep exploring. Thanks for the input so far.
def extract_nonentities(tree):
    tokens = [leaf[0] for leaf in tree if type(leaf) != nltk.Tree]
    return untokenize(tokens)

def fast_ne_removal(text_list):
    token_list = [nltk.word_tokenize(text) for text in text_list]
    tagged = nltk.pos_tag_sents(token_list)
    chunked = nltk.ne_chunk_sents(tagged)
    non_entities = []
    for tree in chunked:
        non_entities.append(extract_nonentities(tree))
    return non_entities
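Since the chunker is CPU-bound, one avenue (not from the original post) is to spread the texts across processes. A minimal sketch with multiprocessing.Pool, where parallel_ne_removal is a made-up name and a trivial stand-in replaces the NLTK-based ne_removal so the sketch runs without NLTK installed; swap in the real function:

```python
from multiprocessing import Pool

def ne_removal(text):
    # Stand-in for the NLTK-based ne_removal defined above, so this
    # sketch is runnable without NLTK; replace with the real function.
    return text.lower()

def parallel_ne_removal(text_list, processes=4):
    # Each worker process runs ne_removal independently, so the work
    # is spread over CPU cores instead of serialized behind the GIL.
    with Pool(processes) as pool:
        return pool.map(ne_removal, text_list)

if __name__ == "__main__":
    print(parallel_ne_removal(["Bob Smith LEFT.", "Jane Doe STAYED."], processes=2))
```

pool.map preserves input order, so the output list lines up with text_list just like the list-comprehension version.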
[Discussion]:
-
I'm not sure a migration to Code Review is appropriate. Code that is too slow is a problem; this isn't a "could I structure my code better" question.
标签: python optimization nltk named-entity-recognition