【Posted】:2021-04-23 09:42:55
【Problem Description】:
Assigning what looks like spaCy's own default tokenizer to the English model en_core_web_sm (spaCy v3.0.5) changes the tokenization behavior. You would expect nothing to change, but it fails silently. Why is that?
Code to reproduce:
import spacy
text = "don't you're i'm we're he's"
# No tokenizer assignment, everything is fine
nlp = spacy.load('en_core_web_sm')
doc = nlp(text)
[t.lemma_ for t in doc]
>>> ['do', "n't", 'you', 'be', 'I', 'be', 'we', 'be', 'he', 'be']
# Assigning a "default" Tokenizer: tokenization, and therefore lemmatization, fails
nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = spacy.tokenizer.Tokenizer(nlp.vocab)
doc = nlp(text)
[t.lemma_ for t in doc]
>>> ["don't", "you're", "i'm", "we're", "he's"]
【Discussion】:
-
I think you should try: tokenizer = nlp.Defaults.create_tokenizer(nlp.vocab)
-
AttributeError: type object 'EnglishDefaults' has no attribute 'create_tokenizer' @NirElbaz
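(Note: Defaults.create_tokenizer was a spaCy v2 API; it was removed in v3, where tokenizer creation moved into the config system, hence the AttributeError above. In v3, reuse or copy the loaded pipeline's own nlp.tokenizer, as sketched earlier.)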
Tags: python python-3.x spacy spacy-3