[Question Title]: Why does this not work? Stop words in CountVectorizer
[Posted]: 2017-06-01 19:05:45
[Question]:

I am using CountVectorizer to tokenize text, and I want to add my own stop words. Why does this not work? The word 'de' should not appear in the final output.

from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1,1),stop_words=frozenset([u'de']))
word_tokenizer = vectorizer.build_tokenizer()
print (word_tokenizer(u'Isto é um teste de qualquer coisa.'))

[u'Isto', u'um', u'teste', u'de', u'qualquer', u'coisa']

[Discussion]:

Tags: python scikit-learn nlp


[Solution 1]:
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset([u'de']))
# vocabulary_ is only available after the vectorizer has been fitted
vectorizer.fit([u'Isto é um teste de qualquer coisa.'])

In [7]: vectorizer.vocabulary_
Out[7]: {u'coisa': 0, u'isto': 1, u'qualquer': 2, u'teste': 3, u'um': 4}

You can see that u'de' is not in the fitted vocabulary...

The build_tokenizer method only splits your string into tokens; stop-word removal is applied afterwards, inside the analyzer.

From the CountVectorizer source code:

def build_tokenizer(self):
    """Return a function that splits a string into a sequence of tokens"""
    if self.tokenizer is not None:
        return self.tokenizer
    token_pattern = re.compile(self.token_pattern)
    return lambda doc: token_pattern.findall(doc)
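You can see the difference directly by comparing build_tokenizer with build_analyzer, which returns the full preprocessing pipeline (lowercasing, tokenization, and stop-word filtering). A minimal sketch using the same sentence:

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 1), stop_words=frozenset([u'de']))

# build_tokenizer only applies the token pattern; 'de' survives
tokenizer = vectorizer.build_tokenizer()
print(tokenizer(u'Isto é um teste de qualquer coisa.'))
# ['Isto', 'um', 'teste', 'de', 'qualquer', 'coisa']

# build_analyzer also lowercases and removes stop words; 'de' is gone
analyzer = vectorizer.build_analyzer()
print(analyzer(u'Isto é um teste de qualquer coisa.'))
# ['isto', 'um', 'teste', 'qualquer', 'coisa']
```

(The single-character token 'é' is dropped in both cases because the default token_pattern only keeps tokens of two or more word characters.)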

A solution to your problem could be:

vectorizer = CountVectorizer(ngram_range=(1,1),stop_words=frozenset([u'de']))
sentence = [u'Isto é um teste de qualquer coisa.']
tokenized = vectorizer.fit_transform(sentence)
result = vectorizer.inverse_transform(tokenized)

In [12]: result
Out[12]: 
[array([u'isto', u'um', u'teste', u'qualquer', u'coisa'], 
       dtype='<U8')]

[Discussion]:
