Title: Gensim - TF-IDF, how to perform a proper gensim TF-IDF?
Posted: 2021-03-07 21:23:04
Question:

I am trying to do some NLP (more precisely, a TF-IDF project) on part of my bachelor's thesis.

I exported a small portion of it to a document called "thesis.txt", and I seem to be running into a problem when fitting the cleaned text data into a gensim Dictionary.

All the words are tokenized and stored in a bag of words, and I can't figure out what I am doing wrong.

Here is the error I get:

    ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-317-73828cccaebe> in <module>
     17 
     18 #Create dictionary
---> 19 dictionary = Dictionary(tokens_no_stop)
     20 
     21 #Create bag of words

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in __init__(self, documents, prune_at)
     89 
     90         if documents is not None:
---> 91             self.add_documents(documents, prune_at=prune_at)
     92 
     93     def __getitem__(self, tokenid):

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in add_documents(self, documents, prune_at)
    210 
    211             # update Dictionary with the document
--> 212             self.doc2bow(document, allow_update=True)  # ignore the result, here we only care about updating token ids
    213 
    214         logger.info(

~/Library/Python/3.8/lib/python/site-packages/gensim/corpora/dictionary.py in doc2bow(self, document, allow_update, return_missing)
    250         """
    251         if isinstance(document, string_types):
--> 252             raise TypeError("doc2bow expects an array of unicode tokens on input, not a single string")
    253 
    254         # Construct (word, frequency) mapping.

TypeError: doc2bow expects an array of unicode tokens on input, not a single string

Thanks in advance for your help :) (my code is below)

from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from collections import Counter
from gensim.corpora import Dictionary
from gensim.models.tfidfmodel import TfidfModel

f = open('/Users/romeoleon/Desktop/Python & R/NLP/TRIAL_THESIS/thesis.txt','r')
text = f.read()

#Tokenize text
Tokens = word_tokenize(text)

#Lower case everything
Tokens = [t.lower() for t in Tokens]

#Keep only letters
tokens_alpha = [t for t in Tokens if t.isalpha()]

#Remove stopwords
tokens_no_stop = [t for t in tokens_alpha if t not in stopwords.words('french')]

#Create Lemmatizer
lem = WordNetLemmatizer()
lemmatized = [lem.lemmatize(t) for t in tokens_no_stop]


#Create dictionary
dictionary = Dictionary(tokens_no_stop)

#Create bag of words
bow = [dictionary.doc2bow(line) for line in tokens_no_stop]

#Model TFID
tfidf = TfidfModel(bow)
bow_tfidf = tfidf[bow]

Comments:

Tags: python nlp nltk gensim


Solution 1:

Your tokens_no_stop is a flat list of strings, but Dictionary expects a list of lists of strings (more precisely, an iterable of documents, where each document is itself a list of unicode tokens). When it is given a flat list, each individual string is treated as a "document", which is exactly what the doc2bow error message complains about.

Comments:
