【Posted】: 2020-02-15 14:39:34
【Problem description】:
I am trying to understand the math behind TfidfVectorizer. I followed this tutorial, but my code is slightly changed:
The tutorial also says at the end: "The values differ slightly because sklearn uses a smoothed version idf and various other little optimizations."
I want to be able to use TfidfVectorizer, but also compute the same result by hand on the same simple sample.
Here is my full code:

    import pandas as pd
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer


    def main():
        documentA = 'the man went out for a walk'
        documentB = 'the children sat around the fire'
        corpus = [documentA, documentB]
        bagOfWordsA = documentA.split(' ')
        bagOfWordsB = documentB.split(' ')
        uniqueWords = set(bagOfWordsA).union(set(bagOfWordsB))

        print('----------- compare word count -------------------')
        numOfWordsA = dict.fromkeys(uniqueWords, 0)
        for word in bagOfWordsA:
            numOfWordsA[word] += 1
        numOfWordsB = dict.fromkeys(uniqueWords, 0)
        for word in bagOfWordsB:
            numOfWordsB[word] += 1

        tfA = computeTF(numOfWordsA, bagOfWordsA)
        tfB = computeTF(numOfWordsB, bagOfWordsB)
        print(pd.DataFrame([tfA, tfB]))

        CV = CountVectorizer(stop_words=None, token_pattern='(?u)\\b\\w\\w*\\b')
        cv_ft = CV.fit_transform(corpus)
        tt = TfidfTransformer(use_idf=False, norm='l1')
        t = tt.fit_transform(cv_ft)
        print(pd.DataFrame(t.todense().tolist(), columns=CV.get_feature_names()))

        print('----------- compare idf -------------------')
        idfs = computeIDF([numOfWordsA, numOfWordsB])
        print(pd.DataFrame([idfs]))

        tfidfA = computeTFIDF(tfA, idfs)
        tfidfB = computeTFIDF(tfB, idfs)
        print(pd.DataFrame([tfidfA, tfidfB]))

        ttf = TfidfTransformer(use_idf=True, smooth_idf=False, norm=None)
        f = ttf.fit_transform(cv_ft)
        print(pd.DataFrame(f.todense().tolist(), columns=CV.get_feature_names()))

        print('----------- TfidfVectorizer -------------------')
        vectorizer = TfidfVectorizer(smooth_idf=False, use_idf=True, stop_words=None,
                                     token_pattern='(?u)\\b\\w\\w*\\b', norm=None)
        vectors = vectorizer.fit_transform([documentA, documentB])
        feature_names = vectorizer.get_feature_names()
        print(pd.DataFrame(vectors.todense().tolist(), columns=feature_names))


    def computeTF(wordDict, bagOfWords):
        # term frequency: raw count divided by document length
        tfDict = {}
        bagOfWordsCount = len(bagOfWords)
        for word, count in wordDict.items():
            tfDict[word] = count / float(bagOfWordsCount)
        return tfDict


    def computeIDF(documents):
        import math
        N = len(documents)
        # count in how many documents each term appears
        idfDict = dict.fromkeys(documents[0].keys(), 0)
        for document in documents:
            for word, val in document.items():
                if val > 0:
                    idfDict[word] += 1
        for word, val in idfDict.items():
            idfDict[word] = math.log(N / float(val))
        return idfDict


    def computeTFIDF(tfBagOfWords, idfs):
        tfidf = {}
        for word, val in tfBagOfWords.items():
            tfidf[word] = val * idfs[word]
        return tfidf


    if __name__ == "__main__":
        main()
I can compare the term-frequency calculations; those two results look identical. But when I compute IDF and TF-IDF, there is a difference between the tutorial code and TfidfVectorizer (I also tried combining CountVectorizer and TfidfTransformer to make sure it returns the same result as TfidfVectorizer).
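The discrepancy the tutorial hints at with "a smoothed version idf" can be checked directly: sklearn's documented idf is `ln(n/df) + 1` with `smooth_idf=False` and `ln((1+n)/(1+df)) + 1` with `smooth_idf=True`, whereas the textbook formula used above is plain `ln(n/df)`. A minimal sketch (the helper name `sklearn_idf` is mine, not from sklearn):

```python
import math

def sklearn_idf(n, df, smooth=True):
    """Reproduce sklearn's documented idf formulas.

    n  -- total number of documents
    df -- number of documents containing the term
    """
    if smooth:
        # smooth_idf=True (sklearn's default)
        return math.log((1 + n) / (1 + df)) + 1
    # smooth_idf=False
    return math.log(n / df) + 1

# Two documents: 'the' occurs in both, 'man' in only one.
print(sklearn_idf(2, 2, smooth=False))  # 1.0   (textbook formula gives ln(1) = 0.0)
print(sklearn_idf(2, 1, smooth=False))  # ~1.693 (textbook formula gives ln(2) ~ 0.693)
```

Note the constant `+ 1` offset relative to `computeIDF` above: it keeps terms that occur in every document from being zeroed out entirely.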
Hand-coded Tf-Idf result:
TfidfVectorizer Tf-Idf result:
Can anyone help me get my code to return the same values as TfidfVectorizer, or tell me which TfidfVectorizer settings would return the same results as the code above?
【Discussion】:
- There are multiple corrections needed here. For char, refer to the similar answer here
Tags: python-3.x scikit-learn tfidfvectorizer