[Posted]: 2015-07-03 05:22:23
[Question]:
I am trying to build my own corpus for sentiment analysis of tweets (classifying them as positive or negative).
I started by experimenting with the existing NLTK movie review corpus. However, if I use this code:
import string
from itertools import chain
from nltk.corpus import movie_reviews as mr
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from nltk.classify import NaiveBayesClassifier as nbc
import nltk
stop = stopwords.words('english')
documents = [([w for w in mr.words(i) if w.lower() not in stop and w.lower() not in string.punctuation], i.split('/')[0]) for i in mr.fileids()]
word_features = FreqDist(chain(*[i for i,j in documents]))
word_features = word_features.keys()[:100]
numtrain = int(len(documents) * 90 / 100)
train_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[:numtrain]]
test_set = [({i:(i in tokens) for i in word_features}, tag) for tokens,tag in documents[numtrain:]]
classifier = nbc.train(train_set)
print nltk.classify.accuracy(classifier, test_set)
classifier.show_most_informative_features(5)
I get this output:
0.31
Most Informative Features
uplifting = True pos : neg = 5.9 : 1.0
wednesday = True pos : neg = 3.7 : 1.0
controversy = True pos : neg = 3.4 : 1.0
shocks = True pos : neg = 3.0 : 1.0
catchy = True pos : neg = 2.6 : 1.0
instead of the expected output (see Classification using movie review corpus in NLTK/Python):
0.655
Most Informative Features
bad = True neg : pos = 2.0 : 1.0
script = True neg : pos = 1.5 : 1.0
world = True pos : neg = 1.5 : 1.0
nothing = True neg : pos = 1.5 : 1.0
bad = False pos : neg = 1.5 : 1.0
I am using exactly the same code as in that other StackOverflow post, my NLTK installation is up to date (as is theirs), and I also have the latest movie review corpus. Does anyone know what is going wrong?
Thanks!
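One detail worth checking (an assumption, not confirmed by the question itself): in NLTK 3, `FreqDist` is built on `collections.Counter`, so `keys()` is no longer guaranteed to be ordered by frequency, and `word_features.keys()[:100]` may pick an essentially arbitrary 100 words rather than the 100 most frequent ones. A minimal sketch of the difference, using plain `Counter` as a stand-in:

```python
from collections import Counter

# nltk.FreqDist subclasses Counter in NLTK 3. Counter.keys() follows
# insertion order (Python 3.7+), not frequency order, so slicing it does
# not give the most frequent items.
fd = Counter(['c', 'a', 'b', 'b', 'a', 'b'])

top_by_keys = list(fd.keys())[:2]                 # insertion order: ['c', 'a']
top_by_freq = [w for w, _ in fd.most_common(2)]   # frequency order: ['b', 'a']
```

If frequency order matters for the feature list, `most_common(n)` is the explicit way to ask for it.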
[Comments]:
- It would help to know the size of the corpus.
Tags: python nltk sentiment-analysis