[Posted]: 2015-07-10 21:09:57
[Problem description]:
I have the following code, which extracts features from a set of files (the folder names are the category names) for text classification.
import sklearn.datasets
from sklearn.feature_extraction.text import TfidfVectorizer

# Each subfolder of ./train is one category; load the raw file contents.
train = sklearn.datasets.load_files('./train', description=None, categories=None,
                                    load_content=True, shuffle=True, encoding=None,
                                    decode_error='strict', random_state=0)
print len(train.data)
print train.target_names

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)
It throws the following stack trace:
Traceback (most recent call last):
File "C:\EclipseWorkspace\TextClassifier\main.py", line 16, in <module>
X_train = vectorizer.fit_transform(train.data)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 1285, in fit_transform
X = super(TfidfVectorizer, self).fit_transform(raw_documents)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 804, in fit_transform
self.fixed_vocabulary_)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 739, in _count_vocab
for feature in analyze(doc):
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 236, in <lambda>
tokenize(preprocess(self.decode(doc))), stop_words)
File "C:\Python27\lib\site-packages\sklearn\feature_extraction\text.py", line 113, in decode
doc = doc.decode(self.encoding, self.decode_error)
File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 32054: invalid start byte
I'm running Python 2.7. How can I get this to work?
Edit:
I've just found that this works fine for UTF-8 encoded files (my files are ANSI encoded). Is there any way to make sklearn.datasets.load_files() work with ANSI encoding?
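One approach that should work (a sketch, not tested against the original corpus): load_files accepts an encoding argument and decodes each file's bytes with it before handing them on. On a Western-locale Windows machine, "ANSI" usually means cp1252, but that is an assumption, so check the actual codepage. Python 3 syntax below; the throwaway corpus built in a temp directory stands in for the real ./train folder.

```python
import os
import tempfile

import sklearn.datasets
from sklearn.feature_extraction.text import TfidfVectorizer

# Build a tiny throwaway corpus: one category folder containing one
# cp1252-encoded file (0xE9 is 'é' in cp1252, but an invalid UTF-8 start byte).
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'spam'))
with open(os.path.join(root, 'spam', 'doc1.txt'), 'wb') as f:
    f.write('caf\xe9 special offer'.encode('cp1252'))

# encoding='cp1252' makes load_files return decoded unicode strings
# instead of raw bytes, so TfidfVectorizer never has to guess a codec.
train = sklearn.datasets.load_files(root, encoding='cp1252')

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train.data)

print(train.data[0])              # café special offer
print(list(train.target_names))   # ['spam']
```

With the decoding done at load time, the rest of the pipeline is unchanged.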
[Discussion]:
-
Can you add a sample of the data? It may be that the data isn't UTF-8 encoded, maybe it's UTF-16? Without knowing the data format this is hard to say. I'm no expert, but you could try converting the strings to UTF-8 with something like each_string.decode('utf-16').encode('utf-8')
-
@ohruunuruus My training data is similar to the 20 newsgroups dataset, and the encoding is ANSI
-
TfidfVectorizer takes an encoding parameter. Try passing encoding=ansi and report back with any errors
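As the last comment hints, TfidfVectorizer can do the decoding itself via its encoding and decode_error parameters, though the value must be a real Python codec name ('ansi' is not one; cp1252 or latin-1 are the usual Windows choices). A minimal sketch, assuming cp1252 input (Python 3 syntax):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Two raw byte documents; the first contains 0xE9, which is valid cp1252
# ('é') but would crash a strict UTF-8 decode exactly as in the traceback.
raw_docs = ['caf\xe9 menu'.encode('cp1252'), b'plain ascii text']

# decode_error='replace' additionally guards against bytes invalid even in
# the chosen codec, substituting U+FFFD instead of raising.
vectorizer = TfidfVectorizer(encoding='cp1252', decode_error='replace')
X = vectorizer.fit_transform(raw_docs)

print(sorted(vectorizer.vocabulary_))   # ['ascii', 'café', 'menu', 'plain', 'text']
print(X.shape)                          # (2, 5)
```

The same two parameters also exist on load_files, so either component can own the decoding step; just don't leave both at their strict UTF-8 defaults when the files are not UTF-8.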
Tags: python machine-learning scikit-learn text-classification scikits