[Posted]: 2014-10-08 13:09:54
[Problem description]:
I first convert a PDF to plain text (I print it out and everything looks fine), but when I then run word_tokenize() from NLTK on it, I get a UnicodeDecodeError.
I get the error even though I try decode('utf-8').encode('utf-8') on the plain text beforehand. In the traceback I noticed that the line of code inside word_tokenize() that first raises the error is plaintext.split('\n'). So I tried to reproduce the error by running split('\n') on the plain text myself, but that doesn't raise anything at all.
So I understand neither what causes the error nor how to avoid it.
Any help would be greatly appreciated! :) Maybe I can avoid it by changing something in pdf_to_txt?
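One reproducible asymmetry is worth noting here. In Python 2.7, splitting a byte str by another byte str never decodes anything, but splitting it by a unicode separator first coerces the whole string to unicode with the ascii codec; NLTK's punkt.py likely uses unicode string literals (via from __future__ import unicode_literals), so its split('\n') is effectively split(u'\n'). A minimal sketch of that difference, assuming a UTF-8 byte string like the one pdfminer returns:

# Python 2.7: str.split(str) is safe, str.split(unicode) implicitly
# decodes the byte string with the ascii codec and can blow up.
s = 'caf\xc3\xa9\nbar'   # UTF-8 byte str, like the extracted plain text
print s.split('\n')      # fine: no decoding takes place
try:
    s.split(u'\n')       # str coerced to unicode via ascii -> error
except UnicodeDecodeError as e:
    print e              # 'ascii' codec can't decode byte 0xc3 ...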
Here is the tokenization code:
from cStringIO import StringIO
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import os
import string
from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
from pdfminer.converter import TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfpage import PDFPage

stopset = stopwords.words('english')
path = 'my_folder'
listing = os.listdir(path)
for infile in listing:
    text = self.convert_pdf_to_txt(path + infile)
    text = text.decode('utf-8').encode('utf-8').lower()
    print text
    splitted = text.split('\n')
    filtered_tokens = [i for i in word_tokenize(text) if i not in stopset and i not in string.punctuation]
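As the comments at the bottom suggest, the fix is to decode once and then keep the text as unicode instead of encoding it back to bytes. A minimal sketch of the loop with that change (hypothetical details: convert_pdf_to_txt is called as a plain function here, and os.path.join replaces the bare path+infile concatenation):

for infile in os.listdir(path):
    text = convert_pdf_to_txt(os.path.join(path, infile))
    text = text.decode('utf-8').lower()   # decode once, stay unicode from here on
    filtered_tokens = [t for t in word_tokenize(text)
                       if t not in stopset and t not in string.punctuation]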
And this is the method I call to convert from PDF to txt:
def convert_pdf_to_txt(self, path):
    rsrcmgr = PDFResourceManager()
    retstr = StringIO()
    codec = 'utf-8'
    laparams = LAParams()
    device = TextConverter(rsrcmgr, retstr, codec=codec, laparams=laparams)
    fp = file(path, 'rb')
    interpreter = PDFPageInterpreter(rsrcmgr, device)
    password = ""
    maxpages = 0
    caching = True
    pagenos = set()
    for page in PDFPage.get_pages(fp, pagenos, maxpages=maxpages,
                                  password=password, caching=caching,
                                  check_extractable=True):
        interpreter.process_page(page)
    fp.close()
    device.close()
    ret = retstr.getvalue()
    retstr.close()
    return ret
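The question's own idea of changing the converter would also work: since TextConverter is constructed with codec='utf-8', retstr holds UTF-8 bytes, so the method can decode before returning and hand unicode straight to the tokenizer. A sketch of just the changed tail, everything else staying as above:

    ret = retstr.getvalue().decode('utf-8')   # return unicode instead of UTF-8 bytes
    retstr.close()
    return ret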
And this is the traceback of the error I get:
Traceback (most recent call last):
  File "/home/iammyr/opt/workspace/task-logger/task_logger/nlp/pre_processing.py", line 65, in <module>
    obj.tokenizeStopWords()
  File "/home/iammyr/opt/workspace/task-logger/task_logger/nlp/pre_processing.py", line 29, in tokenizeStopWords
    filtered_tokens = [i for i in word_tokenize(text) if i not in stopset and i not in string.punctuation]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 93, in word_tokenize
    return [token for sent in sent_tokenize(text)
  [...]
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/punkt.py", line 586, in _tokenize_words
    for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 9: ordinal not in range(128)
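The 0xc2 byte in the last line fits this picture: it is a UTF-8 lead byte, typically from a character such as a non-breaking space (U+00A0), which PDF extraction frequently produces. A quick check in Python 2.7:

print repr(u'\xa0'.encode('utf-8'))   # '\xc2\xa0' -- NBSP as two UTF-8 bytes
try:
    '\xc2\xa0'.decode('ascii')        # the implicit decode NLTK ends up doing
except UnicodeDecodeError as e:
    print e                           # 'ascii' codec can't decode byte 0xc2 ...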
Thanks a million and loads of good karma! ;)
[Discussion]:
- What do you mean by "plain text"? What encoding do you have in the file?
- What's the point of decoding and then immediately encoding? My guess is that removing the .encode('utf-8') will solve your problem.
- Hi tripleee, thanks so much for your help! Indeed, removing the encode works, thank you very much :) The reason I decoded and then encoded again is that I had read stackoverflow.com/questions/9644099/… and the codec of the "plain text" was already utf-8, as you can see in convert_pdf_to_txt(). That's part of why I was confused, because even the decode shouldn't have been necessary, and yet it was. Thanks a lot! ;)
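For the record, the reason the decode-then-encode round trip changed nothing: in Python 2, decode('utf-8') does produce unicode, but the immediate encode('utf-8') turns it straight back into the same byte str, so word_tokenize still received bytes. A quick check:

b = '\xc2\xa0text'                 # byte str
u = b.decode('utf-8')              # unicode -- what word_tokenize needs
assert u.encode('utf-8') == b      # encode exactly undoes the decode
print type(u), type(u.encode('utf-8'))   # <type 'unicode'> <type 'str'>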
Tags: python-2.7 encoding utf-8 nltk pdfminer