[Posted]: 2014-05-09 04:36:14
[Question]:
I am writing a script that goes through a list of links and parses information from each page.
It works for most sites, but some choke with "UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 13: ordinal not in range(128)".
It stops in client.py, which is part of urllib in Python 3.
The exact link is: http://finance.yahoo.com/news/cafés-growth-faster-than-fast-food-peers-144512056.html
There are plenty of similar posts here, but none of the answers seem to work for me.
My code is:
from urllib import request
from urllib.error import HTTPError, URLError
import socket

def __request(link, debug=0):
    try:
        # long timeout because I was getting lots of timeouts
        html = request.urlopen(link, timeout=35).read()
        unicode_html = html.decode('utf-8', 'ignore')
    # NOTE: the except HTTPError must come first, otherwise
    # except URLError would also catch an HTTPError.
    except HTTPError as e:
        if debug:
            print("The server couldn't fulfill the request for " + link)
            print('Error code: ', e.code)
        return ''
    except URLError as e:
        if isinstance(e.reason, socket.timeout):
            print('timeout')
        return ''
    else:
        return unicode_html
This is what calls the request function:
link = 'http://finance.yahoo.com/news/cafés-growth-faster-than-fast-food-peers-144512056.html'
page = __request(link)
The traceback is:
Traceback (most recent call last):
File "<string>", line 250, in run_nodebug
File "C:\reader\get_news.py", line 276, in <module>
main()
File "C:\reader\get_news.py", line 255, in main
body = get_article_body(item['link'],debug=0)
File "C:\reader\get_news.py", line 155, in get_article_body
page = __request('na',url)
File "C:\reader\get_news.py", line 50, in __request
html = request.urlopen(link, timeout=35).read()
File "C:\Python33\Lib\urllib\request.py", line 156, in urlopen
return opener.open(url, data, timeout)
File "C:\Python33\Lib\urllib\request.py", line 469, in open
response = self._open(req, data)
File "C:\Python33\Lib\urllib\request.py", line 487, in _open
'_open', req)
File "C:\Python33\Lib\urllib\request.py", line 447, in _call_chain
result = func(*args)
File "C:\Python33\Lib\urllib\request.py", line 1268, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "C:\Python33\Lib\urllib\request.py", line 1248, in do_open
h.request(req.get_method(), req.selector, req.data, headers)
File "C:\Python33\Lib\http\client.py", line 1061, in request
self._send_request(method, url, body, headers)
File "C:\Python33\Lib\http\client.py", line 1089, in _send_request
self.putrequest(method, url, **skips)
File "C:\Python33\Lib\http\client.py", line 953, in putrequest
self._output(request.encode('ascii'))
UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 13: ordinal not in range(128)
Any help is appreciated; this is driving me crazy, and I think I have tried every combination of x.decode and the like.
(I can just ignore the offending character, if that is possible.)
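Edit: for context, a minimal sketch of the direction I have been trying: since urlopen only accepts ASCII request lines, the non-ASCII parts of the URL would need to be percent-encoded first. This example uses only the standard library (`urllib.parse`); the helper name `iri_to_uri` is my own, and it assumes the host part is already ASCII (an internationalized domain would additionally need IDNA encoding):

```python
from urllib.parse import quote, urlsplit, urlunsplit

def iri_to_uri(iri):
    """Percent-encode the non-ASCII parts of a URL so urlopen accepts it.

    Sketch only: handles path, query, and fragment; assumes an ASCII host.
    '%' is kept in the safe sets so an already-encoded URL is not double-encoded.
    """
    parts = urlsplit(iri)
    return urlunsplit((
        parts.scheme,
        parts.netloc,                        # assumed ASCII here
        quote(parts.path, safe='/%'),        # 'é' becomes '%C3%A9'
        quote(parts.query, safe='=&%'),
        quote(parts.fragment, safe='%'),
    ))

url = 'http://finance.yahoo.com/news/cafés-growth-faster-than-fast-food-peers-144512056.html'
print(iri_to_uri(url))
# http://finance.yahoo.com/news/caf%C3%A9s-growth-faster-than-fast-food-peers-144512056.html
```

The encoded result could then be passed to `request.urlopen` in place of the raw link.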
[Discussion]:
-
Use Kenneth Reitz's requests library. I cannot recommend it highly enough. It will make all of this code much simpler, and will almost certainly solve this problem.
-
@JackGibbs: requests does handle URLs with non-ASCII characters in them, by explicitly re-quoting the URL.
-
@JackGibbs: valid URLs contain only characters from a subset of ASCII.
Tags: python exception-handling web-scraping beautifulsoup utf8-decode