【Posted】: 2019-05-29 13:55:45
【Problem description】:
I am writing a simple web crawler in Python 2.7, and I am hitting an SSL certificate verification failure exception when I try to retrieve the robots.txt file of an HTTPS website.
Here is the relevant code:
import re
import urlparse
import robotparser

import requests

robotfiledictionary = {}  # defined once at module level in the full script

def getHTMLpage(pagelink, currenttime):
    "Downloads HTML page from server"
    #init
    #parse URL and get domain name
    o = urlparse.urlparse(pagelink, "http")
    if o.netloc == "":
        netloc = re.search(r"[^/]+\.[^/]+\.[^/]+", o.path)
        if netloc:
            domainname = "http://" + netloc.group(0) + "/"
    else:
        domainname = o.scheme + "://" + o.netloc + "/"
    if o.netloc != "" and o.netloc is not None and o.scheme != "mailto": #if netloc isn't empty and it's not a mailto link
        link = domainname + o.path[1:] + o.params + "?" + o.query + "#" + o.fragment
        if not robotfiledictionary.get(domainname): #if robots.txt for domainname was not downloaded yet
            robotfiledictionary[domainname] = robotparser.RobotFileParser() #initialize robots.txt parser
            robotfiledictionary[domainname].set_url(domainname + "robots.txt") #set url for robots.txt
            print " Robots.txt for %s initial download" % str(domainname)
            robotfiledictionary[domainname].read() #download/read robots.txt
        elif robotfiledictionary.get(domainname): #if robots.txt for domainname was already downloaded
            if (currenttime - robotfiledictionary[domainname].mtime()) > 3600: #if robots.txt is older than 1 hour
                robotfiledictionary[domainname].read() #download/read robots.txt
                print " Robots.txt for %s downloaded" % str(domainname)
                robotfiledictionary[domainname].modified() #update time
        if robotfiledictionary[domainname].can_fetch("WebCrawlerUserAgent", link): #if access is allowed...
            #fetch page
            print link
            page = requests.get(link, verify=False)
            return page.text #.text is a property, not a method, so no parentheses
        else: #otherwise, report
            print " URL disallowed due to robots.txt from %s" % str(domainname)
            return "URL disallowed due to robots.txt"
    else: #if netloc was empty, URL wasn't parsed. report
        print "URL not parsed: %s" % str(pagelink)
        return "URL not parsed"
Here is the exception I get:
Robots.txt for https://ehi-siegel.de/ initial download
Traceback (most recent call last):
  File "C:\webcrawler.py", line 561, in <module>
    HTMLpage = getHTMLpage(link, loopstarttime)
  File "C:\webcrawler.py", line 122, in getHTMLpage
    robotfiledictionary[domainname].read() #download/read robots.txt
  File "C:\Python27\lib\robotparser.py", line 58, in read
    f = opener.open(self.url)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 443, in open_https
    h.endheaders(data)
  File "C:\Python27\lib\httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 897, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 859, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 1278, in connect
    server_hostname=server_hostname)
  File "C:\Python27\lib\ssl.py", line 353, in wrap_socket
    _context=self)
  File "C:\Python27\lib\ssl.py", line 601, in __init__
    self.do_handshake()
  File "C:\Python27\lib\ssl.py", line 830, in do_handshake
    self._sslobj.do_handshake()
IOError: [Errno socket error] [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)
As you can see, I already changed the code at the end to fetch the page while ignoring the SSL certificate (I know this is frowned upon in production, but I wanted to test it). But now it appears that it is the .read() call in robotparser that fails SSL verification. I have seen that I could manually download the certificate and point the program at it to validate the SSL certificate, but ideally I would like my program to work "out of the box", since I will not be the only person using it. Does anyone have an idea what to do?
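For reference, the manual-certificate approach I mean would look something like this with requests; the bundle path below is just a placeholder, not a file from my project:

import requests

# Hypothetical path to a manually downloaded CA bundle in PEM format;
# requests validates the server certificate against this file instead
# of the default certificate store.
page = requests.get("https://ehi-siegel.de/robots.txt",
                    verify="C:\\certs\\ca-bundle.pem")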
EDIT: I went into robotparser.py. I added

import requests

and changed line 58 to

f = requests.get(self.url, verify=False)

and that seems to have fixed it. This is still not ideal, so I am still open to suggestions on how to handle it.
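One variant I am considering, to avoid editing the standard library file itself: subclass RobotFileParser and override read(). This is only a sketch under the same assumptions (Python 2.7's robotparser plus requests); the class name is made up, and verify=False still disables certificate checking, so it is no safer, just less invasive:

import requests
import robotparser

class NoVerifyRobotFileParser(robotparser.RobotFileParser):
    """Hypothetical subclass: fetch robots.txt with requests and skip
    SSL verification instead of patching robotparser.py in place."""
    def read(self):
        try:
            response = requests.get(self.url, verify=False)
        except requests.exceptions.RequestException:
            self.disallow_all = True  # treat network errors as "deny all"
            return
        if response.status_code in (401, 403):
            self.disallow_all = True  # mirror the stdlib behaviour on auth errors
        elif response.status_code >= 400:
            self.allow_all = True     # no robots.txt means everything is allowed
        else:
            self.parse(response.text.splitlines())

In getHTMLpage() I would then create the parser with NoVerifyRobotFileParser() instead of robotparser.RobotFileParser().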
Tags: python-2.7 parsing ssl ssl-certificate robots.txt