[Title]: RobotParser throws SSL Certificate Verify Failed exception
[Posted]: 2019-05-29 13:55:45
[Question]:

I'm writing a simple web crawler in Python 2.7, and I'm hitting an SSL Certificate Verify Failed exception when trying to retrieve the robots.txt file from an HTTPS website.

Here is the relevant code:

import re
import urlparse    # Python 2 stdlib (urllib.parse in Python 3)
import robotparser # Python 2 stdlib (urllib.robotparser in Python 3)
import requests

robotfiledictionary = {} # per-domain cache of robots.txt parsers

def getHTMLpage(pagelink, currenttime):
    "Downloads HTML page from server"
    #init
    #parse URL and get domain name
    o = urlparse.urlparse(pagelink,"http")
    if o.netloc == "":
        netloc = re.search(r"[^/]+\.[^/]+\.[^/]+", o.path)
        if netloc:
            domainname="http://"+netloc.group(0)+"/"
    else:
        domainname=o.scheme+"://"+o.netloc+"/"
    if o.netloc != "" and o.netloc != None and o.scheme != "mailto": #if netloc isn't empty and it's not a mailto link
        link=domainname+o.path[1:]+o.params+"?"+o.query+"#"+o.fragment
        if not (robotfiledictionary.get(domainname)): #if robot file for domainname was not downloaded
            robotfiledictionary[domainname] = robotparser.RobotFileParser() #initialize robots.txt parser
            robotfiledictionary[domainname].set_url(domainname+"robots.txt") #set url for robots.txt
            print "  Robots.txt for %s initial download" % str(domainname)
            robotfiledictionary[domainname].read() #download/read robots.txt
        elif (robotfiledictionary.get(domainname)): #if robot file for domainname was already downloaded
            if (currenttime - robotfiledictionary[domainname].mtime()) > 3600: #if robot file is older than 1 hour
                robotfiledictionary[domainname].read() #download/read robots.txt
                print "  Robots.txt for %s downloaded" % str(domainname)
                robotfiledictionary[domainname].modified() #update time
        if robotfiledictionary[domainname].can_fetch("WebCrawlerUserAgent", link): #if access is allowed...
            #fetch page
            print link
            page = requests.get(link, verify=False)
            return page.text # .text is a property, not a method; page.text() raises TypeError
        else: #otherwise, report
            print "  URL disallowed due to robots.txt from %s" % str(domainname)
            return "URL disallowed due to robots.txt"
    else: #if netloc was empty, URL wasn't parsed. report
        print "URL not parsed: %s" % str(pagelink)
        return "URL not parsed"
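For reference, the `urlparse()` split that the function relies on can be checked in isolation. A small sketch (shown with Python 3's `urllib.parse`, where the Python 2 `urlparse` module now lives):

```python
from urllib.parse import urlparse  # "import urlparse" in Python 2

# An absolute HTTPS URL splits into scheme, netloc, and path;
# the function above rebuilds domainname from scheme and netloc.
o = urlparse("https://ehi-siegel.de/robots.txt")
print(o.scheme)  # https
print(o.netloc)  # ehi-siegel.de
print(o.path)    # /robots.txt
```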

And here is the exception I get:

  Robots.txt for https://ehi-siegel.de/ initial download
Traceback (most recent call last):
  File "C:\webcrawler.py", line 561, in <module>
    HTMLpage = getHTMLpage(link, loopstarttime)
  File "C:\webcrawler.py", line 122, in getHTMLpage
    robotfiledictionary[domainname].read() #download/read robots.txt
  File "C:\Python27\lib\robotparser.py", line 58, in read
    f = opener.open(self.url)
  File "C:\Python27\lib\urllib.py", line 213, in open
    return getattr(self, name)(url)
  File "C:\Python27\lib\urllib.py", line 443, in open_https
    h.endheaders(data)
  File "C:\Python27\lib\httplib.py", line 1053, in endheaders
    self._send_output(message_body)
  File "C:\Python27\lib\httplib.py", line 897, in _send_output
    self.send(msg)
  File "C:\Python27\lib\httplib.py", line 859, in send
    self.connect()
  File "C:\Python27\lib\httplib.py", line 1278, in connect
    server_hostname=server_hostname)
  File "C:\Python27\lib\ssl.py", line 353, in wrap_socket
    _context=self)
  File "C:\Python27\lib\ssl.py", line 601, in __init__
    self.do_handshake()
  File "C:\Python27\lib\ssl.py", line 830, in do_handshake
    self._sslobj.do_handshake()
IOError: [Errno socket error] [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)

As you can see, I've already changed the code at the end to retrieve the page while ignoring SSL certificates (I know this is frowned upon in production, but I wanted to test it), but now it seems that the `read()` function is failing SSL verification. I've seen that I could download the certificate manually and point the program at it to verify the SSL certificate, but ideally I'd like my program to work "out of the box", since I won't be the only one using it. Does anyone know what to do?

Edit: I went into robotparser.py and added

import requests

and changed line 58 to

f = requests.get(self.url, verify=False)

and that seems to have fixed it. It's still not ideal, though, so I'm still open to suggestions on how to do this properly.
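One way to get "out of the box" behavior without editing stdlib files is to swap the process-wide default HTTPS context for one backed by an up-to-date CA bundle. A minimal sketch, assuming the third-party `certifi` package may be installed; `robotparser.read()` goes through `urllib`, which consults this default context:

```python
import ssl

try:
    import certifi  # third-party package bundling Mozilla's CA roots
    cafile = certifi.where()
except ImportError:
    cafile = None   # fall back to the platform's default trust store

# Everything that calls ssl._create_default_https_context (urllib,
# and therefore robotparser's read()) now verifies against this bundle.
ssl._create_default_https_context = (
    lambda: ssl.create_default_context(cafile=cafile)
)

ctx = ssl._create_default_https_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

This keeps certificate verification on, which is safer than `verify=False`, while sidestepping an outdated or missing system trust store.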

[Comments]:

    Tags: python-2.7 parsing ssl ssl-certificate robots.txt


    [Solution 1]:

    I found the solution myself. Using urllib3's request functionality, I was able to get every website verified and keep accessing them.

    I still had to edit the robotparser.py file. This is what I added at the beginning:

    import urllib3
    import urllib3.contrib.pyopenssl
    import certifi
    urllib3.contrib.pyopenssl.inject_into_urllib3()
    http = urllib3.PoolManager(cert_reqs="CERT_REQUIRED", ca_certs=certifi.where())
    
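    As a side note, the `PoolManager` retains the verification settings it was built with, so the setup can be sanity-checked without any network traffic. A sketch, with `certifi` treated as optional (on urllib3 2.x the `pyopenssl` injection above is deprecated, and verification is on by default anyway):

```python
import urllib3

try:
    import certifi
    ca = certifi.where()
except ImportError:
    ca = None  # urllib3 then uses its default trust settings

# The constructor keyword arguments are stored and reused for every
# connection pool this manager creates.
http = urllib3.PoolManager(cert_reqs="CERT_REQUIRED", ca_certs=ca)
print(http.connection_pool_kw["cert_reqs"])  # CERT_REQUIRED
```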

    And this is the definition of read(self):

    def read(self):
        """Reads the robots.txt URL and feeds it to the parser."""
        f = http.request('GET', self.url)
        self.errcode = f.status # take the status from the urllib3 response; the old URLopener is no longer involved
        lines = [line.strip() for line in f.data.splitlines()] # splitlines(), since iterating f.data directly would yield single characters
        if self.errcode in (401, 403):
            self.disallow_all = True
        elif self.errcode >= 400 and self.errcode < 500:
            self.allow_all = True
        elif self.errcode == 200 and lines:
            self.parse(lines)
    
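    The `parse()`/`can_fetch()` flow that `read()` feeds into can be exercised entirely offline. A sketch using Python 3's `urllib.robotparser` (the Python 3 home of the `robotparser` module; the API is the same):

```python
from urllib.robotparser import RobotFileParser  # "import robotparser" in Python 2

rp = RobotFileParser()
# parse() takes an iterable of robots.txt lines, as produced by read() above.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.can_fetch("WebCrawlerUserAgent", "https://example.com/index.html"))  # True
print(rp.can_fetch("WebCrawlerUserAgent", "https://example.com/private/x"))   # False
```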

    I also used the same process to make the actual page requests in my program's function:

    def getHTMLpage(pagelink, currenttime):
        "Downloads HTML page from server"
        #init
        #parse URL and get domain name
        o = urlparse.urlparse(pagelink,u"http")
        if o.netloc == u"":
            netloc = re.search(ur"[^/]+\.[^/]+\.[^/]+", o.path)
            if netloc:
                domainname=u"http://"+netloc.group(0)+u"/"
        else:
            domainname=o.scheme+u"://"+o.netloc+u"/"
        if o.netloc != u"" and o.netloc != None and o.scheme != u"mailto": #if netloc isn't empty and it's not a mailto link
            link=domainname+o.path[1:]+o.params+u"?"+o.query+u"#"+o.fragment
            if not (robotfiledictionary.get(domainname)): #if robot file for domainname was not downloaded
                robotfiledictionary[domainname] = robotparser.RobotFileParser() #initialize robots.txt parser
                robotfiledictionary[domainname].set_url(domainname+u"robots.txt") #set url for robots.txt
                print u"  Robots.txt for %s initial download" % str(domainname)
                robotfiledictionary[domainname].read() #download/read robots.txt
            elif (robotfiledictionary.get(domainname)): #if robot file for domainname was already downloaded
                if (currenttime - robotfiledictionary[domainname].mtime()) > 3600: #if robot file is older than 1 hour
                    robotfiledictionary[domainname].read() #download/read robots.txt
                    print u"  Robots.txt for %s downloaded" % str(domainname)
                    robotfiledictionary[domainname].modified() #update time
            if robotfiledictionary[domainname].can_fetch("WebCrawlerUserAgent", link.encode('utf-8')): #if access is allowed...
                #fetch page
                if domainname == u"https://www.otto.de/" or domainname == u"http://www.otto.de/": # trailing slash added; domainname always ends with "/"
                    driver.get(link.encode('utf-8'))
                    time.sleep(5)
                    page=driver.page_source
                    return page
                else:
                    page = http.request('GET',link.encode('utf-8'))
                    return page.data.decode('UTF-8','ignore')
            else: #otherwise, report
                print u"  URL disallowed due to robots.txt from %s" % str(domainname)
                return u"URL disallowed due to robots.txt"
        else: #if netloc was empty, URL wasn't parsed. report
            print u"URL not parsed: %s" % str(pagelink)
            return u"URL not parsed"
    

    You'll also notice that I changed the program to use UTF-8 strictly, but that's beside the point.

    [Comments]:

      [Solution 2]:

      I ran into the same problem recently. A quick workaround was to add these lines to my code:

      import ssl
      ssl._create_default_https_context = ssl._create_unverified_context
      

      python 2.7.16
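      Note that this patch disables certificate verification for the whole process. A narrower variant (sketch) builds one explicitly-unverified context and passes it only to the calls that need it:

```python
import ssl

# One unverified context, instead of patching the global default.
ctx = ssl._create_unverified_context()
print(ctx.verify_mode == ssl.CERT_NONE)  # True
print(ctx.check_hostname)                # False

# It would then be passed per call, e.g.:
#   urllib2.urlopen(url, context=ctx)          # Python 2.7.9+
#   urllib.request.urlopen(url, context=ctx)   # Python 3
```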

      [Comments]:
