【Title】: Scrapy with LinkExtractor not extracting links from a website
【Posted】: 2019-06-11 03:52:22
【Description】:

I am trying to crawl websites using the LinkExtractor class to output all the links found on a given page.

Scrapy does not output any links for certain websites. For example, it seems to work if I try https://blog.nus.edu.sg, but not for http://nus.edu.sg.

Both of these URLs lead to working websites. I looked at the source of both sites, and they appear similar in how they link to other pages.

Here is my spider:

import logging

import scrapy
from scrapy.linkextractors import LinkExtractor


class Crawler(scrapy.Spider):
    name = 'all'

    custom_settings = {
        'LOG_LEVEL': logging.WARNING,
        'DEPTH_LIMIT': 1,  # the setting is DEPTH_LIMIT; DEPTH_LEVEL is silently ignored
    }

    def __init__(self, startURL, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.links = []
        self.start_urls = [startURL]

    def parse(self, response):
        le = LinkExtractor()
        print(le)
        for link in le.extract_links(response):
            print(link.url)

It is invoked from the following function:

from scrapy.crawler import CrawlerProcess


def _getLinksDriver(url):
    settings = {'USER_AGENT': agent}  # agent is a user-agent string defined elsewhere
    process = CrawlerProcess(settings)
    process.crawl(Crawler, url)
    process.start(stop_after_crawl=True)

For example, if I try _getLinksDriver("http://nus.edu.sg"),

the output is simply:

2019-06-11 11:42:22 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: scrapybot)
2019-06-11 11:42:22 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Linux-4.18.0-21-generic-x86_64-with-Ubuntu-18.04-bionic
2019-06-11 11:42:22 [scrapy.crawler] INFO: Overridden settings: {'LOG_LEVEL': 30, 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
<scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor object at 0x7fc45fbbac18>

However, if we navigate to the actual site, there are clearly links to follow.

Trying _getLinksDriver("https://blog.nus.edu.sg") gives:

2019-06-11 11:38:20 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: scrapybot)
2019-06-11 11:38:20 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Linux-4.18.0-21-generic-x86_64-with-Ubuntu-18.04-bionic
2019-06-11 11:38:20 [scrapy.crawler] INFO: Overridden settings: {'LOG_LEVEL': 30, 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'}
<scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor object at 0x7fc4605bcb38>
https://blog.nus.edu.sg#main
https://blog.nus.edu.sg/
http://blog.nus.edu.sg/
https://wiki.nus.edu.sg/display/cit/Blog.nus+Common+Queries
http://help.edublogs.org/user-guide/
https://wiki.nus.edu.sg/display/cit/Blog.nus+Terms+of+Use
https://wiki.nus.edu.sg/display/cit/Blog.nus+Disclaimers
https://blog.nus.edu.sg/wp-signup.php
http://twitter.com/nuscit
http://facebook.com/nuscit
https://blog.nus.edu.sg#scroll-top
http://cyberchimps.com/responsive-theme/
http://wordpress.org/
http://cit.nus.edu.sg/
http://www.nus.edu.sg/
http://www.statcounter.com/wordpress.org/
https://blog.nus.edu.sg#wp-toolbar
https://blog.nus.edu.sg/wp-login.php?redirect_to=https%3A%2F%2Fblog.nus.edu.sg%2F

which is exactly what I expect to see.

How can I make this work for every website?

Thanks.

In case it helps, here are my Scrapy and Python versions along with all dependencies:

2019-06-11 11:42:12 [scrapy.utils.log] INFO: Scrapy 1.6.0 started (bot: scrapybot)
2019-06-11 11:42:12 [scrapy.utils.log] INFO: Versions: lxml 4.3.3.0, libxml2 2.9.9, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 19.2.0, Python 3.6.7 (default, Oct 22 2018, 11:32:17) - [GCC 8.2.0], pyOpenSSL 19.0.0 (OpenSSL 1.1.1b  26 Feb 2019), cryptography 2.6.1, Platform Linux-4.18.0-21-generic-x86_64-with-Ubuntu-18.04-bionic

【Discussion】:

    Tags: python-3.x scrapy web-crawler


    【Solution 1】:

    The simple reason your code does not work for that website (http://nus.edu.sg/) is Incapsula.

    If you look at response.body, you will find something like this:

    Request unsuccessful. Incapsula incident ID: 432001820008199878-98367043303115621
    
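A quick way to confirm this from inside the spider is to scan the response body for the Incapsula markers before trying to extract links. This is only a sketch; the `looks_blocked` helper is my own name, not part of Scrapy:

```python
def looks_blocked(body: bytes) -> bool:
    """Heuristic: Incapsula block pages contain one of these markers."""
    markers = (b"Incapsula incident ID", b"_Incapsula_Resource")
    return any(marker in body for marker in markers)

# Inside parse(), one could bail out early:
#     if looks_blocked(response.body):
#         self.logger.warning("Blocked by Incapsula: %s", response.url)
#         return

print(looks_blocked(b"Request unsuccessful. Incapsula incident ID: 123"))  # True
print(looks_blocked(b"<html><a href='/x'>x</a></html>"))                   # False
```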

    【Discussion】:

      【Solution 2】:

      Just an addition to gangabas's answer (so please accept his):

      As gangabas mentioned, http://nus.edu.sg is protected against bots by Incapsula. What Scrapy receives is this (curl 'http://nus.edu.sg/'):

      <html>
      <head>
      <META NAME="robots" CONTENT="noindex,nofollow">
      <script src="/_Incapsula_Resource?SWJIYLWA=5074a744e2e3d891814e9a2dace20bd4,719d34d31c8e3a6e6fffd425f7e032f3">
      </script>
      <body>
      </body></html>
      

      The actual content is loaded via JavaScript, which Scrapy does not execute. If you need to execute JavaScript, you can use scrapy-splash: https://github.com/scrapy-plugins/scrapy-splash
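For reference, wiring scrapy-splash into a project is mostly configuration. A rough sketch, assuming a Splash instance is running on localhost:8050 (the values below follow the plugin's README defaults; nothing here comes from the question itself):

```python
# settings.py fragment: point Scrapy at a running Splash instance
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# In the spider, requests then go through Splash so JavaScript runs:
#     from scrapy_splash import SplashRequest
#
#     def start_requests(self):
#         for url in self.start_urls:
#             yield SplashRequest(url, self.parse, args={'wait': 2})
```

Even with Splash, anti-bot services may still block the request, so this is not guaranteed to work against Incapsula.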

      Unfortunately, this is more complicated (but that is exactly what the site owner wants). If you want to be friendly, do not crawl those pages at all (https://blog.scrapinghub.com/2016/08/25/how-to-crawl-the-web-politely-with-scrapy).

      【Discussion】:
