【Title】: Scrapy: crawl 1 level deep on offsite links
【Posted】: 2016-03-15 09:15:01
【Question】:

In Scrapy, how can I make the crawl go only 1 level deep for all links outside the allowed domains? During a crawl I want to verify that every outbound link on the site works and isn't a 404, but I don't want Scrapy to crawl the entire site behind a non-allowed domain. I already handle 404s on the allowed domains. I know I can set DEPTH_LIMIT to 1, but that would also apply to the allowed domains.

My code:

from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor 

from smcrawl.items import Website
import smcrawl.util 

def iterate(lists):
    # Return the first element of a list, or None if it is empty
    for a in lists:
        return a

class WalmartSpider(CrawlSpider):
    handle_httpstatus_list = [200, 302, 404, 500, 502]
    name = "surveymonkeycouk"
    allowed_domains = ["surveymonkey.co.uk", "surveymonkey.com"]    

    start_urls = ['https://www.surveymonkey.co.uk/']    

    rules = (
        Rule(
            LinkExtractor(
                allow=(),
                deny=(),
                process_value=smcrawl.util.trim),
                callback="parse_items",
                follow=True,),
    )
    # process_links=lambda links: [link for link in links if not link.nofollow]
    # would filter out rel="nofollow" links

    # CrawlSpider does not run the callback on start URLs by default,
    # so parse them here as well (and return the items instead of discarding them)
    def parse_start_url(self, response):
        return self.parse_items(response)

    def parse_items(self, response):
        sites = response.selector.xpath('//html')
        items = []

        for site in sites:
            # Fields recorded for every handled response status
            item = Website()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['canonical'] = site.xpath('//head/link[@rel="canonical"]/@href').extract()
            item['robots'] = site.xpath('//meta[@name="robots"]/@content').extract()
            item['original_url'] = response.meta.get('redirect_urls', [response.url])[0]
            item['description'] = site.xpath('//meta[@name="description"]/@content').extract()
            item['redirect'] = response.status

            # Non-404 responses also get their <title> and first <h1>
            if response.status != 404:
                title = iterate(site.xpath('/html/head/title/text()').extract())
                item['title'] = title.strip() if title else title
                h1 = iterate(site.xpath('//h1/text()').extract())
                item['h1'] = h1.strip() if h1 else h1

            items.append(item)

        return items

【Comments】:

  • Can you share your code?
  • @eLRuLL I've added the code.

Tags: scrapy scrapy-spider


【Solution 1】:

Well, one thing you can do is avoid using allowed_domains, so that no offsite requests get filtered.

But to make it more interesting, you can create your own OffsiteMiddleware, something like this:

from scrapy.spidermiddlewares.offsite import OffsiteMiddleware
from scrapy.utils.httpobj import urlparse_cached

class MyOffsiteMiddleware(OffsiteMiddleware):
    offsite_domains = set()

    def should_follow(self, request, spider):
        regex = self.host_regex
        host = urlparse_cached(request).hostname or ''
        if host in self.offsite_domains:
            return False
        if not bool(regex.search(host)):
            self.offsite_domains.add(host)
        return True

I haven't tested it, but it should work. Remember that you should disable the default middleware and enable yours in the settings:

SPIDER_MIDDLEWARES = {
    'myproject.middlewares.MyOffsiteMiddleware': 543,
    'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': None,
}
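The decision logic of the middleware above can be sketched and exercised outside Scrapy: hosts matching the allowed-domains pattern are always followed, while any given offsite host is followed once and rejected on every later request. The helper names below are hypothetical, and the regex construction is an assumption that mirrors what Scrapy's OffsiteMiddleware builds internally:

```python
import re
from urllib.parse import urlparse

def host_regex(allowed_domains):
    # Approximation of the pattern Scrapy's OffsiteMiddleware compiles
    # from allowed_domains (assumption, not copied from Scrapy's source)
    pattern = r'^(.*\.)?(%s)$' % '|'.join(re.escape(d) for d in allowed_domains)
    return re.compile(pattern)

def should_follow(url, regex, offsite_domains):
    # Always follow allowed hosts; follow a given offsite host only once
    host = urlparse(url).hostname or ''
    if host in offsite_domains:
        return False
    if not regex.search(host):
        offsite_domains.add(host)
    return True

seen = set()
rx = host_regex(['surveymonkey.co.uk', 'surveymonkey.com'])
should_follow('https://www.surveymonkey.co.uk/', rx, seen)   # allowed domain
should_follow('https://twitter.com/surveymonkey', rx, seen)  # first hit on an offsite host
should_follow('https://twitter.com/other', rx, seen)         # same offsite host again
```

Note that this gives you one *request* per offsite host, not one level of depth per offsite host, which is why the discussion below follows up on that point.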

【Discussion】:

  • Can you define offsite_requests?
  • self.offsite_domains. How would I go about crawling all of the start domains fully, but external domains only 1 level deep?
  • I don't understand what you mean. Did you try the code above?
【Solution 2】:

I've marked Scrapy set depth limit per allowed_domains as the answer. It's a slightly different solution than the one I was looking for, but with a whitelist of URLs I'm willing to crawl, the end result is the same. Thanks!
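For reference, the whitelist idea the linked answer relies on reduces to a small check: fetch every link (so broken offsite URLs still surface as 404s), but only extract further links from pages whose host is on the whitelist, so offsite pages become leaves of the crawl. A minimal sketch, assuming the domains from the question (the helper name is hypothetical):

```python
from urllib.parse import urlparse

WHITELIST = {'surveymonkey.co.uk', 'surveymonkey.com'}

def follow_links_from(page_url, whitelist=WHITELIST):
    # Extract links only from pages on a whitelisted domain; offsite
    # pages are still fetched and status-checked, but their links are
    # never followed, which caps offsite crawling at one level deep
    host = urlparse(page_url).hostname or ''
    return any(host == d or host.endswith('.' + d) for d in whitelist)
```

In a spider this check would sit in the link-extraction step (e.g. a Rule's process_links or a custom callback), gating which pages contribute new requests.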

【Discussion】:
