【Question Title】: Scrapy and Splash don't crawl
【Posted】: 2016-01-28 21:09:14
【Question】:

I've built a crawler. Splash itself works fine (I tested it in the browser), but Scrapy doesn't crawl or extract any items.

My actual code is:

# -*- coding: utf-8 -*-
import scrapy
import json
from scrapy.http.headers import Headers
from scrapy.spiders import CrawlSpider, Rule
from oddsportal.items import OddsportalItem


class OddbotSpider(CrawlSpider):
    name = "oddbot"
    allowed_domains = ["oddsportal.com"]
    start_urls = (
        'http://www.oddsportal.com/matches/tennis/',
    )

    def start_requests(self):
        # Route every start URL through Splash's render.html endpoint,
        # waiting 5.5s for the JavaScript-rendered content
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 5.5}
                }
            })

    def parse(self, response):
        item = OddsportalItem()
        print response.body
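
(Note: requests carrying meta['splash'] are only rerouted through Splash when the scrapy-splash downloader middleware is enabled. A minimal settings.py sketch, assuming the scrapy_splash package is installed and a Splash instance runs on the default local port:)

# settings.py -- minimal scrapy-splash wiring (assumes Splash at localhost:8050)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Deduplicate on the underlying URL rather than the Splash endpoint URL
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'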

【Comments】:

  • What is the output of response.body?
  • print response.body?
  • It prints nothing; I've edited the question with my actual code.

Tags: python scrapy web-crawler splash-screen


【Solution 1】:

Try importing scrapy_splash and issuing the request through SplashRequest instead:

from scrapy_splash import SplashRequest

# e.g. reuse the wait time from the question's meta-based request
yield SplashRequest(url, self.parse, endpoint='render.html', args={'wait': 5.5})
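
A minimal sketch of the question's spider rewritten this way (assuming scrapy-splash is wired into settings.py as in the note above; since no crawl Rules are involved, a plain scrapy.Spider is enough):

# -*- coding: utf-8 -*-
import scrapy
from scrapy_splash import SplashRequest
from oddsportal.items import OddsportalItem


class OddbotSpider(scrapy.Spider):
    name = "oddbot"
    allowed_domains = ["oddsportal.com"]
    start_urls = ['http://www.oddsportal.com/matches/tennis/']

    def start_requests(self):
        # SplashRequest fills in the splash meta for us
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                endpoint='render.html',
                                args={'wait': 5.5})

    def parse(self, response):
        # response.body now holds the JavaScript-rendered HTML
        item = OddsportalItem()
        print response.body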

【Discussion】:

    【Solution 2】:

    You should override CrawlSpider's _requests_to_follow method so that link extraction also runs on Splash responses (by default it only accepts plain HtmlResponse objects):

    from scrapy.http import HtmlResponse
    from scrapy_splash import SplashJsonResponse, SplashTextResponse

    def _requests_to_follow(self, response):
        # Also accept Splash responses; the stock CrawlSpider bails out
        # on anything that is not a plain HtmlResponse
        if not isinstance(response, (HtmlResponse, SplashJsonResponse, SplashTextResponse)):
            return
        seen = set()
        for n, rule in enumerate(self._rules):
            links = [lnk for lnk in rule.link_extractor.extract_links(response)
                     if lnk not in seen]
            if links and rule.process_links:
                links = rule.process_links(links)
            for link in links:
                seen.add(link)
                r = self._build_request(n, link)
                yield rule.process_request(r)
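
    A sketch of how this override might sit in a complete spider: the Rule's process_request hook tags every followed link for Splash. The helper name use_splash is hypothetical, and _requests_to_follow/_build_request are Scrapy internals that can change between versions:

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor
    from scrapy.http import HtmlResponse
    from scrapy_splash import SplashJsonResponse, SplashTextResponse


    class OddbotCrawlSpider(CrawlSpider):
        name = "oddbot_crawl"
        allowed_domains = ["oddsportal.com"]
        start_urls = ['http://www.oddsportal.com/matches/tennis/']

        rules = (
            Rule(LinkExtractor(), callback='parse_item', follow=True,
                 process_request='use_splash'),
        )

        def use_splash(self, request):
            # Tag each followed request so SplashMiddleware reroutes it
            request.meta['splash'] = {
                'endpoint': 'render.html',
                'args': {'wait': 5.5},
            }
            return request

        def _requests_to_follow(self, response):
            # The override from above: follow links out of Splash responses too
            if not isinstance(response, (HtmlResponse, SplashJsonResponse,
                                         SplashTextResponse)):
                return
            seen = set()
            for n, rule in enumerate(self._rules):
                links = [lnk for lnk in rule.link_extractor.extract_links(response)
                         if lnk not in seen]
                if links and rule.process_links:
                    links = rule.process_links(links)
                for link in links:
                    seen.add(link)
                    r = self._build_request(n, link)
                    yield rule.process_request(r)

        def parse_item(self, response):
            print response.body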
    

    【Discussion】:
