[Title]: Scrapy bypassing start_urls
[Posted]: 2016-05-08 18:35:31
[Question]:

When I run this spider, Scrapy tells me the page crawled is 'http://192.168.59.103:8050/render.html' (the Splash rendering endpoint defined in the "meta" parameter in start_requests). That is, of course, the URL I am passing the start_urls *to*, not the pages I actually want to scrape. I suspect the problem is in how I pass the URLs from start_urls through start_requests on to parse, but I can't pin down the exact issue.

Here is my settings file as well.

Thanks in advance.

# -*- coding: utf-8 -*-
# scrapy crawl ia_check -o IA_OUT.csv -t csv

import scrapy
from scrapy.http import Request
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider, Rule

from ia_check.items import Check_Item

from datetime import datetime
import ia_check

class CheckSpider(CrawlSpider):
    name = "ia_check"
    handle_httpstatus_list = [404,429,503]

    start_urls = [
    "http://www.amazon.com/Easy-Smart-Touch-Action-Games/dp/B00PRH5UJW",
    "http://www.amazon.com/mobile9-LAZYtube-MP4-Video-Downloader/dp/B00KFITEV8",
    "http://www.amazon.com/Forgress-Storyteller-Audiobook-Pro/dp/B00J0T73XO",
    "http://www.amazon.com/cgt-MP3-Downloader/dp/B00O65Z0RS",
    "http://www.amazon.com/DoomsDayBunny-Squelch-Free-Music-Downloader/dp/B00N3DDDRI"
    ]

    def start_requests(self):
        for url in self.start_urls:
            yield scrapy.Request(url, self.parse, meta={
                'splash': {
                    'endpoint': 'render.html',
                    'args': {'wait': 1}
                }
            })

    def parse(self, response):
        ResultsDict = Check_Item()
        Select = Selector(response).xpath

        ResultsDict['title'] = Select(".//*[@class='h1']/text()|.//*[@id='btAsinTitle']/text()").extract()
        ResultsDict['application_url'] = response.url
        return ResultsDict
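For context on the symptom described above: with the older meta-based scrapyjs-style integration, the downloader middleware rewrites the request so that the URL actually fetched is the Splash endpoint, while the real target page travels inside the Splash arguments. A minimal sketch of that substitution (the helper function and names here are illustrative, not the middleware's actual internals):

```python
# Illustrative sketch of the URL substitution performed by the
# meta-based Splash integration (hypothetical helper, not scrapyjs API).
SPLASH_BASE = 'http://192.168.59.103:8050'

def rewrite_for_splash(target_url, endpoint='render.html', wait=1):
    # The page we actually want ends up inside the Splash arguments...
    args = {'url': target_url, 'wait': wait}
    # ...while the URL the downloader fetches becomes the endpoint,
    # which is why response.url reports the endpoint, not the page.
    fetch_url = '%s/%s' % (SPLASH_BASE.rstrip('/'), endpoint)
    return fetch_url, args
```

This is exactly the behaviour the question observes: `response.url` reflects the URL the downloader fetched, which after rewriting is the endpoint.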

[Comments]:

Tags: python python-2.7 scrapy


[Solution 1]:

I suggest you upgrade to the latest scrapy-splash plugin (previously called scrapyjs).

It provides a handy scrapy_splash.SplashRequest utility that "fixes" the URL back to the original remote host instead of the Splash endpoint.
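Under the hood, SplashRequest targets Splash's HTTP API: the render.html endpoint accepts the real page URL as an argument (for example in a JSON POST body), and the plugin then maps response.url back to the original host. A rough sketch of the payload shape, assuming the default SPLASH_URL (the helper name is mine, not part of the plugin):

```python
import json

SPLASH_URL = 'http://localhost:8050'

def build_render_payload(page_url, wait=1):
    # render.html takes the target page as the 'url' argument;
    # extra rendering options such as 'wait' ride alongside it.
    endpoint = SPLASH_URL.rstrip('/') + '/render.html'
    body = json.dumps({'url': page_url, 'wait': wait})
    return endpoint, body
```

Because the target URL is carried as data rather than as the request URL, the plugin can restore it on the response side.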

Here is an example spider similar to yours:

    import scrapy
    from scrapy_splash import SplashRequest
    
    
    class CheckSpider(scrapy.Spider):
        name = "scrapy-splash-example"
        handle_httpstatus_list = [404,429,503]
    
        start_urls = [
            "http://rads.stackoverflow.com/amzn/click/B00PRH5UJW",
            "http://rads.stackoverflow.com/amzn/click/B00KFITEV8",
            "http://rads.stackoverflow.com/amzn/click/B00J0T73XO",
            "http://rads.stackoverflow.com/amzn/click/B00O65Z0RS",
            "http://rads.stackoverflow.com/amzn/click/B00N3DDDRI"
        ]
    
        def start_requests(self):
            for url in self.start_urls:
                yield SplashRequest(url,
                                    callback=self.parse,
                                    args={
                                        'wait': 1,
                                    })
    
        def parse(self, response):
            self.logger.debug("Response: status=%d; url=%s" % (response.status, response.url))
    

    settings.py

    # -*- coding: utf-8 -*-
    
    # Scrapy settings for splashtst project
    #
    # For simplicity, this file contains only settings considered important or
    # commonly used. You can find more settings consulting the documentation:
    #
    #     http://doc.scrapy.org/en/latest/topics/settings.html
    #     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
    #     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
    
    BOT_NAME = 'splashtst'
    
    SPIDER_MODULES = ['splashtst.spiders']
    NEWSPIDER_MODULE = 'splashtst.spiders'
    
    # Splash stuff
    SPLASH_URL = 'http://localhost:8050'
    DOWNLOADER_MIDDLEWARES = {
        'scrapy_splash.SplashCookiesMiddleware': 723,
        'scrapy_splash.SplashMiddleware': 725,
        'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
    }
    
    SPIDER_MIDDLEWARES = {
        'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
    }
    DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
    HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
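A note on the DOWNLOADER_MIDDLEWARES numbers above: Scrapy sorts middlewares by these priority values in ascending order, with lower numbers sitting closer to the engine on the request path, so the two Splash middlewares (723, 725) run before HttpCompressionMiddleware (810). A quick sketch of how that ordering falls out of the dict:

```python
# Scrapy orders middlewares by ascending priority value; lower numbers
# sit closer to the engine on the request path.
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

# Sort middleware paths by their priority values.
order = sorted(DOWNLOADER_MIDDLEWARES, key=DOWNLOADER_MIDDLEWARES.get)
```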
    

Check the console log you get, especially the URLs:

    $ scrapy crawl scrapy-splash-example
    2016-05-09 12:46:05 [scrapy] INFO: Scrapy 1.0.6 started (bot: splashtst)
    2016-05-09 12:46:05 [scrapy] INFO: Optional features available: ssl, http11
    2016-05-09 12:46:05 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'splashtst.spiders', 'SPIDER_MODULES': ['splashtst.spiders'], 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter', 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage', 'BOT_NAME': 'splashtst'}
    2016-05-09 12:46:05 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
    2016-05-09 12:46:05 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, RedirectMiddleware, CookiesMiddleware, SplashCookiesMiddleware, SplashMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2016-05-09 12:46:05 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, SplashDeduplicateArgsMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2016-05-09 12:46:05 [scrapy] INFO: Enabled item pipelines: 
    2016-05-09 12:46:05 [scrapy] INFO: Spider opened
    2016-05-09 12:46:05 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2016-05-09 12:46:05 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
    2016-05-09 12:46:07 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00O65Z0RS via http://localhost:8050/render.html> (referer: None)
    2016-05-09 12:46:07 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00O65Z0RS
    2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00KFITEV8 via http://localhost:8050/render.html> (referer: None)
    2016-05-09 12:46:12 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00KFITEV8
    2016-05-09 12:46:12 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00PRH5UJW via http://localhost:8050/render.html> (referer: None)
    2016-05-09 12:46:13 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00PRH5UJW
    2016-05-09 12:46:16 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00N3DDDRI via http://localhost:8050/render.html> (referer: None)
    2016-05-09 12:46:17 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00N3DDDRI
    2016-05-09 12:46:18 [scrapy] DEBUG: Crawled (200) <GET http://rads.stackoverflow.com/amzn/click/B00J0T73XO via http://localhost:8050/render.html> (referer: None)
    2016-05-09 12:46:18 [scrapy-splash-example] DEBUG: Response: status=200; url=http://rads.stackoverflow.com/amzn/click/B00J0T73XO
    2016-05-09 12:46:18 [scrapy] INFO: Closing spider (finished)
    2016-05-09 12:46:18 [scrapy] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 2690,
     'downloader/request_count': 5,
     'downloader/request_method_count/POST': 5,
     'downloader/response_bytes': 1794947,
     'downloader/response_count': 5,
     'downloader/response_status_count/200': 5,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2016, 5, 9, 10, 46, 18, 631501),
     'log_count/DEBUG': 11,
     'log_count/INFO': 7,
     'response_received_count': 5,
     'scheduler/dequeued': 10,
     'scheduler/dequeued/memory': 10,
     'scheduler/enqueued': 10,
     'scheduler/enqueued/memory': 10,
     'splash/render.html/request_count': 5,
     'splash/render.html/response_count/200': 5,
     'start_time': datetime.datetime(2016, 5, 9, 10, 46, 5, 368693)}
    2016-05-09 12:46:18 [scrapy] INFO: Spider closed (finished)
    

[Discussion]:

    • Thanks. This helps a lot, but I'm still getting different output. Could you include your settings.py in the answer? Two sample lines from my output: [1]: 2016-05-09 12:34:24 [scrapy] DEBUG: Crawled (429) <GET http://www.amazon.com/lazipub-Free-Music-Download/dp/B00P9EZ174 via http://192.168.59.103:8050/render.html> (referer: None) [2]: 2016-05-09 12:34:25 [ia_check] DEBUG: Response: status=429; url=http://192.168.59.103:8050/render.html
    • I've added my settings.py; it's the standard configuration for scrapy-splash. What I haven't tested is non-200 status codes.
    • There are some differences between your settings file and the one provided in the scrapinghub blog/splash tutorial (which I now understand is outdated); unfortunately, the differences between scrapyjs and scrapy-splash are not well documented.
    • Are you running on a Linux machine?
    • @BenjaminJames, yes I am. What OS are you running yourself?