【Question Title】: Close a scrapy spider when a condition is met and return the output object
【Posted】: 2016-07-12 14:15:53
【Question Description】:

I have written a spider using scrapy to fetch reviews from pages like the one linked here. I only want product reviews up to a certain date (2 July 2016 in this case). As soon as a review's date is earlier than the given date, I want to close my spider and return the list of items. The spider works fine, but my problem is that I cannot close it when the condition is met: if I raise an exception, the spider closes without returning anything. Please suggest the best way to close the spider manually. Here is my code:

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from scrapy import Selector
from tars.items import FlipkartProductReviewsItem
import re as r
import unicodedata
from datetime import datetime 

class Freviewspider(CrawlSpider):
    name = "frs"
    allowed_domains = ["flipkart.com"]
    def __init__(self, *args, **kwargs):
        super(Freviewspider, self).__init__(*args, **kwargs)
        self.start_urls = [kwargs.get('start_url')]


    rules = (
        Rule(LinkExtractor(restrict_xpaths=('//a[@class="nav_bar_next_prev"]',)),
             callback="parse_start_url", follow=True),
    )


    def parse_start_url(self, response):

        hxs = Selector(response)
        titles = hxs.xpath('//div[@class="fclear fk-review fk-position-relative line "]')

        items = []

        for i in titles:

            item = FlipkartProductReviewsItem()

            #x-paths:

            title_xpath = "div[2]/div[1]/strong/text()"
            review_xpath = "div[2]/p/span/text()"
            date_xpath = "div[1]/div[3]/text()"



            #field-values-extraction:

            item["date"] = (''.join(i.xpath(date_xpath).extract())).replace('\n ', '')
            item["title"] = (''.join(i.xpath(title_xpath).extract())).replace('\n ', '')

            review_list = i.xpath(review_xpath).extract()
            temp_list = []
            for element in review_list:
                temp_list.append(element.replace('\n ', '').replace('\n', ''))

            item["review"] = ' '.join(temp_list)

            # parse the scraped date (note the trailing space in the format string)
            review_date = datetime.strptime(item["date"], '%d %b %Y ')
            cutoff_date = datetime.strptime('02 Jul 2016 ', '%d %b %Y ')
            if review_date > cutoff_date:
                items.append(item)
            else:
                # break only exits this page's loop; the spider keeps crawling
                break

        return items
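The date comparison at the heart of the loop can be checked in isolation with the standard library alone (the sample date string is hypothetical, but uses the same trailing-space format the spider scrapes):

```python
from datetime import datetime

# The scraped dates end in a trailing space, so the format string does too.
date_str = '12 Jul 2016 '
cutoff = datetime.strptime('02 Jul 2016 ', '%d %b %Y ')
review_date = datetime.strptime(date_str, '%d %b %Y ')

# A review is kept only when it is strictly newer than the cutoff.
keep = review_date > cutoff
```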

【Discussion】:

    Tags: python scrapy web-crawler screen-scraping


    【Solution 1】:

    To force the spider to close, you can raise the CloseSpider exception, as described here in the scrapy docs. Just be sure to return/yield your items before raising the exception.
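A minimal sketch of that pattern: yield each item as soon as it is built, and raise CloseSpider at the first too-old review. Here `parse_reviews` and the `(date_str, title)` row tuples are hypothetical stand-ins for the spider's parsed review blocks, and the `CloseSpider` import is stubbed so the sketch runs even without Scrapy installed:

```python
from datetime import datetime

try:
    from scrapy.exceptions import CloseSpider
except ImportError:
    # minimal stand-in with the same `reason` attribute, for running without Scrapy
    class CloseSpider(Exception):
        def __init__(self, reason="cancelled"):
            super().__init__(reason)
            self.reason = reason

CUTOFF = datetime(2016, 7, 2)

def parse_reviews(rows):
    """Yield reviews newer than CUTOFF; stop the spider at the first older one.

    Each row is a hypothetical (date_str, title) pair standing in for a
    parsed review block from the page.
    """
    for date_str, title in rows:
        review_date = datetime.strptime(date_str.strip(), '%d %b %Y')
        if review_date <= CUTOFF:
            # Items yielded so far were already handed to the engine,
            # so nothing is lost when the exception fires.
            raise CloseSpider(reason='reached cutoff date')
        yield {"date": review_date, "title": title}
```

Because `yield` hands each item to the engine immediately, raising the exception mid-page discards nothing, unlike building up an `items` list and returning it at the end.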

    【Discussion】:
