【Question Title】: How to fix my Scrapy dictionary output format for CSV/JSON
【Posted】: 2016-07-01 18:01:35
【Question】:

My code is below. I want to export the results to CSV, but Scrapy produces a single dictionary with two keys, with all the values lumped together under each key. The output doesn't look right. How can I fix this? Can it be done with a pipeline, an item loader, or something similar?

Many thanks.

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
from gumtree1.items import GumtreeItems

class AdItemLoader(ItemLoader):
    jobs_in = MapCompose(unicode.strip)

class GumtreeEasySpider(CrawlSpider):
    name = 'gumtree_easy'
    allowed_domains = ['gumtree.com.au']
    start_urls = ['http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//a[@class="rs-paginator-btn next"]'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        loader = AdItemLoader(item=GumtreeItems(), response=response)
        loader.add_xpath('jobs','//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()')
        loader.add_xpath('location', '//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()')
        yield loader.load_item() 

Result:

2016-03-16 01:51:32 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-5/c9302?ad=offering>
{'jobs': [u'Technical Account Manager',
          u'Service & Maintenance Advisor',
          u'we are hiring motorbike driver delivery leaflet.Strat NOW(BE...',
          u'Casual Gardner/landscape maintenance labourer',
          u'Seeking for Experienced Builders Cleaners with white card',
          u'Babysitter / home help for approx 2 weeks',
          u'Toothing brickwork | Dapto',
          u'EXPERIENCED CHEF',
          u'ChildCare Trainee Wanted',
          u'Skilled Pipelayers & Drainer- Sydney Region',
          u'Casual staff required for Royal Easter Show',
          u'Fencing contractor',
          u'Excavator & Loader Operator',
          u'***EXPERIENCED STRAWBERRY AND RASPBERRY PICKERS WANTED***',
          u'Kitchenhand required for Indian restaurant',
          u'Taxi Driver Wanted',
          u'Full time nanny/sitter',
          u'Kitchen hand and meal packing',
          u'Depot Assistant Required',
          u'hairdresser Junior apprentice required for salon in Randwick',
          u'Insulation Installers Required',
          u'The Knox is seeking a new apprentice',
          u'Medical Receptionist Needed in Bankstown Area - Night Shifts',
          u'On Call Easy Work, Do you live in Berala, Lidcombe or Auburn...',
          u'Looking for farm jon'],
 'location': [u'Melbourne City',
              u'Eastern Suburbs',
              u'Rockdale Area',
              u'Logan Area',
              u'Greater Dandenong',
              u'Brisbane North East',
              u'Kiama Area',
              u'Byron Area',
              u'Dardanup Area',
              u'Blacktown Area',
              u'Auburn Area',
              u'Kingston Area',
              u'Inner Sydney',
              u'Northern Midlands',
              u'Inner Sydney',
              u'Hume Area',
              u'Maribyrnong Area',
              u'Perth City',
              u'Brisbane South East',
              u'Eastern Suburbs',
              u'Gold Coast South',
              u'North Canberra',
              u'Bankstown Area',
              u'Auburn Area',
              u'Gingin Area']}

It should look like this instead, right? Each job and location as its own dictionary? The version below writes the CSV correctly, with Jobs and Location in separate cells, but I suspect a for loop with zip is not the best way to do it.

import scrapy
from gumtree1.items import GumtreeItems

class AussieGum1Spider(scrapy.Spider):
    name = "aussie_gum1"
    allowed_domains = ["gumtree.com.au"]
    start_urls = (
        'http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering',
    )

    def parse(self, response):
        item = GumtreeItems()
        jobs = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()').extract()
        location = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()').extract()
        for j, l in zip(jobs, location):
            item['jobs'] = j.strip()
            item['location'] = l
            yield item

Partial results below.

2016-03-16 02:20:46 [scrapy] DEBUG: Crawled (200) <GET http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering> (referer: http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering)
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Live In Au pair-Urgent', 'location': u'Wanneroo Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'live in carer', 'location': u'Fraser Coast'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Mental Health Nurse', 'location': u'Perth Region'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Experienced NBN pit and pipe installers/node and cabinet wor...',
 'location': u'Marrickville Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Delivery Driver / Pizza Maker Job - Dominos Pizza',
 'location': u'Hurstville Area'}
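One concrete weakness of pairing two flat lists (beyond style) is that `zip` silently truncates to the shorter list, so an ad with a missing location would drop a job without any warning. A minimal stdlib-only sketch with hypothetical data:

```python
from itertools import zip_longest

# Hypothetical scraped lists: one ad is missing its location element.
jobs = ['Technical Account Manager', 'EXPERIENCED CHEF', 'Taxi Driver Wanted']
locations = ['Melbourne City', 'Eastern Suburbs']  # one entry short

# zip silently drops the last job
paired = list(zip(jobs, locations))
print(len(paired))  # 2 -- 'Taxi Driver Wanted' is gone

# zip_longest keeps it, padding the missing location
padded = list(zip_longest(jobs, locations, fillvalue=''))
print(len(padded))  # 3
```

This is why extracting job and location per parent row (as in the accepted answer) is safer than zipping two independently extracted lists.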

Many thanks.

【Question Comments】:

  • Welcome to Stack Overflow! It's best if you can post separate questions rather than combining several into one. That way it's easier for people to answer, and easier for others searching for at least one of your issues. Thanks!
  • @Hatchet Thank you very much for the feedback. I'll edit my question.

Tags: python web-scraping scrapy screen-scraping scrapy-spider


【Solution 1】:

Iterate over the parent selector of each item and extract the job and location relative to it:

rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
for row in rows:
    item = GumtreeItems()
    item['jobs'] = row.xpath('.//*[@itemprop="name"]/text()').extract_first().strip()
    item['location'] = row.xpath('.//*[@class="rs-ad-location-area"]/text()').extract_first().strip()
    yield item
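With one item yielded per row as above, Scrapy's built-in CSV feed export (e.g. `scrapy crawl gumtree_easy -o jobs.csv`) writes one line per item. The effect can be sketched with the stdlib `csv` module on hypothetical data:

```python
import csv
import io

# Hypothetical per-row items, shaped like those yielded by the loop above
items = [
    {'jobs': 'Technical Account Manager', 'location': 'Melbourne City'},
    {'jobs': 'EXPERIENCED CHEF', 'location': 'Eastern Suburbs'},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['jobs', 'location'])
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
# jobs,location
# Technical Account Manager,Melbourne City
# EXPERIENCED CHEF,Eastern Suburbs
```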

【Comments】:

  • Thanks @alecxe. Is there any other way to do this besides a for loop? Also, what if some items are not inside this parent selector — do I need another for loop over a different parent selector?
  • Hi @alecxe. I tried the code but it doesn't work. It raises AttributeError: unicode object has no attribute xpath. I guess it's related to the .extract() at the end of the rows line, but the code doesn't work if I remove it either. Any help is appreciated. Thanks.
  • Try this: rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
  • @Ming Yes, the extract() had to be removed. See the updated answer.
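A separate pitfall with chains like `extract_first().strip()` in the answer above: `extract_first()` returns `None` when the XPath matches nothing, so `.strip()` raises AttributeError on pages where a field is missing. A defensive pure-Python sketch (the helper name is my own, not from the answer):

```python
def safe_strip(value):
    # extract_first() may return None when nothing matches;
    # coalesce to '' before stripping to avoid AttributeError.
    return (value or '').strip()

print(safe_strip('  Technical Account Manager '))  # 'Technical Account Manager'
print(safe_strip(None))  # ''
```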
【Solution 2】:

To be honest, using a for loop is the right way, but you could also handle it in a pipeline:

from scrapy.http import Response
from gumtree1.items import GumtreeItems, CustomItem
from scrapy.exceptions import DropItem


class CustomPipeline(object):

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_item(self, item, spider):
        if isinstance(item, GumtreeItems):
            for i, jobs in enumerate(item['jobs']):
                self.crawler.engine.scraper._process_spidermw_output(
                    CustomItem(jobs=jobs, location=item['location'][i]), None, Response(''), spider)
            raise DropItem("main item dropped")
        return item

Also add the custom item:

class CustomItem(scrapy.Item):
    jobs = scrapy.Field()
    location = scrapy.Field()
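For this pipeline to run at all, it also has to be registered in the project's settings; assuming the pipeline lives in `gumtree1/pipelines.py` (a guess about the project layout), something like:

```python
# settings.py -- module path and priority value are assumptions
ITEM_PIPELINES = {
    'gumtree1.pipelines.CustomPipeline': 300,
}
```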

Hope this helps, though I still think you should just use the loop.

【Comments】:

  • Thanks for the feedback. Good to know the for loop is the best approach.
  • That's what enumerate is for: you don't have to rely on things like zip, and you can index into multiple lists that you know are aligned.