【Question Title】: Scrapy follow link and collect email
【Posted】: 2015-05-11 15:41:26
【Question Description】:

I need help saving emails with Scrapy. The column of the .csv file that should hold the collected emails comes out blank. Any help is greatly appreciated. The code is below:

# -*- coding: utf-8 -*-
import scrapy


# item class included here 
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
    "http://chicago.craigslist.org/search/vgm?"
    ]

    BASE_URL = 'http://chicago.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        item = DmozItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

【Question Discussion】:

    Tags: python web-scraping web-crawler scrapy


    【Solution 1】:

    To see an email on a craigslist item page, a user has to click the "reply" button, which fires a new request to a "reply/chi/vgm/…" URL. This is what you need to simulate in Scrapy: issue a new Request and parse the result in its callback:

    # -*- coding: utf-8 -*-
    import re
    import scrapy
    
    
    # item class included here
    class DmozItem(scrapy.Item):
        # define the fields for your item here like:
        link = scrapy.Field()
        attr = scrapy.Field()
    
    
    class DmozSpider(scrapy.Spider):
        name = "dmoz"
        allowed_domains = ["craigslist.org"]
        start_urls = [
        "http://chicago.craigslist.org/search/vgm?"
        ]
    
        BASE_URL = 'http://chicago.craigslist.org/'
    
        def parse(self, response):
            links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
            for link in links:
                absolute_url = self.BASE_URL + link
                yield scrapy.Request(absolute_url, callback=self.parse_attr)
    
        def parse_attr(self, response):
            match = re.search(r"(\w+)\.html", response.url)
            if match:
                item_id = match.group(1)
                url = self.BASE_URL + "reply/chi/vgm/" + item_id
    
                item = DmozItem()
                item["link"] = response.url
    
                return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)
    
        def parse_contact(self, response):
            item = response.meta['item']
            item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
            return item
    

    【Discussion】:
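
    The key step in `parse_attr` above is pulling the post id out of the listing URL with a regex and splicing it into the reply URL. A standalone sketch of just that extraction, using a made-up example URL in the format the spider assumes:

    ```python
    import re

    # Hypothetical listing URL in the "<id>.html" format the spider expects
    url = "http://chicago.craigslist.org/chc/vgm/4991334087.html"

    match = re.search(r"(\w+)\.html", url)
    if match:
        item_id = match.group(1)  # "4991334087"
        # Same concatenation the spider performs with its BASE_URL
        reply_url = "http://chicago.craigslist.org/" + "reply/chi/vgm/" + item_id
        print(reply_url)  # http://chicago.craigslist.org/reply/chi/vgm/4991334087
    ```

    If the URL does not match the pattern, `parse_attr` returns `None` and the item is silently dropped, so it is worth logging unmatched URLs while debugging.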
