[Question Title]: Scrapy and Selenium StaleElementReferenceException
[Posted]: 2016-04-28 07:42:38
[Question Description]:

There are several clickable elements on the page, and I am trying to scrape some of the pages behind them, but I get this error and the spider closes after the first click:

StaleElementReferenceException: Message: Element not found in the cache - perhaps the page has changed since it was looked up

For now I am just trying to open the pages to capture the new URLs. Here is my code:

from scrapy import signals
from scrapy.http import TextResponse
from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.xlib.pydispatch import dispatcher

from MySpider.items import MyItem

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

import time

class MySpider(Spider):
    name = "myspider"
    allowed_domains = ["http://example.com"]
    base_url = 'http://example.com'
    start_urls = ["http://example.com/Page.aspx",]

    def __init__(self):
        self.driver = webdriver.Firefox()
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        self.driver.close()

    def parse(self, response):

        self.driver.get(response.url)
        item = MyItem()

        links = self.driver.find_elements_by_xpath("//input[@class='GetData']")

        for button in links:
            button.click()
            time.sleep(5)

            source = self.driver.page_source 
            sel = Selector(text=source) # create a Selector object

            item['url'] = self.driver.current_url

            print '\n\nURL\n', item['url'], '\n'
            yield item

[Question Discussion]:

    Tags: python-2.7 selenium scrapy scrapy-spider


    [Solution 1]:

    The link elements belong to the first page. Once you open a new page, those element references become stale.
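The mechanics can be sketched without a browser. `FakePage` and `FakeElement` below are illustrative stand-ins, not Selenium classes; the point is only that an element reference stays valid while the page it was found on is still current:

```python
class StaleElementError(Exception):
    """Stands in for Selenium's StaleElementReferenceException."""

class FakeElement(object):
    """Stands in for a WebElement tied to the page it was found on."""
    def __init__(self, page, href):
        self._page = page
        self._href = href

    def get_attribute(self, name):
        if not self._page.current:
            # Mirrors "Element not found in the cache - perhaps the
            # page has changed since it was looked up"
            raise StaleElementError("element not found in the cache")
        return self._href

class FakePage(object):
    """Stands in for the currently loaded page."""
    def __init__(self, hrefs):
        self.current = True
        self.elements = [FakeElement(self, h) for h in hrefs]

page = FakePage(["http://example.com/1", "http://example.com/2"])
links = page.elements

# Snapshot the data BEFORE navigating away: the plain strings survive.
urls = [link.get_attribute("href") for link in links]
assert urls == ["http://example.com/1", "http://example.com/2"]

# A click that loads a new page invalidates the old element references.
page.current = False

went_stale = False
try:
    links[1].get_attribute("href")
except StaleElementError:
    went_stale = True
assert went_stale
```

This is why clicking the first button in the loop makes every remaining element in `links` unusable.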

    You can try the following two solutions:

    1. Store each link element's URL first, then open it with driver.get(url):

    def parse(self, response):
    
        self.driver.get(response.url)
        item = MyItem()
    
        links = self.driver.find_elements_by_xpath("//input[@class='GetData']")
        link_urls = [link.get_attribute("href") for link in links]  # get_attribute is per element, not on the list
    
        for link_url in link_urls:
            self.driver.get(link_url)
            time.sleep(5)
    
            source = self.driver.page_source
            sel = Selector(text=source) # create a Selector object
    
            item['url'] = self.driver.current_url
    
            print '\n\nURL\n', item['url'], '\n'
            yield item
    

    2. After clicking a link and capturing the URL, call driver.back() to return to the first page, then find the link elements again:

    def parse(self, response):
    
        self.driver.get(response.url)
        item = MyItem()
    
        links = self.driver.find_elements_by_xpath("//input[@class='GetData']")
    
        for i in range(len(links)):
            links[i].click()
            time.sleep(5)
    
            source = self.driver.page_source
            sel = Selector(text=source) # create a Selector object
    
            item['url'] = self.driver.current_url
    
            print '\n\nURL\n', item['url'], '\n'
            yield item
            self.driver.back()
            links = self.driver.find_elements_by_xpath("//input[@class='GetData']")
    

    [Discussion]:
