[Posted]: 2021-03-30 11:03:03
[Problem description]:
I have a Scrapy Splash scraper with a Lua script.
The Lua script currently only triggers scrolling on the search page, to load more results there.
From the search page I navigate to the detail pages, which I scrape.
However, on a detail page the photo carousel is not yet present in the DOM; it is loaded dynamically when the user clicks the #showphotos element.
After clicking that element, the following carousel HTML is loaded:
<div id="slider">
  <div class="slider-inner">
    <div class="item active">
      <img src="https://www.example.com/images/1.jpg">
    </div>
    <div class="item">
      <img src="https://www.example.com/images/2.jpg">
    </div>
  </div>
</div>
So I tried to write a script for this:
click_script = """
function main(splash, args)
    -- Lua tables are 1-indexed, so the first match is [1], not [0]
    local btn = splash:select_all('#showphotos')[1]
    btn:mouse_click()
    assert(splash:wait(0.5))
    return {
        num = #splash:select_all('#slider div.slider-inner'),
        html = splash:html()
    }
end
"""
Since I am new to Splash and Lua, I don't know where to add this code or where to call it from.
I created a test detail page here.
My current code:
myscraper.py
import json
import re
import time

import scrapy
from scrapy_splash import SplashRequest
from scrapy.selector import Selector
from scrapy.http import HtmlResponse

from myresults.items import MyResultItem


class Spider(scrapy.Spider):
    name = 'myscraper'
    allowed_domains = ['example.com']
    start_urls = ['https://www.example.com/results']

    def start_requests(self):
        # Lua script: keep scrolling to the bottom until no new objects appear
        lua_script = """
        function main(splash, args)
            local object_count = 0
            local url = splash.args.url
            splash:go(url)
            splash:wait(0.5)
            local get_object_count = splash:jsfunc([[
                function () {
                    var objects = document.getElementsByClassName("object-adres");
                    return objects.length;
                }
            ]])
            local temp_object_count = get_object_count()
            while object_count ~= temp_object_count do
                splash:evaljs('window.scrollTo(0, document.body.scrollHeight);')
                splash:wait(0.5)
                object_count = temp_object_count
                temp_object_count = get_object_count()
            end
            return splash:html()
        end
        """
        # yield the first Splash request with the Lua script; parse() handles the result
        yield SplashRequest(
            self.start_urls[0], self.parse,
            endpoint='execute',
            args={'lua_source': lua_script},
        )

    def parse(self, response):
        # the search page rendered by the Lua script: collect all detail-page links
        object_links = response.css('a.adreslink::attr(href)').getall()
        for link in object_links:
            # request each detail page and parse it in parse_object()
            yield scrapy.Request(link, self.parse_object)

    def parse_object(self, response):
        # create a new MyResultItem, which will be saved to the JSON file
        item = MyResultItem()
        item['url'] = response.url  # get url
        yield item
items.py
import scrapy


# note: the spider imports MyResultItem, so the class name must match
class MyResultItem(scrapy.Item):
    id = scrapy.Field()
    photos = scrapy.Field()
    url = scrapy.Field()
[Discussion]:
Tags: python lua scrapy mouseclick-event dynamic-content