【Posted】: 2023-05-14 09:12:01
【Question】:
I am writing a job-vacancy scraper with Scrapy to parse roughly 3M vacancy items. The spider works and successfully scrapes items and stores them in PostgreSQL, but it is slow: in one hour it stored only 12k vacancies, which is nowhere near the 3M I need. The problem is that ultimately I have to scrape and refresh the data once a day, and at the current rate a full pass takes more than a day.
I am new to data scraping, so I may be making some basic mistake. I would really appreciate any help.
My spider code:
import scrapy
import urllib.request
from lxml import html
from ..items import JobItem


class AdzunaSpider(scrapy.Spider):
    name = "adzuna"
    start_urls = [
        'https://www.adzuna.ru/search?loc=136073&pp=10'
    ]

    def parse(self, response):
        job_items = JobItem()
        items = response.xpath("//div[@class='sr']/div[@class='a']")

        def get_redirect(url):
            response = urllib.request.urlopen(url)
            response_code = response.read()
            result = str(response_code, 'utf-8')
            root = html.fromstring(result)
            final_url = root.xpath('//p/a/@href')[0]
            final_final_url = final_url.split('?utm', 1)[0]
            return final_final_url

        for item in items:
            id = None
            data_aid = item.xpath(".//@data-aid").get()
            redirect = item.xpath(".//h2/a/@href").get()
            url = get_redirect(redirect)
            url_header = item.xpath(".//h2/a/strong/text()").get()
            if item.xpath(".//p[@class='as']/@data-company-name").get() == None:
                company = item.xpath(".//p[@class='as']/text()").get()
            else:
                company = item.xpath(".//p[@class='as']/@data-company-name").get()
            loc = item.xpath(".//p/span[@class='loc']/text()").get()
            text = item.xpath(".//p[@class='at']/span[@class='at_tr']/text()").get()
            salary = item.xpath(".//p[@class='at']/span[@class='at_sl']/text()").get()

            job_items['id'] = id
            job_items['data_aid'] = data_aid
            job_items['url'] = url
            job_items['url_header'] = url_header
            job_items['company'] = company
            job_items['loc'] = loc
            job_items['text'] = text
            job_items['salary'] = salary
            yield job_items

        next_page = response.css("table.pg td:last-child ::attr('href')").get()
        if next_page is not None:
            yield response.follow(next_page, self.parse)
【Discussion】:
-
Can you share your settings.py?
-
How is your database performing? Do you have the right indexes?
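-
On the settings side, throughput is often capped by Scrapy's defaults. The fragment below shows the settings that usually matter for raw crawl speed; the specific values are illustrative starting points to tune, not recommendations for this site:

```python
# settings.py -- illustrative concurrency tuning (values are starting points)
CONCURRENT_REQUESTS = 32             # default is 16
CONCURRENT_REQUESTS_PER_DOMAIN = 16  # default is 8
DOWNLOAD_DELAY = 0                   # make sure no artificial delay is configured
# AUTOTHROTTLE_ENABLED = True        # enable if the site starts throttling you
```

Raising concurrency only helps once the blocking `urllib` call is gone, and batching the PostgreSQL inserts in the pipeline (e.g. committing every few hundred items instead of per item) is usually the next win.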