[Posted]: 2020-12-08 17:36:23
[Problem description]:
I'm building my first web scraper. I just want to grab a list of names and append them to a csv file. The scraper appears to run, but not as expected: the output file contains only a single name, and it is always the last name scraped. Each time I rerun the scraper it is a different name. In this case, the name written to the csv file was Ola Aina.
import scrapy
from scrapy.crawler import CrawlerProcess

# Create the spider class
class premSpider(scrapy.Spider):
    name = "premSpider"

    def start_requests(self):
        # Create a list of URLs we wish to scrape
        urls = ['https://www.premierleague.com/players']
        # Iterate through each url and send it to be parsed
        for url in urls:
            # yield acts somewhat like return, but keeps the generator alive
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Extract links to player pages
        plinks = response.xpath('//tr').css('a::attr(href)').extract()
        # Follow links to specific player pages
        for plink in plinks:
            yield response.follow(url=plink, callback=self.parse2)

    def parse2(self, response):
        plinks2 = response.xpath('//a[@href="stats"]').css('a::attr(href)').extract()
        for link2 in plinks2:
            yield response.follow(url=link2, callback=self.parse3)

    def parse3(self, response):
        names = response.xpath('//div[@class="name t-colour"]/text()').extract()
        filepath = 'playerlinks.csv'
        with open(filepath, 'w') as f:
            f.writelines([name + '\n' for name in names])

process = CrawlerProcess()
process.crawl(premSpider)
process.start()
[Discussion]:
- You are opening the csv file in `w` mode, which creates a new file each time and discards everything previously written. Try `a` (append) mode instead.
- @Thymen Your suggestion worked. Thanks for the help, I appreciate it.
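A minimal, self-contained sketch of the behaviour the commenter describes (the names and the temp file path below are illustrative, not the spider's real data): because parse3 reopens the file in `w` mode on every call, each callback truncates the file and only the last write survives.

```python
import os
import tempfile

# Throwaway temp file standing in for the spider's playerlinks.csv.
path = os.path.join(tempfile.mkdtemp(), "playerlinks.csv")
names = ["Player A", "Player B", "Ola Aina"]

# Mode 'w': each open() truncates the file -- mirrors parse3's bug.
for name in names:
    with open(path, "w") as f:
        f.write(name + "\n")
with open(path) as f:
    overwritten = f.read().splitlines()   # only the last name remains

# Mode 'a': each write is appended, so every name is kept.
os.remove(path)
for name in names:
    with open(path, "a") as f:
        f.write(name + "\n")
with open(path) as f:
    appended = f.read().splitlines()      # all three names

print(overwritten)  # ['Ola Aina']
print(appended)     # ['Player A', 'Player B', 'Ola Aina']
```

Note that a cleaner Scrapy-idiomatic fix is to yield items from parse3 and let a feed export (e.g. `-o players.csv`) handle the file, which avoids manual file handling entirely.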
Tags: python-3.x web-scraping scrapy web-crawler