Posted: 2022-01-14 17:06:41
Problem description:
I am trying to scrape real-estate data using BeautifulSoup, but when I save the scraped results to a .csv file, it only contains the information from the first page. I want to scrape the number of pages that I set in the "pages_number" variable.
import os
import time
from selenium import webdriver
from bs4 import BeautifulSoup

# How many pages
pages_number = int(input('How many pages? '))
# start the execution timer
tic = time.time()
# Chromedriver
chromedriver = "./chromedriver"
os.environ["webdriver.chrome.driver"] = chromedriver
driver = webdriver.Chrome(chromedriver)
# initial link
link = 'https://www.vivareal.com.br/aluguel/sp/sao-paulo/?__vt=lnv:a&page=1'
driver.get(link)
# loop over the pages
for page in range(1, pages_number + 1):
    time.sleep(15)
    data = driver.execute_script("return document.getElementsByTagName('html')[0].innerHTML")
    soup_complete_source = BeautifulSoup(data.encode('utf-8'), "lxml")
I have already tried this solution, but I get an error:
link = 'https://www.vivareal.com.br/aluguel/sp/sao-paulo/?__vt=lnv:a&page={}.format(page)'
Does anyone know what can be done?
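Editor's note (not part of the original question): in the attempted line above, `.format(page)` is written inside the quotes, so it becomes part of the literal URL instead of being executed. Calling `str.format` on the string object itself produces a distinct URL per page. A minimal sketch:

```python
# Template URL; "{}" is the placeholder that str.format fills in
base_url = 'https://www.vivareal.com.br/aluguel/sp/sao-paulo/?__vt=lnv:a&page={}'

for page in range(1, 4):
    # .format must be called outside the quotes
    url = base_url.format(page)
    print(url)  # page=1, page=2, page=3
```

Each formatted URL would then be passed to `driver.get(url)` at the top of the loop so that every iteration actually loads a new page before the HTML is read.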
Full code:
https://github.com/arturlunardi/webscraping_vivareal/blob/main/scrap_vivareal.ipynb
Tags: selenium beautifulsoup screen-scraping