【Posted】: 2021-08-22 06:53:12
【Question description】:
I'm fairly new to Python and to the Stack Overflow community. I'm using Selenium to scrape https://freightliner.com/dealer-search/ for the names and addresses of North/South American dealers, and I can print them as a single string without any problem, but I can't figure out how to export them to a CSV file. The difference between how I print them in the code and how I want to export them is that I currently print each name and address as one semicolon-separated string, whereas I want to export them to separate CSV columns (name, address). Here is what I have tried:
'''
#! python3
# fl_dealers.py - Scrapes freightliner website for north american locations.
# import selenium
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time, os, csv
from bs4 import BeautifulSoup
# set Chrome options to automatically download file
options = webdriver.ChromeOptions()
prefs = {'download.default_directory': r'C:\Users\username\Downloads\\'}
options.add_experimental_option('prefs',prefs)
chromedriver = 'C:/Users/username/chromedriver.exe'
# change directory to Downloads folder
os.chdir("C:\\Users\\username\\Downloads")
# create webdriver object and call Chrome options
browser = webdriver.Chrome(executable_path=chromedriver, options=options)
# maximize the browser window
browser.maximize_window()
# set wait time to allow browser to open
browser.implicitly_wait(10) # seconds
# open freightliner website
browser.get('https://freightliner.com/dealer-search/')
# maximize the browser window
browser.maximize_window()
time.sleep(5)
# find all locations in north america
search = browser.find_element_by_xpath('//*[@id="by-location"]/div/div/input')
ActionChains(browser).move_to_element(search).click().key_down(Keys.CONTROL).send_keys('a').key_up(Keys.CONTROL).send_keys("USA").perform()
#search.send_keys('USA')
search_button = browser.find_element_by_xpath('//*[@id="by-location"]/button').click()
time.sleep(10)
# create variable for webpage AFTER searching for results
page_source = browser.page_source
# create bs4 object
soup = BeautifulSoup(page_source, 'html.parser')
# create variables for dealer name and address
names = soup.find_all('h2')[1:]
addresses = soup.find_all(class_='address')
# print the names and addresses
for name, address in zip(names, addresses):
    print(name.get_text(separator=" ").strip(), ";", address.get_text(separator=", ").strip())
with open('fl_dealers.csv', mode='w', newline='') as outputFile:
    dealershipsCSV = csv.writer(outputFile)
    dealershipsCSV.writerow(['name', 'address'])
    for name in names:
        dealer_name = name.get_text
    for address in addresses:
        dealer_address = address.get_text
    dealershipsCSV.writerow([dealer_name, dealer_address])
'''
The code does create a CSV file, but it only writes the column headers and doesn't export any of the actual names and addresses. I've searched many Stack Overflow, GitHub, and YouTube posts related to this problem but couldn't find a solution. At this point I've reached the limit of my knowledge; I'm most likely missing something simple. Alas, I'm still new to Python.
One thing to note - the reason for typing "USA" into the search bar is to override the site's default of using my location to search for nearby dealers. Even though the query is for "USA", it returns all of the North/South American dealers I want.
Any and all help is appreciated! Thank you.
【Question comments】:
- Do you expect us to debug and finish your work for you?
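- Not a full answer, but two things stand out in the export loop: get_text is referenced without parentheses (so the bound method, not the text, ends up in dealer_name/dealer_address), and the two for loops never pair a name with its matching address. Below is a minimal sketch of just the writing step, reusing the soup, names and addresses variables from the question and assuming the two lists stay aligned one-to-one:
'''
import csv

# Assumes `soup` is the BeautifulSoup object already built in the question's code.
names = soup.find_all('h2')[1:]
addresses = soup.find_all(class_='address')

with open('fl_dealers.csv', mode='w', newline='', encoding='utf-8') as output_file:
    writer = csv.writer(output_file)
    writer.writerow(['name', 'address'])
    # zip() pairs each dealer name with its address; get_text() must be
    # called with parentheses, otherwise the method object is written instead of the text.
    for name, address in zip(names, addresses):
        writer.writerow([
            name.get_text(separator=' ').strip(),
            address.get_text(separator=', ').strip(),
        ])
'''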
Tags: python selenium csv web-scraping beautifulsoup