【Question Title】: Web scrape after search with Python, Selenium, BeautifulSoup
【Posted on】: 2020-11-14 22:37:51
【Question Description】:

I want to scrape a high school summary table from the web after entering all the required information. However, I don't know how to do it, because the URL does not change once I reach the school's page. I couldn't find anything related to what I'm trying to do. Any idea how to scrape a table after completing the search process? Thanks.

import requests
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome("drivers/chromedriver")

driver.get("https://web3.ncaa.org/hsportal/exec/hsAction")

state_drop = driver.find_element_by_id("state")
state = Select(state_drop)
state.select_by_visible_text("New Jersey")

driver.find_element_by_id("city").send_keys("Galloway")
driver.find_element_by_id("name").send_keys("Absegami High School")
driver.find_element_by_class_name("forms_input_button").send_keys(Keys.RETURN)
driver.find_element_by_id("hsSelectRadio_1").click()

url = driver.current_url
print(url)
r = requests.get(url)  # opens a fresh session, so this returns the blank search form, not the results
soup = BeautifulSoup(r.text, 'html.parser')
school_info = soup.find('table', class_="border=")
print(school_info)
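The failure mode above is that `requests.get(url)` starts a brand-new HTTP session, so it receives the blank search form rather than the page rendered after the search. The rendered HTML lives in the existing browser session as `driver.page_source`, which can be handed to BeautifulSoup directly. A minimal sketch, using a hypothetical stand-in fragment in place of the live page (the class names match those used on the NCAA results page, but the fragment itself is invented for illustration):

```python
from bs4 import BeautifulSoup

# Stand-in for driver.page_source: a hypothetical fragment shaped like the
# rendered results page (requests.get(driver.current_url) never sees this HTML).
rendered_html = """
<table>
  <tr>
    <th class="tableHeaderForWsrDetail">High School Name</th>
    <td class="tdTinyFontForWsrDetail">ABSEGAMI HIGH SCHOOL</td>
  </tr>
</table>
"""

soup = BeautifulSoup(rendered_html, "html.parser")
header = soup.find("th", class_="tableHeaderForWsrDetail").get_text(strip=True)
value = soup.find("td", class_="tdTinyFontForWsrDetail").get_text(strip=True)
print(header, ">", value)
```

Against the live session the only change is `soup = BeautifulSoup(driver.page_source, "html.parser")` after the search has completed.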

【Question Discussion】:

  • Which table do you want to scrape? There are multiple tables available on the page.
  • As I mentioned in the post, the high school summary table.

Tags: python python-3.x selenium selenium-webdriver beautifulsoup


【Solution 1】:

Try this:

from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()

driver.get("https://web3.ncaa.org/hsportal/exec/hsAction")

state_drop = driver.find_element_by_id("state")
state = Select(state_drop)
state.select_by_visible_text("New Jersey")

driver.find_element_by_id("city").send_keys("Galloway")
driver.find_element_by_id("name").send_keys("Absegami High School")
driver.find_element_by_class_name("forms_input_button").send_keys(Keys.RETURN)
driver.find_element_by_id("hsSelectRadio_1").click()

#scraping the caption of the tables
all_sub_head = driver.find_elements_by_class_name("tableSubHeaderForWsrDetail") 

#scraping all the headers of the tables
all_headers = driver.find_elements_by_class_name("tableHeaderForWsrDetail")

#filtering the desired headers
required_headers = all_headers[5:]

#scraping all the table data
all_contents = driver.find_elements_by_class_name("tdTinyFontForWsrDetail")

#filtering the desired table data
required_contents = all_contents[45:]
    
print("                ",all_sub_head[1].text,"                ")
for i in range(15):
    print(required_headers[i].text, "              >     ", required_contents[i].text )
    
print("execution completed")

Output

                 High School Summary                 
NCAA High School Code               >      310759
CEEB Code               >      310759
High School Name               >      ABSEGAMI HIGH SCHOOL
Address               >      201 S WRANGLEBORO RD
GALLOWAY
NJ - 08205
Primary Contact Name               >      BONNIE WADE
Primary Contact Phone               >      609-652-1485
Primary Contact Fax               >      609-404-9683
Primary Contact Email               >      bwade@gehrhsd.net
Secondary Contact Name               >      MR. DANIEL KERN
Secondary Contact Phone               >      6096521372
Secondary Contact Fax               >      6094049683
Secondary Contact Email               >      dkern@gehrhsd.net
School Website               >      http://www.gehrhsd.net/
Link to Online Course Catalog/Program of Studies               >      Not Available
Last Update of List of NCAA Courses               >      12-Feb-20
execution completed

Screenshot of the output: click me!!!

【Discussion】:

  • Use driver = webdriver.Chrome("drivers/chromedriver") instead of driver = webdriver.Chrome()
  • Can you explain required_contents = all_contents[45:]?
  • As you can see, there are three tables: High School Account Status, High School Summary, and High School Information. What they have in common is that the blue captions of all three tables are stored under a common class tableSubHeaderForWsrDetail, all the yellow-background header text is stored under another common class tableHeaderForWsrDetail, and all the table data is stored under a common class tdTinyFontForWsrDetail
  • So required_contents = all_contents[45:] simply slices the table data: the High School Account Status table accounts for the first 5 x 9 = 45 table-data cells, and the remaining cells, belonging to the High School Summary table, are kept in the required_contents list
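The slicing described in that comment can be checked with plain lists; the cell values below are hypothetical stand-ins for the flattened tdTinyFontForWsrDetail scrape:

```python
# The first table (High School Account Status) flattens to 5 columns x 9 rows
# = 45 cells, so slicing at index 45 keeps only the High School Summary cells.
account_status_cells = [f"acct_{i}" for i in range(5 * 9)]    # hypothetical cells
summary_cells = ["310759", "310759", "ABSEGAMI HIGH SCHOOL"]  # hypothetical cells
all_contents = account_status_cells + summary_cells

required_contents = all_contents[45:]
print(len(all_contents), "->", required_contents)
```

If the page layout changes, the hard-coded offset breaks silently, so it is safer to scope the search to the summary table's own element rather than slice a page-wide list.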