【Question Title】: How do I pull multiple values from an html page using python?
【Posted】: 2018-10-27 03:14:08
【Question】:

I'm doing some data analysis, for my own interest, on NHL spread/betting-odds information. I can pull some of the information, but not the entire data set. I want to pull the list of games and the associated data into a pandas DataFrame, but I haven't been able to write the right loop around the html tags. I've tried the findAll option and the xpath route; neither has worked for me.

from bs4 import BeautifulSoup
import requests

page_link = 'https://www.thespread.com/nhl-hockey-public-betting-chart'

page_response = requests.get(page_link, timeout=5)

# here, we fetch the content from the url, using the requests library
page_content = BeautifulSoup(page_response.content, "html.parser")


# Take out the <div> of name and get its value
name_box = page_content.find('div', attrs={'class': 'datarow'})
name = name_box.text.strip()

print (name)

【Comments】:

    Tags: python python-3.x web-scraping


    【Solution 1】:

    This script iterates over each datarow, pulls each item out individually, and then appends the rows to a pandas DataFrame.

    from bs4 import BeautifulSoup
    import requests
    import pandas as pd
    
    page_link = 'https://www.thespread.com/nhl-hockey-public-betting-chart'
    
    page_response = requests.get(page_link, timeout=5)
    
    # here, we fetch the content from the url, using the requests library
    page_content = BeautifulSoup(page_response.content, "html.parser")
    
    
    # Grab every datarow <div> on the page
    tables = page_content.find_all('div', class_='datarow')
    
    # One list of values per game
    rows = []
    
    # Iterate through each datarow and pull out each home/away separately
    for table in tables:
        # Get time and date
        time_and_date_tag = table.find_all('div', attrs={"class": "time"})[0].contents
        date = time_and_date_tag[1]
        time = time_and_date_tag[-1]
        # Get teams
        teams_tag = table.find_all('div', attrs={"class": "datacell teams"})[0].contents[-1].contents
        home_team = teams_tag[1].text
        away_team = teams_tag[-1].text
        # Get opening
        opening_tag = table.find_all('div', attrs={"class": "child-open"})[0].contents
        home_open_value = opening_tag[1]
        away_open_value = opening_tag[-1]
        # Get current
        current_tag = table.find_all('div', attrs={"class": "child-current"})[0].contents
        home_current_value = current_tag[1]
        away_current_value = current_tag[-1]
        # Create list
        rows.append([time, date, home_team, away_team,
                     home_open_value, away_open_value,
                     home_current_value, away_current_value])
    
    columns = ['time', 'date', 'home_team', 'away_team',
               'home_open', 'away_open',
               'home_current', 'away_current']
    
    print(pd.DataFrame(rows, columns=columns))
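
    Once the rows are in a DataFrame, the open/current columns arrive as strings; for spread analysis you would typically coerce them to numbers and compute the line movement. A minimal sketch using the same column names as above (the sample values are invented, not real odds):

    ```python
    import pandas as pd

    # Hypothetical sample rows shaped like the scraper's output (values invented)
    rows = [
        ["7:00 PM", "Oct 27", "Bruins", "Maple Leafs", "-1.5", "+1.5", "-2.0", "+2.0"],
        ["8:00 PM", "Oct 27", "Rangers", "Flyers", "+1.5", "-1.5", "+1.0", "-1.0"],
    ]
    columns = ['time', 'date', 'home_team', 'away_team',
               'home_open', 'away_open', 'home_current', 'away_current']
    df = pd.DataFrame(rows, columns=columns)

    # Coerce the odds columns to floats (pd.to_numeric handles the +/- signs)
    for col in ['home_open', 'away_open', 'home_current', 'away_current']:
        df[col] = pd.to_numeric(df[col], errors='coerce')

    # Line movement since open, per team
    df['home_move'] = df['home_current'] - df['home_open']
    df['away_move'] = df['away_current'] - df['away_open']
    print(df[['home_team', 'home_move', 'away_move']])
    ```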
    

    【Comments】:

    • Great work! So you're pulling the large data set called datarow, then finding each tag within a row and iterating over it?
    • Indeed. This is usually much easier when the table is an actual html table rather than nested divs; here each item has to be pulled out explicitly.
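
    As the second comment notes, a genuine `<table>` is far simpler to walk, because every row is a uniform `<tr>` of `<td>` cells. A minimal sketch on an inline snippet (this markup is invented for illustration, not taken from thespread.com):

    ```python
    from bs4 import BeautifulSoup

    # Invented HTML table standing in for a page that uses real <table> markup
    html = """
    <table>
      <tr><th>Home</th><th>Away</th><th>Spread</th></tr>
      <tr><td>Bruins</td><td>Maple Leafs</td><td>-1.5</td></tr>
      <tr><td>Rangers</td><td>Flyers</td><td>+1.5</td></tr>
    </table>
    """
    soup = BeautifulSoup(html, "html.parser")

    # One uniform loop: every <tr> after the header row yields its <td> texts
    rows = [[td.get_text(strip=True) for td in tr.find_all('td')]
            for tr in soup.find_all('tr')[1:]]
    print(rows)  # → [['Bruins', 'Maple Leafs', '-1.5'], ['Rangers', 'Flyers', '+1.5']]
    ```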
    【Solution 2】:

    Here is my solution to your problem.

    from bs4 import BeautifulSoup
    import requests
    
    page_link = 'https://www.thespread.com/nhl-hockey-public-betting-chart'
    
    page_response = requests.get(page_link, timeout=5)
    
    # here, we fetch the content from the url, using the requests library
    page_content = BeautifulSoup(page_response.content, "html.parser")
    
    
    for cell in page_content.find_all('div', attrs={'class': 'datarow'}):
        name = cell.text.strip()
        print (name)
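
    Note that `cell.text.strip()` flattens each row into a single blob of text. If you want the individual fragments without indexing into the nested divs, BeautifulSoup's `stripped_strings` generator yields each piece in document order. A sketch on invented markup loosely shaped like a datarow (the real page's structure may differ):

    ```python
    from bs4 import BeautifulSoup

    # Invented markup loosely shaped like one 'datarow' (real structure may differ)
    html = ('<div class="datarow"><div class="time">Oct 27 7:00 PM</div>'
            '<div class="datacell teams"><span>Bruins</span>'
            '<span>Maple Leafs</span></div></div>')
    page_content = BeautifulSoup(html, "html.parser")

    for cell in page_content.find_all('div', attrs={'class': 'datarow'}):
        # One string per text fragment instead of one concatenated blob
        pieces = list(cell.stripped_strings)
        print(pieces)  # → ['Oct 27 7:00 PM', 'Bruins', 'Maple Leafs']
    ```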
    

    【Comments】:
