【Question Title】: How to get an attribute from an element using BeautifulSoup?
【Posted】: 2021-09-09 04:36:30
【Question Description】:

Here is some HTML from a web page:

<bg-quote class="value negative" field="Last" format="0,0.00" channel="/zigman2/quotes/203558040/composite,/zigman2/quotes/203558040/lastsale" data-last-stamp="1624625999626" data-last-raw="671.68">671.68</bg-quote>

I want to get the value of the attribute "data-last-raw", but the find() method seems to return None when searching for this element. Why is that, and how can I fix it?
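For reference, the fragment above can be parsed on its own to show that the attribute lookup itself is straightforward once the element is actually found; a minimal sketch using the snippet from the question:

```python
from bs4 import BeautifulSoup

# The HTML fragment from the question, parsed standalone
html = ('<bg-quote class="value negative" field="Last" format="0,0.00" '
        'channel="/zigman2/quotes/203558040/composite,'
        '/zigman2/quotes/203558040/lastsale" '
        'data-last-stamp="1624625999626" '
        'data-last-raw="671.68">671.68</bg-quote>')

soup = BeautifulSoup(html, "html.parser")
tag = soup.find("bg-quote")
print(tag["data-last-raw"])      # subscript access -> 671.68
print(tag.get("data-last-raw"))  # .get() returns None instead of raising KeyError
```

Both lines print `671.68`; the difference is that `tag.get(...)` degrades gracefully when the attribute is missing.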

Below is my code and the traceback:

import requests
from bs4 import BeautifulSoup as BS
import tkinter as tk


class Scraping:

    @classmethod
    def get_to_site(cls, stock_name):
        sitename = 'https://www.marketwatch.com/investing/stock/tsla' + stock_name
        site = requests.get(sitename, headers={
            "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
            "Accept-Encoding":"gzip, deflate",
            "Accept-Language":"en-GB,en;q=0.9,en-US;q=0.8,ml;q=0.7",
            "Connection":"keep-alive",
            "Host":"www.marketwatch.com",
            "Referer":"https://www.marketwatch.com",
            "Upgrade-Insecure-Requests":"1",
            "User-Agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.119 Safari/537.36"
        })
        print(site.status_code)
        src = site.content
        Scraping.get_price(src)
        
    @classmethod
    def get_price(cls, src):
        soup = BS(src, "html.parser")
        price_holder = soup.find("bg-quote", {"channel":"/zigman2/quotes/203558040/composite,/zigman2/quotes/203558040/lastsale"})
        price = price_holder["data-last-raw"]
        print(price)



Scraping.get_to_site('tsla')


200
Traceback (most recent call last):
  File "c:\Users\Aatu\Documents\python\pythonleikit\stock_price_scraper.py", line 41, in <module>
    Scraping.get_to_site('tsla')
  File "c:\Users\Aatu\Documents\python\pythonleikit\stock_price_scraper.py", line 30, in get_to_site
    Scraping.get_price(src)
  File "c:\Users\Aatu\Documents\python\pythonleikit\stock_price_scraper.py", line 36, in get_price
    price = price_holder["data-last-raw"]
TypeError: 'NoneType' object is not subscriptable

So site.status_code returns 200, meaning the site was opened correctly, but I think the soup.find() method is returning None, meaning the element I'm looking for was not found.
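The traceback supports that reading: find() returns None whenever no element matches, and subscripting None raises exactly this TypeError. A minimal reproduction:

```python
from bs4 import BeautifulSoup

# Parse HTML that does NOT contain the element being searched for
soup = BeautifulSoup("<div>no bg-quote here</div>", "html.parser")
price_holder = soup.find("bg-quote", {
    "channel": "/zigman2/quotes/203558040/composite,"
               "/zigman2/quotes/203558040/lastsale"})
print(price_holder)  # None: no matching element in the parsed HTML

try:
    price_holder["data-last-raw"]
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```

So a 200 status only means the server answered; the HTML that came back evidently does not contain an element matching that tag-plus-channel combination.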

Can someone please help!

【Question Comments】:

  • Can you post the full traceback, and separate the error from the code?
  • PS C:\Users\Aatu\Documents\python\pythonleikit> & C:/Python39ni/python.exe c:/Users/Aatu/Documents/python/pythonleikit/stock_price_scraper.py 200 Scraping.get_to_site('tsla') File "c:\Users\Aatu\Documents\python\pythonleikit\stock_price_scraper.py", line 30, in get_to_site Scraping.get_price(src) File "c:\Users\Aatu\Documents\python\pythonleikit\stock_price_scraper.py", line 36, in get_price price = price_holder["data-last-raw"] TypeError: 'NoneType' object is not subscriptable
  • @AatuTahkola please edit your question and include the traceback in it

Tags: python html web-scraping beautifulsoup python-requests


【Solution 1】:
import requests
from bs4 import BeautifulSoup


def main(ticker):
    r = requests.get(f'https://www.marketwatch.com/investing/stock/{ticker}')
    soup = BeautifulSoup(r.text, 'lxml')
    print(soup.select_one('bg-quote.value:nth-child(2)').text)


if __name__ == "__main__":
    main('tsla')

Output:

670.99
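The answer above reads the element's visible text; if the data-last-raw attribute the question asked about is still wanted, the same select_one approach can return it instead. A sketch, assuming the page still serves a bg-quote element with field="Last" carrying that attribute (the attribute selector and the last_raw helper are my additions, not part of the answer, and the request may additionally need the headers from the question):

```python
import requests
from bs4 import BeautifulSoup


def last_raw(ticker):
    # Select the quote element by its stable attributes rather than the
    # long channel string, then read the attribute directly
    r = requests.get(f'https://www.marketwatch.com/investing/stock/{ticker}')
    soup = BeautifulSoup(r.text, 'html.parser')
    tag = soup.select_one('bg-quote[field="Last"][data-last-raw]')
    return tag['data-last-raw'] if tag else None


# usage (requires network access):
# print(last_raw('tsla'))
```

Guarding the lookup with `if tag else None` avoids the original 'NoneType' object is not subscriptable error when the element is absent.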

【Discussion】:
