【Question Title】: Not able to parse webpage contents using Beautiful Soup
【Posted】: 2017-08-21 19:50:12
【Question】:
I have been using Beautiful Soup to parse web pages and extract data, and so far it has worked very well for me on other pages. However, I am now trying to count the number of &lt;a&gt; tags on this page:
from bs4 import BeautifulSoup
import requests
catsection = "cricket"
url_base = "http://www.dnaindia.com/"
i = 89
url = url_base + catsection + "?page=" + str(i)
print(url)
#This is the page I'm trying to parse and also the one in the hyperlink
#I get the correct url i'm looking for at this stage
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data, 'html.parser')
j = 0
for num in soup.find_all('a'):
    j = j + 1
print(j)
The output I get is 0. This makes me think that the two lines after r = requests.get(url) may not be working (the page obviously cannot contain zero &lt;a&gt; tags), and I am not sure what alternative approach I could use here. Has anyone found a solution or run into a similar problem before?
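To narrow down which of those two lines misbehaves, it helps to inspect the response before handing it to BeautifulSoup. A minimal sketch of that idea (the `diagnose` helper and the `FakeResponse` stub are invented for illustration and run without network access; a real `requests.Response` exposes the same `status_code` and `text` attributes):

```python
def diagnose(resp):
    """Describe why parsing a response might yield zero tags."""
    if resp.status_code != 200:
        return "server refused the request (status {})".format(resp.status_code)
    if not resp.text.strip():
        return "response body is empty"
    return "body looks fine ({} bytes)".format(len(resp.text))

class FakeResponse:
    """Stand-in for requests.Response, just for this demo."""
    def __init__(self, status_code, text):
        self.status_code = status_code
        self.text = text

print(diagnose(FakeResponse(403, "")))
# server refused the request (status 403)
print(diagnose(FakeResponse(200, "<html><a>x</a></html>")))
# body looks fine (21 bytes)
```

Running `diagnose(r)` on the real response would reveal whether the problem is the request itself or the parsing step.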
Thanks in advance.
【Comments】:
Tags:
python-3.x
web-scraping
beautifulsoup
【Solution 1】:
You need to send some identifying information to the server along with the request.
The following code should work... you can also experiment with other header parameters.
from bs4 import BeautifulSoup
import requests
catsection = "cricket"
url_base = "http://www.dnaindia.com/"
i = 89
url = url_base + catsection + "?page=" + str(i)
print(url)
headers = {
'User-agent': 'Mozilla/5.0'
}
#This is the page I'm trying to parse and also the one in the hyperlink
#I get the correct url i'm looking for at this stage
r = requests.get(url, headers=headers)
data = r.text
soup = BeautifulSoup(data, 'html.parser')
j = 0
for num in soup.find_all('a'):
    j = j + 1
print(j)
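As a usage note, the manual counter loop above can be replaced by `len()` on the result of `find_all`. A self-contained sketch on an inline HTML snippet (the three links are invented for the example, so no network access is needed):

```python
from bs4 import BeautifulSoup

# A tiny inline document standing in for the fetched page
html = """
<html><body>
  <a href="/cricket">Cricket</a>
  <a href="/football">Football</a>
  <p>No link here</p>
  <a href="/tennis">Tennis</a>
</body></html>
"""

soup = BeautifulSoup(html, 'html.parser')
# len() on the ResultSet replaces the counter loop
print(len(soup.find_all('a')))  # prints 3
```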
【Solution 2】:
Put any url into the parser and check the number of 'a' tags available on that page:
from bs4 import BeautifulSoup
import requests
url_base = "http://www.dnaindia.com/cricket?page=1"
res = requests.get(url_base, headers={'User-agent': 'Existed'})
soup = BeautifulSoup(res.text, 'html.parser')
a_tag = soup.select('a')
print(len(a_tag))