【Posted】: 2018-10-09 07:37:03
【Question】:
I am trying to scrape several websites that share the same HTML structure and write the content to a JSON file. The results for every URL print in the terminal, but only the content from the last URL in the list ends up in the JSON file. I have not been able to find a solution. Here is my code:
from urllib.request import urlopen
from bs4 import BeautifulSoup as soup
import json

urls = ['https://scholarworks.gvsu.edu/books/', 'https://pdxscholar.library.pdx.edu/pdxopen/', 'https://oer.galileo.usg.edu/all-textbooks/index.html', 'https://oer.galileo.usg.edu/all-textbooks/index.2.html', 'https://digitalcommons.trinity.edu/textbooks/']

# scrape elements
for url in urls:
    uClient = urlopen(url)
    page_html = uClient.read()
    uClient.close()
    page_soup = soup(page_html, "html.parser")
    containers = page_soup.findAll("div", {"class": "content_block"})
    source = page_soup.find("div", {"id": "series-header"})
    data = []
    for container in containers:
        item = {}
        item['type'] = "Textbook"
        item['title'] = container.h2.text
        item['author'] = container.p.text
        item['link'] = container.a["href"]
        item['source'] = source.h2.text
        data.append(item)  # add the item to the list
        print(container.h2.text)
    with open("./json/multiple.json", "w") as writeJSON:
        json.dump(data, writeJSON, ensure_ascii=False)
【Comments】:
- Your code is mis-formatted; I think your inner loop needs to be un-indented by four spaces.
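Beyond the indentation, the symptom described above (every URL prints, but only the last one survives in the file) points at two statements sitting inside the `for url` loop: `data = []` resets the list on every iteration, and opening the file with mode `"w"` truncates it each time. A minimal sketch of the accumulate-then-write pattern, with a hypothetical `scrape` stand-in for the BeautifulSoup step so it runs without network access:

```python
import json

def scrape(url):
    # Hypothetical stand-in: the real code would build these dicts
    # from the BeautifulSoup containers for this URL.
    return [{"type": "Textbook", "title": "Book from " + url, "link": url}]

urls = ["https://example.org/a", "https://example.org/b"]

data = []                      # create the list ONCE, before the loop
for url in urls:
    data.extend(scrape(url))   # accumulate every URL's items

# write ONCE, after the loop, so earlier results are not overwritten
with open("multiple.json", "w") as writeJSON:
    json.dump(data, writeJSON, ensure_ascii=False)

with open("multiple.json") as f:
    print(len(json.load(f)))   # → 2: items from both URLs survive
```

The same structure applies to the original code unchanged: move `data = []` above `for url in urls:` and dedent the `with open(...)` block to after the loop.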
Tags: python json beautifulsoup