【Posted】: 2016-10-30 18:11:43
【Problem description】:
I am trying to build a database of several articles for text mining. I extract the body of each article by web scraping and then save the bodies to a CSV file. However, I cannot save all of them: the code I came up with only saves the text of the last URL (article), even though if I print what I am scraping (and should be saving), I get the body of every article.
I have included only a few URLs from the list (which contains many more), just to give you an idea:
import requests
from bs4 import BeautifulSoup
import csv
r=["http://www.nytimes.com/2016/10/12/world/europe/germany-arrest-syrian-refugee.html",
"http://www.nytimes.com/2013/06/16/magazine/the-effort-to-stop-the-attack.html",
"http://www.nytimes.com/2016/10/06/world/europe/police-brussels-knife-terrorism.html",
"http://www.nytimes.com/2016/08/23/world/europe/france-terrorist-attacks.html",
"http://www.nytimes.com/interactive/2016/09/09/us/document-Review-of-the-San-Bernardino-Terrorist-Shooting.html",
]
for url in r:
    t = requests.get(url)
    t.encoding = "ISO-8859-1"
    soup = BeautifulSoup(t.content, 'lxml')
    text = soup.find_all(("p",{"class": "story-body-text story-content"}))
    print(text)
with open('newdb30.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|', quoting=csv.QUOTE_MINIMAL)
    spamwriter.writerow(text)
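The symptom (only the last article's text surviving) points at where the CSV is written: the file is written once, after the loop, so only whatever `text` last held ends up in it. A minimal sketch of the fix, not from the original post, is to open the file once before the loop and write one row per article. The `article_texts` list below is a placeholder standing in for the scraped bodies:

```python
import csv

# Placeholder article bodies -- in the real script these would come from
# requests + BeautifulSoup inside the loop; they are assumptions here.
article_texts = ["body of article one", "body of article two", "body of article three"]

# Open the file ONCE, before looping, and write one row per article.
# Writing after the loop (or reopening with mode 'w' inside it) keeps
# only the last article, which matches the symptom described above.
with open('newdb30.csv', 'w', newline='') as csvfile:
    spamwriter = csv.writer(csvfile, delimiter=' ', quotechar='|',
                            quoting=csv.QUOTE_MINIMAL)
    for text in article_texts:
        spamwriter.writerow([text])

# Read it back: one row per article, not just the last one.
with open('newdb30.csv', newline='') as f:
    rows = list(csv.reader(f, delimiter=' ', quotechar='|'))
print(len(rows))  # prints 3
```

The same change applies if the `with open(...)` block sits inside the loop: mode `'w'` truncates the file on every iteration. Note also that `find_all` takes the tag name and the attribute dict as two separate arguments, `soup.find_all("p", {"class": "story-body-text story-content"})`, not wrapped together in one tuple.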
【Comments】:
Tags: python csv web screen-scraping