Posted: 2015-07-02 14:45:56
Problem description:
I wrote a Python script that is essentially a web crawler. My goal is to scrape a blogspot site for Mediafire links, then follow each one to find its direct download link.
import re
import requests
from bs4 import BeautifulSoup

def trade_spider(max_pages):
    page = 1
    i = 1
    while page < max_pages:
        # Note: the URL is the same on every pass; `page` is never used to build it
        url = 'http://comicsmegacity.blogspot.in/'
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text)
        for link in soup.findAll('a', href=re.compile('http://www\.mediafire\.com/')):
            href = link.get('href')
            print('link no ' + str(i) + ' title ' + link.string)
            i += 1
            print(href)
            get_download_link(href)
        page += 1

def get_download_link(url):
    source_code = requests.get(url)
    plain_text = source_code.text
    soup = BeautifulSoup(plain_text)
    for link in soup.findAll('div', {"class": "download_link"}):
        href = link.get('href')
        print('Download link ')
        print(href)

trade_spider(2)
But the output is:
link no 1 title Prem Ritu
http://www.mediafire.com/download/1vkgv8i0a151vqm/Prem+Ritu-1.pdf
Download link
None
Download link
None
link no 2 title Kobi Prem
http://www.mediafire.com/download/b46y4fe61cgyfts/kobi+prem-2.pdf
Download link
None
Download link
None
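The `None` values come from `get_download_link`: it calls `.get('href')` on the `<div class="download_link">` element itself, but `href` attributes live on the `<a>` tags nested inside such a div, so the lookup returns `None`. A minimal sketch of the difference (the HTML snippet is hypothetical, standing in for Mediafire's actual markup):

```python
from bs4 import BeautifulSoup

html = '<div class="download_link"><a href="http://example.com/file.pdf">Download</a></div>'
soup = BeautifulSoup(html, 'html.parser')

div = soup.find('div', {'class': 'download_link'})
print(div.get('href'))  # None: the <div> itself has no href attribute

anchor = div.find('a')  # the href lives on the nested <a> tag
print(anchor.get('href'))  # http://example.com/file.pdf
```

If Mediafire's download pages follow this shape, selecting the anchor inside each matched div should yield the actual links instead of `None`.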
Tags: python, web, web-crawler