【Title】: Python simple web crawler error (infinite loop crawling)
【Posted】: 2017-09-24 15:59:49
【Description】:

I wrote a simple crawler in Python. It seems to run fine and find new links, but it keeps finding the same links over and over and does not download the new pages it finds. It appears to crawl indefinitely, even after reaching the configured crawling depth limit. I get no errors; it just runs forever. Here are the code and a sample run. I am using Python 2.7 on Windows 7 64-bit.

import sys
import time
from bs4 import *
import urllib2
import re
from urlparse import urljoin

def crawl(url):
    url = url.strip()
    page_file_name = str(hash(url))
    page_file_name = page_file_name + ".html" 
    fh_page = open(page_file_name, "w")
    fh_urls = open("urls.txt", "a")
    fh_urls.write(url + "\n")
    html_page = urllib2.urlopen(url)
    soup = BeautifulSoup(html_page, "html.parser")
    html_text = str(soup)
    fh_page.write(url + "\n")
    fh_page.write(page_file_name + "\n")
    fh_page.write(html_text)
    links = []
    for link in soup.findAll('a', attrs={'href': re.compile("^http://")}):
        links.append(link.get('href'))
    rs = []
    for link in links:
        try:
            #r = urllib2.urlparse.urljoin(url, link)
            r = urllib2.urlopen(link)
            r_str = str(r.geturl())
            fh_urls.write(r_str + "\n")
            #a = urllib2.urlopen(r)
            if r.headers['content-type'] == "html" and r.getcode() == 200:
                rs.append(r)
                print "Extracted link:"
                print link
                print "Extracted link final URL:"
                print r
        except urllib2.HTTPError as e:
            print "There is an error crawling links in this page:"
            print "Error Code:"
            print e.code
    return rs
    fh_page.close()
    fh_urls.close()

if __name__ == "__main__":
    if len(sys.argv) != 3:
        print "Usage: python crawl.py <seed_url> <crawling_depth>"
        print "e.g: python crawl.py https://www.yahoo.com/ 5"
    exit()
    url = sys.argv[1]
    depth = sys.argv[2]
    print "Entered URL:"
    print url
    html_page = urllib2.urlopen(url)
    print "Final URL:"
    print html_page.geturl()
    print "*******************"
    url_list = [url, ]
    current_depth = 0
    while current_depth < depth:
        for link in url_list:
            new_links = crawl(link)
            for new_link in new_links:
                if new_link not in url_list:
                    url_list.append(new_link)
            time.sleep(5)
            current_depth += 1
            print current_depth

This is what I get when I run it:

C:\Users\Hussam-Den\Desktop>python test.py https://www.yahoo.com/ 4
Entered URL:
https://www.yahoo.com/
Final URL:
https://www.yahoo.com/
*******************
1

And this is the output file that stores the crawled URLs:

https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://www.yahoo.com/
https://www.yahoo.com/lifestyle/horoscope/libra/daily-20170924.html
https://policies.yahoo.com/us/en/yahoo/terms/utos/index.htm
https://policies.yahoo.com/us/en/yahoo/privacy/adinfo/index.htm
https://www.oath.com/careers/work-at-oath/
https://help.yahoo.com/kb/account

Any idea what is wrong?

【Comments】:

  • This code is not indented correctly.
  • There are too many indentation errors. Can you fix them and re-upload the code for troubleshooting?

Tags: python beautifulsoup urllib2


【Solution 1】:
  1. There is a bug here: depth = sys.argv[2]. sys.argv gives you a str, not an int. You should write depth = int(sys.argv[2]).
  2. Because of point 1, the condition while current_depth < depth: always evaluates to True.

Try fixing it by converting argv[2] to int. I think the bug is there.
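To see why the loop never ends, here is a minimal Python 3 sketch of the comparison problem and the fix. In Python 2, `0 < "4"` is always True because any int sorts before any str, so the `while` condition can never become False when `depth` is a string; converting the argument once, up front, makes it a real numeric comparison. (`parse_depth` is just an illustrative helper name, not from the original code.)

```python
def parse_depth(arg):
    # sys.argv entries are always strings; convert the depth argument
    # to an int once, before it is ever used in a comparison.
    return int(arg)

depth = parse_depth("4")      # sys.argv[2] would give the string "4"
current_depth = 0
while current_depth < depth:  # genuine int < int comparison: terminates
    current_depth += 1
print(current_depth)          # → 4
```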

【Discussion】:

  • @hussam-hallak The answer above is correct. I would also suggest looking at Python's argparse module, which can do this kind of thing for you: you can declare max_depth as an int and it handles the conversion. A very useful module.
  • More importantly: switch to Python 3. Among other things, it flags int-to-str comparisons as errors, so this problem would have been obvious. Tomorrow you will try to crawl a site with a different encoding and fight Python 2's approach to encodings. Switch now!
  • @alexis Yes, Python 3 is the right choice. I don't understand people who start new projects on Py2 :)
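Following the suggestion to move to Python 3, a depth-limited crawl loop could be structured roughly like this. This is only a sketch: `fetch_links` is a hypothetical stand-in for the real urlopen + BeautifulSoup step, and a tiny in-memory "web" replaces real HTTP. The key points are the `visited` set (so a URL is never crawled twice) and advancing the frontier once per depth level, not once per link.

```python
from urllib.parse import urljoin  # Python 3 location of urljoin


def crawl_bfs(start_url, max_depth, fetch_links):
    """Breadth-first crawl to a fixed depth.

    fetch_links(url) is a placeholder for fetching a page and
    extracting its anchors; it returns the links found on that page.
    """
    visited = {start_url}
    frontier = [start_url]
    for _depth in range(max_depth):
        next_frontier = []
        for url in frontier:
            for link in fetch_links(url):
                absolute = urljoin(url, link)   # resolve relative links
                if absolute not in visited:     # skip already-seen URLs
                    visited.add(absolute)
                    next_frontier.append(absolute)
        frontier = next_frontier                # advance one level per pass
    return visited


# Tiny in-memory "web" instead of real HTTP, just to show the shape:
fake_web = {
    "http://a/": ["/b", "/c"],
    "http://a/b": ["/c", "/a"],
    "http://a/c": [],
    "http://a/a": [],
}
seen = crawl_bfs("http://a/", 2, lambda u: fake_web.get(u, []))
print(sorted(seen))  # → ['http://a/', 'http://a/a', 'http://a/b', 'http://a/c']
```

Because the depth counter lives in the outer loop, the crawl stops after exactly max_depth levels regardless of how many links each page contains.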