【Question Title】: Getting "Too many requests" error when scraping a particular website using Scrapy
【Posted】: 2017-11-03 10:23:37
【Question Description】:

I wrote a spider to fetch details from http://allevents.in. Every time I try to scrape it, I get a response body that reads:

Too many requests, please try after some time or report this problem at contact@allevents.in

I have also tried the shell command:

 scrapy shell 'http://allevents.in/new%20delhi/all'

But I still get the same reply in response.body. I have tried other websites, such as Amazon, and they work fine. The URL above can also be fetched with requests or urllib.urlopen().
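For reference, a minimal sketch of that check using requests (the URL is taken from the question; note that urllib.urlopen is the Python 2 name, while Python 3 uses urllib.request.urlopen):

import requests

# The same URL fetched outside Scrapy returns the normal page,
# which suggests the block is keyed to something about the spider's requests
url = 'http://allevents.in/new%20delhi/all'
resp = requests.get(url)
print(resp.status_code)
print(resp.text[:200])  # first 200 characters of the page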

Here is my settings.py file:

# -*- coding: utf-8 -*-

# Scrapy settings for tutorial project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'tutorial'

SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'tutorial (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
# CONCURRENT_REQUESTS = 1

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 5
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 1
CONCURRENT_REQUESTS_PER_IP = 1

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
# TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
# }

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'tutorial.middlewares.TutorialSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
# #    'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
#      'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': None,
#      # 'tutorial.middlewares.ProxyMiddleware': 100,
# }

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'tutorial.pipelines.TutorialPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
AUTOTHROTTLE_ENABLED = True
# The initial download delay
AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
# HTTPCACHE_ENABLED = True
# HTTPCACHE_EXPIRATION_SECS = 0
# HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

I am a beginner with Scrapy. Please help.

【Question Discussion】:

  • Basically, allevents must be tracking your scraper, and as a precaution they have probably blocked the IP address of your source system. The allevents service has banned your spider, and there is nothing wrong with that. You could change your IP and check whether that works for you.
  • I have tried with a different IP, but I still get the same output. @MaheshKaria
  • There are more factors to consider, such as the client type, the requests themselves, and so on, so at this point we really don't know all the criteria they block on.

Tags: python web-scraping scrapy python-requests


【Solution 1】:

Scrapy uses multiple concurrent requests (8 per domain by default) to crawl the site you specify. It seems that allevents.in does not like you hitting it that hard.

Most likely, your solution will be to set one of the following config options:

  • CONCURRENT_REQUESTS_PER_DOMAIN (defaults to 8; try a smaller number)
  • CONCURRENT_REQUESTS_PER_IP (defaults to 0; if set to a positive number, it overrides the previous option)

Alternatively, you can also use the AutoThrottle extension.
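If you would rather not change the project-wide settings.py, the same options can also be set per spider through Scrapy's standard custom_settings attribute. A minimal sketch, with an illustrative spider name and values:

import scrapy

class AllEventsSpider(scrapy.Spider):
    name = 'allevents'  # hypothetical name, for illustration only
    start_urls = ['http://allevents.in/new%20delhi/all']

    # Per-spider overrides; these take precedence over settings.py
    custom_settings = {
        'CONCURRENT_REQUESTS_PER_DOMAIN': 1,
        'DOWNLOAD_DELAY': 5,
        'AUTOTHROTTLE_ENABLED': True,
        'AUTOTHROTTLE_TARGET_CONCURRENCY': 1.0,
    }

    def parse(self, response):
        # Log the start of the body to confirm the block message is gone
        self.logger.info('First 100 bytes: %s', response.body[:100])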

【Discussion】:

  • How do I set these? I have tried it with scrapy shell. @yorah
  • You can pass command-line options to scrapy shell with the following syntax: scrapy shell -s CONCURRENT_REQUESTS_PER_DOMAIN='8' http://...
  • Also, if you launched scrapy shell from your project path, your project settings should be picked up automatically.
  • I tried CONCURRENT_REQUESTS_PER_DOMAIN=1 and still get the same error.
  • Basically, the website is telling you: "Don't hit me that hard". By setting CONCURRENT_REQUESTS_PER_DOMAIN=1 you limit the number of connections to 1. You can also try setting the option DOWNLOAD_DELAY=5 to put a 5-second delay between those requests (feel free to increase/decrease this value to find the sweet spot).
【Solution 2】:

Hi, try setting CONCURRENT_REQUESTS = 1 in settings.py and, if you find that it works, increase it gradually. If you still get the same warning, try a higher DOWNLOAD_DELAY.
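Note that in the settings.py shown in the question, CONCURRENT_REQUESTS is still commented out, so the default of 16 applies. A minimal sketch of this answer's suggestion, with illustrative values:

# settings.py: start as slow as possible, then loosen gradually
CONCURRENT_REQUESTS = 1   # global cap on concurrent requests (default: 16)
DOWNLOAD_DELAY = 10       # raise this if the block message persists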

【Discussion】:

【Solution 3】:

Use scrapy-random-proxies instead of applying AutoThrottle; there is no fun in limiting your crawler when you can achieve the target at a much higher speed. Believe me, if you use hundreds of proxies they will never know where you are {more is always better}.
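For illustration only, here is a generic rotating-proxy downloader middleware sketch. This is not the actual scrapy-random-proxies API, and the proxy addresses are hypothetical placeholders:

import random

class RandomProxyMiddleware:
    """Assign a random proxy from a fixed pool to each outgoing request."""

    # Hypothetical proxy endpoints; replace with your own pool
    PROXIES = [
        'http://10.0.0.1:8080',
        'http://10.0.0.2:8080',
    ]

    def process_request(self, request, spider):
        # Scrapy's built-in HttpProxyMiddleware honours request.meta['proxy']
        request.meta['proxy'] = random.choice(self.PROXIES)

# Enable it in settings.py, e.g.:
# DOWNLOADER_MIDDLEWARES = {
#     'tutorial.middlewares.RandomProxyMiddleware': 100,
# }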

【Discussion】:
