【Posted】: 2015-10-09 21:08:24
【Question】:
This is probably easy for experienced users, but I am new to Scrapy, and what I want is a spider that crawls to a user-defined page. Right now I am trying to modify the allow pattern in __init__, but it doesn't seem to take effect. A summary of my current code:
class MySpider(CrawlSpider):
    name = "example"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/alpha"]
    pattern = "/[\d]+$"
    rules = [
        Rule(LinkExtractor(allow=[pattern], restrict_xpaths=('//*[@id="imgholder"]/a',)),
             callback='parse_items', follow=True),
    ]

    def __init__(self, argument='', *a, **kw):
        super(MySpider, self).__init__(*a, **kw)
        # some inputs and operations based on those inputs
        i = str(raw_input())  # another input
        # need to change the pattern here
        self.pattern = '/' + i + self.pattern
        # some other operations

    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        img = hxs.select('//*[@id="imgholder"]/a')
        item = MyItem()
        item["field1"] = "something"
        item["field2"] = "something else"
        yield item
Now suppose the user enters i=2. I want the spider to follow URLs ending in /2/*some number*, but what actually happens is that it crawls anything matching the /*some number* pattern. The update doesn't seem to propagate. I am using Scrapy version 1.0.1.
Is there any way to do this? Thanks in advance.
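The root cause is that the class-level rules list is built once, at class-definition time, from the class-level pattern; reassigning self.pattern afterwards never touches the Rule objects that were already created (and CrawlSpider additionally compiles self.rules inside its own __init__, so in real Scrapy you would want to build self.rules before calling super().__init__). A minimal sketch of the mechanism, using a stand-in FakeRule instead of Scrapy so it runs anywhere; the class and attribute names are illustrative, not Scrapy's API:

```python
class FakeRule:
    """Stand-in for scrapy's Rule: remembers the allow pattern it was built with."""
    def __init__(self, allow):
        self.allow = allow

class SpiderLikeOriginal:
    pattern = r"/[\d]+$"
    rules = [FakeRule(allow=pattern)]  # evaluated once, right here

    def __init__(self, i):
        # Too late: rules above were already built with the old pattern.
        self.pattern = '/' + i + self.pattern

class SpiderFixed:
    pattern = r"/[\d]+$"

    def __init__(self, i):
        # Build the rules inside __init__, using the user input; with a real
        # CrawlSpider you would do this before super().__init__(), because
        # that is where Scrapy compiles self.rules.
        self.rules = [FakeRule(allow='/' + i + self.pattern)]

orig = SpiderLikeOriginal('2')
fixed = SpiderFixed('2')
print(orig.rules[0].allow)   # stale pattern: '/[\d]+$'
print(fixed.rules[0].allow)  # '/2/[\d]+$'
```

So the likely fix for the code above is to move the rules construction into __init__ (before the super().__init__ call) so the pattern already contains the user's input when the rules are compiled.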
【Discussion】:
Tags: python web-scraping scrapy screen-scraping