【Question Title】: scrapy - handling multiple types of items - multiple and related Django models and saving them to database in pipelines
【Posted】: 2025-12-12 18:15:01
【Question Description】:

I have the following Django models. I am not sure what the best way is to save these inter-related objects to the database when a spider scrapes them via a Scrapy pipeline. It seems that a Scrapy pipeline is only meant to handle one "type" of item.

models.py

class Parent(models.Model):
    field1 = models.CharField(max_length=255)


class ParentX(models.Model):
    field2 = models.CharField(max_length=255)
    parent = models.OneToOneField(Parent, on_delete=models.CASCADE,
                                  related_name='extra_properties')


class Child(models.Model):
    field3 = models.CharField(max_length=255)
    parent = models.ForeignKey(Parent, on_delete=models.CASCADE,
                               related_name='childs')

items.py

# uses DjangoItem https://github.com/scrapy-plugins/scrapy-djangoitem

class ParentItem(DjangoItem):
    django_model = Parent

class ParentXItem(DjangoItem):
    django_model = ParentX

class ChildItem(DjangoItem):
    django_model = Child

spiders.py

class MySpider(scrapy.Spider):
    name = "myspider"
    allowed_domains = ["abc.com"]
    start_urls = [
        "http://www.example.com",       # this page has ids of several Parent objects whose full details are in their individual pages

    ]

    def parse(self, response):
        parent_object_ids = [] #list from scraping the ids of the parent objects

        for parent_id in parent_object_ids:
            url = "http://www.example.com/%s" % parent_id
            yield scrapy.Request(url, callback=self.parse_detail)

    def parse_detail(self, response):
        p = ParentItem()
        px = ParentXItem()
        c1 = ChildItem()
        c2 = ChildItem()

        # populate p, px and c1, c2 with various data from the response.body

        yield p
        yield px
        yield c1
        yield c2  # ... etc. c3, c4

pipelines.py -- not sure what to do here

class ScrapytestPipeline(object):
    def process_item(self, item, spider):

        # This is where storage to the database typically happens.
        # At this point, I don't know whether the item is a ParentItem,
        # a ParentXItem or a ChildItem.

        # Ideally, I want to first create the Parent obj, then the ParentX obj
        # (and point p.extra_properties = px), and then the child objects:
        # c1.parent = p, c2.parent = p

        # But I am not sure how to have the pipeline do this sequentially,
        # given that the items may arrive in any order.

【Question Discussion】:

  • Does isinstance(item, ParentItem) help?
  • @dowjones123 did you solve this problem?

Tags: python django scrapy scrapy-spider scrapy-pipeline


【Solution 1】:

If you want to do this in a sequential way, I suppose it could work if you store one item inside another and unpack it in the pipeline.

I think it is easier, though, to relate the objects before saving them to the database.

In spiders.py, where you "populate p, px and c1, c2 with various data from the response.body", you can also populate a "fake" primary key constructed from the object's data.

Then you can save the data, and update the model if it has already been scraped, all in a single pipeline:

class ItemPersistencePipeline(object):
    def process_item(self, item, spider):
        try:
            item_model = item_to_model(item)
        except TypeError:
            return item
        model, created = get_or_create(item_model)
        try:
            update_model(model, item_model)
        except Exception as e:
            return e
        return item

And, of course, the helper methods:

def item_to_model(item):
    model_class = getattr(item, 'django_model', None)
    if not model_class:
        raise TypeError("Item is not a `DjangoItem` or is misconfigured")
    return item.instance

def get_or_create(model):
    model_class = type(model)
    created = False
    try:
        #We have no unique identifier at the moment
        #use the model.primary for now
        obj = model_class.objects.get(primary=model.primary)
    except model_class.DoesNotExist:
        created = True
        obj = model  # DjangoItem created a model for us.

    return (obj, created)

from django.forms.models import model_to_dict

def update_model(destination, source, commit=True):
    pk = destination.pk

    source_dict = model_to_dict(source)
    for (key, value) in source_dict.items():
        setattr(destination, key, value)

    setattr(destination, 'pk', pk)

    if commit:
        destination.save()

    return destination
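The pk-preserving copy that update_model performs can be demonstrated without Django. In this sketch, SimpleNamespace stands in for model instances and vars() for django.forms.models.model_to_dict; update_fields is a hypothetical stand-in for update_model:

```python
from types import SimpleNamespace

def update_fields(destination, source):
    # copy every field from `source` onto `destination`
    # while preserving the destination's primary key
    pk = destination.pk
    for key, value in vars(source).items():
        setattr(destination, key, value)
    destination.pk = pk  # keep the existing row's identity
    return destination

existing = SimpleNamespace(pk=7, field1="old", primary="abc")  # already in the DB
scraped = SimpleNamespace(pk=None, field1="new", primary="abc")  # freshly scraped
updated = update_fields(existing, scraped)  # pk stays 7, field1 becomes "new"
```

The point is that a re-scrape overwrites the stored field values without creating a duplicate row.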

From: How to update DjangoItem in Scrapy

You should also define a "primary" field on the Django models, used to check whether a newly scraped item has already been saved.

models.py

class Parent(models.Model):
    field1 = models.CharField(max_length=255)
    # could also be primary_key=True
    primary = models.CharField(max_length=80)


class ParentX(models.Model):
    field2 = models.CharField(max_length=255)
    parent = models.OneToOneField(Parent, on_delete=models.CASCADE,
                                  related_name='extra_properties')
    primary = models.CharField(max_length=80)


class Child(models.Model):
    field3 = models.CharField(max_length=255)
    parent = models.ForeignKey(Parent, on_delete=models.CASCADE,
                               related_name='childs')
    primary = models.CharField(max_length=80)
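The "fake" primary key idea above can be sketched like this: in parse_detail, give related items matching primary values so the pipeline can reconnect them after each one is saved independently. Here build_primary is a hypothetical helper, and plain dicts stand in for the DjangoItem instances:

```python
def build_primary(response_url, suffix=""):
    # derive a stable key from the detail-page URL
    return response_url.rstrip("/").rsplit("/", 1)[-1] + suffix

# inside parse_detail, response.url would be used instead of this literal
url = "http://www.example.com/12345"
p = {"field1": "...", "primary": build_primary(url)}          # "12345"
px = {"field2": "...", "primary": p["primary"]}               # same key as the parent
c1 = {"field3": "...", "primary": build_primary(url, "-c1")}  # unique per child
```

Because the keys are derived from the page rather than the database, they stay identical across re-scrapes, which is what makes the get_or_create lookup above work.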

【Discussion】:

【Solution 2】:

As eLRuLL pointed out, you can use isinstance to tell which kind of item you are handling each time.
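A minimal sketch of that dispatch, with plain dict subclasses standing in for the DjangoItem classes from the question, and an in-memory `saved` store standing in for the ORM calls (e.g. Parent.objects.create(**item)):

```python
class ParentItem(dict):
    pass

class ParentXItem(dict):
    pass

class ChildItem(dict):
    pass

class TypeDispatchPipeline:
    def __init__(self):
        # stand-in for the database: one bucket per model
        self.saved = {"parent": [], "parent_x": [], "child": []}

    def process_item(self, item, spider=None):
        # route each item to the right persistence path by its type
        if isinstance(item, ParentItem):
            self.saved["parent"].append(item)
        elif isinstance(item, ParentXItem):
            self.saved["parent_x"].append(item)
        elif isinstance(item, ChildItem):
            self.saved["child"].append(item)
        else:
            raise ValueError("unexpected item type: %r" % type(item))
        return item
```

Note this alone does not solve the ordering problem: a ChildItem can still be processed before its ParentItem, which is what the combined-item approach below avoids.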

However, if you do not want to risk child items reaching your pipeline before their parent, consider using a single scrapy item that combines the parent, parentX and children.

You may want to use nested items to do this cleanly.

Then, in your pipeline, take care of inserting the corresponding individual objects into the database.
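A sketch of that unpacking, assuming parse_detail yields one combined item such as `{"parent": {...}, "parent_x": {...}, "children": [c1, c2]}`. Plain dicts stand in for the nested scrapy items, and the in-memory `persisted` list stands in for the ORM calls:

```python
class CombinedItemPipeline:
    def __init__(self):
        self.persisted = []  # records what was "saved", in order

    def process_item(self, item, spider=None):
        # save the parent first, so the related rows have something to point at
        parent = dict(item["parent"])
        self.persisted.append(("parent", parent))

        # then the one-to-one ParentX, linked to the parent
        parent_x = {**item["parent_x"], "parent": parent}
        self.persisted.append(("parent_x", parent_x))

        # finally every child, each linked to the same parent
        for child in item["children"]:
            self.persisted.append(("child", {**child, "parent": parent}))
        return item
```

Because all related parts travel in one item, the dependency order is enforced inside a single process_item call instead of across several.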

【Discussion】:
