【Title】: Celery, Django and Scrapy: error importing from Django app
【Posted】: 2016-04-10 01:43:37
【Description】:

I'm using celery (and django-celery) to allow a user to launch periodic scrapes via the Django admin. This is part of a larger project, but I have boiled the issue down to a minimal example.

First, celery/celerybeat are running as daemons. Strangely, if I instead run them with celery -A evofrontend worker -B -l info from the Django project directory, I hit no problems at all.

When I run celery/celerybeat as daemons, however, I get a strange import error:

[2016-01-06 03:05:12,292: ERROR/MainProcess] Task evosched.tasks.scrapingTask[e18450ad-4dc3-47a0-b03d-4381a0e65c31] raised unexpected: ImportError('No module named myutils',)
Traceback (most recent call last):
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 240, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/celery/app/trace.py", line 438, in __protected_call__
    return self.run(*args, **kwargs)
  File "evosched/tasks.py", line 35, in scrapingTask
    cs = CrawlerScript('TestSpider', scrapy_settings)
  File "evosched/tasks.py", line 13, in __init__
    self.crawler = CrawlerProcess(scrapy_settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 209, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 115, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/crawler.py", line 296, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 30, in from_settings
    return cls(settings)
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 21, in __init__
    for module in walk_modules(name):
  File "/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "retail/spiders/Retail_spider.py", line 16, in <module>
ImportError: No module named myutils

That is, the spider fails to import from the Django project app, despite the relevant path being added to sys.path and django.setup() being executed.
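To pin down where (or whether) Python would load the failing name from, a small stdlib-only helper can be dropped into the spider right before the offending import. This is only a diagnostic sketch in Python 3 syntax; on the question's Python 2.7, `imp.find_module` plays the same role as `importlib.util.find_spec`:

```python
import importlib.util

def resolve(modname):
    """Return the file a module would be loaded from, or None if unresolvable."""
    spec = importlib.util.find_spec(modname)
    return spec.origin if spec is not None else None

# A stdlib module resolves to a real file; a missing name yields None.
# In the spider one would call resolve("evosched.myutils") instead.
print(resolve("json"))
print(resolve("no_such_module_xyz"))
```

If `resolve` returns None for the app module while `sys.path` looks correct, that points at a shadowing package or a partially-initialized import rather than a missing path entry.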

My hunch is that this may be caused by a "circular import" during initialization, but I'm not sure (see here for a note on the same error).

Celery daemon configuration

For completeness, the celeryd and celerybeat config scripts are:

# /etc/default/celeryd
CELERYD_NODES="worker1"

CELERY_BIN="/home/lee/Desktop/pyco/evo-scraping-min/venv/bin/celery"

CELERY_APP="evofrontend"
DJANGO_SETTINGS_MODULE="evofrontend.settings"

CELERYD_CHDIR="/home/lee/Desktop/pyco/evo-scraping-min/evofrontend"

CELERYD_OPTS="--concurrency=1"

# Workers should run as an unprivileged user.
CELERYD_USER="lee"
CELERYD_GROUP="lee"

CELERY_CREATE_DIRS=1

# /etc/default/celerybeat 
CELERY_BIN="/home/lee/Desktop/pyco/evo-scraping-min/venv/bin/celery"

CELERY_APP="evofrontend"
CELERYBEAT_CHDIR="/home/lee/Desktop/pyco/evo-scraping-min/evofrontend/"

# Django settings module
export DJANGO_SETTINGS_MODULE="evofrontend.settings"

They are largely based on the generic ones, using the Django settings and the celery bin from my virtualenv rather than the system one.

I'm also using init.d scripts, which are the generic ones.

Project structure

As for the project: it lives in /home/lee/Desktop/pyco/evo-scraping-min, and all files under it are owned by lee:lee. The directory contains both a Scrapy project (evo-retail) and a Django project (evofrontend), and the full tree looks like:

├── evofrontend
│   ├── db.sqlite3
│   ├── evofrontend
│   │   ├── celery.py
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   ├── evosched
│   │   ├── __init__.py
│   │   ├── myutils.py
│   │   └── tasks.py
│   └── manage.py
└── evo-retail
    └── retail
        ├── logs
        ├── retail
        │   ├── __init__.py
        │   ├── settings.py
        │   └── spiders
        │       ├── __init__.py
        │       └── Retail_spider.py
        └── scrapy.cfg

Relevant Django project files

Now the relevant files. evofrontend/evofrontend/celery.py looks like:

# evofrontend/evofrontend/celery.py
from __future__ import absolute_import
import os
from celery import Celery

# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'evofrontend.settings')

from django.conf import settings

app = Celery('evofrontend')

# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

The potentially relevant settings from the Django settings file, evofrontend/evofrontend/settings.py:

import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PROJECT_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir))

INSTALLED_APPS = (
    ...
    'djcelery',
    'evosched',
)

# Celery settings
BROKER_URL = 'amqp://guest:guest@localhost//'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'Europe/London'
CELERYD_MAX_TASKS_PER_CHILD = 1  # Each worker is killed after one task, this prevents issues with reactor not being restartable
# Use django-celery backend database
CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend'
# Set periodic task
CELERYBEAT_SCHEDULER = "djcelery.schedulers.DatabaseScheduler"

The tasks.py in the scheduling app evosched looks like this (it simply launches the Scrapy spider with the relevant settings after changing directory):

# evofrontend/evosched/tasks.py
from __future__ import absolute_import
from celery import shared_task
from celery.utils.log import get_task_logger
logger = get_task_logger(__name__)
import os
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from django.conf import settings as django_settings


class CrawlerScript(object):
    def __init__(self, spider, scrapy_settings):
        self.crawler = CrawlerProcess(scrapy_settings)
        self.spider = spider  # just a string

    def run(self, **kwargs):
        # Pass the kwargs (usually command line args) to the crawler
        self.crawler.crawl(self.spider, **kwargs)
        self.crawler.start()


@shared_task
def scrapingTask(**kwargs):

    logger.info("Start scrape...")

    # scrapy.cfg file here pointing to settings...
    base_dir = django_settings.BASE_DIR
    os.chdir(os.path.join(base_dir, '..', 'evo-retail/retail'))
    scrapy_settings = get_project_settings()

    # Run crawler
    cs = CrawlerScript('TestSpider', scrapy_settings)
    cs.run(**kwargs)
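As an aside on the `os.chdir` step in the task above: Scrapy's `get_project_settings()` also honours the `SCRAPY_SETTINGS_MODULE` environment variable, so the task could point Scrapy at the settings module directly instead of changing the worker's working directory. A hedged sketch, with paths mirroring the question's tree:

```python
import os
import sys

# Paths mirroring the question's layout.
DJANGO_BASE = "/home/lee/Desktop/pyco/evo-scraping-min/evofrontend"
SCRAPY_PROJECT = os.path.normpath(
    os.path.join(DJANGO_BASE, "..", "evo-retail", "retail"))

# Point Scrapy at the settings module and make the project importable,
# rather than os.chdir()-ing into the Scrapy project directory.
os.environ["SCRAPY_SETTINGS_MODULE"] = "retail.settings"
if SCRAPY_PROJECT not in sys.path:
    sys.path.insert(0, SCRAPY_PROJECT)

# from scrapy.utils.project import get_project_settings
# scrapy_settings = get_project_settings()  # no chdir needed
```

This keeps the worker's cwd stable, which matters when one worker process handles more than one task.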

evofrontend/evosched/myutils.py contains only (in this minimal example):

# evofrontend/evosched/myutils.py
SCRAPY_XHR_HEADERS = 'SOMETHING'

Relevant Scrapy project files

In the full Scrapy project the settings file looks like:

# evo-retail/retail/retail/settings.py
BOT_NAME = 'retail'

import os
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

SPIDER_MODULES = ['retail.spiders']
NEWSPIDER_MODULE = 'retail.spiders'

And (in this minimal example) the spider is simply:

# evo-retail/retail/retail/spiders/Retail_spider.py
from scrapy.conf import settings as scrapy_settings
from scrapy.spiders import Spider
from scrapy.http import Request
import sys
import django
import os
import posixpath
SCRAPY_BASE_DIR = scrapy_settings['PROJECT_ROOT']
DJANGO_DIR = posixpath.normpath(os.path.join(SCRAPY_BASE_DIR, '../../../', 'evofrontend'))
sys.path.insert(0, DJANGO_DIR)
os.environ.setdefault("DJANGO_SETTINGS_MODULE", 'evofrontend.settings')
django.setup()
from evosched.myutils import SCRAPY_XHR_HEADERS

class RetailSpider(Spider):

    name = "TestSpider"

    def start_requests(self):
        print SCRAPY_XHR_HEADERS
        yield Request(url='http://www.google.com', callback=self.parse)

    def parse(self, response):
        print response.url
        return []

Edit:

Through much trial and error I've found that if the app I'm trying to import from is listed in my INSTALLED_APPS Django setting, then the import fails with the error above, but if I remove the app from there, I no longer get the import error (e.g. removing evosched from INSTALLED_APPS makes the import in the spider work fine...). Clearly not a solution, but it may be a clue.

Edit 2

I printed sys.path just before the failing import in the spider, and the result was:

/home/lee/Desktop/pyco/evo-scraping-min/evofrontend/../evo-retail/retail 
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/plat-x86_64-linux-gnu
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-tk
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-old  
/home/lee/Desktop/pyco/evo-scraping-min/venv/lib/python2.7/lib-dynload
/usr/lib/python2.7
/usr/lib/python2.7/plat-x86_64-linux-gnu
/usr/lib/python2.7/lib-tk
/home/lee/Desktop/pyco/evo-scraping-min/venv/local/lib/python2.7/site-packages
/home/lee/Desktop/pyco/evo-scraping-min/evofrontend 
/home/lee/Desktop/pyco/evo-scraping-min/evo-retail/retail

Edit 3

If I do import evosched and then print dir(evosched), I see "tasks", and if I choose to include such a file I can also see "models", so importing from models actually works. However, I do not see "myutils". Even from evosched import myutils fails, and it also fails with the statement placed inside a function below rather than at module level (which I thought might dodge a circular import issue...). A plain import evosched works... possibly import evosched.myutils would work. Haven't tried that yet...
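One way to test the "which evosched actually got imported" theory is to list exactly which submodules exist on the `__path__` of the imported package; if the daemon is picking up a different evosched directory, myutils will be missing from this list. A stdlib-only sketch (Python 3 syntax, demonstrated on the stdlib `json` package since evosched only exists in the question's project):

```python
import importlib
import pkgutil

def submodules(pkg_name):
    """Return the submodule names found on the imported package's __path__."""
    pkg = importlib.import_module(pkg_name)
    print("loaded from:", pkg.__path__)  # reveals WHICH copy was imported
    return sorted(m.name for m in pkgutil.iter_modules(pkg.__path__))

# In the question's project this would be submodules("evosched") and
# should include 'myutils'; here we demo on the stdlib 'json' package.
print(submodules("json"))
```

If `submodules("evosched")` omits myutils, the `__path__` printout shows the directory the daemon is really loading the package from.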

【Discussion】:

    Tags: python django scrapy celery django-celery


    【Solution 1】:

    It seems the celery daemon is running with the system Python rather than the python binary from the virtualenv. You need to use

    # Python interpreter from environment. 
    ENV_PYTHON="$CELERYD_CHDIR/env/bin/python"
    

    as described here, to tell celeryd to run with the Python from the virtualenv.

    【Comments】:

    • Damn, I thought you were onto something there, but I added that to /etc/default/celeryd after setting CELERYD_CHDIR, then restarted everything and still get the same import error. Just to double-check this isn't the issue, I also did print 'Python bin is %s' % sys.executable right before the offending import line, and in the celery logs I see "Python bin is /home/lee/Desktop/pyco/evo-scraping-min/venv/bin/python".
    • Through much trial and error I've found that if the app I'm trying to import from is in my INSTALLED_APPS Django setting then the import fails, but if I remove the app from there I no longer get the import error (e.g. removing evosched from INSTALLED_APPS makes the import in the spider work fine...)
    • So it seems celery just can't find that particular app in your directory structure. If you remove evosched, then evosched/tasks.py will never be used, so Python will never try to import retail/spiders/Retail_spider.py. But that's rather useless, since the scrapy app then does nothing.
    • Yes, I tried with another module to verify (rather than evosched)... the same problem occurs with any Django app in the project that is listed in INSTALLED_APPS when I try to import from it in the spider.
    • I printed sys.path just before the failing import (after adding the app back to Django's INSTALLED_APPS); I'll add it to the OP.