【Posted】: 2019-03-17 16:00:45
【Question】:
I am currently building a Django application that should run some web-scraping code right after startup and then serve some of the scraped data through a REST API. One requirement is that it must run on Docker, which leads me to the following problem: with docker-compose up the image builds correctly and the db service starts, but then I get an error saying that a relation in my database does not exist. I can fix this by running docker-compose run [service] manage.py migrate, but that is a manual workaround and will not work when someone clones the app from git and simply runs docker-compose up.
I put command: python /teonite_webscraper/manage.py migrate --noinput in docker-compose.yml, but for some reason it does not seem to run.
docker-compose.yml:
version: '3.6'

services:
  db:
    image: postgres:10.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  web:
    build: .
    command: python /teonite_webscraper/manage.py migrate --noinput
    command: python /teonite_webscraper/manage.py runserver 0.0.0.0:8080
    volumes:
      - .:/teonite_webscraper
    ports:
      - 8080:8080
    environment:
      - SECRET_KEY=changemeinprod
    depends_on:
      - db

volumes:
  postgres_data:
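Since a YAML mapping keeps only one value per key, the two command: entries under web collapse into one. A minimal fix (a sketch, not taken from the question itself) is to chain both steps into a single command entry:

```yaml
web:
  build: .
  # One command entry: migrations run first, then the dev server starts.
  command: sh -c "python /teonite_webscraper/manage.py migrate --noinput && python /teonite_webscraper/manage.py runserver 0.0.0.0:8080"
```

The sh -c wrapper is needed so that && is interpreted by a shell rather than passed to Python as arguments.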
Dockerfile:
# Use an official Python runtime as a parent image
FROM python:3.7
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set the working directory
WORKDIR /teonite_webscraper
# Copy the current directory contents into the container
COPY . /teonite_webscraper
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
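An alternative sketch (the filename entrypoint.sh and the retry loop are assumptions, not something from the question): copy a small startup script into the image and point the Dockerfile at it, so migrations always run before the server starts:

```sh
#!/bin/sh
# Hypothetical /entrypoint.sh: retry migrations until Postgres accepts
# connections, then hand the process over to the dev server.
set -e
until python /teonite_webscraper/manage.py migrate --noinput; do
  echo "Database not ready yet, retrying in 2s..."
  sleep 2
done
exec python /teonite_webscraper/manage.py runserver 0.0.0.0:8080
```

The script has to be copied into the image, marked executable (RUN chmod +x /entrypoint.sh) and referenced with ENTRYPOINT ["/entrypoint.sh"]; a "file does not exist" error with an entrypoint is typically caused by a missing copy step or Windows line endings in the script.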
The code that runs during the initialization phase lives in apps.py inside the Django app folder, in the ready() method, and looks like this:
from django.apps import AppConfig


class ScraperConfig(AppConfig):
    name = 'scraper'

    def ready(self):
        import requests
        from bs4 import BeautifulSoup
        from .helpers import get_links
        from .models import Article, Author
        import json
        import re

        # For the implementation check helpers.py; grabs all the article links from the blog
        links = get_links('https://teonite.com/blog/')
        # List of objects to batch-insert into the DB to save I/Os
        objects_to_inject = []
        links_in_db = list(Article.objects.all().values_list('article_link', flat=True))
        authors_in_db = list(Author.objects.all().values_list('author_stub', flat=True))
        for link in links:
            if link not in links_in_db:
                # Grab the article page
                blog_post = requests.get(link)
                # Prepare soup
                soup = BeautifulSoup(blog_post.content, 'lxml')
                # Gets the JSON with author data from the page meta
                json_element = json.loads(soup.find_all('script')[1].get_text())
                # All of the below could be done within Article() as parameters, but for clarity
                # I prefer separate lines, and the DB models cannot be accessed outside
                # ready() at this stage anyway, so refactoring into a separate function isn't possible
                post_data = Article()
                post_data.article_link = link
                post_data.article_content = soup.find('section', class_='post-content').get_text()
                # The regex only grabs the last part of the author's URL, which contains the "nickname"
                author_stub = re.search(r'\/(\w+\-?_?\.?\w+)\/$', json_element['author']['url']).group(1)
                # Check if the author is already in the DB; if so, assign the key.
                if author_stub in authors_in_db:
                    post_data.article_author = Author.objects.get(author_stub=author_stub)
                else:
                    # If not, create a new Author row and then assign it.
                    new_author = Author(author_fullname=json_element['author']['name'],
                                        author_stub=author_stub)
                    new_author.save()
                    # Unlike links, which are unique, an author may appear many times, and we only
                    # grab them from the DB once at the beginning, so add the stub to the checklist
                    # here to avoid trying to add the same author multiple times
                    authors_in_db.append(author_stub)
                    post_data.article_author = new_author
                post_data.article_title = json_element['headline']
                # Append the object to the list and continue
                objects_to_inject.append(post_data)
        Article.objects.bulk_create(objects_to_inject)
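As a standalone illustration of the author_stub extraction above (the sample URL here is an assumption made up for illustration; the real value comes from json_element['author']['url']):

```python
import re

# Hypothetical author URL of the shape the scraper expects.
author_url = 'https://teonite.com/authors/john-doe/'

# Same pattern as in ready(): it captures the final path segment (the "nickname").
author_stub = re.search(r'\/(\w+\-?_?\.?\w+)\/$', author_url).group(1)
print(author_stub)  # → john-doe
```

Note that the pattern requires a trailing slash and at least two word characters in the segment, so URLs without a trailing slash would make re.search return None.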
I know that accessing the database inside ready() is not best practice, but I don't know how else to run this code at Django application startup without wiring it to a view (which the spec does not allow).
This is the log I get after trying to run docker-compose up:
db_1 | 2018-10-12 11:46:55.928 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2018-10-12 11:46:55.928 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2018-10-12 11:46:55.933 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2018-10-12 11:46:55.955 UTC [19] LOG: database system was interrupted; last known up at 2018-10-12 11:40:40 UTC
db_1 | 2018-10-12 11:46:56.159 UTC [19] LOG: database system was not properly shut down; automatic recovery in progress
db_1 | 2018-10-12 11:46:56.161 UTC [19] LOG: redo starts at 0/15C0320
db_1 | 2018-10-12 11:46:56.161 UTC [19] LOG: invalid record length at 0/15C0358: wanted 24, got 0
db_1 | 2018-10-12 11:46:56.161 UTC [19] LOG: redo done at 0/15C0320
db_1 | 2018-10-12 11:46:56.172 UTC [1] LOG: database system is ready to accept connections
db_1 | 2018-10-12 11:48:06.831 UTC [26] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 11:48:06.831 UTC [26] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
db_1 | 2018-10-12 11:48:10.649 UTC [27] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 11:48:10.649 UTC [27] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
db_1 | 2018-10-12 11:48:36.193 UTC [28] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 11:48:36.193 UTC [28] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
db_1 | 2018-10-12 11:48:39.820 UTC [29] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 11:48:39.820 UTC [29] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
web_1 | /usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
web_1 | """)
db_1 | 2018-10-12 12:02:03.474 UTC [44] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 12:02:03.474 UTC [44] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
web_1 | /usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
web_1 | """)
db_1 | 2018-10-12 12:02:07.084 UTC [45] ERROR: relation "scraper_article" does not exist at character 46
db_1 | 2018-10-12 12:02:07.084 UTC [45] STATEMENT: SELECT "scraper_article"."article_link" FROM "scraper_article"
web_1 | Unhandled exception in thread started by <function check_errors.<locals>.wrapper at 0x7fb5e5ac6e18>
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
web_1 | return self.cursor.execute(sql, params)
web_1 | psycopg2.ProgrammingError: relation "scraper_article" does not exist
web_1 | LINE 1: SELECT "scraper_article"."article_link" FROM "scraper_articl...
web_1 | ^
web_1 |
web_1 |
web_1 | The above exception was the direct cause of the following exception:
web_1 |
web_1 | Traceback (most recent call last):
web_1 | File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 225, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
web_1 | autoreload.raise_last_exception()
web_1 | File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 248, in raise_last_exception
web_1 | raise _exception[1]
web_1 | File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 337, in execute
web_1 | autoreload.check_errors(django.setup)()
web_1 | File "/usr/local/lib/python3.7/site-packages/django/utils/autoreload.py", line 225, in wrapper
web_1 | fn(*args, **kwargs)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 120, in populate
web_1 | app_config.ready()
web_1 | File "/teonite_webscraper/scraper/apps.py", line 19, in ready
web_1 | links_in_db = list(Article.objects.all().values_list('article_link', flat=True))
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 268, in __iter__
web_1 | self._fetch_all()
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 1186, in _fetch_all
web_1 | self._result_cache = list(self._iterable_class(self))
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 176, in __iter__
web_1 | for row in compiler.results_iter(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size):
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1017, in results_iter
web_1 | results = self.execute_sql(MULTI, chunked_fetch=chunked_fetch, chunk_size=chunk_size)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/compiler.py", line 1065, in execute_sql
web_1 | cursor.execute(sql, params)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 100, in execute
web_1 | return super().execute(sql, params)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 68, in execute
web_1 | return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers
web_1 | return executor(sql, params, many, context)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
web_1 | return self.cursor.execute(sql, params)
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/utils.py", line 89, in __exit__
web_1 | raise dj_exc_value.with_traceback(traceback) from exc_value
web_1 | File "/usr/local/lib/python3.7/site-packages/django/db/backends/utils.py", line 85, in _execute
web_1 | return self.cursor.execute(sql, params)
web_1 | django.db.utils.ProgrammingError: relation "scraper_article" does not exist
web_1 | LINE 1: SELECT "scraper_article"."article_link" FROM "scraper_articl...
I have tried using an entrypoint, but I ended up with an error saying the file did not exist. I also tried an additional service that depends on db, builds the image, and runs migrate before the web server starts; that did not work either, and the web service exited with code 0.
Solved (solution in the answer)
【Comments】:
- You cannot use duplicate keys in YAML. At least when it is converted to Python (docker-compose is written in Python), the first command is overwritten. Either make it a single command, add a script, or run the migration by attaching to the container: docker exec -ti container bash.
- Please move your solution into its own answer, thanks.
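The overwrite behaviour described in the comments can be reproduced in plain Python, since docker-compose ultimately builds a dict from each YAML mapping and a repeated key keeps only its last value:

```python
# A dict literal with a repeated key mirrors what happens when docker-compose
# parses a YAML mapping with two "command" keys: the last value silently wins.
service = {
    "command": "python manage.py migrate --noinput",
    "command": "python manage.py runserver 0.0.0.0:8080",
}
print(service["command"])  # only the runserver command survives
print(len(service))        # a single key remains
```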
Tags: python django docker docker-compose