[Question title]: Postgres export large table to another database
[Posted]: 2021-03-02 22:22:22
[Question]:

What is the problem? I have a table in my Postgres database with about 56 million rows, roughly 20 GB in total. It lives on a local machine with 16 GB of RAM and an i7-7700 @ 3.6 GHz. To manage my databases I use DataGrip, with several database server connections open at once. I need to export the table from one server to another, but when I try to do it with a simple mouse drag (from the local server to the remote one), I get the following error: "Database client process needs more memory to perform the request"

DataGrip supports exporting/importing tables. The DataGrip advisor says:

To configure: open the "PostgreSQL 10 - postgres@localhost" data source properties, go to the "Advanced" tab and add "-XmxNNNm" to the "VM options" field, where NNN is a number of megabytes (e.g. -Xmx256m).

I tried several values for the VM options (256, 1024, 8048) and also adjusted the configuration of my local Postgres server, but it did not solve the problem. Here is the configuration:

#effective_cache_size = 8GB

#------------------------------------------------------------------------------
# RESOURCE USAGE (except WAL)
#------------------------------------------------------------------------------

# - Memory -

#shared_buffers = 4GB           # min 128kB
                    # (change requires restart)
#huge_pages = try           # on, off, or try
                    # (change requires restart)
#temp_buffers = 256MB           # min 800kB
#max_prepared_transactions = 0      # zero disables the feature
                    # (change requires restart)
# Caution: it is not advisable to set max_prepared_transactions nonzero unless
# you actively intend to use prepared transactions.
#work_mem = 4MB             # min 64kB
#maintenance_work_mem = 1024MB      # min 1MB
#replacement_sort_tuples = 150000   # limits use of replacement selection sort
#autovacuum_work_mem = -1       # min 1MB, or -1 to use maintenance_work_mem
#max_stack_depth = 2MB          # min 100kB
dynamic_shared_memory_type = windows    # the default is the first option
                    # supported by the operating system:
                    #   posix
                    #   sysv
                    #   windows
                    #   mmap
                    # use none to disable dynamic shared memory
                    # (change requires restart)

# - Disk -

#temp_file_limit = -1           # limits per-process temp file space
                    # in kB, or -1 for no limit

# - Kernel Resource Usage -

#max_files_per_process = 1000       # min 25
                    # (change requires restart)
#shared_preload_libraries = ''      # (change requires restart)

# - Cost-Based Vacuum Delay -

#vacuum_cost_delay = 0          # 0-100 milliseconds
#vacuum_cost_page_hit = 1       # 0-10000 credits
#vacuum_cost_page_miss = 10     # 0-10000 credits
#vacuum_cost_page_dirty = 20        # 0-10000 credits
#vacuum_cost_limit = 200        # 1-10000 credits

# - Background Writer -

#bgwriter_delay = 200ms         # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100        # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0      # 0-10.0 multiplier on buffers scanned/round
#bgwriter_flush_after = 0       # measured in pages, 0 disables

# - Asynchronous Behavior -

#effective_io_concurrency = 0       # 1-1000; 0 disables prefetching
#max_worker_processes = 8       # (change requires restart)
#max_parallel_workers_per_gather = 2    # taken from max_parallel_workers
#max_parallel_workers = 8       # maximum number of max_worker_processes that
                    # can be used in parallel queries
#old_snapshot_threshold = -1        # 1min-60d; -1 disables; 0 is immediate
                    # (change requires restart)
#backend_flush_after = 0        # measured in pages, 0 disables

[Comments]:

  • Maybe use pg_dump/pg_restore
  • If there is a reasonable way to split the data, you could try exporting it in chunks: COPY parts of the table out of the first server, then COPY them into the second one.
  • You did not follow the advice you were given, namely to go to PostgreSQL 10 - postgres@localhost in DataGrip and change the suggested setting. As the error says, the problem is on the client side, not the server side.
  • @Adrian Klaver, I set the VM option to -Xmx256m, no effect
  • Please state that in your question. Also, 256 is just the example value from the advice; you may need to raise it. How are you doing the export in DataGrip (add that information to the question)?
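The chunked-export suggestion above can be sketched as follows. The table name, the integer `id` key, the chunk size, and the row count are assumptions for illustration (~56 million rows, as in the question). The generated statements are only printed here for review; each one would be run through psql on the source server with its output piped into a `COPY ... FROM STDIN` on the target:

```shell
TABLE=big_table   # hypothetical table with an integer primary key "id"
MAX_ID=56000000   # ~56 million rows, as in the question
CHUNK=10000000    # rows per slice

lo=1
while [ "$lo" -le "$MAX_ID" ]; do
  hi=$((lo + CHUNK - 1))
  if [ "$hi" -gt "$MAX_ID" ]; then hi=$MAX_ID; fi
  # Each statement exports one slice of the table; to transfer it, e.g.:
  #   psql -h localhost -d source_db -c "<statement>" \
  #     | psql -h remote_host -d target_db -c "COPY $TABLE FROM STDIN"
  echo "COPY (SELECT * FROM $TABLE WHERE id BETWEEN $lo AND $hi) TO STDOUT"
  lo=$((hi + 1))
done
```

Because each slice is streamed as COPY data, neither the client nor the server has to hold more than one chunk's worth of rows at a time, and a failed chunk can be retried individually.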

Tags: postgresql postgresql-10 datagrip


[Solution 1]:

DataGrip loads the entire table into RAM and only then tries to export it, so a 20 GB table easily exhausts the client's heap. For best performance, use the native tools instead.
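As a sketch of the native-tools approach (the host names, database names, and table name below are placeholders, not taken from the question), the dump can be streamed from the local server straight into the remote one, so the client never materializes the whole table in memory:

```shell
# Hypothetical connection details -- replace with your own.
SRC_HOST=localhost
SRC_DB=source_db
DST_HOST=remote.example.com
DST_DB=target_db
TABLE=big_table

# Build the pipeline as a string so it can be reviewed first;
# paste it into a terminal (or pipe it to sh) to actually run it.
PIPELINE="pg_dump -h $SRC_HOST -U postgres -d $SRC_DB -t $TABLE --no-owner | psql -h $DST_HOST -U postgres -d $DST_DB"
echo "$PIPELINE"
```

pg_dump emits the table as a CREATE TABLE plus COPY data stream, so peak memory use stays small regardless of table size, and the target table's schema is created by the dump itself.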

See the DataGrip help topics about:

[Discussion]:
