【Title】: Stuck loading large dataset with GraphDB
【Posted】: 2020-03-10 17:12:28
【Description】:

When I load this DBpedia dataset (2015-10, English, ~1 billion triples) into GraphDB 9.1.1, the CPU load drops to 0% after roughly 13 million triples and the process sits idle from then on. It never terminates until I kill it manually.

The machine has plenty of disk space and considerably more RAM than the 512 GB allocated to Java via the -Xmx option.

The file I am trying to load is available here: https://hobbitdata.informatik.uni-leipzig.de/dbpedia_2015-10_en_wo-comments_c.nt.zst

It can be decompressed with:

zstd -d "dbpedia_2015-10_en_wo-comments_c.nt.zst" -o "dbpedia_2015-10_en_wo-comments_c.nt"

I load the data with the following command:

java -Xmx512G -cp "$HOME/graphdb/graphdb-free-9.1.1/lib/*" -Dgraphdb.dist=$HOME/graphdb/graphdb-free-9.1.1 -Dgraphdb.home.data=$HOME/dbpedia2015/data/ -Djdk.xml.entityExpansionLimit=0 com.ontotext.graphdb.loadrdf.LoadRDF -f -m parallel -p -c $HOME/graphdb/graphdb-dbpedia2015.ttl $HOME/dbpedia_2015-10_en_wo-comments_c.nt

$HOME/graphdb/graphdb-dbpedia2015.ttl looks like this:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix rep: <http://www.openrdf.org/config/repository#>.
@prefix sr: <http://www.openrdf.org/config/repository/sail#>.
@prefix sail: <http://www.openrdf.org/config/sail#>.
@prefix owlim: <http://www.ontotext.com/trree/owlim#>.

[] a rep:Repository ;
    rep:repositoryID "dbpedia2015" ;
    rdfs:label "Repository for dataset dbpedia2015" ;
    rep:repositoryImpl [
        rep:repositoryType "graphdb:FreeSailRepository" ;
        sr:sailImpl [
            sail:sailType "graphdb:FreeSail" ;

            # ruleset to use
            owlim:ruleset "rdfsplus-optimized" ;

            # disable the context index (my data does not use contexts)
            owlim:enable-context-index "false" ;

            # indexes to speed up read queries
            owlim:enablePredicateList "true" ;
            owlim:enable-literal-index "true" ;
            owlim:in-memory-literal-properties "true" ;
        ]
    ].

The logged output is:

16:11:07.438 [main] INFO  com.ontotext.graphdb.loadrdf.Params - MODE: parallel
16:11:07.439 [main] INFO  com.ontotext.graphdb.loadrdf.Params - STOP ON FIRST ERROR: false
16:11:07.439 [main] INFO  com.ontotext.graphdb.loadrdf.Params - PARTIAL LOAD: true
16:11:07.439 [main] INFO  com.ontotext.graphdb.loadrdf.Params - CONFIG FILE: /home/me/graphdb-dbpedia2015.ttl
16:11:07.444 [main] INFO  com.ontotext.graphdb.loadrdf.LoadRDF - Attaching to location: /home/me/graphdb/dbpedia2015/data
16:11:07.618 [main] INFO  c.o.t.u.l.LimitedObjectCacheFactory - Using LRU cache type: synch
16:11:08.025 [main] WARN  com.ontotext.plugin.literals-index - Rebuilding literals indexes. Starting from id:1
16:11:08.029 [main] WARN  com.ontotext.plugin.literals-index - Complete in 0.004, num entries indexed:0
16:11:08.780 [main] INFO  c.o.rio.parallel.ParallelLoader - Data will be parsed + resolved + loaded.
16:11:08.788 [main] INFO  c.o.rio.parallel.ParallelLoader - Using 128 threads for inference
16:11:09.984 [main] INFO  com.ontotext.graphdb.loadrdf.LoadRDF - Loading file: dbpedia_2015-10_en_wo-comments_c.nt
16:11:09.991 [main] INFO  c.o.rio.parallel.ParallelLoader - Using 128 threads for inference
16:11:19.987 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 2,111,690 stmts. Rate: 211,147 st/s. Statements overall: 2,111,690. Global average rate: 211,000 st/s. Now: Tue Mar 10 16:11:19 UTC 2020. Total memory: 22144M, Free memory: 4890M, Max memory: 524288M.
16:11:30.515 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 3,955,363 stmts. Rate: 192,662 st/s. Statements overall: 3,955,363. Global average rate: 192,596 st/s. Now: Tue Mar 10 16:11:30 UTC 2020. Total memory: 66432M, Free memory: 53925M, Max memory: 524288M.
16:11:40.515 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 6,889,662 stmts. Rate: 225,661 st/s. Statements overall: 6,889,662. Global average rate: 225,609 st/s. Now: Tue Mar 10 16:11:40 UTC 2020. Total memory: 199296M, Free memory: 177241M, Max memory: 524288M.
16:11:51.185 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 9,124,978 stmts. Rate: 221,474 st/s. Statements overall: 9,124,978. Global average rate: 221,437 st/s. Now: Tue Mar 10 16:11:51 UTC 2020. Total memory: 199296M, Free memory: 185106M, Max memory: 524288M.
16:12:02.877 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 11,083,153 stmts. Rate: 209,539 st/s. Statements overall: 11,083,153. Global average rate: 209,511 st/s. Now: Tue Mar 10 16:12:02 UTC 2020. Total memory: 199296M, Free memory: 184331M, Max memory: 524288M.
16:12:15.800 [main] INFO  c.o.rio.parallel.ParallelRDFInserter - Parsed 13,166,352 stmts. Rate: 200,047 st/s. Statements overall: 13,166,352. Global average rate: 200,026 st/s. Now: Tue Mar 10 16:12:15 UTC 2020. Total memory: 329312M, Free memory: 313496M, Max memory: 524288M.

Any idea why it gets stuck after roughly 13 million triples?

【Discussion】:

    Tags: dbpedia graphdb


    【Solution 1】:

    First, allocate a smaller heap to the process via -Xmx (around 38-42 GB should be enough). The database needs additional memory off-heap, so make sure not to hand the heap all of the machine's RAM. If the dataset still fails to load, send a jstack of the process, or, if you are on Oracle JDK, use Java Flight Recorder:

    jcmd <pid> VM.unlock_commercial_features
    jcmd <pid> JFR.start duration=60s name=production filename=production.jfr settings=profile
    

    Set the duration to a value long enough to capture the hang. You can send the results to support@ontotext.com, as they will contain information about your environment.

    Another option is to use the Preload tool, which is designed for loading large datasets: http://graphdb.ontotext.com/documentation/enterprise/loading-data-using-preload.html
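    As a rough sketch of both suggestions combined (the chosen heap size and the preload script path/flags are assumptions based on this answer and the linked documentation, not taken from the original post):

    ```shell
    #!/bin/sh
    # Leave most of the machine's RAM for off-heap use, as the answer suggests.
    TOTAL_RAM_GB=512   # physical RAM of the machine (from the question)
    HEAP_GB=40         # within the 38-42 GB range recommended above
    OFFHEAP_GB=$((TOTAL_RAM_GB - HEAP_GB))
    echo "Heap: ${HEAP_GB}G; left for off-heap and OS page cache: ${OFFHEAP_GB}G"

    # Hypothetical preload invocation -- verify the exact script name and
    # flags against the linked documentation for your GraphDB edition:
    GDB_HOME="$HOME/graphdb/graphdb-free-9.1.1"
    CMD="$GDB_HOME/bin/preload -f -c $HOME/graphdb/graphdb-dbpedia2015.ttl $HOME/dbpedia_2015-10_en_wo-comments_c.nt"
    echo "$CMD"
    ```

    Per the documentation, Preload performs no inference during the bulk load, which is part of why it scales to billion-triple datasets; with a ruleset like rdfsplus-optimized the inferred statements would need to be computed afterwards.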

    【Discussion】:
