【Question Title】: tensorflow: ValueError: GraphDef cannot be larger than 2GB
【Posted】: 2017-04-16 22:58:56
【Question Description】:

This is the error I'm running into:

Traceback (most recent call last):
  File "fully_connected_feed.py", line 387, in <module>
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "fully_connected_feed.py", line 289, in main
    run_training()
  File "fully_connected_feed.py", line 256, in run_training
    saver.save(sess, checkpoint_file, global_step=step)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1386, in save
    self.export_meta_graph(meta_graph_filename)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1414, in export_meta_graph
    graph_def=ops.get_default_graph().as_graph_def(add_shapes=True),
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2257, in as_graph_def
    result, _ = self._as_graph_def(from_version, add_shapes)
  File "/home/-/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2220, in _as_graph_def
    raise ValueError("GraphDef cannot be larger than 2GB.")
ValueError: GraphDef cannot be larger than 2GB.

I believe it's the result of this code:

weights = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden1")[0]
weights = tf.scatter_nd_update(weights, indices, updates)
weights = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="hidden2")[0]
weights = tf.scatter_nd_update(weights, indices, updates)

I'm not sure why my model is getting so large (15k steps and 240MB). Any ideas? Thanks!

【Question Discussion】:

    Tags: tensorflow neural-network conv-neural-network convolution


    【Solution 1】:

    It's hard to say what's happening without seeing the code, but in general TensorFlow model size does not grow with the number of steps - it should stay fixed.

    If the model size is growing with the number of steps, that suggests new nodes are being added to the computation graph on every step. For example, something like:

    import tensorflow as tf
    
    with tf.Session() as sess:
      for i in xrange(1000):
        sess.run(tf.add(1, 2))
        # or perhaps sess.run(tf.scatter_nd_update(...)) in your case
    

    will create 3000 nodes in the graph (per iteration: one for the add, one for the constant "1", and one for the constant "2"). Instead, you want to define the computation graph once and run it repeatedly, e.g.:

    import tensorflow as tf
    
    x = tf.add(1, 2)
    # or perhaps x = tf.scatter_nd_update(...) in your case
    with tf.Session() as sess:
      for i in xrange(1000):
        sess.run(x)
    

    This keeps the graph fixed at 3 nodes for all 1000 (and any number of) iterations. Hope that helps.
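
    To confirm that the graph really is growing (a debugging check of my own, not part of the original answer), you can count the operations in the default graph, or call finalize() on it so that any accidental node creation raises an error immediately:

    import tensorflow as tf
    
    x = tf.add(1, 2)
    with tf.Session() as sess:
      # A constant op count across iterations means the graph is fixed.
      print(len(tf.get_default_graph().get_operations()))
      # finalize() makes the graph read-only; creating any new op after
      # this point raises a RuntimeError, pinpointing the offending line.
      tf.get_default_graph().finalize()
      for i in xrange(1000):
        sess.run(x)  # fine: runs the existing op, creates nothing new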

    【Discussion】:

    • Thanks! I see your point about the graph growing, but I nested tf.scatter_nd_update(...) because I need to update my weights at every step (sort of like a manual convolution). Maybe that's the wrong way to do it?
    • Maybe I'm misunderstanding, but doesn't the same idea apply? Instead of calling tf.scatter_nd_update inside the loop, save the op it returns and run that in the loop. From the documentation of tf.scatter_nd_update - it applies the update and, for convenience, returns the same value as its first argument. So you can do: update = tf.scatter_nd_update(weights, indices, updates); for i in xrange(num_steps): sess.run(update)
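
    A runnable version of that last suggestion might look like the sketch below (the variable shape, the placeholders, and the fed values are illustrative assumptions, since the full code isn't shown):

    import tensorflow as tf
    
    # Stand-in for the real "hidden1" weights variable.
    weights = tf.get_variable("hidden1/weights", shape=[784, 128])
    # Placeholders let each step feed different indices/updates
    # without adding new ops to the graph.
    indices = tf.placeholder(tf.int32, shape=[None, 2])
    updates = tf.placeholder(tf.float32, shape=[None])
    # Build the update op once, outside the training loop.
    update = tf.scatter_nd_update(weights, indices, updates)
    
    num_steps = 1000  # hypothetical
    with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      for step in xrange(num_steps):
        # The graph stays fixed; only the fed values change per step.
        sess.run(update, feed_dict={indices: [[0, 0]], updates: [1.0]})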