【Title】: ValueError: dimension of the inputs to `Dense` should be defined. Found `None`
【Posted】: 2019-12-18 00:54:52
【Description】:

I have been working on a TensorFlow 2 model, but I keep running into this error. I tried defining the shape for every layer, but nothing changed. Also, the error only appears when I specify sparse=True on the Input layer, which I have to do because my input tensor is sparse and other parts of the script depend on it. TensorFlow version: 2.0.0-beta1. If I use anything newer than that, other obscure errors appear because of the sparse input. TF 2.0 seems to have quite a few problems with this kind of input.

Current method definition:

def make_feed_forward_model():
    #'''
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,),dtype='float32', name='sample', sparse=True)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)
    #'''

Then when I run the following, the error occurs:

model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-56-720f117bb231> in <module>
      1 # Feel free to use an architecture of your choice.
----> 2 model = make_feed_forward_model()
      3 model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

<ipython-input-55-5f35f6f22300> in make_feed_forward_model()
     18     #embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
     19     #pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(inputs)
---> 20     dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
     21     dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
     22     dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
    614           # Build layer if applicable (if the `build` method has been
    615           # overridden).
--> 616           self._maybe_build(inputs)
    617 
    618           # Wrapping `call` function in autograph to allow for dynamic control

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   1964         # operations.
   1965         with tf_utils.maybe_init_scope(self):
-> 1966           self.build(input_shapes)
   1967       # We must set self.built since user defined build functions are not
   1968       # constrained to set self.built.

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\layers\core.py in build(self, input_shape)
   1003     input_shape = tensor_shape.TensorShape(input_shape)
   1004     if tensor_shape.dimension_value(input_shape[-1]) is None:
-> 1005       raise ValueError('The last dimension of the inputs to `Dense` '
   1006                        'should be defined. Found `None`.')
   1007     last_dim = tensor_shape.dimension_value(input_shape[-1])

ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.

Edit: sparse tensor error

It seems that if I use any version newer than TF 2.0.0-beta1, training fails completely:

ValueError: The two structures don't have the same nested structure.

    First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)

    Second structure: type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))

    More specifically: Substructure "type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)" is not
    Entire first structure:
    .
    Entire second structure:
    .

Edit 2: error after adding batch_size to the Input layer

def make_feed_forward_model():  
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,),dtype='float32', name='sample', sparse=True, batch_size=HPARAMS.batch_size)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)
model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

When I run model.compile():

TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. 

Contents: SparseTensor(indices=Tensor("sample/indices_3:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_3:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_3:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.

【Comments】:

  • You mentioned other obscure errors — could you elaborate on those?
  • HPARAMS.num_fc_units has `None` as its final dimension, and it needs a concrete value. What is the explicit value of HPARAMS.num_fc_units? If it has the form [256, None] (with 256 or any other number), just swap those values.
  • @AlexanderCécile, I added the description above.
  • @jhso, HPARAMS.num_fc_units is just an int with the value 128.

Tags: python python-3.x tensorflow keras tensorflow2.0


【Solution 1】:

This happens because when the input tensor is sparse, its shape evaluates to (None, None) instead of (HPARAMS.max_seq_length,):

inputs = tf.keras.Input(shape=(100,),dtype='float32', name='sample', sparse=True)
print(inputs.shape)
# output: (?, ?)
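For contrast (a quick check, not from the original answer), the same Input without sparse=True keeps its static feature dimension, which is exactly what `Dense.build()` needs:

```python
import tensorflow as tf

# Identical Input, but dense: the last dimension stays defined.
dense_inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample')
print(dense_inputs.shape)  # -> (None, 100)
```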

This also appears to be an open issue.
One solution is to write a custom layer by subclassing the Layer class (see this).
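A minimal sketch of that subclassing approach (the layer name, the explicit `input_dim` argument, and the layer widths are all assumptions, not from the original post): by taking the feature dimension in the constructor, the layer never has to read the undefined last dimension of the sparse input, and it can multiply sparse inputs directly with `tf.sparse.sparse_dense_matmul`:

```python
import tensorflow as tf

class SparseDense(tf.keras.layers.Layer):
    """Dense-like layer that accepts SparseTensor inputs.

    The feature dimension is passed explicitly (input_dim), so the layer
    never needs the undefined last dimension of a sparse input.
    """
    def __init__(self, units, input_dim, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = tf.keras.activations.get(activation)
        # Weights are created here, from the explicit input_dim,
        # instead of in build() from an input shape that may be None.
        self.kernel = self.add_weight(
            name='kernel', shape=(input_dim, units),
            initializer='glorot_uniform', trainable=True)
        self.bias = self.add_weight(
            name='bias', shape=(units,),
            initializer='zeros', trainable=True)

    def call(self, inputs):
        if isinstance(inputs, tf.sparse.SparseTensor):
            # Sparse-dense matmul avoids densifying the whole input.
            out = tf.sparse.sparse_dense_matmul(inputs, self.kernel)
        else:
            out = tf.matmul(inputs, self.kernel)
        return self.activation(out + self.bias)
```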

As a work-around (tested on tf-gpu 2.0.0), adding the batch size in the Input layer works fine:

inputs = tf.keras.Input(shape=(100,),dtype='float32', name='sample', sparse=True ,batch_size=32)
print(inputs.shape)
# output: (32, 100)
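Putting the work-around together (a sketch with assumed stand-in values: MAX_SEQ_LENGTH for HPARAMS.max_seq_length, BATCH_SIZE for HPARAMS.batch_size, and an arbitrary hidden width), the model builds and compiles once the batch size pins down the input shape:

```python
import tensorflow as tf

MAX_SEQ_LENGTH = 100  # stand-in for HPARAMS.max_seq_length
BATCH_SIZE = 32       # stand-in for HPARAMS.batch_size

# With batch_size set, the sparse Input keeps a fully defined shape,
# so Dense.build() finds a concrete last dimension.
inputs = tf.keras.Input(shape=(MAX_SEQ_LENGTH,), dtype='float32',
                        name='sample', sparse=True, batch_size=BATCH_SIZE)
hidden = tf.keras.layers.Dense(128, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(4, activation='softmax')(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```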

【Discussion】:

  • Thanks for the idea, it seems to be getting somewhere. But after adding batch_size I got a new error — see Edit 2 in my original post.
  • Could you add the part of the code where you hit that error?
  • @JohnSzatmari I ran your code from Edit 2 and did not get any errors. Could you include your full error trace? Also try this on stable TF 2.0.
  • Vivek, your work-around was correct once I installed tf-gpu 2.0.0 and added padding to my input. Thank you very much for your help. Still, this kind of error should not appear in a stable release.