【Question Title】: Keras Custom Layer: ValueError: An operation has `None` for gradient
【Posted】: 2019-08-06 10:35:32
【Question】:

I am trying to implement a graph convolutional layer as a Keras custom layer, following the approach described in this paper: GCNN

When I try to train my model, it gives me the following error:

Traceback (most recent call last):
  File "main.py", line 35, in <module>
    model.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=50, batch_size=32)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1010, in fit
    self._make_train_function()
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 509, in _make_train_function
    loss=self.total_loss)
  File "/usr/local/lib/python2.7/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 256, in get_updates
    grads = self.get_gradients(loss, params)
  File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 91, in get_gradients
    raise ValueError('An operation has `None` for gradient. '
ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.

I don't know how to solve this problem.

Can someone explain, in simple terms, what I should do?

I have read the official Keras documentation on writing custom layers, but it does not say anything specific about this issue. Link
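
For reference, the basic pattern that guide describes looks roughly like the sketch below (an outline from memory, not the exact documentation text):

import keras.backend as K
from keras.layers import Layer

class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Trainable weights are created here via add_weight so Keras tracks them.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.output_dim),
                                      initializer='uniform',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)

    def call(self, x):
        # The forward pass must use only differentiable ops,
        # otherwise the gradients come back as None.
        return K.dot(x, self.kernel)

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim)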

Below is the code for my custom layer.

import tensorflow as tf
from keras import backend as K
from keras.layers import Layer


class GraphConvolutionalLayer(Layer):

    def __init__(self, A, num_input_features, num_output_features, **kwargs):
        self.A = A
        self.num_input_features = num_input_features
        self.num_output_features = num_output_features

        self.num_vertices = A.get_shape().as_list()[0]
        self.input_spec = (self.num_vertices, num_input_features)

        super(GraphConvolutionalLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.k0 = self.add_weight(name='k0',
                                  shape=(self.num_output_features, self.num_input_features),
                                  initializer='uniform',
                                  trainable=True)
        self.k1 = self.add_weight(name='k1',
                                  shape=(self.num_output_features, self.num_input_features),
                                  initializer='uniform',
                                  trainable=True)

        # Filter tensor: outer product of the kernel with the identity matrix, shape
        # (num_output_features, num_input_features, num_vertices, num_vertices).
        self.H = tf.einsum('ab,cd->abcd', tf.convert_to_tensor(self.k0, dtype=tf.float32), tf.eye(self.num_vertices))
        self.built = True

    def call(self, Vin):
        # Collapse the (batch, vertices, features) input into a
        # (vertices * features, batch)-shaped matrix.
        Vin2 = tf.reshape(tf.transpose(Vin, [0, 2, 1]), [Vin.get_shape().as_list()[1] * Vin.get_shape().as_list()[2], -1])

        # Flatten the filter tensor so it can be applied with a single dot product.
        H_tmp = tf.reshape(tf.transpose(self.H, [0, 2, 1, 3]), [self.num_output_features, self.num_vertices, self.num_vertices * self.num_input_features])

        Vout = tf.transpose(K.dot(H_tmp, Vin2), [2, 1, 0])

        return Vout

    def compute_output_shape(self, input_shape):
        return (self.num_vertices, self.num_output_features)

Below is the code of the main file.

from keras.layers import Input, Flatten, Dense
from keras.models import Model

main_input = Input(shape=train_images[0].shape)
Vout1 = GraphConvolutionalLayer(A, 1, 4)(main_input)
Vout2 = GraphConvolutionalLayer(A, 4, 8)(Vout1)
Vout3 = Flatten()(Vout2)
Vout4 = Dense(10, activation='sigmoid')(Vout3)
print(train_images.shape, train_labels.shape)

model = Model(inputs=main_input, outputs=Vout4)
print(model.summary())
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
model.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=50, batch_size=32)

【Comments】:

  • Please add the complete code and the line where the error occurs.
  • I have added the complete code. Please help me.
  • What does A stand for?
  • A is the adjacency matrix of the graph.

Tags: python tensorflow machine-learning keras


【Answer 1】:

I was using uniform as the initializer here. When I changed it, I stopped getting the error. I don't know why this happens, but I was able to fix my error just by changing that line.
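
The answer does not say which initializer was used instead, so the snippet below is only an illustrative sketch, with 'glorot_uniform' as a hypothetical replacement:

def build(self, input_shape):
    # Hypothetical change: replace initializer='uniform' with another
    # initializer, e.g. 'glorot_uniform'. The answer only reports that
    # changing this line made the error go away.
    self.k0 = self.add_weight(name='k0',
                              shape=(self.num_output_features, self.num_input_features),
                              initializer='glorot_uniform',
                              trainable=True)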

【Comments】:

【Answer 2】:

As the error states, some of your ops are not differentiable. It is hard to say exactly why this happens. For a start, take a look at:

List of Differentiable Ops in Tensorflow

How to make sure your computation graph is differentiable
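
One way to narrow the search down (not part of the original answer; a sketch assuming the standalone Keras API shown in the traceback) is to ask the backend for the gradients directly and see which weights come back as None:

import keras.backend as K

# After model.compile(...): any weight paired with a None gradient here is
# cut off from the loss by a non-differentiable or disconnected op.
grads = K.gradients(model.total_loss, model.trainable_weights)
for weight, grad in zip(model.trainable_weights, grads):
    if grad is None:
        print('No gradient flows to:', weight.name)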

Edit: Consider the example below; I use the standard cifar10 data.

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Model


class GraphConvolutionalLayer(layers.Layer):
    def __init__(self, A, num_input_features, num_output_features, **kwargs):
        #self.A = A
        self.num_input_features = num_input_features
        self.num_output_features = num_output_features

        # Here A is passed as the number of vertices directly.
        self.num_vertices = A
        self.input_spec = (self.num_vertices, num_input_features)

        super(GraphConvolutionalLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.k0 = self.add_weight(name='k0',
                                  shape=(self.num_output_features, self.num_input_features),
                                  initializer='uniform',
                                  trainable=True)

        self.H = tf.einsum('ab,cd->abcd', tf.convert_to_tensor(self.k0, dtype=tf.float32), tf.eye(self.num_vertices))
        # Hard-coded for cifar10: 32x32 images with 3 channels.
        self.H = tf.reshape(self.H, [32 * 32, 3])
        self.built = True

    def call(self, Vin):
        Vin2 = tf.reshape(Vin, [Vin.get_shape().as_list()[1] * Vin.get_shape().as_list()[1], Vin.get_shape().as_list()[-1]])
        Vin2 = tf.transpose(Vin2)
        Vout = tf.matmul(self.H, Vin2)
        return Vout


def input_fn():
    train, test = tf.keras.datasets.cifar10.load_data()
    dataset = tf.data.Dataset.from_tensor_slices((train[0], train[1]))
    dataset = dataset.batch(1)
    return dataset


main_input = layers.Input(shape=[32, 32, 3])
Vout1 = GraphConvolutionalLayer(32, 3, 1)(main_input)
Vout3 = layers.Flatten()(Vout1)
Vout4 = layers.Dense(10, activation='sigmoid')(Vout3)
model = Model(inputs=main_input, outputs=Vout4)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
model.fit(input_fn(), epochs=50, steps_per_epoch=10)
    

In this case the gradients are computed. So the problem is clearly not in how you construct GraphConvolutionalLayer, but in some internal op, and that depends on your data. You need to check every op one by one against your data shapes.

P.S. You can try substituting einsum with matmul, since einsum is essentially a syntactic wrapper around matmul.
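
For the specific 'ab,cd->abcd' expression used in build, the einsum is just an outer product, so it can be expressed without einsum at all. A minimal sketch (the shapes below are stand-ins, not the question's actual data):

import tensorflow as tf

k0 = tf.random.uniform([4, 3])   # stand-in kernel, shape (out_features, in_features)
I = tf.eye(5)                    # identity over the vertices

# 'ab,cd->abcd' is an outer product: both lines build the same (4, 3, 5, 5)
# tensor, the second one without einsum.
H_einsum = tf.einsum('ab,cd->abcd', k0, I)
H_outer = tf.tensordot(k0, I, axes=0)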

【Comments】:

• Still, this code gives me an error, because the tf.einsum function is not differentiable. I have tried hard to convert my code so that I could use tf.matmul, but got nowhere. Can you suggest any alternative to tf.einsum that is differentiable?
• Updated the answer.
• I want to train mnist or cifar10 with multiple Graph CNN layers. Could you share code that runs end to end? That would help me understand the problem. I ran your code and it shows me this error: AttributeError: 'NoneType' object has no attribute '_inbound_nodes'