【Title】: How should batch size be customised?
【Posted】: 2021-02-02 23:50:00
【Question】:

I'm running a VAE in Keras. The model compiles, and its summary is:

However, when I try to train the model, I get the following error:

ValueError: Dimensions must be equal, but are 32 and 16 for '{{node vae_mlp/tf_op_layer_AddV2_14/AddV2_14}} = AddV2[T=DT_FLOAT, _cloned=true](vae_mlp/tf_op_layer_Mul_10/Mul_10, vae_mlp/tf_op_layer_Mul_11/Mul_11)' with input shapes: [16,32,32], [16].  

16 is the batch size. I know this because if I change it to any number greater than 1, I get the same error with that number in place of 16 (and it does work with a batch size of 1). I suspect the problem is that the stimuli have 3 channels and for some reason they are being treated as grayscale. But I'm not sure.
I'm also attaching the full code:
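The shape clash in the error can be reproduced with plain NumPy (a minimal sketch; the array names are illustrative): `binary_crossentropy` reduces only the channel axis, so the per-pixel reconstruction loss keeps shape (batch, 32, 32), while the KL term is one value per sample, shape (batch,), and the two cannot broadcast:

```python
import numpy as np

# Per-pixel reconstruction loss: shape (batch, height, width)
rec_loss = np.zeros((16, 32, 32))
# KL divergence per sample: shape (batch,)
kl_loss = np.zeros((16,))

try:
    _ = rec_loss + kl_loss  # (16, 32, 32) + (16,) cannot broadcast
    broadcast_ok = True
except ValueError:
    broadcast_ok = False

# Reducing the reconstruction loss to one value per sample aligns the shapes
rec_per_sample = rec_loss.mean(axis=(1, 2))  # shape (16,)
total = rec_per_sample + kl_loss             # shape (16,)
```

The last dimension of (16, 32, 32) is 32 while (16,) has 16, hence "Dimensions must be equal, but are 32 and 16".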

"""### VAE Cifar 10"""

from keras import layers
from keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from keras.layers import Dropout
from keras import regularizers

from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

input_shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3])
original_dim=x_train.shape[1]*x_train.shape[2]
latent_dim = 12

import keras

#encoder architecture
encoder_input = keras.Input(shape=input_shape)

cx=layers.Conv2D(filters=64, 
                kernel_size=(3, 3),
                activation='relu',
                padding='same')(encoder_input)
cx=layers.Conv2D(filters=64, 
                kernel_size=(3, 3),
                activation='relu',
                padding='same')(cx)  # input_shape is only meaningful on the first layer, so it is dropped here

cx=layers.MaxPool2D(2,2)(cx)
cx=layers.Dropout(0.2)(cx)

cx=layers.Conv2D(filters=64, 
                kernel_size=(3, 3),
                activation='relu',padding='same')(cx)
cx=layers.Conv2D(filters=64, 
                kernel_size=(3, 3),
                activation='relu',padding='same')(cx)


cx=layers.MaxPool2D(2,2)(cx)
cx=layers.Dropout(0.2)(cx)

cx=layers.Conv2D(filters=128,
                kernel_size=(3, 3),
                activation='relu',padding='same')(cx)
cx=layers.Conv2D(filters=128,
                kernel_size=(3, 3),
                activation='relu',padding='same')(cx)

cx=layers.MaxPool2D(2,2)(cx)
cx=layers.Dropout(0.2)(cx)

x=layers.Flatten()(cx)

z_mean=layers.Dense(latent_dim, activation='relu', name = 'z_mean')(x) #I removed the softmax layer
z_log_sigma=layers.Dense(latent_dim, activation='relu',name = 'z_sd' )(x)

from keras import backend as K  # Keras backend, used for tensor ops in the sampling function and loss

def sampling(args):
    z_mean, z_log_sigma = args
    epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim),
                              mean=0., stddev=0.1)
    return z_mean + K.exp(z_log_sigma) * epsilon

z = layers.Lambda(sampling)([z_mean, z_log_sigma])

# Create encoder
encoder = keras.Model(encoder_input, [z_mean, z_log_sigma, z], name='encoder')

encoder.summary()

# Get Conv2D shape for Conv2DTranspose operation in decoder
conv_shape = K.int_shape(cx)

# Create decoder
#look at : https://www.machinecurve.com/index.php/2019/12/30/how-to-create-a-variational-autoencoder-with-keras/

from keras.layers import Conv2DTranspose, Reshape

latent_inputs = keras.Input(shape=(latent_dim, ), name='z_sampling') #shape=(latent_dim,) or shape=latent_dim?

d0 = layers.Dense(conv_shape[1] * conv_shape[2] * conv_shape[3], activation='relu')(latent_inputs)

d05     = Reshape((conv_shape[1], conv_shape[2], conv_shape[3]))(d0)

d1=layers.Conv2DTranspose(filters=128,
                kernel_size=(3, 3),
                strides=2,
                activation='relu',padding='same')(d05)#(latent_inputs)
d2=layers.Conv2DTranspose(filters=128,
                kernel_size=(3, 3),
                strides=2,
                activation='relu',padding='same')(d1)

d3=layers.Conv2DTranspose(filters=64, 
                kernel_size=(3, 3),
                strides=2,
                activation='relu',padding='same')(d2)

d4=layers.Conv2DTranspose(filters=64, 
                kernel_size=(3, 3),
                activation='relu',padding='same')(d3)

d5=layers.Conv2DTranspose(filters=64, 
                kernel_size=(3, 3),
                activation='relu',
                padding='same')(d4)

d6=layers.Conv2DTranspose(filters=64, 
                kernel_size=(3, 3),
                activation='relu',
                padding='same')(d5)  # input_shape dropped: it only applies to a model's first layer

outputs = layers.Conv2D(filters=3, kernel_size=3, activation='sigmoid', padding='same', name='decoder_output')(d6) #Dense(128, activation='relu')

from keras import Model
decoder = Model(latent_inputs, outputs, name='decoder')
decoder.summary()

# instantiate VAE model
outputs = decoder(encoder(encoder_input)[2]) 
vae = keras.Model(encoder_input, outputs, name='vae_mlp')

vae.summary()



#loss
reconstruction_loss = keras.losses.binary_crossentropy(encoder_input, outputs)
reconstruction_loss *= original_dim
kl_loss = 1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma)
kl_loss = K.sum(kl_loss, axis=-1)
kl_loss *= -0.5
vae_loss = K.mean(reconstruction_loss + kl_loss)
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')

#batch size = 1 doesn't break after one epoch
print('you use x_train_t')
vae.fit(x_train, x_train,
        epochs=20,
        batch_size=16,
        validation_data=(x_test, x_test))
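As a sanity check on the KL term used above: it is exactly zero when the encoder predicts a standard normal (z_mean = 0, z_log_sigma = 0), which a small NumPy sketch (illustrative shapes) confirms:

```python
import numpy as np

latent_dim = 12
batch = 16
z_mean = np.zeros((batch, latent_dim))
z_log_sigma = np.zeros((batch, latent_dim))

# Same formula as in the listing: -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
kl_loss = 1 + z_log_sigma - np.square(z_mean) - np.exp(z_log_sigma)
kl_loss = -0.5 * np.sum(kl_loss, axis=-1)  # one value per sample, shape (16,)
```

With zeros everywhere, each term is 1 + 0 - 0 - 1 = 0, so the KL penalty vanishes, as expected for a perfectly standard-normal posterior.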
  

【Discussion】:

Tags: machine-learning keras deep-learning autoencoder


【Solution 1】:

Two things are needed to solve this:
First, the loss function should be attached to the model like this:

    vae.compile(optimizer='adam', loss=val_loss_func)
    

Second, before training you should run:

    import tensorflow as tf
    tf.config.run_functions_eagerly(True) 
    

I'm not sure what this does, though.
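The answer does not show the body of `val_loss_func`. A minimal sketch of what a per-sample loss for `compile` could look like, written here in NumPy for illustration (the body is an assumption, not the answerer's actual function; Keras expects one loss value per sample, so the pixel-wise binary cross-entropy is averaged down to shape (batch,)):

```python
import numpy as np

def val_loss_func(y_true, y_pred, eps=1e-7):
    # Hypothetical reconstruction term: mean binary cross-entropy per sample.
    # (The KL term would stay attached via vae.add_loss, as in the listing.)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return bce.reshape(len(bce), -1).mean(axis=1)  # shape (batch,)

# Example: all-zero targets against uniform 0.5 predictions give loss ln(2)
loss = val_loss_func(np.zeros((4, 32, 32, 3)), np.full((4, 32, 32, 3), 0.5))
```

Reducing to one value per sample is exactly what avoids the (batch, 32, 32) vs (batch,) broadcasting clash from the question.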

【Discussion】:
