[Posted]: 2022-01-12 17:11:23
[Problem description]:
Using TensorFlow 2.6, Python 3.9, and the CIFAR-10 dataset, I am trying to train a simple convolutional neural network model, defined as follows:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense


def conv6_cnn():
    """
    Function to define the architecture of a neural network model
    following the Conv-6 architecture for the CIFAR-10 dataset, using
    the provided parameters which are used to prune the model.

    Conv-6 architecture:
    64, 64, pool   -- convolutional layers
    128, 128, pool -- convolutional layers
    256, 256, pool -- convolutional layers
    256, 256, 10   -- fully connected layers

    Output: Returns the designed and compiled neural network model
    """
    model = Sequential()
    model.add(
        Conv2D(
            filters=64, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same',
            input_shape=(32, 32, 3)
        )
    )
    model.add(
        Conv2D(
            filters=64, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same'
        )
    )
    model.add(
        MaxPooling2D(
            pool_size=(2, 2),
            strides=(2, 2)
        )
    )
    model.add(
        Conv2D(
            filters=128, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same'
        )
    )
    model.add(
        Conv2D(
            filters=128, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same'
        )
    )
    model.add(
        MaxPooling2D(
            pool_size=(2, 2),
            strides=(2, 2)
        )
    )
    model.add(
        Conv2D(
            filters=256, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same'
        )
    )
    model.add(
        Conv2D(
            filters=256, kernel_size=(3, 3),
            activation='relu', kernel_initializer=tf.keras.initializers.GlorotNormal(),
            strides=(1, 1), padding='same'
        )
    )
    model.add(
        MaxPooling2D(
            pool_size=(2, 2),
            strides=(2, 2)
        )
    )
    model.add(Flatten())
    model.add(
        Dense(
            units=256, activation='relu',
            kernel_initializer=tf.keras.initializers.GlorotNormal()
        )
    )
    model.add(
        Dense(
            units=256, activation='relu',
            kernel_initializer=tf.keras.initializers.GlorotNormal()
        )
    )
    model.add(
        Dense(
            units=10, activation='softmax'
        )
    )

    return model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Initialize a Conv-6 CNN object-
model = conv6_cnn()

# Define data augmentation using ImageDataGenerator:
# Initialize and define the image data generator-
datagen = ImageDataGenerator(
    # featurewise_center=True,
    # featurewise_std_normalization=True,
    rotation_range=90,
    width_shift_range=0.1,
    height_shift_range=0.1,
    horizontal_flip=True
)

# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied)
datagen.fit(X_train)

# Compile the defined model ('optimizer' and 'loss_fn' are defined elsewhere)-
model.compile(
    optimizer=optimizer,
    loss=loss_fn,
    metrics=['accuracy']
)

# Define early stopping criterion-
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0.001,
    patience=4, verbose=0,
    mode='auto', baseline=None,
    restore_best_weights=True
)
When I train this CNN model without any data augmentation using the following code, there seems to be no problem:
# Train model without any data augmentation-
history = model.fit(
    x=X_train, y=y_train,
    batch_size=batch_size, epochs=num_epochs,
    callbacks=[early_stopping],
    validation_data=(X_test, y_test)
)
However, when using data (image) augmentation:
# Train model on batches with real-time data augmentation-
training_history = model.fit(
    datagen.flow(
        X_train, y_train,
        batch_size=batch_size, subset='training'
    ),
    validation_data=(X_test, y_test),
    steps_per_epoch=len(X_train) / batch_size,
    epochs=num_epochs,
    callbacks=[early_stopping]
)
it gives the error:

ValueError: Training and validation subsets have different number of classes after the split. If your numpy arrays are sorted by the label, you might want to shuffle them.
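This check comes from Keras's generator code: when subset= is passed to flow(), it compares the set of labels on each side of the validation_split boundary. Since the ImageDataGenerator above sets no validation_split, the split index is 0, one side of the split is empty, and the check can never pass; the usual fixes are to set validation_split on the generator or to drop subset='training' entirely. A simplified numpy sketch of the check (the slicing is illustrative, not Keras's exact code) also shows why label-sorted arrays trip it, as the message warns:

```python
import numpy as np

# Hypothetical labels sorted by class, the situation the error message warns about.
y = np.repeat(np.arange(10), 100)   # 1000 labels: 0...0, 1...1, ..., 9...9

validation_split = 0.2
split_idx = int(len(y) * validation_split)

# Simplified version of the consistency check behind the ValueError:
val_classes = np.unique(y[:split_idx])     # classes in the held-out slice
train_classes = np.unique(y[split_idx:])   # classes in the remaining slice

print(len(val_classes))    # 2 -> only classes 0 and 1 land in the held-out slice
print(len(train_classes))  # 8 -> classes 2-9; classes 0 and 1 never reach training
print(np.array_equal(val_classes, train_classes))  # False -> Keras raises
```

Shuffling X_train and y_train together before the split, as the message suggests, makes both slices representative, but it does not help here: with no validation_split configured, subset='training' is inconsistent by construction.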
[Comments]:
-
One remark, possibly irrelevant: is steps_per_epoch actually an integer (len(X_train)/batch_size)? Why not use // instead of /?
-
@EricMarchand the same error persists after changing to //
-
Did the answer help?
-
@AloneTogether yes, marked as helpful
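The first comment's point is worth checking either way: in Python 3, / always yields a float, while steps_per_epoch is expected to be a whole number of batches. A quick illustration (the training-set size is the standard CIFAR-10 one; the batch size is a hypothetical value, not taken from the question):

```python
# Illustrative sizes: CIFAR-10 has 50,000 training images; batch_size is assumed.
n_train = 50000
batch_size = 64

steps_float = n_train / batch_size   # true division -> float
steps_int = n_train // batch_size    # floor division -> int

print(steps_float)  # 781.25
print(steps_int)    # 781
```

As the follow-up comment confirms, though, switching to // does not resolve the ValueError, which is caused by the subset='training' argument rather than by steps_per_epoch.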
Tags: python-3.x deep-learning neural-network conv-neural-network tensorflow2.0