[Title]: CNN learning stagnation
[Posted]: 2017-06-14 00:18:44
[Question]:

I built a mock-up of a CNN that I am trying to use on a video dataset. For the test data, positive samples have every frame set to a single fixed image, and negative samples have every frame set to zeros. I expected this to be learned very quickly, but it doesn't move at all. I'm using the current versions of Keras and TensorFlow on Windows 10 64-bit.

First question: is my logic wrong? Shouldn't I expect learning on this test data to reach high accuracy quickly?

Is there a problem with my model or its parameters? I have been trying various changes but keep hitting the same problem.

Is the sample size (56) too small?

# testing  feature extraction model. 
import time
import numpy as np, cv2
import sys
import os
import keras
import tensorflow as tf

from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization
from keras.layers import Conv3D, MaxPooling3D

from keras.optimizers import SGD  # only SGD is used below

from keras import regularizers
from keras.initializers import Constant

from keras.models import Model

#set gpu options
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=.99, allocator_type = 'BFC') 
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True, gpu_options=gpu_options))
config = tf.ConfigProto()

batch_size = 5
num_classes = 1
epochs = 50
nvideos = 56
nframes = 55
nchan = 3
nrows = 480
ncols = 640

#load any single image, resize if needed
img = cv2.imread('C:\\Users\\david\\Documents\\AutonomousSS\\single frame.jpg',cv2.IMREAD_COLOR)
img = cv2.resize(img,(640,480))

x_learn = np.random.randint(0,255,(nvideos,nframes,nrows,ncols,nchan),dtype=np.uint8)
y_learn = np.array([[1],[1],[1],[0],[1],[0],[1],[0],[1],[0],
                    [1],[0],[0],[1],[0],[0],[1],[0],[1],[0],
                    [1],[0],[1],[1],[0],[1],[0],[0],[1],[1],
                    [1],[0],[1],[0],[1],[0],[1],[0],[1],[0],
                    [0],[1],[0],[0],[1],[0],[1],[0],[1],[0],
                    [1],[1],[0],[1],[0],[0]],np.uint8)

# For each sample, every frame is either the single image (positive
# examples) or all zeros (negative examples).
for i in range(nvideos):
    if y_learn[i] == 0:
        x_learn[i] = 0
    else:
        x_learn[i, :nframes] = img



#build model     
m_loss = 'mean_squared_error'
m_opt = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
m_met = 'acc' 


model = Sequential()

# 1st layer group
model.add(Conv3D(32, (3, 3,3), activation='relu',padding="same", name="conv1a", strides=(3, 3, 3),
                 kernel_initializer = 'glorot_normal',
                 trainable=False,
                 input_shape=(nframes,nrows,ncols,nchan)))
#model.add(BatchNormalization(axis=1))
model.add(Conv3D(32, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv1b", activation="relu"))
#model.add(BatchNormalization(axis=1))
model.add(MaxPooling3D(padding="valid", trainable=False, pool_size=(1, 5, 5), name="pool1", strides=(2, 2, 2)))


# 2nd layer group
model.add(Conv3D(128, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv2a", activation="relu"))
model.add(Conv3D(128, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv2b", activation="relu"))
#model.add(BatchNormalization(axis=1))
model.add(MaxPooling3D(padding="valid", trainable=False, pool_size=(1, 5, 5), name="pool2", strides=(2, 2, 2)))

# 3rd layer group
model.add(Conv3D(256, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv3a", activation="relu"))
model.add(Conv3D(256, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv3b", activation="relu"))
#model.add(BatchNormalization(axis=1))
model.add(MaxPooling3D(padding="valid", trainable=False, pool_size=(1, 5, 5), name="pool3", strides=(2, 2, 2)))

# 4th layer group
model.add(Conv3D(512, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv4a", activation="relu"))
model.add(Conv3D(512, (3, 3, 3), trainable=False, strides=(1, 1, 1), padding="same", name="conv4b", activation="relu"))
#model.add(BatchNormalization(axis=1))
model.add(MaxPooling3D(padding="valid", trainable=False, pool_size=(1, 5, 5), name="pool4", strides=(2, 2, 2)))

model.add(Flatten(name='flatten',trainable=False))

model.add(Dense(512,activation='relu', trainable=True,name='den0'))

model.add(Dense(num_classes,activation='softmax',name='den1'))
print (model.summary())

#compile model
model.compile(loss=m_loss,
              optimizer=m_opt,
              metrics=[m_met])
print ('compiled')


#set callbacks
from keras import backend as K
K.set_learning_phase(0) #set learning phase
sample_root_path = '.\\'  # root directory for TensorBoard logs (placeholder path)
tb = keras.callbacks.TensorBoard(log_dir=sample_root_path+'logs', histogram_freq=0,
                          write_graph=True, write_images=False)
tb.set_model(model)
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='loss', factor=0.2,verbose=1,
              patience=2, min_lr=0.000001)
reduce_lr.set_model(model)
ear_stop = keras.callbacks.EarlyStopping(monitor='loss', min_delta=0, patience=4, verbose=1, mode='auto')
ear_stop.set_model(model)


#fit

history = model.fit(x_learn, y_learn,
                    batch_size=batch_size,
                    callbacks=[reduce_lr,tb, ear_stop],
                    verbose=1,
                    validation_split=0.1,
                    shuffle = True,
                    epochs=epochs)


score = model.evaluate(x_learn, y_learn, batch_size=batch_size)
print(str(model.metrics_names) + ": " + str(score))

As always, thanks for any and all help.

Adding the output...

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1a (Conv3D)              (None, 19, 160, 214, 32)  2624      
_________________________________________________________________
conv1b (Conv3D)              (None, 19, 160, 214, 32)  27680     
_________________________________________________________________
pool1 (MaxPooling3D)         (None, 10, 78, 105, 32)   0         
_________________________________________________________________
conv2a (Conv3D)              (None, 10, 78, 105, 128)  110720    
_________________________________________________________________
conv2b (Conv3D)              (None, 10, 78, 105, 128)  442496    
_________________________________________________________________
pool2 (MaxPooling3D)         (None, 5, 37, 51, 128)    0         
_________________________________________________________________
conv3a (Conv3D)              (None, 5, 37, 51, 256)    884992    
_________________________________________________________________
conv3b (Conv3D)              (None, 5, 37, 51, 256)    1769728   
_________________________________________________________________
pool3 (MaxPooling3D)         (None, 3, 17, 24, 256)    0         
_________________________________________________________________
conv4a (Conv3D)              (None, 3, 17, 24, 512)    3539456   
_________________________________________________________________
conv4b (Conv3D)              (None, 3, 17, 24, 512)    7078400   
_________________________________________________________________
pool4 (MaxPooling3D)         (None, 2, 7, 10, 512)     0         
_________________________________________________________________
flatten (Flatten)            (None, 71680)             0         
_________________________________________________________________
den0 (Dense)                 (None, 512)               36700672  
_________________________________________________________________
den1 (Dense)                 (None, 1)                 513       
=================================================================
Total params: 50,557,281
Trainable params: 36,701,185
Non-trainable params: 13,856,096
_________________________________________________________________
None
compiled
Train on 50 samples, validate on 6 samples
Epoch 1/50
50/50 [==============================] - 20s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 2/50
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 3/50
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 4/50
45/50 [==========================>...] - ETA: 1s - loss: 0.5111 - acc: 0.4889
Epoch 00003: reducing learning rate to 0.00020000000949949026.
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 5/50
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 6/50
45/50 [==========================>...] - ETA: 1s - loss: 0.5111 - acc: 0.4889
Epoch 00005: reducing learning rate to 4.0000001899898055e-05.
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 7/50
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 8/50
45/50 [==========================>...] - ETA: 1s - loss: 0.4889 - acc: 0.5111
Epoch 00007: reducing learning rate to 8.000000525498762e-06.
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 9/50
50/50 [==============================] - 16s - loss: 0.5000 - acc: 0.5000 - val_loss: 0.5000 - val_acc: 0.5000
Epoch 00008: early stopping
56/56 [==============================] - 12s    
['loss', 'acc']: [0.50000001516725334, 0.5000000127724239]

[Question comments]:

  • Could you elaborate on your overall goal, the final data you want to train on, and especially why you are trying to train on a single image? Also, since you set all layers to non-trainable (except the last dense layer): are you loading any pretrained weights? I don't see you importing a Keras application like VGG or Inception, or loading any weights.
  • The final goal is to train on an action: a particular sequence of movement across frames. The test above is just a test case. I get the same behavior whether the positive examples are 1) all identical frames, 2) random frames, or 3) real video sequences. I don't think the built-in Keras applications will help.
  • OK, thanks for clarifying a few points. As I understand it, you want to fine-tune a pretrained model. If so: how are you loading the weights? The Keras applications are exactly a good way to achieve that; see for example this Keras tutorial: blog.keras.io/… Otherwise your network won't learn anything, because almost all layers are set to non-trainable. From your code: …trainable=False…
  • I run into resource problems when training more layers.
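The fine-tuning pattern suggested in the comments above (load a pretrained base, freeze it, train a new head) can be sketched as follows. VGG16 is purely illustrative here (it is 2D, while the asker's data is 3D), the 64x64 input and head sizes are arbitrary, and `weights=None` keeps the sketch self-contained; actual fine-tuning would pass `weights='imagenet'`.

```python
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Dense, Flatten

# Pretrained-style 2D base; weights=None keeps this sketch self-contained --
# real fine-tuning would use weights='imagenet'.
base = VGG16(weights=None, include_top=False, input_shape=(64, 64, 3))
for layer in base.layers:
    layer.trainable = False   # freeze the base's low-level features

# New trainable head for a binary label.
x = Flatten()(base.output)
x = Dense(64, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)

model = Model(base.input, out)
model.compile(loss='binary_crossentropy', optimizer='sgd', metrics=['acc'])
```

Only the two dense layers of the head contribute trainable weights; the frozen base still shapes the features they see.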

Tags: machine-learning tensorflow computer-vision keras


[Answer 1]:

Your layers are set to trainable=False (except the last dense layer), so your CNN cannot learn. Moreover, you won't get far training on what is effectively a single image.

If you run into performance problems on your GPU, switch to CPU or AWS, or reduce the image size.
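A minimal sketch of the fix this answer proposes, applied to a scaled-down version of the network above (the smaller input shape and filter counts are illustrative, chosen only to keep the sketch light). Note also that the original output layer applies softmax over a single unit, which always outputs 1.0; a single sigmoid unit with binary cross-entropy is used here instead.

```python
from keras.models import Sequential
from keras.layers import Conv3D, MaxPooling3D, Flatten, Dense
from keras.optimizers import SGD

model = Sequential()
# Conv layers left trainable (the default) so the network can actually learn.
model.add(Conv3D(8, (3, 3, 3), activation='relu', padding='same',
                 input_shape=(16, 64, 64, 3)))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Conv3D(16, (3, 3, 3), activation='relu', padding='same'))
model.add(MaxPooling3D(pool_size=(2, 2, 2)))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
# One sigmoid unit for the 0/1 label; softmax over a single unit is
# constant 1.0 and can never fit the negative examples.
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=SGD(0.001, momentum=0.9, nesterov=True),
              metrics=['acc'])
```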

[Comments]:

  • I may not have been clear: I want to train a 3D network. The Keras applications work on 2D images, so I don't see how their pretrained weights would help.
  • OK. Then again: why set 95% of the ConvNet to trainable=False?
  • I followed the approach described in a recent research paper. I tried reducing the data size and adding more layers to train. It made no difference.
  • What happens if you set all layers to trainable? Otherwise only randomly initialized values flow from the first conv layer into the rest of the network, and you have no learned low-level features for the next layers to build on.
  • Trained all layers; no change in the results.
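One possible reason "trained all layers" appeared to change nothing: in Keras, flipping a layer's `trainable` flag only takes effect once the model is compiled again. A minimal sketch (layer sizes are illustrative):

```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(4, trainable=False, input_shape=(8,)),
                    Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='sgd')

for layer in model.layers:        # unfreeze everything...
    layer.trainable = True
model.compile(loss='binary_crossentropy', optimizer='sgd')  # ...and re-compile
```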