【Posted at】: 2020-12-24 07:22:18
【Problem description】:
I'm new to CNNs, and I'm trying to build a basic cats-vs-dogs CNN model with Keras on a dataset of 12,500 cat and 12,500 dog images, i.e. 25,000 images in total. My current data pipeline is:
resize all images to 128x128 --> convert them to numpy arrays --> convert them all to black-and-white (grayscale) --> divide by 255 to normalize --> apply data augmentation --> train the CNN on them
(we run into memory problems if we use color images)
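The resize / grayscale / normalize steps above can be sketched roughly like this (a minimal sketch using Pillow and NumPy; the function name and file path are illustrative, not from the notebook):

```python
import numpy as np
from PIL import Image  # Pillow

def preprocess(path):
    """Resize to 128x128, convert to grayscale, scale pixels to [0, 1]."""
    img = Image.open(path).convert("L")   # "L" = single-channel grayscale
    img = img.resize((128, 128))          # (width, height)
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr[..., np.newaxis]           # shape (128, 128, 1), as Conv2D expects
```

Applied to every file, this yields the (N, 128, 128, 1) array the model below consumes.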
Here is the model I am trying to train:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense
from keras.optimizers import RMSprop
from keras.callbacks import ReduceLROnPlateau

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(5, 5), padding='Same', activation='relu', input_shape=(128, 128, 1)))
model.add(Conv2D(filters=64, kernel_size=(5, 5), padding='Same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='Same', activation='relu'))
model.add(Conv2D(filters=128, kernel_size=(3, 3), padding='Same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(filters=32, kernel_size=(2, 2), padding='Same', activation='relu'))
model.add(Conv2D(filters=32, kernel_size=(2, 2), padding='Same', activation='relu'))
model.add(MaxPool2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))  # single sigmoid unit for the binary cat/dog output

optimizer = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', patience=3, verbose=1, factor=0.5, min_lr=0.00001)
However, whenever I try to start training, i.e. call model.fit_generator, it prints Epoch 1/30 and then throws this error:
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[86,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_4/convolution}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[metrics/accuracy/Identity/_117]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[86,128,64,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_4/convolution}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Training then stops.
I know this has something to do with my computer's memory, since I'm trying to train on my local Windows system. My question is: what can I do to fix it?
I can't reduce the image quality any further, and I'm already relying on black-and-white images to cut memory consumption.
My system: 8GB RAM, 2GB Nvidia GeForce 940MX GPU.
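As a rough sanity check against those numbers, holding every decoded image in RAM at once as float32 arrays costs roughly (the helper name is illustrative, the figures follow directly from the dataset and image sizes above):

```python
def dataset_gib(n_images, h, w, channels, bytes_per_elem=4):
    """Approximate GiB needed to hold the whole dataset as float32 arrays."""
    return n_images * h * w * channels * bytes_per_elem / 2**30

print(round(dataset_gib(25000, 128, 128, 1), 2))  # 1.53 GiB in grayscale
print(round(dataset_gib(25000, 128, 128, 3), 2))  # 4.58 GiB in color
```

On an 8GB machine, the jump from ~1.5 GiB to ~4.6 GiB is consistent with color images causing memory problems while grayscale fits.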
If anyone needs the full code, here is my complete Python notebook link.
Also, when I run from keras.models import Sequential, it raises the following warning:
FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
【Discussion】:
- 2GB isn't much either; lower the 512 to 128 or something like that.
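The failing allocation in the traceback is informative here: shape [86, 128, 64, 64] is apparently one convolution layer's float32 activations for a batch of 86 images. A quick calculation (the helper is illustrative, not from the notebook) shows why shrinking the batch size helps on a 2GB card:

```python
def tensor_mib(shape, bytes_per_elem=4):
    """Size in MiB of a float32 tensor with the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / 2**20

print(tensor_mib((86, 128, 64, 64)))  # 172.0 MiB -- the tensor in the OOM error
print(tensor_mib((32, 128, 64, 64)))  # 64.0 MiB -- same layer with batch_size=32
```

That is one activation tensor out of many the GPU must hold simultaneously (plus gradients), so batch size scales total GPU memory use almost linearly.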
Tags: numpy keras deep-learning jupyter-notebook conv-neural-network