【Question Title】: Is there a Python way of reducing the training time of a convolutional neural network?
【Posted】: 2019-05-24 00:44:17
【Question Description】:

I am building a Keras model of a convolutional neural network to predict the correct class for test objects. The model has Conv2D, Activation, MaxPooling, Dropout, Flatten, and Dense layers. I then train the network on a large dataset, but training takes a very long time, possibly 3 or 4 days. Is there any way in Python to reduce the time needed to train the network?

I tried to optimize the learning rate by using an LR_Finder class, as follows:

from LR_Finder import LRFinder
lr_finder = LRFinder(min_lr=1e-5,max_lr=1e-2, steps_per_epoch=np.ceil(len(trainX) // BS), epochs=100)

But this did not reduce the training time either.

Here is the code for my model:

from keras import backend as K
from keras.models import Sequential
from keras.layers import (Activation, BatchNormalization, Conv2D, Dense,
                          Dropout, Flatten, MaxPooling2D)

class SmallerVGGNet:
    @staticmethod
    def build(width, height, depth, classes):
        # initialize the model along with the input shape to be
        # "channels last" and the channels dimension itself
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1

        # if we are using "channels first", update the input shape
        # and channels dimension
        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1

        # CONV => RELU => POOL
        model.add(Conv2D(32, (3, 3), padding="same",
                         input_shape=inputShape))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(3, 3)))
        model.add(Dropout(0.25))

        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(64, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # (CONV => RELU) * 2 => POOL
        model.add(Conv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(Conv2D(128, (3, 3), padding="same"))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # first (and only) set of FC => RELU layers
        model.add(Flatten())
        model.add(Dense(1024))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.5))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # return the constructed network architecture
        return model

Then I train the model with the following code:

import os
import random

import cv2
import numpy as np
from imutils import paths
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator, img_to_array
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer

EPOCHS = 100
INIT_LR = 1e-3
BS = 32
IMAGE_DIMS = (96, 96, 3)

data = []
labels = []

# grab the image paths and randomly shuffle them
imagePaths = sorted(list(paths.list_images("Dataset")))
random.seed(42)
random.shuffle(imagePaths)
# loop over the input images
for imagePath in imagePaths:
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
    image = img_to_array(image)
    data.append(image)

    label = imagePath.split(os.path.sep)[-2]
    labels.append(label)

# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {:.2f}MB".format(data.nbytes / (1024 * 1000.0)))

# binarize the labels
lb = LabelBinarizer()
labels = lb.fit_transform(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data,
                                 labels, test_size=0.2, random_state=42)

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
               height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
                     horizontal_flip=True, fill_mode="nearest")

# initialize the model
model = SmallerVGGNet.build(width=IMAGE_DIMS[1], height=IMAGE_DIMS[0],
                        depth=IMAGE_DIMS[2], classes=len(lb.classes_))
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.compile(loss="categorical_crossentropy", optimizer= opt,
          metrics=["accuracy"])
print("model compiled in few minutes successfully ^_^")

# train the network
H = model.fit_generator(aug.flow(trainX, trainY, batch_size=BS),
                        validation_data=(testX, testY),
                        steps_per_epoch=len(trainX) // BS,
                        epochs=EPOCHS, verbose=1)

Based on this code, I expected training to take minutes or a few hours, but once it reaches the model.fit_generator step, each epoch actually takes several hours, so training the whole network takes days, or it may crash and stop working. Is there any way to reduce the training time?

【Question Discussion】:

  • Keras is available in PyPy. You could try that to speed things up. Another approach is to optimize your backend (e.g. by running TensorFlow on GPU).
  • @agtoever Thanks for your reply. My device has no GPU; I am only using the CPU. Do you have another way to speed up the training process?
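Whether the TensorFlow backend actually sees a GPU, as the first comment suggests, can be checked with a short snippet (a minimal sketch; assumes TensorFlow 2.x, where tf.config.list_physical_devices is available):

```python
import tensorflow as tf

# Lists the GPUs TensorFlow can use; an empty list means
# training will run on the CPU only.
gpus = tf.config.list_physical_devices("GPU")
print(gpus)
```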

【Tags】: python keras neural-network conv-neural-network training-data


【Solution 1】:

Set use_multiprocessing=True and workers > 1 when calling fit_generator, because by default the generator is executed only on the main thread.
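A minimal, self-contained sketch of that call, using random toy data rather than the asker's dataset (assumes Keras 2.x / TF ≤ 2.15, where model.fit accepts generators and the workers/use_multiprocessing arguments; the shapes and class count here are illustrative only):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Toy stand-in data: 64 random 8x8 RGB images, 2 one-hot classes.
X = np.random.rand(64, 8, 8, 3).astype("float32")
y = keras.utils.to_categorical(np.random.randint(0, 2, 64), 2)

aug = ImageDataGenerator(horizontal_flip=True)

model = keras.Sequential([
    layers.Flatten(input_shape=(8, 8, 3)),
    layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# workers > 1 prepares batches in background threads so the model
# does not wait on augmentation; use_multiprocessing=True would use
# processes instead (heavier, and fragile on some platforms, as the
# asker found).
history = model.fit(aug.flow(X, y, batch_size=16),
                    steps_per_epoch=4, epochs=1,
                    workers=4, use_multiprocessing=False, verbose=0)
print(len(history.history["loss"]))  # one epoch recorded
```

Note that this mainly helps when the generator (augmentation, disk I/O) is the bottleneck; it does not speed up the forward/backward passes themselves.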

【Discussion】:

  • The default value of workers is actually 1; the generator is executed on the main thread when workers == 0.
  • @Jenia Thanks for your answer. I tried what you mentioned, but it did not speed up the process, and use_multiprocessing=True caused a crash that stopped the training. Do you have another approach?