【Posted】: 2019-11-09 17:56:22
【Question】:
I have a large 5 GB dataset that I want to use to train a neural network model built with Keras. Although I am using an Nvidia Tesla P100 GPU, training is very slow (each epoch takes roughly 60-70 seconds, with batch size = 10000). After some reading and searching, I found that using Keras's fit_generator instead of the usual fit can speed up training. To that end, I wrote the following code:
from __future__ import print_function
import numpy as np
from keras import Sequential
from keras.layers import Dense
import keras
from sklearn.model_selection import train_test_split
def generator(C, r, batch_size):
    samples_per_epoch = C.shape[0]
    number_of_batches = samples_per_epoch / batch_size
    counter = 0
    while 1:
        X_batch = np.array(C[batch_size * counter:batch_size * (counter + 1)])
        y_batch = np.array(r[batch_size * counter:batch_size * (counter + 1)])
        counter += 1
        yield X_batch, y_batch
        # reset the counter so the generator yields data in the next epoch as well
        if counter >= number_of_batches:
            counter = 0
if __name__ == "__main__":
    X, y = readDatasetFromFile()
    X_tr, X_ts, y_tr, y_ts = train_test_split(X, y, test_size=.2)

    model = Sequential()
    model.add(Dense(16, input_dim=X.shape[1]))
    model.add(keras.layers.advanced_activations.PReLU())
    model.add(Dense(16))
    model.add(keras.layers.advanced_activations.PReLU())
    model.add(Dense(16))
    model.add(keras.layers.advanced_activations.PReLU())
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

    batch_size = 1000
    model.fit_generator(generator(X_tr, y_tr, batch_size), epochs=200,
                        steps_per_epoch=X.shape[0] / batch_size,
                        validation_data=generator(X_ts, y_ts, batch_size * 2),
                        validation_steps=X.shape[0] / batch_size * 2,
                        verbose=2, use_multiprocessing=True)

    loss, accuracy = model.evaluate(X_ts, y_ts, verbose=0)
    print(loss, accuracy)
After switching to fit_generator, training time improved somewhat, but it is still slow (each epoch now takes roughly 40-50 seconds). Running nvidia-smi in a terminal shows GPU utilization of only ~15%, which makes me suspect my code is wrong. I am posting my code above to ask whether there is a mistake in it that degrades GPU performance.
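Side note on the generator itself: Keras documents keras.utils.Sequence as the input format that is safe to combine with use_multiprocessing=True; a plain Python generator like the one above is not safe to share across worker processes and can yield duplicated batches. Below is a minimal sketch of the same in-memory pipeline written as a Sequence (the class name ArrayBatchSequence is illustrative, not from the original code):

import numpy as np
from keras.utils import Sequence

class ArrayBatchSequence(Sequence):
    # Wraps in-memory arrays and serves them one batch at a time by index.
    def __init__(self, C, r, batch_size):
        self.C, self.r, self.batch_size = C, r, batch_size

    def __len__(self):
        # Number of batches per epoch, rounding up to cover a partial final batch.
        return int(np.ceil(self.C.shape[0] / float(self.batch_size)))

    def __getitem__(self, idx):
        start = idx * self.batch_size
        stop = (idx + 1) * self.batch_size
        return self.C[start:stop], self.r[start:stop]

With a Sequence, steps_per_epoch and validation_steps can be omitted, since Keras derives the number of batches from __len__.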
Thank you,
【Comments】:
- Have you tried forcing the GPU assignment with CUDA_VISIBLE_DEVICES?
- @ParthasarathySubburaj Thanks for the quick reply! How do I do that?
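For anyone landing here: CUDA_VISIBLE_DEVICES is an environment variable that CUDA reads at startup, so it must be set before TensorFlow initializes. Two common ways (the script name train.py is a placeholder):

# Option 1: set it in the shell when launching the script ("0" selects the first GPU)
#   CUDA_VISIBLE_DEVICES=0 python train.py
# Option 2: set it in Python before importing keras/tensorflow
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"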
Tags: python tensorflow keras