【Posted】: 2018-05-13 23:00:41
【Problem Description】:
This problem seems to have been around for a long time, and many users have run into it:
stream_executor/cuda/cuda_dnn.cc:444] could not convert BatchDescriptor {count: 0 feature_map_count: 64 spatial: 7 264 value_min: 0.000000 value_max: 0.000000 layout: BatchDepthYX} to cudnn tensor descriptor: CUDNN_STATUS_BAD_PARAM
The message is too cryptic for me to tell what is going wrong in my code; however, the same code runs fine on CPU TensorFlow.
I have heard that tf.cond can be used to work around this, but I am new to tensorflow-gpu, so could someone help me? My code uses Keras and takes generator-style input, to avoid running out of memory. The generator is built from a while True loop that yields data in a fixed batch size.
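Note the count: 0 in the error above: it suggests an empty batch reached cuDNN. A minimal sketch of a while-True generator that guards against ever yielding an empty batch (the function and array names here are hypothetical, not from the original code):

```python
import numpy as np

def safe_batch_generator(features, labels, batch_size):
    """Yield fixed-size (x, y) batches forever, skipping any empty slice.

    `features` and `labels` are placeholder arrays; adapt to your data.
    """
    n = len(features)
    while True:  # Keras-style generators are expected to loop indefinitely
        for start in range(0, n, batch_size):
            x = features[start:start + batch_size]
            y = labels[start:start + batch_size]
            if len(x) == 0:  # guard: never emit a batch with count 0
                continue
            yield x, y
```

The final batch may be smaller than batch_size (here 10 samples with batch_size 4 gives batches of 4, 4, 2), but it is never empty.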
def resnet_model(bin_multiple):
    # input and reshape
    inputs = Input(shape=input_shape)
    reshape = Reshape(input_shape_channels)(inputs)
    # normal convnet layer (have to do one initially to get 64 channels)
    conv = Conv2D(64, (1, bin_multiple * note_range), padding='same', activation='relu')(reshape)
    pool = MaxPooling2D(pool_size=(1, 2))(conv)
    for i in range(int(np.log2(bin_multiple)) - 1):
        print(i)
        # residual block
        bn = BatchNormalization()(pool)
        re = Activation('relu')(bn)
        freq_range = int((bin_multiple / (2 ** (i + 1))) * note_range)
        print(freq_range)
        conv = Conv2D(64, (1, freq_range), padding='same', activation='relu')(re)
        # add and downsample
        ad = add([pool, conv])
        pool = MaxPooling2D(pool_size=(1, 2))(ad)
    flattened = Flatten()(pool)
    fc = Dense(1024, activation='relu')(flattened)
    do = Dropout(0.5)(fc)
    fc = Dense(512, activation='relu')(do)
    do = Dropout(0.5)(fc)
    outputs = Dense(note_range, activation='sigmoid')(do)
    model = Model(inputs=inputs, outputs=outputs)
    return model
model = resnet_model(bin_multiple)
init_lr = float(args['init_lr'])
model.compile(loss='binary_crossentropy',
              optimizer=SGD(lr=init_lr, momentum=0.9),
              metrics=['accuracy', 'mae', 'categorical_accuracy'])
model.summary()
history = model.fit_generator(trainGen.next(), trainGen.steps(), epochs=epochs,
                              verbose=1, validation_data=valGen.next(),
                              validation_steps=valGen.steps(),
                              callbacks=callbacks, workers=8,
                              use_multiprocessing=True)
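One common way an empty batch arises is a mismatch between the steps count passed to fit_generator and the actual number of batches the generator produces. A small sketch of computing the per-epoch step count (the helper name is hypothetical; the original code's trainGen.steps() is assumed to do something similar):

```python
import math

def steps_for(num_samples, batch_size):
    """Number of generator calls per epoch.

    Using ceil counts the final partial batch exactly once, so the
    generator is never asked for a batch beyond the data it holds.
    """
    return math.ceil(num_samples / batch_size)
```

For example, 10 samples with a batch size of 4 gives 3 steps (batches of 4, 4, and 2); requesting a 4th step per epoch would force the generator past its data.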
【Discussion】:
Tags: tensorflow deep-learning gpu tensorflow-gpu