【Question Title】: Why is GPU memory usage so different across GPUs when using multi-GPU training in TensorFlow?
【Posted】: 2019-02-28 11:49:39
【Question】:

I am using TensorFlow 1.4.0 and training on two GPUs.

Why is the GPU memory usage so different between the two GPUs? This is the nvidia-smi output:

+-------------------------------+----------------------+----------------------+
|   4  Tesla K80           On   | 00000000:00:1B.0 Off |                    0 |
| N/A   50C    P0    70W / 149W |   8538MiB / 11439MiB |    100%   E. Process |
+-------------------------------+----------------------+----------------------+
|   5  Tesla K80           On   | 00000000:00:1C.0 Off |                    0 |
| N/A   42C    P0    79W / 149W |   4442MiB / 11439MiB |     48%   E. Process |
+-------------------------------+----------------------+----------------------+

GPU 4 uses about twice as much GPU memory as GPU 5. I would expect the memory used on the two GPUs to be roughly the same. Why does this happen? Can anyone help? Thanks a lot!

Here is the code that computes the averaged gradients, together with the two helper functions:

tower_grads = []
lossList = []
accuracyList = []

for gpu in range(NUM_GPUS):
    with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
        print('============ GPU {} ============'.format(gpu))
        imageBatch, labelBatch, epochNow = read_and_decode_TFRecordDataset(
            args.tfrecords, BATCH_SIZE, EPOCH_NUM)
        identityPretrainModel = identity_pretrain_inference.IdenityPretrainNetwork(IS_TRAINING,
                                                                                   BN_TRAINING, CLASS_NUM, DROPOUT_TRAINING)
        logits = identityPretrainModel.inference(
            imageBatch)
        losses = identityPretrainModel.cal_loss(logits, labelBatch)
        accuracy = identityPretrainModel.cal_accuracy(logits, labelBatch)
        optimizer = tf.train.AdamOptimizer(learning_rate=LEARNING_RATE)
        grads_and_vars = optimizer.compute_gradients(losses)
        lossList.append(losses)
        accuracyList.append(accuracy)
        tower_grads.append(grads_and_vars)
grads_and_vars = average_gradients(tower_grads)
train = optimizer.apply_gradients(grads_and_vars)
global_step = tf.train.get_or_create_global_step()
incr_global_step = tf.assign(global_step, global_step + 1)
losses = sum(lossList) / NUM_GPUS
accuracy = sum(accuracyList) / NUM_GPUS



def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(op):
        node_def = op if isinstance(op, tf.NodeDef) else op.node_def
        if node_def.op in PS_OPS:
            return ps_device
        else:
            return device
    return _assign
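As a side note, `assign_to_device` relies on a `PS_OPS` list that is not shown in the question. The sketch below (plain Python, no TensorFlow needed, with `FakeNodeDef` as a hypothetical stand-in for `tf.NodeDef` and the commonly used `PS_OPS` definition assumed) illustrates the routing: variable-state ops go to the parameter-server device, everything else to the GPU.

```python
# Assumed definition; the question does not show PS_OPS.
PS_OPS = ['Variable', 'VariableV2', 'AutoReloadVariable']

class FakeNodeDef:
    """Hypothetical stand-in for tf.NodeDef, carrying only the op type name."""
    def __init__(self, op):
        self.op = op

def assign_to_device(device, ps_device='/cpu:0'):
    def _assign(op):
        # Ops whose type is in PS_OPS (variable state) live on the PS device;
        # all other ops (the actual compute) are pinned to the given GPU.
        node_def = op if isinstance(op, FakeNodeDef) else op.node_def
        if node_def.op in PS_OPS:
            return ps_device
        return device
    return _assign

placer = assign_to_device('/gpu:1')
print(placer(FakeNodeDef('VariableV2')))  # -> /cpu:0
print(placer(FakeNodeDef('MatMul')))      # -> /gpu:1
```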


def average_gradients(tower_grads):
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        # Note that each grad_and_vars looks like the following:
        #   ((grad0_gpu0, var0_gpu0), ... , (grad0_gpuN, var0_gpuN))
        grads = []
        for g, _ in grad_and_vars:
            # Add 0 dimension to the gradients to represent the tower.
            expanded_g = tf.expand_dims(g, 0)

            # Append on a 'tower' dimension which we will average over below.
            grads.append(expanded_g)

        # Average over the 'tower' dimension.
        grad = tf.concat(grads, 0)
        grad = tf.reduce_mean(grad, 0)

        # Keep in mind that the Variables are redundant because they are shared
        # across towers. So .. we will just return the first tower's pointer to
        # the Variable.
        v = grad_and_vars[0][1]
        grad_and_var = (grad, v)
        average_grads.append(grad_and_var)
    return average_grads
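What `average_gradients` computes per variable can be checked with a small NumPy sketch (shapes and values are made up for illustration): stack the per-tower gradients along a new leading axis, then take the mean over that axis.

```python
import numpy as np

# Two towers, one variable with a 2x2 gradient each.
g_gpu0 = np.array([[1.0, 2.0], [3.0, 4.0]])
g_gpu1 = np.array([[3.0, 4.0], [5.0, 6.0]])

# Equivalent of tf.expand_dims + tf.concat(axis=0) + tf.reduce_mean(axis=0):
stacked = np.concatenate([g[np.newaxis, ...] for g in (g_gpu0, g_gpu1)], axis=0)
avg = stacked.mean(axis=0)
print(avg)  # [[2. 3.] [4. 5.]]
```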

【Comments】:

Tags: python tensorflow


【Solution 1】:

The multi-GPU code comes from multigpu_cnn.py. The cause was that line 124, `with tf.device('/cpu:0'):`, was missing! Without it, all the ops end up placed on GPU 0, so GPU 0 consumes much more memory than the others.
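A sketch of the fix (TF 1.x style, using the identifiers from the question's code; the tower body is elided). Wrapping the whole tower loop in `tf.device('/cpu:0')` makes the CPU the default device, so variables and the gradient-averaging ops stay on the CPU, and only the per-tower compute is pinned to each GPU by `assign_to_device`:

```python
with tf.device('/cpu:0'):
    tower_grads = []
    for gpu in range(NUM_GPUS):
        with tf.device(assign_to_device('/gpu:{}'.format(gpu), ps_device='/cpu:0')):
            # ... build the tower: inference, loss, compute_gradients ...
            tower_grads.append(grads_and_vars)
    # Averaging and the apply op stay on the CPU instead of defaulting to GPU 0.
    grads_and_vars = average_gradients(tower_grads)
    train = optimizer.apply_gradients(grads_and_vars)
```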

【Comments】:
