【Question Title】: Multi GPU/Tower setup Tensorflow 1.2 Estimator
【Posted】: 2017-07-05 09:54:37
【Question】:

I want to turn my _model_fn for Estimator into a multi-GPU solution.

Is there a way to do this within the Estimator API, or do I have to explicitly code device placement and synchronization?

I know that I can use tf.device('gpu:X') to place my model on GPU X. I also know that I can loop over the available GPU names to replicate my model across multiple GPUs, and that a single input queue can feed multiple GPUs.
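
For reference, the usual replication pattern looks roughly like this (a minimal sketch; build_tower, features_split and labels_split are hypothetical names standing in for the per-GPU model construction and the per-GPU input shards):

    import tensorflow as tf

    # Replicate the model once per GPU ("tower"), sharing the variables.
    with tf.variable_scope(tf.get_variable_scope()):
        for i, device in enumerate(['/gpu:0', '/gpu:1']):  # illustrative devices
            with tf.device(device):
                # build_tower is hypothetical: it builds one copy of the model
                # on shard i of the batch and returns that tower's loss.
                tower_loss = build_tower(features_split[i], labels_split[i])
            # Reuse (share) the variables for all towers after the first.
            tf.get_variable_scope().reuse_variables()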

What I don't know is which parts (optimizer, loss computation) can actually be moved to a GPU, and where I have to synchronize the computation.

From the Cifar10 example, I gather that I only have to synchronize the gradients.

Especially when using

    train_op = tf.contrib.layers.optimize_loss(
        loss=loss,
        global_step=tf.contrib.framework.get_global_step(),
        learning_rate=learning_rate,
        learning_rate_decay_fn=_learning_rate_decay_fn,
        optimizer=optimizer)

I can no longer manually call optimizer.compute_gradients() and optimizer.apply_gradients(), because this is handled internally by optimize_loss(..).
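
For comparison, the two-step form that optimize_loss(..) wraps looks roughly like this (a minimal sketch; averaging the gradients across towers would go between the two calls):

    optimizer = tf.train.GradientDescentOptimizer(learning_rate)  # any optimizer
    # compute_gradients returns a list of (gradient, variable) pairs ...
    grads_and_vars = optimizer.compute_gradients(loss)
    # ... which can be inspected or averaged before being applied.
    train_op = optimizer.apply_gradients(
        grads_and_vars, global_step=tf.train.get_global_step())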

I am wondering how to average the gradients as in the cifar10 example Cifar10-MultiGPU, and whether this is even the right approach for an Estimator.

【Comments】:

    Tags: python tensorflow multi-gpu


    【Solution 1】:

    Actually, you can implement multi-GPU support inside the model_fn function.
    You can find the complete code here. It supports a multi-threaded queue reader and multiple GPUs for very high-speed training with the Estimator.

    Code snippet (get the full code from the link above):

    def model_fn(features, labels, mode, params):
        # Note: nets_factory, FLAGS, get_learning_rate, get_optimizer,
        # average_gradients and slim are defined in the full code linked above.
        # network
        network_fn = nets_factory.get_network_fn(
            FLAGS.model_name,
            num_classes=params['num_classes'],
            weight_decay=0.00004,
            is_training=(mode == tf.estimator.ModeKeys.TRAIN))
    
        # if predict. Provide an estimator spec for `ModeKeys.PREDICT`.
        if mode == tf.estimator.ModeKeys.PREDICT:
            logits, end_points = network_fn(features)
            return tf.estimator.EstimatorSpec(mode=mode, predictions={"output": logits})
    
        # Create global_step and lr
        global_step = tf.train.get_global_step()
        learning_rate = get_learning_rate("exponential", FLAGS.base_lr,
                                          global_step, decay_steps=10000)
    
        # Create optimizer
        optimizer = get_optimizer(FLAGS.optimizer, learning_rate)
    
        # Multi GPU support - we need to make sure that the splits sum up to
        # the batch size (in case the batch size is not divisible by the
        # number of GPUs). This code puts the remaining samples on the last
        # GPU, e.g. for a batch size of 15 with 2 GPUs the splits will be
        # [7, 8].
        batch_size = tf.shape(features)[0]
        split_size = batch_size // len(params['gpus_list'])
        splits = [split_size, ] * (len(params['gpus_list']) - 1)
        splits.append(batch_size - split_size * (len(params['gpus_list']) - 1))
    
        # Split the features and labels
        features_split = tf.split(features, splits, axis=0)
        labels_split = tf.split(labels, splits, axis=0)
        tower_grads = []
        eval_logits = []
    
        with tf.variable_scope(tf.get_variable_scope()):
            for i in range(len(params['gpus_list'])):
                with tf.device('/gpu:%d' % i):
                    with tf.name_scope('%s_%d' % ("classification", i)) as scope:
                        # model and loss
                        logits, end_points = network_fn(features_split[i])
                        tf.losses.softmax_cross_entropy(labels_split[i], logits)
                        update_ops = tf.get_collection(
                            tf.GraphKeys.UPDATE_OPS, scope)
                        updates_op = tf.group(*update_ops)
                        with tf.control_dependencies([updates_op]):
                            losses = tf.get_collection(tf.GraphKeys.LOSSES, scope)
                            total_loss = tf.add_n(losses, name='total_loss')
                        # reuse var
                        tf.get_variable_scope().reuse_variables()
                        # grad compute
                        grads = optimizer.compute_gradients(total_loss)
                        tower_grads.append(grads)
                        # for eval metric ops
                        eval_logits.append(logits)
    
        # We must calculate the mean of each gradient. Note that this is the
        # synchronization point across all towers.
        grads = average_gradients(tower_grads)
    
        # Apply the gradients to adjust the shared variables.
        apply_gradient_op = optimizer.apply_gradients(
            grads, global_step=global_step)
    
        # Track the moving averages of all trainable variables.
        variable_averages = tf.train.ExponentialMovingAverage(0.9999, global_step)
        variables_averages_op = variable_averages.apply(tf.trainable_variables())
    
        # Group all updates to into a single train op.
        train_op = tf.group(apply_gradient_op, variables_averages_op)
    
        # Create eval metric ops
        _predictions = tf.argmax(tf.concat(eval_logits, 0), 1)
        _labels = tf.argmax(labels, 1)
        eval_metric_ops = {
            "acc": slim.metrics.streaming_accuracy(_predictions, _labels)}
    
        # Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
        # Note: total_loss here refers to the last tower's loss, which is what
        # this code reports as the training loss.
        return tf.estimator.EstimatorSpec(
            mode=mode,
            loss=total_loss,
            train_op=train_op,
            eval_metric_ops=eval_metric_ops)
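
    Note that average_gradients is not defined in the snippet; it is part of the full code linked above. A sketch along the lines of the one in the cifar10 multi-GPU tutorial:

        import tensorflow as tf

        def average_gradients(tower_grads):
            # tower_grads is a list (one entry per tower) of lists of
            # (gradient, variable) pairs, as returned by compute_gradients.
            average_grads = []
            for grad_and_vars in zip(*tower_grads):
                # grad_and_vars holds one variable's gradient from every tower.
                grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
                grad = tf.reduce_mean(tf.concat(grads, 0), 0)
                # Variables are shared between towers, so the reference from
                # the first tower suffices.
                average_grads.append((grad, grad_and_vars[0][1]))
            return average_grads

    When constructing the Estimator, gpus_list is passed in through params, e.g. tf.estimator.Estimator(model_fn=model_fn, params={'num_classes': 10, 'gpus_list': [0, 1]}) (the values here are illustrative).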
    

    【Discussion】:
