【Question Title】: Running out of memory on Google Colab
【Posted】: 2021-09-21 02:34:05
【Question Description】:

I'm trying to run the TF Object Detection model demo with Faster RCNN on a Google Colab Pro GPU runtime (RAM: 25GB, disk: 147GB), but it fails with the following error:

Tensorflow/core/common_runtime/bfc_allocator.cc:456] Allocator (GPU_0_bfc) ran out of memory trying to allocate 7.18GiB (rounded to 7707033600)requested by op MultiLevelMatMulCropAndResize/MultiLevelRoIAlign/AvgPool-0-TransposeNHWCToNCHW-LayoutOptimizer
If the cause is memory fragmentation maybe the environment variable 'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation. 

It then gives me these statistics:

I tensorflow/core/common_runtime/bfc_allocator.cc:1058] Sum Total of in-use chunks: 7.46GiB
I tensorflow/core/common_runtime/bfc_allocator.cc:1060] total_region_allocated_bytes_: 15034482688 memory_limit_: 16183459840 available bytes: 1148977152 curr_region_allocation_bytes_: 8589934592
I tensorflow/core/common_runtime/bfc_allocator.cc:1066] Stats: 
Limit:                     16183459840
InUse:                      8013051904
MaxInUse:                   8081602560
NumAllocs:                        6801
MaxAllocSize:               7707033600
Reserved:                            0
PeakReserved:                        0
LargestFreeBlock:                    0

tensorflow.python.framework.errors_impl.ResourceExhaustedError:  OOM when allocating tensor with shape[2400,1024,28,28] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node MultiLevelMatMulCropAndResize/MultiLevelRoIAlign/AvgPool-0-TransposeNHWCToNCHW-LayoutOptimizer}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference__dummy_computation_fn_32982]
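
(The log's hint about TF_GPU_ALLOCATOR=cuda_malloc_async addresses memory fragmentation only; a minimal sketch of trying it in a Colab cell follows. The variable must be set before TensorFlow is first imported, and it cannot help when a request genuinely exceeds the GPU's capacity.)

# Minimal sketch: opt in to the async CUDA allocator suggested by the log.
# This must run before the first `import tensorflow`.
import os
os.environ['TF_GPU_ALLOCATOR'] = 'cuda_malloc_async'

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))  # confirm the GPU is visible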

I really don't understand why it runs out of memory allocating just 7 GB on a 25 GB system. How can I fix this? Here is my config file for this task:

# Faster R-CNN with Resnet-50 (v1)
# Trained on COCO, initialized from Imagenet classification checkpoint

# Achieves -- mAP on COCO14 minival dataset.

# This config is TPU compatible.

model {
  faster_rcnn {
    num_classes: 7
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 640
        max_dimension: 640
        pad_to_max_dimension: true
      }
    }
    feature_extractor {
      type: 'faster_rcnn_resnet50_keras'
      batch_norm_trainable: true
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        share_box_across_classes: true
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
    use_static_shapes: true
    use_matmul_crop_and_resize: true
    clip_anchors_to_image: true
    use_static_balanced_label_sampler: true
    use_matmul_gather_in_matcher: true
  }
}

train_config: {
  batch_size: 8
  sync_replicas: true
  startup_delay_steps: 0
  replicas_to_aggregate: 8
  num_steps: 25000
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        cosine_decay_learning_rate {
          learning_rate_base: .04
          total_steps: 25000
          warmup_learning_rate: .013333
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  fine_tune_checkpoint_version: V2
  fine_tune_checkpoint: "faster_rcnn_resnet50_v1_640x640_coco17_tpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
  data_augmentation_options {
    random_horizontal_flip {
    }
  }

  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  use_bfloat16: true  # works only on TPUs
}

train_input_reader: {
  label_map_path: "label_map.pbtxt"
  tf_record_input_reader {
    input_path: "train.record"
  }
}

eval_config: {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
  batch_size: 1;
}

eval_input_reader: {
  label_map_path: "label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "test.record"
  }
}

【Comments】:

  • The GPU appears to have only 16 GB of RAM, and roughly 8 GB of it is already allocated, so this is not a case of allocating 7 GB out of 25 GB. (The log itself confirms this: Limit is 16183459840 bytes, about 15 GiB, with ~8 GiB already InUse, so the additional 7.18 GiB request cannot fit.) This is a very common misconception; allocations don't happen in a vacuum. Besides, there is no code or anything else here we could suggest changing. A quick way to check what memory the GPU really has is sketched after these comments.
  • @Dr.Snoopy Thanks for your comment. I've just edited the question to add the config file used to train this model; there is no model-building code involved in this task, since I'm only using the Object Detection API. Secondly, the resource allocation shown on my Google Colab says I have 24GB of GPU, so is there any way to make use of those 24GB? Thanks!
  • Ah, I just realized it's because the images in each batch take up a lot of memory. I changed the batch size to 2 and it worked!
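
The 25 GB figure in the Colab resource panel is system RAM; the GPU has its own, smaller memory, which is what the bfc_allocator limit reflects. A minimal sketch of checking the GPU's actual capacity by querying nvidia-smi from a Colab cell:

import subprocess

# Query the driver for the GPU's name and memory capacity; "memory.total",
# not system RAM, is what bounds TensorFlow's GPU allocations.
result = subprocess.run(
    ['nvidia-smi', '--query-gpu=name,memory.total,memory.used', '--format=csv'],
    capture_output=True, text=True)
print(result.stdout)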

Tags: tensorflow memory google-colaboratory


【Solution 1】:

Based on https://github.com/tensorflow/models/issues/1817, I realized the problem was the images taking up too much memory at this batch size, so I changed the batch size to 2 and it worked.
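
For reference, the numbers in the log bear this out exactly. The OOM tensor has shape [2400, 1024, 28, 28] with float32 elements, and 2400 matches batch_size (8) × first_stage_max_proposals (300) from the config above, so this tensor scales linearly with batch size; a quick check of the arithmetic:

# Size of the OOM tensor, reconstructed from the logged shape [2400,1024,28,28]:
batch_size = 8
proposals = 300                      # first_stage_max_proposals in the config
channels, height, width = 1024, 28, 28
bytes_per_float32 = 4

size = batch_size * proposals * channels * height * width * bytes_per_float32
print(size)          # 7707033600 -- the exact byte count in the error message
print(size / 2**30)  # ~7.18 GiB, as reported

# With batch_size = 2, the same tensor needs only ~1.79 GiB:
print(size // 4 / 2**30)

The fix itself is a one-line edit of batch_size: 8 to batch_size: 2 in the .config file, or it can be scripted; a sketch using the Object Detection API's config utilities (the 'pipeline.config' path is an assumption for this example):

from object_detection.utils import config_util

# Load the pipeline config, lower the training batch size, and write it back.
configs = config_util.get_configs_from_pipeline_file('pipeline.config')
configs['train_config'].batch_size = 2  # was 8

pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, '.')  # rewrites ./pipeline.config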

【Discussion】:
