Title: Running multiple tensorflow sessions subsequently
Posted: 2019-01-23 17:08:11
Question:

I am developing a simple REST controller using gunicorn and Flask.

On each REST call, I execute the following code:

@app.route('/objects', methods=['GET'])
def get_objects():
    video_title = request.args.get('video_title')
    video_path = "../../video/" + video_title
    cl.logger.info(video_path)
    start = request.args.get('start')
    stop = request.args.get('stop')
    scene = [start, stop]

    frames = images_utils.extract_frames(video_path, scene[0], scene[1], 1)
    cl.logger.info(scene[0]+" "+scene[1])
    objects = list()
    ##objects
    model = GenericDetector('../resources/open_images/frozen_inference_graph.pb', '../resources/open_images/labels.txt')
    model.run(frames)
    for result in model.get_boxes_and_labels():
        if result is not None:
            objects.append(result)

    data = {'message': {
        'start_time': scene[0],
        'end_time': scene[1],
        'path': video_path,
        'objects':objects,
    }, 'metadata_type': 'detection'}

    return jsonify({'status': data}), 200

This code runs a frozen TensorFlow model as follows:

import json
from multiprocessing import Process

import numpy as np
import tensorflow as tf

class GenericDetector(Process):

    def __init__(self, model, labels):
        # ## Load a (frozen) Tensorflow model into memory.
        self.detection_graph = tf.Graph()
        with self.detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.gfile.GFile(model, 'rb') as fid:
                serialized_graph = fid.read()
                od_graph_def.ParseFromString(serialized_graph)
                tf.import_graph_def(od_graph_def, name='')

        self.boxes_and_labels = []

        # ## Loading label map
        with open(labels) as f:
            txt_labels = f.read()
            self.labels = json.loads(txt_labels)


    def run(self, frames):
        tf.reset_default_graph()
        with self.detection_graph.as_default():
            config = tf.ConfigProto()
            config.gpu_options.allow_growth = True
            with tf.Session(graph=self.detection_graph, config=config) as sess:

                image_tensor = self.detection_graph.get_tensor_by_name('image_tensor:0')
                # Each box represents a part of the image where a particular object was detected.
                detection_boxes = self.detection_graph.get_tensor_by_name('detection_boxes:0')
                # Each score represents the level of confidence for each of the objects.
                detection_scores = self.detection_graph.get_tensor_by_name('detection_scores:0')
                detection_classes = self.detection_graph.get_tensor_by_name('detection_classes:0')
                num_detections = self.detection_graph.get_tensor_by_name('num_detections:0')

                i = 0
                for frame in frames:

                    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
                    image_np_expanded = np.expand_dims(frame, axis=0)

                    # Actual detection.
                    (boxes, scores, classes, num) = sess.run(
                        [detection_boxes, detection_scores, detection_classes, num_detections], \
                        feed_dict={image_tensor: image_np_expanded})

                    boxes = np.squeeze(boxes)
                    classes = np.squeeze(classes).astype(np.int32)
                    scores = np.squeeze(scores)

                    for j, box in enumerate(boxes):
                        if all(v == 0 for v in box):
                            continue

                        self.boxes_and_labels.append(
                            {
                                "ymin": str(box[0]),
                                "xmin": str(box[1]),
                                "ymax": str(box[2]),
                                "xmax": str(box[3]),
                                "label": self.labels[str(classes[j])],
                                "score": str(scores[j]),
                                "frame":i
                            })
                    i += 1
            sess.close()

    def get_boxes_and_labels(self):
        return self.boxes_and_labels

Everything seems to work, but as soon as I send a second request to the server, my GPU (GTX 1050) goes out of memory:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape [3,3,256,256] and type float

If I try the call again after that, it works most of the time; sometimes it also works on subsequent calls. I tried executing the GenericDetector in a separate process (making GenericDetector inherit from Process), but it did not help. I read that the GPU memory should be released once the process that performed the REST GET dies, so I also tried adding a sleep(30) after executing the TensorFlow model, with no luck. What am I doing wrong?

Discussion:

    Tags: python tensorflow flask gunicorn


    Solution 1:

    The problem is that TensorFlow allocates memory for the process, not for the Session; closing the session is not enough (even if you set the allow_growth option).

    The first is the allow_growth option, which attempts to allocate only as much GPU memory based on runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, we extend the GPU memory region needed by the TensorFlow process. Note that we do not release memory, since that can lead to even worse memory fragmentation.

    There is an issue on the TF GitHub with some solutions; for example, you can decorate your run method with the RunAsCUDASubprocess proposed in that thread.
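    As an illustration of that subprocess idea, here is a minimal sketch (not the actual RunAsCUDASubprocess decorator from the issue; the helper name run_in_subprocess is my own). Because TensorFlow allocates GPU memory per process, running each inference in a short-lived child process guarantees the memory is returned to the driver when the child exits:

```python
import multiprocessing as mp

def _worker(fn, args, queue):
    # Runs in the child process: everything allocated here, including
    # GPU memory grabbed by TensorFlow, is released when the child exits.
    queue.put(fn(*args))

def run_in_subprocess(fn, *args):
    """Run fn(*args) in a short-lived child process and return its result."""
    queue = mp.Queue()
    proc = mp.Process(target=_worker, args=(fn, args, queue))
    proc.start()
    result = queue.get()  # fetch before join() so a large result can't deadlock
    proc.join()
    return result
```

    In the route handler the detection could then run as objects = run_in_subprocess(detect_objects, frames), where the hypothetical detect_objects builds the GenericDetector and returns the boxes, so the CUDA context never outlives the request. Note that with the "spawn" start method (the default on Windows and macOS), fn must be defined at module level so it can be pickled.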

    Discussion:

    • I found the solution in the link in your question, namely: sess.close(), then cuda.select_device(0) and cuda.close()
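    Spelling that comment out as a small helper (the name release_gpu is my own; it assumes the numba package, whose cuda module provides select_device and close, plus a CUDA-capable GPU):

```python
def release_gpu(sess):
    # Close the TF session, then tear down the CUDA context so the
    # driver reclaims all GPU memory held by this process.
    from numba import cuda  # imported lazily: only needed when actually releasing
    sess.close()
    cuda.select_device(0)  # attach this thread to GPU 0
    cuda.close()           # destroy that device's context for this thread
```

    Be aware that after cuda.close() the process cannot simply keep using the GPU, so this fits a one-shot worker better than a long-lived gunicorn worker.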
    Solution 2:

    This error means you are trying to fit something bigger than the available memory into the GPU. Maybe you can reduce the number of parameters somewhere in your model to make it lighter?

    Discussion:
