【Question Title】: Multi-GPU inference with TensorFlow
【Posted】: 2019-07-10 15:37:49
【Question Description】:

I want to run multi-GPU inference with TensorFlow/Keras.

Here is my inference code:

 # Assumed imports for this snippet (Matterport Mask_RCNN demo layout);
 # MODEL_DIR, COCO_MODEL_PATH, IMAGE_DIR and config are defined elsewhere.
 import os
 import random
 import skimage.io
 import mrcnn.model as modellib

 model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

 # Load weights trained on MS-COCO
 model.load_weights(COCO_MODEL_PATH, by_name=True)

 # COCO Class names
 # Index of the class in the list is its ID. For example, to get ID of
 # the teddy bear class, use: class_names.index('teddy bear')
 class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',
                'bus', 'train', 'truck', 'boat', 'traffic light',
                'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',
                'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',
                'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',
                'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
                'kite', 'baseball bat', 'baseball glove', 'skateboard',
                'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
                'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
                'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
                'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',
                'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',
                'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',
                'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',
                'teddy bear', 'hair drier', 'toothbrush']


 # Load a random image from the images folder
 file_names = next(os.walk(IMAGE_DIR))[2]
 image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

 # Run detection
 results = model.detect([image], verbose=1)

 # Visualize results
 r = results[0]

Is there a way to run this model on multiple GPUs?

Thanks in advance.

【Question Discussion】:

    Tags: tensorflow keras


    【Solution 1】:

    Increase GPU_COUNT to match the number of GPUs in your system, and pass the new config when creating the model with modellib.MaskRCNN:

    class InferenceConfig(coco.CocoConfig):
        GPU_COUNT = 1 # increase the GPU count based on number of GPUs
        IMAGES_PER_GPU = 1
    
    config = InferenceConfig()
    model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
    

    https://github.com/matterport/Mask_RCNN/blob/master/samples/demo.ipynb
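    One practical consequence (an assumption based on how Matterport's Mask_RCNN derives its config, not stated in the answer itself): `Config.BATCH_SIZE` is computed as `GPU_COUNT * IMAGES_PER_GPU`, and `model.detect()` expects exactly that many images per call. A minimal pure-Python sketch of chunking an image list accordingly (`make_batches` is a hypothetical helper, not part of the library):

```python
def make_batches(images, gpu_count, images_per_gpu):
    """Split a list of images into detect()-sized batches.

    In Mask_RCNN, the effective batch size is GPU_COUNT * IMAGES_PER_GPU,
    so each call to model.detect() should receive exactly that many images.
    """
    batch_size = gpu_count * images_per_gpu
    return [images[i:i + batch_size] for i in range(0, len(images), batch_size)]

# With 2 GPUs and 1 image per GPU, 6 images become 3 batches of 2:
batches = make_batches(list(range(6)), gpu_count=2, images_per_gpu=1)
# batches == [[0, 1], [2, 3], [4, 5]]
```

    With this sizing, passing a single image to `model.detect([image])` only matches the config when `GPU_COUNT = 1` and `IMAGES_PER_GPU = 1`.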

    【Comments】:

    • Hey, when I try to run with two GPUs it works fine, but with 4 GPUs it gives InvalidArgumentError: Input to reshape is a tensor with 600 values, but the requested shape has 4800 [[{{node tower_0_4/mask_rcnn/mrcnn_detection/Reshape_1}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](tower_0_4/mask_rcnn/mrcnn_detection/Pack, tower_0_4/mask_rcnn/mrcnn_detection/Reshape_1/shape)]]. Any ideas? Thanks
    • When we set GPU_COUNT = 2 and IMAGES_PER_GPU = 1 — how does it work? Do the two model replicas execute together as if on one GPU, running inference on a single image, or is the data split across the GPUs during inference and joined afterwards?
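    On the split question above: to the best of my understanding (an assumption, not confirmed in this thread), Mask_RCNN builds one model replica ("tower") per GPU, each tower receives IMAGES_PER_GPU images from the input batch, and the per-tower results are joined back in order. A toy sketch of that partitioning in plain Python (`split_across_towers` is a hypothetical illustration, not a library function):

```python
def split_across_towers(batch, gpu_count, images_per_gpu):
    # Each tower (one model replica per GPU) gets a contiguous slice
    # of the input batch; outputs are later concatenated in the same order.
    assert len(batch) == gpu_count * images_per_gpu, "batch must fill all towers"
    return [batch[g * images_per_gpu:(g + 1) * images_per_gpu]
            for g in range(gpu_count)]

# GPU_COUNT = 2, IMAGES_PER_GPU = 1: a 2-image batch is split one per GPU.
towers = split_across_towers(["img_a", "img_b"], gpu_count=2, images_per_gpu=1)
# towers == [["img_a"], ["img_b"]]
```

    Under this reading, GPU_COUNT = 2 with IMAGES_PER_GPU = 1 does not speed up inference on a single image; it lets two images be processed in parallel, one per GPU.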