【Question Title】: ssd_resnet50 model stuck when loading IR to the plugin
【Posted】: 2025-12-05 22:55:01
【Problem Description】:

I am trying to run the SSD ResNet50 FPN COCO model (ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03) on an NCS2 with the MYRIAD plugin through the Python API, but it gets stuck while loading the IR to the plugin and prints the following errors.

E: [xLink] [     80143] handleIncomingEvent:240 handleIncomingEvent() Read failed -4

E: [xLink] [     80143] dispatcherEventReceive:308  dispatcherEventReceive() Read failed -4 | event 0x7f35137fde80 USB_WRITE_REQ

E: [xLink] [     80143] eventReader:256 eventReader stopped
E: [xLink] [     80144] dispatcherEventSend:908 Write failed event -4

E: [watchdog] [     81144] sendPingMessage:164  Failed send ping message: X_LINK_ERROR
E: [watchdog] [     82144] sendPingMessage:164  Failed send ping message: X_LINK_ERROR
E: [watchdog] [     83144] sendPingMessage:164  Failed send ping message: X_LINK_ERROR
E: [watchdog] [     84145] sendPingMessage:164  Failed send ping message: X_LINK_ERROR

...

Failed send ping message: X_LINK_ERROR keeps being printed until I press Ctrl+C to kill the script. I noticed the USB_WRITE_REQ in the errors, so I suspected something related to the USB 3 port, but when I tried the lighter ssd_mobilenet_v2_coco model it worked like a charm.

Here is the command used to generate the IR (the IR was generated successfully):

python mo_tf.py --input_model ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/frozen_inference_graph.pb --output_dir ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16 --tensorflow_use_custom_operations_config ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/pipeline.config --data_type FP16
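
As a quick sanity check, the generated IR can be parsed on the host with the same 2018 R5 Python API without involving the MYRIAD plugin at all; if this step works, the XML and weights themselves are readable and the problem is narrowed down to loading onto the device. A minimal sketch, assuming the FP16 IR paths produced by the command above:

import os
from openvino.inference_engine import IENetwork

model_xml = os.path.expanduser(
    "~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/"
    "openvino_model/FP16/frozen_inference_graph.xml")
model_bin = os.path.splitext(model_xml)[0] + ".bin"

# Parsing the IR only reads the files; no device is opened here.
net = IENetwork(model=model_xml, weights=model_bin)
input_blob = next(iter(net.inputs))
print("input shape:", net.inputs[input_blob].shape)  # expected to be [1, 3, 640, 640]
print("number of layers:", len(net.layers))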

Here is the command I use for testing:

python test.py -m ~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16/frozen_inference_graph.xml -i ~/workspace/object-detection/test_images/image.jpg -d MYRIAD

Here is a snippet of the Python script:

import logging as log
import sys

from openvino.inference_engine import IENetwork, IEPlugin

# args, model_xml and model_bin come from the script's argument parsing (not shown)
plugin = IEPlugin(device=args.device, plugin_dirs=args.plugin_dir)
if args.cpu_extension and 'CPU' in args.device:
    plugin.add_cpu_extension(args.cpu_extension)
# Read IR
log.info("Reading IR...")
net = IENetwork(model=model_xml, weights=model_bin)

if plugin.device == "CPU":
    supported_layers = plugin.get_supported_layers(net)
    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
    if len(not_supported_layers) != 0:
        log.error("Following layers are not supported by the plugin for specified device {}:\n {}".
                  format(plugin.device, ', '.join(not_supported_layers)))
        log.error("Please try to specify cpu extensions library path in demo's command line parameters using -l "
                  "or --cpu_extension command line argument")
        sys.exit(1)
assert len(net.inputs.keys()) == 1, "Demo supports only single input topologies"
assert len(net.outputs) == 1, "Demo supports only single output topologies"
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))

n, c, h, w = net.inputs[input_blob].shape

log.info("Loading IR to the plugin...")
exec_net = plugin.load(network=net) # <== stuck at this line
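
One way to make the hang observable instead of blocking forever is to run plugin.load() in a worker thread and wait on it with a timeout. This is only a diagnostic sketch: it assumes the plugin, net and log objects from the snippet above, the 120 s budget is an arbitrary choice, and the stuck call itself cannot be cancelled, but the script regains control and can report the failure:

import concurrent.futures
import os

LOAD_TIMEOUT_S = 120  # arbitrary budget; the mobilenet model loads well within this

pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
future = pool.submit(plugin.load, network=net)
try:
    exec_net = future.result(timeout=LOAD_TIMEOUT_S)
except concurrent.futures.TimeoutError:
    log.error("Loading the network to %s did not finish within %d s",
              plugin.device, LOAD_TIMEOUT_S)
    # The worker thread is still blocked inside plugin.load() and cannot be
    # killed; os._exit() skips the executor's shutdown join so the process
    # terminates instead of hanging again on exit.
    os._exit(1)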

The only reason I can think of for why ssd_mobilenet_v2_coco_2018_03_29 works while ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03 does not is that the former is 33 MB while the latter is about 100 MB. I suspect the SSD ResNet50 model may be hitting a resource limit on my laptop. If that is the cause, how can I work around it? I am using l_openvino_toolkit_p_2018.5.455 on Ubuntu 18.04.
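
To put numbers on that size comparison, the weight files of the two FP16 IRs can be measured directly (the ssd_mobilenet_v2 path below is an assumption about the directory layout and may need adjusting):

import os

def ir_weights_mb(xml_path):
    """Size of the .bin weights file that accompanies an IR .xml, in MB."""
    bin_path = os.path.splitext(os.path.expanduser(xml_path))[0] + ".bin"
    return os.path.getsize(bin_path) / (1024 * 1024)

for xml in (
    "~/workspace/pi/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03/openvino_model/FP16/frozen_inference_graph.xml",
    "~/workspace/pi/ssd_mobilenet_v2_coco_2018_03_29/openvino_model/FP16/frozen_inference_graph.xml",  # assumed location
):
    print("%s: %.1f MB" % (xml, ir_weights_mb(xml)))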

The SSD ResNet50 FPN COCO model comes from the TensorFlow Object Detection Model Zoo and is listed as supported by the OpenVINO toolkit (https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow).

【Question Comments】:

  • Could you please specify which NCSDK version you are using, so that the question stays relevant over time? You can find it in a file named version.txt; in my case the path is /opt/movidius/version.txt (for example, my version is 2.10.01.01).

Tags: python object-detection-api openvino


【Solution 1】:

This model is not currently supported on MYRIAD, and the issue is already known to the development team. We will let you know once it becomes supported.

【Comments】: