【Title】: Can't convert PyTorch to ONNX
【Posted】: 2020-03-14 12:17:22
【Question】:

I get this error when trying to convert this PyTorch model to ONNX. I searched on GitHub, and this error appeared before in version 1.1.0 but was apparently fixed. I'm now on torch 1.4.0 (Python 3.6.9), and I still see this error:

File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 416, in _export
fixed_batch_size=fixed_batch_size)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 296, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 135, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 179, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 657, in _run_symbolic_function
return op_fn(g, *inputs, **attrs)
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 128, in wrapper
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 128, in <listcomp>
args = [_parse_arg(arg, arg_desc) for arg, arg_desc in zip(args, arg_descriptors)]
File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 81, in _parse_arg
"', since it's not constant, please try to make "
RuntimeError: Failed to export an ONNX attribute 'onnx::Gather', since it's not constant, please try to make things (e.g., kernel size) static if possible

How can I solve this? I also tried the latest nightly build, and the same error appears.

My code:

from model import BiSeNet
import torch.onnx
import torch

net = BiSeNet(19)
net.cuda()
net.load_state_dict(torch.load('/content/drive/My Drive/Collab/fp/res/cp/79999_iter.pth'))
net.eval()

dummy = torch.rand(1,3,512,512).cuda()
torch.onnx.export(net, dummy, "Model.onnx", input_names=["image"], output_names=["output"])

I added print(v.node()) to symbolic_helper.py right before the RuntimeError is raised, to see what was causing the error.

This is the output: %595 : Long() = onnx::Gather[axis=0](%592, %594) # /content/drive/My Drive/Collab/fp/model.py:111:0

The line at model.py:111 is: avg = F.avg_pool2d(feat32, feat32.size()[2:])

This source indicates that PyTorch's tensor.size method cannot be recognized by ONNX and needs to be replaced with a constant.
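The distinction can be seen in a small standalone sketch (a toy tensor, not the actual BiSeNet feature map): `tensor.size()` returns a `torch.Size` whose elements become graph nodes (such as `onnx::Gather`) during tracing, while converting them to plain Python ints freezes them into constants the exporter can handle:

```python
import torch
import torch.nn.functional as F

x = torch.rand(1, 3, 512, 512)

# size()[2:] returns a torch.Size; during tracing each element is a
# graph operation, which the exporter cannot treat as a constant
dynamic_kernel = x.size()[2:]
print(type(dynamic_kernel))   # <class 'torch.Size'>

# converting each element to a plain Python int makes the kernel
# size an ordinary constant list
static_kernel = [int(s) for s in x.size()[2:]]
print(static_kernel)          # [512, 512]

# both forms compute the same pooling eagerly, but only the static
# one exports cleanly because the kernel size is no longer traced
out = F.avg_pool2d(x, static_kernel)
print(out.shape)              # torch.Size([1, 3, 1, 1])
```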

【Discussion】:

    Tags: python pytorch onnx


    【Solution 1】:

    I previously ran into a similar error when exporting with:

    torch.onnx.export(model, x, ONNX_FILE_PATH)

    I fixed it by specifying opset_version like this:

    torch.onnx.export(model, x, ONNX_FILE_PATH, opset_version=11)

    【Discussion】:

    【Solution 2】:

    You can run:

    print(feat32.size()[2:])
    

    and replace:

    F.avg_pool2d(feat32, feat32.size()[2:]) 
    

    with:

    F.avg_pool2d(feat32, your_printed_constant_result)
    

    【Discussion】:

      【Solution 3】:

      I ran into the same problem.

      In my case, F.adaptive_avg_pool2d was not supported. You have to try other operations.

      I hope this helps.

      Thanks
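As a hedged sketch of "trying other operations" (assuming the spatial size of the feature map entering the pooling layer is known and fixed, here a hypothetical 16×16), `F.adaptive_avg_pool2d(x, 1)` can be replaced with an equivalent `F.avg_pool2d` whose kernel is a static tuple:

```python
import torch
import torch.nn.functional as F

# toy feature map with a known, fixed spatial size (assumption: 16x16)
x = torch.rand(1, 64, 16, 16)

# adaptive pooling down to 1x1 — may fail to export on older opsets
adaptive = F.adaptive_avg_pool2d(x, 1)

# equivalent fixed-kernel pooling: the kernel covers the whole 16x16
# map, so the result is the same but the kernel size is a constant
fixed = F.avg_pool2d(x, (16, 16))

print(torch.allclose(adaptive, fixed))
```

This only works when the export-time input resolution is fixed; with variable input sizes the adaptive form has no single static kernel equivalent.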

      【Discussion】:

        【Solution 4】:

        Change instances of:

        x = F.avg_pool2d(x, x.shape[2:])

        to:

        x_shape = [int(s) for s in x.shape[2:]]
        x = F.avg_pool2d(x, x_shape)
        

        This way the input shape to avg_pool2d is the constant [k, k] rather than torch.Size([k, k]), as mentioned here.

        【Discussion】:

          【Solution 5】:

          For those who land on this question from a Google search after getting Unable to cast from non-held to held instance (T& to Holder) (compile in debug mode for type information), try adding operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK like this (as mentioned here):

          torch.onnx.export(model, input, "output-name.onnx", export_params=True, opset_version=12, operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
          

          This solved the "held instance" problem in my case.

          【Discussion】:
