【Posted】: 2020-04-21 18:18:28
【Problem description】:
I have frozen and exported a SavedModel which, according to saved_model_cli, takes a batch of videos as input with the following signature:
The given SavedModel SignatureDef contains the following input(s):
inputs['ims_ph'] tensor_info:
dtype: DT_UINT8
shape: (1, 248, 224, 224, 3)
name: Placeholder:0
inputs['samples_ph'] tensor_info:
dtype: DT_FLOAT
shape: (1, 173774, 2)
name: Placeholder_1:0
The given SavedModel SignatureDef contains the following output(s):
... << OUTPUTS >> ......
Method name is: tensorflow/serving/predict
I have a TF-Serving (HTTP/REST) server running successfully on localhost. In my Python client code I have two populated numpy.ndarray objects: ims with shape (1, 248, 224, 224, 3), and samples with shape (1, 173774, 2).
I am trying to run inference against my TF model server (see the client code below), but I receive the following error: {u'error': u'JSON Parse error: Invalid value. at offset: 0'}
# I have tried the following combinations without success:
data = {"instances" : [{"ims_ph": ims.tolist()}, {"samples_ph": samples.tolist()} ]}
data = {"inputs" : { "ims_ph": ims, "samples_ph": samples} }
r = requests.post(url="http://localhost:9000/v1/models/multisensory:predict", data=data)
The TF-Serving REST docs do not seem to indicate that these two input tensors require any extra escaping/encoding. Since this is not binary data, I don't think base64 encoding is the right approach either. Any pointer to a working approach here would be much appreciated!
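A sketch of the two changes that usually resolve this exact error, untested against this particular model: pass the dict via requests' `json=` keyword (the attempts above use `data=`, which sends the dict form-encoded rather than as a JSON body, producing the parse error at offset 0), and call `.tolist()` on every ndarray so the payload is JSON-serializable. The dummy array shapes below are reduced for illustration; `predict` targets the same URL as the question.

```python
import json

import numpy as np
import requests

# Small dummy arrays standing in for the real ims/samples
# (the full shapes are (1, 248, 224, 224, 3) and (1, 173774, 2)).
ims = np.zeros((1, 2, 4, 4, 3), dtype=np.uint8)
samples = np.zeros((1, 5, 2), dtype=np.float32)

# Columnar "inputs" format: one entry per named input. .tolist()
# converts each ndarray into nested Python lists, since numpy arrays
# are not JSON-serializable.
payload = {"inputs": {"ims_ph": ims.tolist(), "samples_ph": samples.tolist()}}


def predict(payload):
    # json= serializes the dict and sets Content-Type: application/json;
    # data=payload would send it form-encoded, which the server cannot
    # parse ("JSON Parse error: Invalid value. at offset: 0").
    return requests.post(
        "http://localhost:9000/v1/models/multisensory:predict",
        json=payload,
    )


# Sanity check: the payload round-trips through JSON intact.
decoded = json.loads(json.dumps(payload))
print(decoded["inputs"]["samples_ph"] == samples.tolist())  # → True
```

With the "instances" (row) format instead, each list element must be one dict carrying all named inputs for a single example, e.g. `{"instances": [{"ims_ph": ims[0].tolist(), "samples_ph": samples[0].tolist()}]}`; the first attempt above splits the two inputs into two separate instances, which would also be rejected.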
【Discussion】:
Tags: python rest tensorflow tensorflow-serving