I created a model and saved it at the path "/usr/local/google/home/abc/Jupyter_Notebooks/export".
I then committed it to a TensorFlow Serving Docker container, ran inference against the model, and got results.
The commands run at the command prompt to accomplish this are as follows:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/export sb:/models/export
sudo docker commit --change "ENV MODEL_NAME export" sb rak_iris_container
sudo docker kill sb
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/export,target=/models/export -e MODEL_NAME=export -t tensorflow/serving &
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/export/1554294699 --all
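The signature information can also be inspected from Python rather than with saved_model_cli; a minimal TF 1.x sketch, assuming the versioned export directory shown above:

import tensorflow as tf

export_path = "/usr/local/google/home/abc/Jupyter_Notebooks/export/1554294699"
with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel under the "serve" tag and list its signature names,
    # mirroring part of what saved_model_cli show --all prints.
    meta_graph = tf.saved_model.loader.load(sess, ["serve"], export_path)
    print(list(meta_graph.signature_def.keys()))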
curl -d '{"examples":[{"SepalLength":[5.1],"SepalWidth":[3.3],"PetalLength":[1.7],"PetalWidth":[0.5]}]}' \
-X POST http://localhost:8501/v1/models/export:classify
The output of the above inference is:
{
    "results": [[["0", 0.998091], ["1", 0.00190929], ["2", 1.46236e-08]]]
}
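The same classify request can also be sent from Python with the requests library instead of curl; a minimal sketch, assuming the serving container above is listening on localhost:8501:

import json
import requests

# One example with the four iris features, matching the curl payload above.
payload = {"examples": [{"SepalLength": [5.1], "SepalWidth": [3.3],
                         "PetalLength": [1.7], "PetalWidth": [0.5]}]}
resp = requests.post("http://localhost:8501/v1/models/export:classify",
                     data=json.dumps(payload))
print(resp.json())  # {"results": [[["0", ...], ["1", ...], ["2", ...]]]}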
The model was saved with the code below:
# Build a parse spec from the feature columns so the exported model
# accepts serialized tf.Example protos, then export the Estimator.
feature_spec = tf.feature_column.make_parse_example_spec(my_feature_columns)
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_dir = classifier.export_savedmodel('export', serving_input_receiver_fn)
print('Exported to {}'.format(export_dir))
The output of the above code is:
Exported to b'/usr/local/google/home/abc/Jupyter_Notebooks/export/1554980806'
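For context, my_feature_columns is not shown above; given the four numeric features in the curl payload, it was presumably defined along these lines (a hypothetical reconstruction):

import tensorflow as tf

# Hypothetical: one numeric column per iris feature used in the request.
my_feature_columns = [
    tf.feature_column.numeric_column(key)
    for key in ["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"]
]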