[Posted]: 2022-01-22 11:17:12
[Problem description]:
I'm seeing what I consider odd behavior from a model deployed on Vertex AI. I have a CNN model built with tensorflow/keras version 2.7. My input data is a 3-dimensional array of shape (1, 570, 33). When I pass the input data to the model locally, I get the correct response.
model = keras.models.load_model('model')
x = model.predict(input_data) # input_data is a numpy array of shape (1, 570, 33)
print(x)
[[0.1259355 0.9124526 0.65782744 0.2628207 ]]
This is a correct prediction; the model behaves exactly as trained. No problem there.
When I upload the model to Vertex AI using the pre-built TensorFlow 2.7 Docker container with no extra settings (e.g. no accelerator) and deploy it to an endpoint, this is what I get back from Vertex AI when I call predict with the same input_data.
resp = client.predict(
    endpoint=endpoint_path,
    instances=input_data.tolist(),  # numpy array -> nested Python lists for the JSON payload
    parameters=parameters,
)
input must be 4-dimensional[1,570,33]\n\t [[{{function_node __inference__wrapped_model_28143}}{{node sequential/conv2d/BiasAdd}}]]
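For what it's worth, the error message suggests the deployed serving signature expects a 4-D tensor (batch, height, width, channels) while the payload sent to the endpoint is 3-D. A minimal sketch of adding a trailing channel axis before serializing, assuming the model was trained on single-channel input (this is my reading of the error, not a confirmed fix):

```python
import numpy as np

# Stand-in for the real input; same shape as in the question.
input_data = np.zeros((1, 570, 33), dtype=np.float32)

# Hypothetical fix: add a trailing channel axis -> (1, 570, 33, 1).
payload = np.expand_dims(input_data, axis=-1)
print(payload.shape)  # (1, 570, 33, 1)

# Nested lists are what the Vertex AI predict request body carries.
instances = payload.tolist()
```

Whether the channel axis belongs at the end depends on the model's data_format; channels-last is the Keras default.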
Here is the model summary:
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 570, 33, 32) 320
batch_normalization (BatchN (None, 570, 33, 32) 128
ormalization)
activation (Activation) (None, 570, 33, 32) 0
conv2d_1 (Conv2D) (None, 570, 33, 32) 9248
batch_normalization_1 (Batc (None, 570, 33, 32) 128
hNormalization)
activation_1 (Activation) (None, 570, 33, 32) 0
conv2d_2 (Conv2D) (None, 570, 33, 32) 9248
batch_normalization_2 (Batc (None, 570, 33, 32) 128
hNormalization)
activation_2 (Activation) (None, 570, 33, 32) 0
conv2d_3 (Conv2D) (None, 285, 17, 64) 18496
batch_normalization_3 (Batc (None, 285, 17, 64) 256
hNormalization)
activation_3 (Activation) (None, 285, 17, 64) 0
conv2d_4 (Conv2D) (None, 285, 17, 64) 36928
batch_normalization_4 (Batc (None, 285, 17, 64) 256
hNormalization)
activation_4 (Activation) (None, 285, 17, 64) 0
conv2d_5 (Conv2D) (None, 285, 17, 64) 36928
batch_normalization_5 (Batc (None, 285, 17, 64) 256
hNormalization)
activation_5 (Activation) (None, 285, 17, 64) 0
conv2d_6 (Conv2D) (None, 285, 17, 64) 36928
batch_normalization_6 (Batc (None, 285, 17, 64) 256
hNormalization)
activation_6 (Activation) (None, 285, 17, 64) 0
conv2d_7 (Conv2D) (None, 143, 9, 96) 55392
batch_normalization_7 (Batc (None, 143, 9, 96) 384
hNormalization)
activation_7 (Activation) (None, 143, 9, 96) 0
conv2d_8 (Conv2D) (None, 143, 9, 96) 83040
batch_normalization_8 (Batc (None, 143, 9, 96) 384
hNormalization)
activation_8 (Activation) (None, 143, 9, 96) 0
conv2d_9 (Conv2D) (None, 143, 9, 96) 83040
batch_normalization_9 (Batc (None, 143, 9, 96) 384
hNormalization)
activation_9 (Activation) (None, 143, 9, 96) 0
conv2d_10 (Conv2D) (None, 143, 9, 96) 83040
batch_normalization_10 (Bat (None, 143, 9, 96) 384
chNormalization)
activation_10 (Activation) (None, 143, 9, 96) 0
conv2d_11 (Conv2D) (None, 72, 5, 128) 110720
batch_normalization_11 (Bat (None, 72, 5, 128) 512
chNormalization)
activation_11 (Activation) (None, 72, 5, 128) 0
conv2d_12 (Conv2D) (None, 72, 5, 128) 147584
batch_normalization_12 (Bat (None, 72, 5, 128) 512
chNormalization)
activation_12 (Activation) (None, 72, 5, 128) 0
conv2d_13 (Conv2D) (None, 72, 5, 128) 147584
batch_normalization_13 (Bat (None, 72, 5, 128) 512
chNormalization)
activation_13 (Activation) (None, 72, 5, 128) 0
conv2d_14 (Conv2D) (None, 72, 5, 128) 147584
batch_normalization_14 (Bat (None, 72, 5, 128) 512
chNormalization)
activation_14 (Activation) (None, 72, 5, 128) 0
conv2d_15 (Conv2D) (None, 36, 3, 160) 184480
batch_normalization_15 (Bat (None, 36, 3, 160) 640
chNormalization)
activation_15 (Activation) (None, 36, 3, 160) 0
conv2d_16 (Conv2D) (None, 36, 3, 160) 230560
batch_normalization_16 (Bat (None, 36, 3, 160) 640
chNormalization)
activation_16 (Activation) (None, 36, 3, 160) 0
conv2d_17 (Conv2D) (None, 36, 3, 160) 230560
batch_normalization_17 (Bat (None, 36, 3, 160) 640
chNormalization)
activation_17 (Activation) (None, 36, 3, 160) 0
conv2d_18 (Conv2D) (None, 36, 3, 160) 230560
batch_normalization_18 (Bat (None, 36, 3, 160) 640
chNormalization)
activation_18 (Activation) (None, 36, 3, 160) 0
conv2d_19 (Conv2D) (None, 18, 2, 192) 276672
batch_normalization_19 (Bat (None, 18, 2, 192) 768
chNormalization)
activation_19 (Activation) (None, 18, 2, 192) 0
conv2d_20 (Conv2D) (None, 18, 2, 192) 331968
batch_normalization_20 (Bat (None, 18, 2, 192) 768
chNormalization)
activation_20 (Activation) (None, 18, 2, 192) 0
conv2d_21 (Conv2D) (None, 18, 2, 192) 331968
batch_normalization_21 (Bat (None, 18, 2, 192) 768
chNormalization)
activation_21 (Activation) (None, 18, 2, 192) 0
conv2d_22 (Conv2D) (None, 18, 2, 192) 331968
batch_normalization_22 (Bat (None, 18, 2, 192) 768
chNormalization)
activation_22 (Activation) (None, 18, 2, 192) 0
conv2d_23 (Conv2D) (None, 9, 1, 224) 387296
batch_normalization_23 (Bat (None, 9, 1, 224) 896
chNormalization)
activation_23 (Activation) (None, 9, 1, 224) 0
reshape (Reshape) (None, 9, 224) 0
masking (Masking) (None, 9, 224) 0
lambda (Lambda) (None, 224) 0
dense (Dense) (None, 4) 900
=================================================================
Total params: 3,554,532
Trainable params: 3,548,772
Non-trainable params: 5,760
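As a side note, the first conv2d layer's 320 parameters are consistent with a 3x3 kernel over a single input channel: 32 * (3*3*1 + 1) = 320. That would make the model's expected input shape (None, 570, 33, 1), not (None, 570, 33). The 3x3 kernel size is my inference from the parameter counts (conv2d_1's 9248 = 32 * (3*3*32 + 1) fits the same pattern), not something stated in the question:

```python
# Conv2D parameter count: filters * (kernel_h * kernel_w * in_channels + 1 bias)
filters, kh, kw, in_channels = 32, 3, 3, 1
first_conv_params = filters * (kh * kw * in_channels + 1)
print(first_conv_params)  # 320, matching the summary's first conv2d row

# Second conv layer takes the 32 feature maps as input channels.
second_conv_params = 32 * (3 * 3 * 32 + 1)
print(second_conv_params)  # 9248, matching conv2d_1
```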
I have a classic case of "it works on my machine" here; any input or help would be appreciated :)
Tags: python numpy tensorflow keras google-cloud-vertex-ai