【Title】: How to save, restore, and make predictions with a Siamese network (with triplet loss)
【Posted】: 2018-05-28 01:49:11
【Question】:

I am trying to build a Siamese network for simple face verification (and, in a second stage, recognition). I have a network that I managed to train, but I am a bit confused about how to save and restore the model, and how to use the trained model to make predictions. Hopefully someone with experience in this area can help me move forward.

Here is how I create the Siamese network. First...

from keras import backend as K
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Dense, Lambda, concatenate
from keras.models import Model
from keras.optimizers import Adam

model = ResNet50(weights='imagenet')   # get the original ResNet50 model
model.layers.pop()                     # remove the last (softmax) layer
for layer in model.layers:
    layer.trainable = False            # do not train any of the original layers

x = model.get_layer('flatten_1').output
model_out = Dense(128, activation='relu', name='model_out')(x)
model_out = Lambda(lambda x: K.l2_normalize(x, axis=-1))(model_out)
new_model = Model(inputs=model.input, outputs=model_out)

# At this point, a new 128-unit layer has been added and L2 normalization applied.

# Now create the siamese network on top of this

anchor_in = Input(shape=(224, 224, 3))
positive_in = Input(shape=(224, 224, 3))
negative_in = Input(shape=(224, 224, 3))

anchor_out = new_model(anchor_in)
positive_out = new_model(positive_in)
negative_out = new_model(negative_in)

merged_vector = concatenate([anchor_out, positive_out, negative_out], axis=-1)

# Define the trainable model
siamese_model = Model(inputs=[anchor_in, positive_in, negative_in],
                      outputs=merged_vector)
siamese_model.compile(optimizer=Adam(lr=.0001), 
                      loss=triplet_loss, 
                      metrics=[dist_between_anchor_positive,
                               dist_between_anchor_negative])
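The question does not show the `triplet_loss` function (or the two distance metrics) passed to `compile`. For context, here is the arithmetic a standard triplet loss performs on the concatenated `[anchor | positive | negative]` output, sketched in NumPy for clarity; in the actual Keras loss the same operations would be written with `keras.backend` ops (`K.sum`, `K.square`, `K.maximum`) on `y_pred`. The function name and the margin `alpha` are assumptions, not taken from the question:

```python
import numpy as np

def triplet_loss_np(merged, alpha=0.2, emb_size=128):
    """NumPy sketch of a standard triplet loss.

    `merged` is the concatenated [anchor | positive | negative] embedding
    batch, shape (batch, 3 * emb_size). The loss pushes the
    anchor-positive distance below the anchor-negative distance by at
    least the margin `alpha`.
    """
    anchor   = merged[:, :emb_size]
    positive = merged[:, emb_size:2 * emb_size]
    negative = merged[:, 2 * emb_size:]
    pos_dist = np.sum(np.square(anchor - positive), axis=-1)
    neg_dist = np.sum(np.square(anchor - negative), axis=-1)
    return np.maximum(pos_dist - neg_dist + alpha, 0.0)
```

When the negative is already further from the anchor than the positive by more than the margin, the loss is zero and that triplet contributes no gradient.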

I train siamese_model. If I am interpreting the results correctly, training it does not really train the underlying model; it only trains the new Siamese head (essentially, just the last layer).

But this model has 3 input streams. After training, I need to somehow save this model so that it takes only 1 or 2 inputs, so that I can make predictions by computing the distance between 2 given images. How do I save this model and then reuse it?

Thanks in advance!

Addendum:

In case you are wondering, here is the summary of the Siamese model.

siamese_model.summary()

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_2 (InputLayer)            (None, 224, 224, 3)  0                                            
__________________________________________________________________________________________________
input_3 (InputLayer)            (None, 224, 224, 3)  0                                            
__________________________________________________________________________________________________
input_4 (InputLayer)            (None, 224, 224, 3)  0                                            
__________________________________________________________________________________________________
model_1 (Model)                 (None, 128)          23849984    input_2[0][0]                    
                                                                 input_3[0][0]                    
                                                                 input_4[0][0]                    
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 384)          0           model_1[1][0]                    
                                                                 model_1[2][0]                    
                                                                 model_1[3][0]                    
==================================================================================================
Total params: 23,849,984
Trainable params: 262,272
Non-trainable params: 23,587,712
__________________________________________________________________________________________________

【Question discussion】:

    Tags: tensorflow machine-learning neural-network keras convolutional-neural-network


    【Solution 1】:

    You can save your model's weights with siamese_model.save_weights(MODEL_WEIGHTS_FILE)

    Then, to load the model, use siamese_model.load_weights(MODEL_WEIGHTS_FILE)
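    Note that `save_weights`/`load_weights` store only the weights, so the architecture must be rebuilt with the same code before loading. For the single-input prediction the question asks about, the shared embedding sub-model (`new_model` in the question) is what you need: it is reused for all three inputs, so its weights are trained along with the Siamese model. A minimal sketch of how inference could then look; `verification_distance` is a hypothetical helper, the file name is an assumption, and images are expected already preprocessed to shape `(1, 224, 224, 3)`:

    ```python
    import numpy as np

    # After training (only the weights are stored):
    #   siamese_model.save_weights('siamese_weights.h5')
    # In a new session, rebuild new_model / siamese_model exactly as in
    # the question, then:
    #   siamese_model.load_weights('siamese_weights.h5')
    # new_model now holds the trained embedding weights and takes a single input.

    def verification_distance(embedding_model, img_a, img_b):
        """Embed two preprocessed face images with the single-input model
        and return the Euclidean distance between their embeddings."""
        emb_a = embedding_model.predict(img_a)[0]
        emb_b = embedding_model.predict(img_b)[0]
        return float(np.linalg.norm(emb_a - emb_b))

    # Hypothetical usage: d = verification_distance(new_model, img1, img2)
    ```

    A distance near 0 suggests the same person; pick the decision threshold on a held-out validation set.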

    Thanks

    【Discussion】:

    • How do we use these loaded weights to make predictions on unseen data? @Gazal
    • @LakwinChandula - did you get an answer to your question?