[Posted]: 2018-05-11 17:29:31
[Problem description]:
I have implemented a Siamese network based on the Keras example. My code is as follows:
def contrastive_loss(y_true, y_pred):
    '''Contrastive loss from Hadsell-et-al.'06
    http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
    '''
    margin = 1
    return K.mean(y_true * K.square(y_pred) +
                  (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))
def create_base_network(input_dim):
    '''Base network to be shared (eq. to feature extraction).
    '''
    seq = Sequential()
    seq.add(Dense(128, input_shape=(input_dim,), activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    seq.add(Dropout(0.1))
    seq.add(Dense(128, activation='relu'))
    return seq
def euclidean_distance(vects):  # replace this with the code from tensorflow
    x, y = vects
    return K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))

def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 2)
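For reference, the contrastive loss above can be sanity-checked in plain NumPy: positive pairs (y_true = 1) are penalized by their squared distance, and negative pairs (y_true = 0) by a squared hinge on (margin - distance). The distance values below are hypothetical, just to show the loss behaves as intended:

```python
import numpy as np

def contrastive_loss_np(y_true, y_pred, margin=1.0):
    # NumPy version of the Keras contrastive loss (Hadsell et al. '06):
    # matched pairs are penalized by squared distance, mismatched pairs
    # by the squared hinge max(margin - distance, 0).
    return np.mean(y_true * y_pred ** 2 +
                   (1 - y_true) * np.maximum(margin - y_pred, 0) ** 2)

# A close positive pair and a far negative pair give near-zero loss;
# swapping the distances gives a large loss.
good = contrastive_loss_np(np.array([1.0, 0.0]), np.array([0.05, 1.5]))
bad = contrastive_loss_np(np.array([1.0, 0.0]), np.array([1.5, 0.05]))
print(good, bad)  # good is much smaller than bad
```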
============================= Main part =============================
input_dim = 9216
nb_epoch = 3
# network definition
base_network = create_base_network(input_dim)
input_a = Input(shape=(input_dim,))
input_b = Input(shape=(input_dim,))
# because we re-use the same instance `base_network`,
# the weights of the network
# will be shared across the two branches
processed_a = base_network(input_a)
processed_b = base_network(input_b)
distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([processed_a, processed_b])
model = Model(inputs=[input_a, input_b], outputs=distance)
# train
model.compile(loss=contrastive_loss, optimizer='RMSprop', metrics=['accuracy'])
model.fit([tr_pair1_reshaped, tr_pair2_reshaped], y_train_categorical,
          epochs=nb_epoch, batch_size=64, verbose=1)
=====================================================================
The results I am getting are as follows:
Epoch 1/3
3000/3000 [==============================] - 1s 368us/step - loss: 3.8701 - acc: 0.5000
Epoch 2/3
3000/3000 [==============================] - 1s 169us/step - loss: 0.5310 - acc: 0.5000
Epoch 3/3
3000/3000 [==============================] - 1s 167us/step - loss: 0.4727 - acc: 0.5000
The goal here is image matching, so it is a binary classification task. The 50% accuracy presumably means the model is not learning at all. I used to_categorical on the match/no-match labels. I have tried both the contrastive_loss and categorical_crossentropy loss functions, but the results stay the same; the 'adam' and 'rmsprop' optimizers make no difference either. The total training data is about 40k samples, and different batch sizes did not help. So where should I dig for the root of the problem? Does anyone have any hints for me? I would be very grateful. :)
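One shape check that seems worth doing here: the model's output is a single distance per pair, shape (N, 1), while to_categorical produces a one-hot matrix of shape (N, 2). Contrastive loss expects the plain binary labels instead. A minimal NumPy sketch with hypothetical labels (the one-hot construction mirrors what keras.utils.to_categorical produces):

```python
import numpy as np

# Hypothetical binary match labels for 4 pairs (1 = match, 0 = no match).
y_train = np.array([1, 0, 1, 1])

# to_categorical turns these into a one-hot matrix of shape (N, 2) ...
y_cat = np.eye(2)[y_train]
print(y_cat.shape)  # (4, 2)

# ... but a distance output of shape (N, 1) pairs with a plain
# binary label vector of the same shape for contrastive loss:
y_binary = y_train.reshape(-1, 1)
print(y_binary.shape)  # (4, 1)
```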
[Discussion]:
-
Your loss is decreasing significantly, so the model is in fact training.
-
That makes sense, and I had noticed it too. But then how come it doesn't affect the accuracy? Any ideas?
-
Accuracy is discrete. If you have few samples, or there is any inconsistency in the data, it can get stuck or jump around.
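To illustrate the discreteness: for a distance-output model, pair-matching accuracy can be computed by thresholding the distances, so it only moves in steps of 1/N. The distances, labels, and threshold below are hypothetical:

```python
import numpy as np

# Hypothetical model distances and true match labels (1 = match).
distances = np.array([0.2, 1.3, 0.6, 0.9])
y_true = np.array([1, 0, 1, 0])

# Pairs closer than the threshold are predicted as matches; accuracy
# is the fraction of correct predictions, quantized in steps of 1/N.
threshold = 0.5
y_pred = (distances < threshold).astype(int)
accuracy = np.mean(y_pred == y_true)
print(accuracy)  # 0.75 for these values
```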
Tags: python-3.x tensorflow keras