【Question Title】: Getting two outputs from a Keras model
【Posted】: 2020-04-29 17:11:14
【Question Description】:

I want to extend the network in the code below so that it produces two outputs:

def tag_dataset(self, dataset, model):
    """Tag data with numerical values"""
    correctLabels = []
    predLabels = []

    for i, data in enumerate(dataset):
        tokens, casing, char, labels = data
        tokens = np.asarray([tokens])
        casing = np.asarray([casing])
        char = np.asarray([char])
        pred = model.predict([tokens, casing, char], verbose=False)[0]
        pred = pred.argmax(axis=-1)  
        correctLabels.append(labels)
        predLabels.append(pred) 
    return predLabels, correctLabels
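Note that once the model has two output heads, `model.predict` returns a list with one array per output, so the `[0]` in `tag_dataset` selects the first head rather than the first sample. A minimal sketch of the indexing, with NumPy arrays standing in for what a two-output model would return (the shapes are illustrative, not taken from the original code):

```python
import numpy as np

# Stand-ins for what a two-output model's predict() returns:
# one (batch, tokens, labels) array per output head.
pred_head1 = np.random.rand(1, 5, 4)
pred_head2 = np.random.rand(1, 5, 4)
preds = [pred_head1, pred_head2]            # list: one entry per head

labels_head1 = preds[0][0].argmax(axis=-1)  # head 1, sentence 0 -> shape (5,)
labels_head2 = preds[1][0].argmax(axis=-1)  # head 2, sentence 0 -> shape (5,)
```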



def buildModel(self):
    """Model layers"""
    # character input
    character_input = Input(shape=(None, 52,), name="Character_input")
    embed_char_out = TimeDistributed(
        Embedding(len(self.char2Idx), 30, embeddings_initializer=RandomUniform(minval=-0.5, maxval=0.5)), name="Character_embedding")(
        character_input)

    dropout = Dropout(self.dropout)(embed_char_out)

    # CNN
    conv1d_out = TimeDistributed(Conv1D(kernel_size=self.conv_size, filters=30, padding='same', activation='tanh', strides=1), name="Convolution")(dropout)
    maxpool_out = TimeDistributed(MaxPooling1D(52), name="Maxpool")(conv1d_out)
    char = TimeDistributed(Flatten(), name="Flatten")(maxpool_out)
    char = Dropout(self.dropout)(char)

    # word-level input
    words_input = Input(shape=(None,), dtype='int32', name='words_input')
    words = Embedding(input_dim=self.wordEmbeddings.shape[0], output_dim=self.wordEmbeddings.shape[1], weights=[self.wordEmbeddings],
                      trainable=False)(words_input)

    # case-info input
    casing_input = Input(shape=(None,), dtype='int32', name='casing_input')
    casing = Embedding(output_dim=self.caseEmbeddings.shape[1], input_dim=self.caseEmbeddings.shape[0], weights=[self.caseEmbeddings],
                       trainable=False)(casing_input)

    # concat & BLSTM
    output = concatenate([words, casing, char])
    output = Bidirectional(LSTM(self.lstm_state_size, 
                                return_sequences=True, 
                                dropout=self.dropout,                        # on input to each LSTM block
                                recurrent_dropout=self.dropout_recurrent     # on recurrent input signal
                               ), name="BLSTM")(output)
    output1 = TimeDistributed(Dense(len(self.label2Idx), activation='softmax'),name="Softmax_layer1")(output)
    output2 = TimeDistributed(Dense(len(self.label2Idx), activation='softmax'),name="Softmax_layer2")(output)    #This line is added

    # set up model
    self.model = Model(inputs=[words_input, casing_input, character_input], outputs= [output1, output2])
    self.model.compile(loss='sparse_categorical_crossentropy', loss_weights=[0.5, 0.5], optimizer=self.optimizer)

    self.init_weights = self.model.get_weights()

    plot_model(self.model, to_file='model.png')
    print("Model built. Saved model.png\n")

def train(self):
    """Default training"""

    self.f1_test_history = []
    self.f1_dev_history = []

    for epoch in range(self.epochs):    
        print("Epoch {}/{}".format(epoch, self.epochs))
        for i,batch in enumerate(iterate_minibatches(self.train_batch,self.train_batch_len)):

            labels, tokens, casing, char = batch 

            self.model.train_on_batch([tokens, casing, char], [labels, labels] )

        # compute F1 scores
        predLabels, correctLabels = self.tag_dataset(self.test_batch, self.model)
        pre_test, rec_test, f1_test = compute_f1(predLabels, correctLabels, self.idx2Label)

The original code is at https://github.com/mxhofer/Named-Entity-Recognition-BidirectionalLSTM-CNN-CoNLL.git

Is it enough to just add a Dense layer to get two outputs? I added the Dense layer, but it raises an error in the function defined as `compute_f1`:

def compute_f1(predictions, correct, idx2Label):
    label_pred = []
    for sentence in predictions:
        label_pred.append([idx2Label[element] for element in sentence])

    label_correct = []
    for sentence in correct:
        label_correct.append([idx2Label[element] for element in sentence])

    # print("predictions ", len(label_pred))
    # print("correct labels ", len(label_correct))

    prec = compute_precision(label_pred, label_correct)
    rec = compute_precision(label_correct, label_pred)

    f1 = 0
    if (rec + prec) > 0:
        f1 = 2.0 * prec * rec / (prec + rec)

    return prec, rec, f1

The error is:

  in train
    pre_test, rec_test, f1_test = compute_f1(predLabels, correctLabels, self.idx2Label)
  File "...", line 10, in compute_f1
    label_pred.append([idx2Label[element] for element in sentence])
  File "...", line 10, in <listcomp>
    label_pred.append([idx2Label[element] for element in sentence])
TypeError: unhashable type: 'numpy.ndarray'
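For context, this error reproduces whenever the elements iterated over are themselves NumPy arrays rather than scalars: arrays are unhashable and cannot index a dict. A minimal reproduction (the `idx2Label` entries are made up):

```python
import numpy as np

idx2Label = {0: "O", 1: "B-PER"}

# A sentence as a 1-D array of label indices works: iterating it yields
# NumPy scalars, which are hashable and index the dict fine.
sentence = np.array([0, 1, 0])
decoded = [idx2Label[element] for element in sentence]   # ['O', 'B-PER', 'O']

# But wrapping the labels one level deeper makes each "element" an array,
# which is unhashable and raises the TypeError above.
nested = [np.array([0, 1, 0])]
try:
    [idx2Label[element] for element in nested]
except TypeError as e:
    print(e)   # unhashable type: 'numpy.ndarray'
```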

【Question Discussion】:

  • It would be better if you included here how you prepare the data and call the fit function, rather than linking to another GitHub repo. From the error, it looks like you are passing only one target/label to your model while it expects two. If your targets are identical, you should duplicate the label and pass it as a list.
  • @BashirKazimi Thanks. I added more details. As you said, I duplicated the labels: `self.model.train_on_batch([tokens, casing, char], [labels, labels])`. That error is resolved, but I think because labels is a list of NumPy arrays, another error now appears in compute_f1. How can I fix it?
  • Glad you solved your problem. Maybe upvote the comment then? :) As for your new error: your compute_f1 function currently treats each example as a vector, when it should treat it as a list.
  • Upvoted :) Could you explain? Since predLabels and correctLabels are no longer lists, given `pred = model.predict([tokens, casing, char], verbose=False)[0]`, and idx2Label is not a list either, how should the compute_f1 function change?
  • I added the compute_f1 function as an answer for readability. It is untested, but you get the idea; adjust it as needed. Good luck.

Tags: keras neural-network lstm named-entity-recognition


【Solution 1】:

Note: answered based on the comments under the original post.

Since you are computing f1 over the same values for both predictions, you can compute f1 for each prediction and take the average. Without any verification of correctness, the code would be:

def compute_f1(predictions, correct, idx2Label):
    # compute precision/recall/f1 separately for each of the two outputs
    precisions, recalls, f1s = [], [], []
    for prediction, correct_i in zip(predictions, correct):
        label_pred = [[idx2Label[element] for element in sentence]
                      for sentence in prediction]
        label_correct = [[idx2Label[element] for element in sentence]
                         for sentence in correct_i]

        # print("predictions ", len(label_pred))
        # print("correct labels ", len(label_correct))

        prec = compute_precision(label_pred, label_correct)
        rec = compute_precision(label_correct, label_pred)

        f1 = 0
        if (rec + prec) > 0:
            f1 = 2.0 * prec * rec / (prec + rec)

        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)

    # take the average over the two outputs
    prec = sum(precisions) / 2.
    rec = sum(recalls) / 2.
    f1 = sum(f1s) / 2.

    return prec, rec, f1
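A self-contained sanity check of the averaging scheme, using a hypothetical `compute_precision` (plain token-level agreement, just to make the sketch runnable) in place of the repository's real one, and made-up label data:

```python
import numpy as np

# Hypothetical stand-in for the repository's compute_precision:
# plain token-level agreement between the two label sequences.
def compute_precision(guessed, correct):
    total = sum(len(s) for s in guessed)
    hits = sum(g == c for gs, cs in zip(guessed, correct)
               for g, c in zip(gs, cs))
    return hits / total if total else 0.0

idx2Label = {0: "O", 1: "B-PER", 2: "I-PER"}

# Two output heads, each holding one 3-token sentence of label indices
# (made-up data).
predictions = [[np.array([0, 1, 2])], [np.array([0, 0, 2])]]
correct = [[np.array([0, 1, 2])], [np.array([0, 1, 2])]]

precs, recs, f1s = [], [], []
for pred_head, corr_head in zip(predictions, correct):
    label_pred = [[idx2Label[int(e)] for e in s] for s in pred_head]
    label_corr = [[idx2Label[int(e)] for e in s] for s in corr_head]
    p = compute_precision(label_pred, label_corr)
    r = compute_precision(label_corr, label_pred)
    f = 2.0 * p * r / (p + r) if (p + r) > 0 else 0.0
    precs.append(p)
    recs.append(r)
    f1s.append(f)

# average over the two heads
prec = sum(precs) / 2.0
rec = sum(recs) / 2.0
f1 = sum(f1s) / 2.0
```

Head 1 matches perfectly (all metrics 1.0) while head 2 agrees on 2 of 3 tokens, so the averaged scores land between the two.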

【Discussion】:

  • Thanks @Bashir Kazimi. I made changes to the tag_dataset function: `labels = [labels]; correctLabels.append(labels)`. Now the code runs without errors, but the output f1_dev is 0 and does not change.
  • It seems the code is not executing correctly. I wonder whether `pred = model.predict([tokens, casing, char], verbose=False)[0]` in the tag_dataset function should change to take both model outputs into account? It produces predictions for both outputs at the same time.