[Posted]: 2021-05-04 02:53:27
[Problem description]:
I'm using keras-bert for classification. On some datasets it runs fine and computes the loss, while on others the loss is NaN.
The datasets are similar in that they are all augmented versions of the same original dataset. With keras-bert, the original data and some of the augmented versions run fine, while other augmented versions do not.
When I run a regular single-layer BiLSTM on the augmented versions that fail with keras-bert, it works fine, so I believe I can rule out the data being faulty or containing spurious values that might affect how the loss is computed.
The data falls into three classes.
I'm using BERT-base uncased:
!wget -q https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip
Can anyone tell me why the loss is NaN?
inputs = model.inputs[:2]
dense = model.layers[-3].output
outputs = keras.layers.Dense(3, activation='sigmoid',
                             kernel_initializer=keras.initializers.TruncatedNormal(stddev=0.02),
                             name='real_output')(dense)
decay_steps, warmup_steps = calc_train_steps(train_y.shape[0], batch_size=BATCH_SIZE, epochs=EPOCHS)
model = keras.models.Model(inputs, outputs)
model.compile(AdamWarmup(decay_steps=decay_steps, warmup_steps=warmup_steps, lr=LR),
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
sess = tf.compat.v1.keras.backend.get_session()
uninitialized_variables = set(i.decode('ascii') for i in sess.run(tf.compat.v1.report_uninitialized_variables()))
init_op = tf.compat.v1.variables_initializer([v for v in tf.compat.v1.global_variables()
                                              if v.name.split(':')[0] in uninitialized_variables])
sess.run(init_op)
model.fit(train_x, train_y, epochs=EPOCHS, batch_size=BATCH_SIZE)
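As an aside on the head above: sparse_categorical_crossentropy expects integer class ids in [0, num_classes) and an output that forms a probability distribution across classes, so a softmax activation is the usual pairing (a sigmoid head does not normalize across the 3 classes). A minimal sketch with made-up shapes and random data, not the actual keras-bert model:

```python
import numpy as np
import tensorflow as tf

# Toy 3-class head; input width, batch size, and data are illustrative only.
x = tf.keras.Input(shape=(8,))
probs = tf.keras.layers.Dense(3, activation='softmax')(x)  # rows sum to 1
m = tf.keras.Model(x, probs)
m.compile(optimizer='adam',
          loss='sparse_categorical_crossentropy',
          metrics=['sparse_categorical_accuracy'])

X = np.random.rand(16, 8).astype('float32')
y = np.random.randint(0, 3, size=(16,))  # valid integer labels, no NaN
print(np.isfinite(m.evaluate(X, y, verbose=0)).all())  # finite loss and metric
```

With valid labels and a normalized output, the loss stays finite from the first batch.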
Train on 20342 samples
Epoch 1/10
20342/20342 [==============================] - 239s 12ms/sample - loss: nan - sparse_categorical_accuracy: 0.5572
Epoch 2/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 3/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2081
Epoch 4/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 5/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 6/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 7/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 8/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2081
Epoch 9/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
Epoch 10/10
20342/20342 [==============================] - 225s 11ms/sample - loss: nan - sparse_categorical_accuracy: 0.2082
<tensorflow.python.keras.callbacks.History at 0x7f1caf9b0f90>
Also, I'm running this on Google Colab with tensorflow 2.3.0 and keras 2.4.3.
UPDATE
I went back over the data that triggered this and realized that one of the target labels was missing; I had probably corrupted it while editing. Once I fixed it, the NaN loss problem went away. However, I'll still award the 50-point bounty to the answer I received, since it made me think more carefully about my code. Thanks.
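In hindsight, a quick label sanity check before training would have caught this. A minimal sketch, with train_y replaced by a toy array containing one missing label:

```python
import numpy as np

# Toy stand-in for train_y with one missing (NaN) label.
train_y = np.array([0., 1., 2., np.nan, 1.])

bad = np.isnan(train_y)
print(bad.any())            # True: at least one label is missing
print(np.flatnonzero(bad))  # [3]: index of the bad row
```

A single NaN label is enough to make the batch loss NaN, which then propagates through the gradients to every weight.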
[Discussion]:
Tags: tensorflow keras deep-learning bert-language-model