【Posted on】: 2021-06-29 10:54:57
【Problem description】:
I wrote a BERT model in Colab, trained it on a GPU, and downloaded the weights for further inference. For prediction I don't need a GPU, so I am testing on a local machine without one. However, loading the model on my local PC produces the following error, which does not occur on Colab. I'm not sure how to proceed.
File "/home/akash/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 909,
in load_internal str(err) + "\n If trying to load on a different device from the "
FileNotFoundError: Op type not registered 'CaseFoldUTF8' in binary running on akash. Make sure
the Op and Kernel are registered in the binary running in this process.
Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.)
`tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered
when the module is first accessed.
I load the weights like this:
self.classifier_model = self.build_classifier_model()
self.classifier_model.load_weights(BERT_HEADING)
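For context, 'CaseFoldUTF8' is one of the custom ops that the tensorflow_text package registers when it is imported, and the BERT preprocessing SavedModel pulled from TF Hub depends on those ops. A minimal sketch of the local loading path, assuming the same build_classifier_model and BERT_HEADING weight path as above, would import tensorflow_text before the Hub layers are constructed:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # not used directly; importing it registers ops such as 'CaseFoldUTF8'

# Rebuild the architecture first (the preprocessing/encoder Hub layers load their SavedModels
# here, which is when the text ops must already be registered), then restore the trained weights.
classifier_model = build_classifier_model()
classifier_model.load_weights(BERT_HEADING)  # BERT_HEADING: path to the downloaded weight file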
Output of pip list | grep 'tensorflow':
tensorflow 2.5.0
tensorflow-addons 0.13.0
tensorflow-datasets 4.3.0
tensorflow-estimator 2.5.0
tensorflow-hub 0.12.0
tensorflow-metadata 1.1.0
tensorflow-model-optimization 0.6.0
tensorflow-text 2.5.0
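Since tensorflow-text 2.5.0 shows up in pip list but the op still fails to register, one thing worth verifying on the local machine is whether tensorflow_text actually imports there and matches the installed TensorFlow minor version; a minimal check might be:

import tensorflow as tf
import tensorflow_text as text

# Both should report 2.5.x; a version mismatch between tensorflow and tensorflow-text
# can also surface as missing-op errors when Hub models are loaded.
print(tf.__version__, text.__version__)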
My model:
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text  # registers the custom text ops used by the preprocessing model
# Assumption: create_optimizer comes from the TF Model Garden (pip install tf-models-official),
# as in the official "Classify text with BERT" tutorial.
from official.nlp import optimization

bert_model_name = 'small_bert/bert_en_uncased_L-8_H-512_A-8'
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-8_H-512_A-8/1'
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
bert_model = hub.KerasLayer(tfhub_handle_encoder)
def build_classifier_model():
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
    preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')
    encoder_inputs = preprocessing_layer(text_input)
    encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')
    outputs = encoder(encoder_inputs)
    net = outputs['pooled_output']
    net = tf.keras.layers.Dropout(0.1)(net)
    net = tf.keras.layers.Dense(updated_data_frame['heading'].nunique(), activation='softmax', name='classifier')(net)
    return tf.keras.Model(text_input, net)
classifier_model = build_classifier_model()
epochs = 5
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
                                          num_train_steps=num_train_steps,
                                          num_warmup_steps=num_warmup_steps,
                                          optimizer_type='adamw')
classifier_model.compile(optimizer=optimizer,
                         loss=loss,  # loss is defined elsewhere in the notebook (not shown here)
                         metrics=['CategoricalAccuracy'])
print(f'Training model with {tfhub_handle_encoder}')
history = classifier_model.fit(x=train_ds,
                               validation_data=val_ds,
                               epochs=5)
saved_model_path = 'resume_headings.h5'
classifier_model.save_weights(saved_model_path)
reloaded_model = build_classifier_model()  # <-- This worked fine on Colab but gives the error above locally
reloaded_model.load_weights(saved_model_path)
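Once the weights load, inference should only need raw strings, because the preprocessing layer is part of the graph; a rough usage sketch (the sample text and variable names are purely illustrative) could look like:

# Illustrative only: the label set and its order come from the training data
# (updated_data_frame['heading']), which is not shown in the question.
sample = tf.constant(['example heading text'])
probs = reloaded_model.predict(sample)               # shape: (1, number_of_heading_classes)
predicted_class = tf.argmax(probs, axis=-1).numpy()[0]
print(predicted_class, probs[0][predicted_class])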
【Discussion】:
Tags: python tensorflow keras nlp tf.keras