【Posted】: 2021-10-25 08:38:34
【Problem Description】:
I am trying to run inference with an NER model, following the Hugging Face tutorial (which covers only the training and evaluation parts). I am following this exact notebook: https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb
Training works flawlessly, but I run into a problem when I try to predict on a simple sample.
from transformers import AutoTokenizer, AutoModel

model_checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# load the fine-tuned model saved during training
loaded_model = AutoModel.from_pretrained('./my_model_own_custom_training.pth', from_tf=False)

input_sentence = "John Nash is a great mathematician, he lives in France"
tokenized_input_sentence = tokenizer([input_sentence],
                                     truncation=True,
                                     is_split_into_words=False,
                                     return_tensors='pt')
predictions = loaded_model(tokenized_input_sentence["input_ids"])[0]
The predictions have shape (1, 13, 768).
How can I get a final result of the form [John <-> 'B-PER', … France <-> 'B-LOC'], where B-PER and B-LOC are the actual labels for person and location?
The shape of the predictions is:
torch.Size([1, 13, 768])
If I write:
print(predictions.argmax(axis=2))
I get the following tensor:
tensor([613, 705, 244, 620, 206, 206, 206, 620, 620, 620, 477, 693, 308])
However, I would have expected a tensor of label indices in the ground-truth range [0…8] from the annotations, one per token.
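For comparison, here is a minimal sketch of the decoding I am after, assuming the checkpoint is loaded with a token-classification head (AutoModelForTokenClassification) so the last dimension of the output is the number of labels (9) rather than the hidden size (768); the path and labels follow the code and config in this post:

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# assumption: the saved checkpoint carries the token-classification head,
# as the "architectures" field in the config summary below suggests
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained('./my_model_own_custom_training.pth')

inputs = tokenizer("John Nash is a great mathematician, he lives in France",
                   return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, num_tokens, num_labels)

predicted_ids = logits.argmax(dim=-1)[0]     # one label id per token, each in [0..8]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids.tolist()):
    print(token, "<->", model.config.id2label[label_id])

Note that the printed tokens include the special tokens ([CLS], [SEP]) and WordPiece sub-tokens, and with the generic config below this prints LABEL_0 … LABEL_8; getting B-PER, B-LOC, etc. would require id2label/label2id to hold the real tag names when the model is saved.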
Summary printed when loading the model:
loading configuration file ./my_model_own_custom_training.pth/config.json
Model config DistilBertConfig {
  "name_or_path": "distilbert-base-uncased",
  "activation": "gelu",
  "architectures": [
    "DistilBertForTokenClassification"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2",
    "3": "LABEL_3",
    "4": "LABEL_4",
    "5": "LABEL_5",
    "6": "LABEL_6",
    "7": "LABEL_7",
    "8": "LABEL_8"
  },
  "initializer_range": 0.02,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2,
    "LABEL_3": 3,
    "LABEL_4": 4,
    "LABEL_5": 5,
    "LABEL_6": 6,
    "LABEL_7": 7,
    "LABEL_8": 8
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights": true,
  "transformers_version": "4.8.1",
  "vocab_size": 30522
}
【Discussion】:
Tags: python-3.x pytorch huggingface-transformers huggingface-tokenizers