【Question Title】: How to apply a pretrained transformer model from HuggingFace?
【Posted】: 2021-08-16 19:31:19
【Question Description】:

I am interested in using a pretrained model from HuggingFace for a named entity recognition (NER) task, without further training or testing the model.

On the model page of HuggingFace, the only information provided about reusing the model is:

from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

I tried the following code, but I get a tensor output instead of class labels for each named entity.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

text = "my text for named entity recognition here."

input_ids = torch.tensor(tokenizer.encode(text, padding=True, truncation=True, max_length=50, add_special_tokens=True)).unsqueeze(0)

with torch.no_grad():
    output = model(input_ids, output_attentions=True)

Any suggestions on how to apply the model to text for NER?

【Question Discussion】:

    Tags: huggingface-transformers named-entity-recognition transformer


    【Solution 1】:

    You are looking for a named entity recognition pipeline (token classification):

    from transformers import AutoTokenizer, pipeline, AutoModelForTokenClassification
    tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
    model = AutoModelForTokenClassification.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
    nerpipeline = pipeline('ner', model=model, tokenizer=tokenizer)
    text = "my text for named entity recognition here."
    nerpipeline(text)
    

    Output:

    [{'word': 'my',
      'score': 0.5209763050079346,
      'entity': 'LABEL_0',
      'index': 1,
      'start': 0,
      'end': 2},
     {'word': 'text',
      'score': 0.5161970257759094,
      'entity': 'LABEL_0',
      'index': 2,
      'start': 3,
      'end': 7},
     {'word': 'for',
      'score': 0.5297629237174988,
      'entity': 'LABEL_1',
      'index': 3,
      'start': 8,
      'end': 11},
     {'word': 'named',
      'score': 0.5258920788764954,
      'entity': 'LABEL_1',
      'index': 4,
      'start': 12,
      'end': 17},
     {'word': 'entity',
      'score': 0.5415489673614502,
      'entity': 'LABEL_1',
      'index': 5,
      'start': 18,
      'end': 24},
     {'word': 'recognition',
      'score': 0.5396601557731628,
      'entity': 'LABEL_1',
      'index': 6,
      'start': 25,
      'end': 36},
     {'word': 'here',
      'score': 0.5165827870368958,
      'entity': 'LABEL_0',
      'index': 7,
      'start': 37,
      'end': 41},
     {'word': '.',
      'score': 0.5266348123550415,
      'entity': 'LABEL_0',
      'index': 8,
      'start': 41,
      'end': 42}]
    

    Note that you need to use AutoModelForTokenClassification instead of AutoModel, and that not all models ship with a trained token-classification head (i.e., you may get randomly initialized weights for the classification head). That is why the output above shows only the generic LABEL_0/LABEL_1 labels with scores near 0.5.
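    Under the hood, the pipeline converts the per-token logits produced by the classification head into label names via the model config's id2label mapping. A minimal, self-contained sketch of that argmax-to-label step, using toy logits rather than real model output (the values below are made up for illustration):

```python
# Toy per-token logits for a 2-label head, shaped (seq_len, num_labels).
# A real model would produce these from its token-classification head.
logits = [
    [2.0, 0.5],  # token 1
    [0.1, 1.5],  # token 2
    [1.2, 0.3],  # token 3
]

# Generic label names, as stored in model.config.id2label for a checkpoint
# whose classification head was never fine-tuned for NER.
id2label = {0: "LABEL_0", 1: "LABEL_1"}

def logits_to_labels(logits, id2label):
    """Pick the highest-scoring label id per token and map it to its name."""
    return [id2label[max(range(len(row)), key=row.__getitem__)] for row in logits]

print(logits_to_labels(logits, id2label))
# -> ['LABEL_0', 'LABEL_1', 'LABEL_0']
```

    Inspecting model.config.id2label is a quick way to check whether a checkpoint carries meaningful NER labels (e.g. B-PER, I-LOC) or only the generic LABEL_n placeholders of an untrained head.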

    【Discussion】:
