【Question Title】: Pytorch: TypeError: copy_(): argument 'other' (position 1) must be Tensor, not Vectors
【Posted】: 2020-08-16 04:52:32
【Question Description】:

I am building my model in Google Colab.

I have created a custom embedding matrix:

import torchtext.vocab as vocab

custom_embeddings = vocab.Vectors(name = 'custom_embeddings.txt')
TEXT.build_vocab(train_data, vectors = custom_embeddings)
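
For reference, torchtext's (legacy) Vectors class is a wrapper around the word vectors loaded from the file, not a Tensor itself. A small sketch of what the object exposes (attribute names are from torchtext's legacy vocab API; the file is assumed to be in the usual GloVe-style text format, one token per line followed by its vector components):

# Continuing from the snippet above: inspect what Vectors actually holds.
print(type(custom_embeddings))          # a torchtext Vectors object, not a torch.Tensor
print(custom_embeddings.dim)            # dimensionality of each loaded vector
print(custom_embeddings.vectors.shape)  # the underlying Tensor of stacked vectors
print(custom_embeddings.itos[:5])       # first few tokens, in row order of that Tensor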

The code for the Encoder class is as follows:

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, 
                 input_dim, 
                 hid_dim, 
                 n_layers, 
                 n_heads, 
                 pf_dim,
                 dropout, 
                 device,
                 max_length = 100):
        super().__init__()

        self.device = device

        self.tok_embedding = nn.Embedding(input_dim, hid_dim)

        # step added for custom embedding
        self.tok_embedding.weight.data.copy_(custom_embeddings)

        self.pos_embedding = nn.Embedding(max_length, hid_dim)

        self.layers = nn.ModuleList([EncoderLayer(hid_dim, 
                                                  n_heads, 
                                                  pf_dim,
                                                  dropout, 
                                                  device) 
                                     for _ in range(n_layers)])

        self.dropout = nn.Dropout(dropout)

        self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)

    def forward(self, src, src_mask):

        #src = [batch size, src len]
        #src_mask = [batch size, src len]

        batch_size = src.shape[0]
        src_len = src.shape[1]

        pos = torch.arange(0, src_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)

        #pos = [batch size, src len]

        src = self.dropout((self.tok_embedding(src) * self.scale) + self.pos_embedding(pos))

        #src = [batch size, src len, hid dim]

        for layer in self.layers:
            src = layer(src, src_mask)

        #src = [batch size, src len, hid dim]

        return src

Now, when I try to create the Encoder object, the custom embedding step raises an error.

enc = Encoder(INPUT_DIM, 
              HID_DIM, 
              ENC_LAYERS, 
              ENC_HEADS, 
              ENC_PF_DIM, 
              ENC_DROPOUT, 
              device)
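
The hyperparameter values are not shown in the post. For the copy_ call to be shape-compatible, INPUT_DIM has to equal the vocabulary size and HID_DIM has to equal the width of the custom vectors; a hedged sketch with illustrative values (all assumptions, not taken from the original):

import torch

# Illustrative values only; INPUT_DIM and HID_DIM are constrained by the
# embedding matrix, the rest are free choices.
INPUT_DIM = len(TEXT.vocab)               # rows of the token embedding matrix
HID_DIM = custom_embeddings.dim           # must match the custom vectors' width
ENC_LAYERS = 3
ENC_HEADS = 8
ENC_PF_DIM = 512
ENC_DROPOUT = 0.1
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')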

Error description:

TypeError                                 Traceback (most recent call last)
<ipython-input-72-06d3631c029b> in <module>()
     18               ENC_PF_DIM,
     19               ENC_DROPOUT,
---> 20               device)
     21 
     22 dec = Decoder(OUTPUT_DIM, 

<ipython-input-59-6c2f23451d01> in __init__(self, input_dim, hid_dim, n_layers, n_heads, pf_dim, dropout, device, max_length)
     16 
     17         # step added for custom embedding
---> 18         self.tok_embedding.weight.data.copy_(custom_embeddings)
     19 
     20         self.pos_embedding = nn.Embedding(max_length, hid_dim)

TypeError: copy_(): argument 'other' (position 1) must be Tensor, not Vectors

Could you help me resolve this error?

Thanks in advance!

【Question Comments】:

    Tags: python nlp pytorch transformer


    【Solution 1】:

    I was able to solve the problem.

    Solution: custom_embeddings is a Vectors object, so copy_ cannot copy from it, because it is not a Tensor. TEXT.build_vocab(train_data, vectors = custom_embeddings) has already copied the matched pretrained vectors into TEXT.vocab.vectors, which is a Tensor whose rows are aligned with the vocabulary's integer indices.

    Using TEXT.vocab.vectors instead of custom_embeddings solved the problem.

    Updated code:

    # step added for custom embedding
    self.tok_embedding.weight.data.copy_(TEXT.vocab.vectors)
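
    As an aside, PyTorch's nn.Embedding.from_pretrained accepts a Tensor directly and can replace the manual copy_; a minimal sketch (freeze = False keeps the embedding weights trainable, matching the copy_ approach above):

    import torch.nn as nn

    # TEXT.vocab.vectors is a [len(TEXT.vocab), dim] Tensor whose rows are
    # aligned with the vocabulary's integer indices, so it can seed the
    # embedding layer in one step.
    tok_embedding = nn.Embedding.from_pretrained(TEXT.vocab.vectors, freeze = False)

    # Shape sanity check: copy_ imposes the same constraint, i.e.
    # input_dim == len(TEXT.vocab) and hid_dim == TEXT.vocab.vectors.shape[1].
    assert tok_embedding.weight.shape == TEXT.vocab.vectors.shape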
    

    【Discussion】:
