【Title】: PyTorch: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1]))
【Posted】: 2021-05-10 13:40:48
【Question】:

I am new to PyTorch and working on implementing a recommender system.

I took my models from here: https://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html

Following the instructions on that site, I feed the DenseNet model in exactly the same way as the MatrixFactorization model.

models.py:

import torch
import torch.nn as nn
import torch.nn.functional as F


class MatrixFactorization(nn.Module):

    def __init__(self, n_users, n_items, n_factors=20):
        super(MatrixFactorization, self).__init__()
        # create user embeddings
        self.user_factors = nn.Embedding(n_users, n_factors, sparse=True)
        # create item embeddings
        self.item_factors = nn.Embedding(n_items, n_factors, sparse=True)

    def forward(self, user, item):
        # dot product of user and item embeddings
        prediction = (self.user_factors(user) * self.item_factors(item)).sum(1)
        return F.hardsigmoid(prediction)

    def predict(self, user, item):
        return self.forward(user, item)


class DenseNet(nn.Module):

    def __init__(self, n_users, n_items, n_factors, h1=128, d_out=1):
        """
        Simple feedforward network with embeddings
        """
        super(DenseNet, self).__init__()
        # user and item embedding layers
        self.user_factors = torch.nn.Embedding(n_users, n_factors, sparse=True)
        self.item_factors = torch.nn.Embedding(n_items, n_factors, sparse=True)
        # linear layers
        self.linear1 = torch.nn.Linear(n_factors * 2, h1)
        self.linear2 = torch.nn.Linear(h1, d_out)

    def forward(self, users, items):
        users_embedding = self.user_factors(users)
        items_embedding = self.item_factors(items)
        # concatenate user and item embeddings to form the input
        x = torch.cat([users_embedding, items_embedding], 1)
        h1_relu = F.relu(self.linear1(x))
        output_scores = self.linear2(h1_relu)
        return output_scores

    def predict(self, users, items):
        # return the score
        output_scores = self.forward(users, items)
        return output_scores

Training of the DenseNet:

    index = 0
    model.train()

    for user, item in zip(users, items):
        # get the user, item and rating data
        # rating = Variable(torch.FloatTensor([ratings[user, item]]))
        rating = normalize(rating_values[index])
        rating = Variable(torch.FloatTensor([rating]))
        user = Variable(torch.LongTensor([int(user)]))
        item = Variable(torch.LongTensor([int(item)]))
        index += 1

        # predict
        prediction = model.predict(user, item)
        loss = loss_fn(prediction, rating)

        optimizer.zero_grad()
        # backpropagate
        loss.backward()
        # update weights
        optimizer.step()

The code runs and produces output, but I get this UserWarning:

    AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\loss.py:446: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
  return F.mse_loss(input, target, reduction=self.reduction)

I assumed this model takes the same input that works for my MF model (a tensor of users and an item ID for each user). What is wrong with the input, and which line produces this warning?

【Comments】:

    Tags: machine-learning pytorch


    【Solution 1】:

    This is not an error message; it is a warning about the shapes of the tensors being passed to nn.MSELoss.

    Suppose you feed each model a 1-D tensor of shape (n,). The only difference is that MatrixFactorization returns a 1-D tensor (shape (n,)), while DenseNet returns a 2-D tensor of shape (n, 1). To remove the extra dimension, you can reshape the prediction to (-1,) or simply squeeze it:

    loss = loss_fn(prediction.squeeze(), rating)
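To see why the warning matters, here is a small sketch (using only torch, with made-up numbers) of what nn.MSELoss computes for a batch of n = 3 when the shapes disagree versus after squeezing:

```python
import warnings

import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

# A (3, 1) prediction, as DenseNet produces, against a (3,) target.
prediction = torch.tensor([[1.0], [2.0], [3.0]])
rating = torch.tensor([1.0, 2.0, 3.0])

# With matching shapes the loss compares elementwise, as intended.
correct = loss_fn(prediction.squeeze(), rating)
print(correct.item())  # 0.0

# With mismatched shapes, broadcasting expands (3, 1) vs (3,) to (3, 3),
# so the loss averages over nine pairs instead of three: that is the
# "incorrect results due to broadcasting" the warning is about.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    wrong = loss_fn(prediction, rating)
print(wrong.item())  # ~1.33, even though prediction and rating agree
```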
    

    A couple of other things worth pointing out:

    • Call the module directly, i.e. self(user, item), instead of self.forward(user, item)
    • Don't use Variable; it has been deprecated since PyTorch 0.4
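Putting both points together, the training loop from the question might look like this. This is only a sketch: the scaled-down model, the toy users/items/rating_values, and the hyperparameters are made up to make it self-contained, and the embeddings are dense (no sparse=True) so plain SGD works without further changes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseNet(nn.Module):
    """Scaled-down version of the question's DenseNet."""

    def __init__(self, n_users, n_items, n_factors=8, h1=16):
        super().__init__()
        self.user_factors = nn.Embedding(n_users, n_factors)
        self.item_factors = nn.Embedding(n_items, n_factors)
        self.linear1 = nn.Linear(n_factors * 2, h1)
        self.linear2 = nn.Linear(h1, 1)

    def forward(self, users, items):
        x = torch.cat([self.user_factors(users), self.item_factors(items)], 1)
        return self.linear2(F.relu(self.linear1(x)))


# Toy stand-ins for the question's users, items and (normalized) ratings.
users, items = [0, 1, 2], [2, 1, 0]
rating_values = {(0, 2): 0.5, (1, 1): 1.0, (2, 0): 0.0}

model = DenseNet(n_users=3, n_items=3)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

model.train()
for user, item in zip(users, items):
    # Plain tensors instead of the deprecated Variable wrapper.
    rating = torch.tensor([rating_values[(user, item)]])   # shape (1,)
    user_t = torch.tensor([user], dtype=torch.long)
    item_t = torch.tensor([item], dtype=torch.long)

    # Call the module, not .forward(), so hooks still run.
    prediction = model(user_t, item_t)                     # shape (1, 1)
    # squeeze(-1) rather than squeeze(), so a batch of one
    # keeps shape (1,) instead of collapsing to a 0-d scalar.
    loss = loss_fn(prediction.squeeze(-1), rating)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```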

    【Discussion】:
