【Title】: How to fix "ValueError: Expected input batch_size (1) to match target batch_size (4)."?
【Posted】: 2019-10-24 04:56:44
【Description】:

I am training a PyTorch neural network on Google Colab to classify sign-language letters across a total of 29 classes.

We have been trying to fix the code by changing various parameters, but nothing has worked.

    transform = transforms.Compose([

        #gray scale
        transforms.Grayscale(),

        #resize
        transforms.Resize((128,128)),

        #converting to tensor
        transforms.ToTensor(),

        #normalize
        transforms.Normalize( (0.1307,), (0.3081,)),
    ])

    data_dir = 'data/train/asl_alphabet_train'

    #dataset
    full_dataset = datasets.ImageFolder(root=data_dir, transform=transform)

    #train & test 
    train_size = int(0.8 * len(full_dataset))
    test_size = len(full_dataset) - train_size

    #splitting
    train_dataset, test_dataset = torch.utils.data.random_split(full_dataset, [train_size, test_size])

    trainloader = torch.utils.data.DataLoader(train_dataset , batch_size = 4, shuffle = True )
    testloader = torch.utils.data.DataLoader(test_dataset , batch_size = 4, shuffle = False )

    #neural net architecture
    Net(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv3): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (fc1): Linear(in_features=32768, out_features=128, bias=True)
  (fc2): Linear(in_features=128, out_features=29, bias=True)
  (dropout): Dropout(p=0.5)
   )

    loss_fn = nn.CrossEntropyLoss()
    #optimizer
    opt = optim.SGD(model.parameters(), lr=0.01)
    def train(model, train_loader, optimizer, loss_fn, epoch, device):
        #telling pytorch that training mode is on
        model.train()
        loss_epoch_arr = []

        #epochs
        for e in range(epoch):

                # batch index, (data, target)
            for batch_idx, (data, target) in enumerate(train_loader):

                #moving to GPU
                #data, target = data.to(device), target.to(device)

                #zeroing the gradients
                optimizer.zero_grad()

                #generating output
                output = model(data)

                #calculating loss
                loss = loss_fn(output, target)

                #backward propagation
                loss.backward()

                #stepping optimizer
                optimizer.step()

                #printing every 10th batch (note: print the current epoch `e`, not the total `epoch`)
                if batch_idx % 10 == 0:
                    print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                        e, batch_idx * len(data), len(train_loader.dataset),
                        100. * batch_idx / len(train_loader), loss.item()))


                #de-allocating memory
                del data,target,output
                #torch.cuda.empty_cache()

            #appending values
            loss_epoch_arr.append(loss.item())

        #plotting loss
        plt.plot(loss_epoch_arr)
        plt.show()

    train(model, trainloader , opt, loss_fn, 10, device)

ValueError: Expected input batch_size (1) to match target batch_size (4).

We are beginners with PyTorch and are trying to figure out what is going wrong.

【Question discussion】:

  • The error tells you exactly what is wrong: make your input batch size match the target batch size.
  • Before you call output = model(data), check the dimensions of your input with print(data.shape). PyTorch conv models typically expect a 4D input tensor with dimensions (batch_size, channels, height, width). In your case it should be (4, 1, height, width).
  • Facing the same issue, ValueError: Expected input batch_size (3) to match target batch_size (1), except that my 3 is the number of channels, not the batch_size.
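The shape mismatch described above usually comes from the flattening step rather than the DataLoader. A minimal sketch (tensor sizes chosen for illustration, not taken from the question) shows how a hard-coded feature count in view() silently absorbs the batch dimension:

```python
import torch

# 4 samples, each with 128*8*8 = 8192 features after the conv stack
x = torch.randn(4, 128, 8, 8)

# Wrong: -1 is inferred as total_elements / 32768 = 4*8192/32768 = 1,
# so the "batch" collapses from 4 rows to 1 row.
flat_wrong = x.view(-1, 32768)
print(flat_wrong.shape)  # torch.Size([1, 32768])

# Right: keep the batch dimension and infer the feature count instead.
flat_right = x.view(x.size(0), -1)
print(flat_right.shape)  # torch.Size([4, 8192])
```

This is exactly the situation where the model's output has batch size 1 while the targets still have batch size 4.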

Tags: python-3.x deep-learning pytorch


【Solution 1】:

The most likely cause of this error is the in_features value in your nn.Linear layer. You have not posted the complete code (the forward function is missing), so this cannot be confirmed directly.

One way to check is to add the following line to the forward function, just before the x.view call:

    print('x_shape:',x.shape)

The result has the form [a, b, c, d]; the in_features value should equal b*c*d.
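Since the forward function was not posted, here is a hypothetical reconstruction that would be consistent with the printed architecture, assuming a 2x2 max pool after each conv layer and the 128x128 grayscale input from the question (128 -> 64 -> 32 -> 16 after three pools, so 128*16*16 = 32768 features). The key point is flattening with x.view(x.size(0), -1) so the batch dimension is preserved:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2, 2)            # assumed: one pool per conv
        self.fc1 = nn.Linear(128 * 16 * 16, 128)  # 32768, matching the question
        self.fc2 = nn.Linear(128, 29)
        self.dropout = nn.Dropout(p=0.5)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))   # (N, 32, 64, 64)
        x = self.pool(F.relu(self.conv2(x)))   # (N, 64, 32, 32)
        x = self.pool(F.relu(self.conv3(x)))   # (N, 128, 16, 16)
        x = x.view(x.size(0), -1)              # keep the batch dimension intact
        x = self.dropout(F.relu(self.fc1(x)))
        return self.fc2(x)

model = Net()
out = model(torch.randn(4, 1, 128, 128))
print(out.shape)  # torch.Size([4, 29]) -- batch size matches the targets
```

If the actual feature count differs from 32768, the print('x_shape:', x.shape) check above will show the real b, c, d values to plug into fc1.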

【Discussion】:
