【Title】: RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead
【Posted】: 2021-04-19 11:22:13
【Question】:

I know my images have only one channel, so the first conv layer is (1, 16, 3, 1), but I don't understand why I get an error like this.

Here is my code (only the relevant parts):

    org_x = train_csv.drop(['id', 'digit', 'letter'], axis=1).values
    org_x = org_x.reshape(-1, 28, 28, 1)  
    org_x = org_x/255
    org_x = np.array(org_x)
    org_x = org_x.reshape(-1, 1, 28, 28)
    org_x = torch.Tensor(org_x).float()

    x_test = test_csv.drop(['id','letter'], axis=1).values
    x_test = x_test.reshape(-1, 28, 28, 1)     
    x_test = x_test/255
    x_test = np.array(x_test)
    x_test = x_test.reshape(-1, 1, 28, 28)
    x_test = torch.Tensor(x_test).float()

    y = train_csv['digit']
    y = list(y)
    print(len(y))
    org_y = np.zeros([len(y), 1])
    for i in range(len(y)):
        org_y[i] = y[i]
    org_y = np.array(org_y)  
    org_y = torch.Tensor(org_y).float()

    from sklearn.model_selection import train_test_split
    x_train, x_valid, y_train, y_valid = train_test_split(
        org_x, org_y, test_size=0.2, random_state=42)  

I checked that x_train has shape [1638, 1, 28, 28] and x_valid has shape [410, 1, 28, 28].

    transform = transforms.Compose([transforms.ToPILImage(),
                            transforms.ToTensor(),
                            transforms.Normalize((0.5, ), (0.5, )) ]) 

    
    class kmnistDataset(data.Dataset):
        def __init__(self, images, labels, transforms=None):
            self.x = images
            self.y = labels
            self.transforms = transforms
     
        def __len__(self):
            return (len(self.x))

        def __getitem__(self, idx):
            data = np.asarray(self.x[idx][0:]).astype(np.uint8)
    
            if self.transforms:
                data = self.transforms(data)
        
            if self.y is not None:
                return (data, self.y[idx])
            else:
                return data
    
    train_data = kmnistDataset(x_train, y_train, transforms=transform)
    valid_data = kmnistDataset(x_valid, y_valid, transforms=transform)

    # dataloaders
    train_loader = DataLoader(train_data, batch_size=16, shuffle=True)
    valid_loader = DataLoader(valid_data, batch_size=16, shuffle = False) 

Here is my model:

    class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()

            self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
            self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
   
            self.bn1 = nn.BatchNorm2d(16)
            self.pool = nn.MaxPool2d(2, 2)

            unit = 64 * 14 * 14 
            self.fc1 = nn.Linear(unit, 500)
            self.fc2 = nn.Linear(500, 10)
    
        def forward(self, x):
            x = self.pool(F.relu(self.bn1(self.conv1(x))))
            x = F.relu(self.conv2(x))
            x = F.relu(self.conv3(x))
            x = x.view(-1, 128 * 28 * 28)
            x = F.relu(self.fc1(x))
            x = self.fc2(x)
            return x
    

    model = Net()
    print(model)

And finally:

    n_epochs = 30

    valid_loss_min = np.Inf

    for epoch in range(1, n_epochs+1):
        train_loss = 0
        valid_loss = 0

        ###################
        # train the model #
        ###################
        model.train()
        for data in train_loader:
            inputs, labels = data[0], data[1]
            optimizer.zero_grad()
            output = model(inputs)
            loss = criterion(output, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()*data.size(0)
        
        #####################
        # validate the model#
        #####################
        model.eval()
        for data in valid_loader:
            inputs, labels = data[0], data[1]
            output = model(inputs)
            loss = criterion(output, labels)
            valid_loss += loss.item()*data.size(0)
    
    
        train_loss = train_loss/ len(train_loader.dataset)
        valid_loss = valid_loss / len(valid_loader.dataset)

        print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
            epoch, train_loss, valid_loss))

When I run it, I get this error message:

RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead

Specifically:

    ---------------------------------------------------------------------------
    RuntimeError                              Traceback (most recent call last)
    <ipython-input-14-b8783819421f> in <module>
         14         inputs, labels = data[0], data[1]
         15         optimizer.zero_grad()
    ---> 16         output = model(inputs)
         17         loss = criterion(output, labels)
         18         loss.backward()

    /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in   _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),

    <ipython-input-12-500e34c49306> in forward(self, x)
         26 
         27     def forward(self, x):
    ---> 28         x = self.pool(F.relu(self.bn1(self.conv1(x))))
         29         x = F.relu(self.conv2(x))
         30         x = F.relu(self.conv3(x))

    /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in         _call_impl(self, *input, **kwargs)
        725             result = self._slow_forward(*input, **kwargs)
        726         else:
    --> 727             result = self.forward(*input, **kwargs)
        728         for hook in itertools.chain(
        729                 _global_forward_hooks.values(),

    /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
        421 
        422     def forward(self, input: Tensor) -> Tensor:
    --> 423         return self._conv_forward(input, self.weight)
        424 
        425 class Conv3d(_ConvNd):

    /opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight)
        418                             _pair(0), self.dilation, self.groups)
        419         return F.conv2d(input, weight, self.bias, self.stride,
    --> 420                         self.padding, self.dilation, self.groups)
        421 
        422     def forward(self, input: Tensor) -> Tensor:

    RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input[16, 3, 1, 28]         to have 1 channels, but got 3 channels instead

【Comments】:

  • Try removing the transform. I suspect ToPILImage is adding extra channels. Also, the 3 channels are not your only problem: the input image height is 1 instead of 28... Check the shapes of inputs and labels before running them through the model.
  • Hi, could you provide a minimal reproducible example? The failing part is clearly the first line of your forward method; it is just a matter of tensor and layer dimensions. Remove everything else (dataset, model definition, training loop) and keep only the few relevant layers and a dummy input tensor of the right size (created with torch.zeros or torch.randn). You should end up with about 5 lines of code that can be copy-pasted and run as-is. Debugging will then be much easier.
  • @Shai I removed the transform as you suggested and got another error message: RuntimeError: expected scalar type Byte but found Float
  • @trialNerror Hi, could you explain in more detail? I don't follow... Do you mean I should not use my data at first, but instead use a tensor of the same shape filled with zeros or random values to check whether my model works?
  • Exactly! The code you pasted is dozens of lines long, while just a few lines are enough to reproduce the problem. It clearly comes from the sizes of the input tensor and the layers, so it doesn't matter whether the values are zeros, random, or anything else. Just build the conv and batchnorm layers plus a tensor with the input dimensions, feed the tensor through the layers, and see what happens. That should be about 5 lines of code, much easier to reason about and much more readable for people on Stack Overflow :)
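Following the advice in the last comment, a minimal reproduction needs only the first conv layer and dummy tensors (a sketch; the layer sizes and shapes are taken from the question and its error message):

```python
import torch
import torch.nn as nn

# The first layer from the question: expects 1 input channel.
conv1 = nn.Conv2d(1, 16, 3, padding=1)

good = torch.randn(16, 1, 28, 28)  # the shape the model was designed for
bad = torch.randn(16, 3, 1, 28)    # the shape reported in the error

out = conv1(good)
print(out.shape)  # torch.Size([16, 16, 28, 28])

try:
    conv1(bad)
except RuntimeError as e:
    # "Given groups=1, weight of size [16, 1, 3, 3], expected
    #  input[16, 3, 1, 28] to have 1 channels, but got 3 channels instead"
    print(e)
```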

Tags: pytorch


【Solution 1】:

I tried a small demo with your code. It works fine once your code uses x = x.view(-1, 64*14*14) and the input has shape torch.Size([1, 1, 28, 28]):

import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
        def __init__(self):
            super(Net, self).__init__()

            self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
            self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
            self.conv3 = nn.Conv2d(32, 64, 3, padding=1)
   
            self.bn1 = nn.BatchNorm2d(16)
            self.pool = nn.MaxPool2d(2, 2)

            unit = 64 * 14 * 14 
            self.fc1 = nn.Linear(unit, 500)
            self.fc2 = nn.Linear(500, 10)
    
        def forward(self, x):
            x = self.pool(F.relu(self.bn1(self.conv1(x))))
            x = F.relu(self.conv2(x))
            x = F.relu(self.conv3(x))
            #print(x.shape)
            x = x.view(-1, 64*14*14)
            x = F.relu(self.fc1(x))
            x = self.fc2(x)
            return x
    

model = Net()
print(model)

data = torch.rand((1,1,28,28))
pred = model(data)

If I instead make my data tensor data = torch.rand((1, 3, 28, 28)), I get your error, i.e. RuntimeError: Given groups=1, weight of size [16, 1, 3, 3], expected input to have 1 channels, but got 3 channels instead

So, check the channel dimension of your data before passing it to your model, i.e. here (highlighted with ** **):

for data in train_loader:
        inputs, labels = data[0], data[1]
        optimizer.zero_grad()
        **print(inputs.shape)**
        output = model(inputs)
        loss = criterion(output, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()*data.size(0)
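As for where the [16, 3, 1, 28] shape likely comes from: the question comments suspected that ToPILImage reinterprets the (1, 28, 28) array, so one fix is to drop the PIL round-trip entirely. A sketch (an assumption, not code from the question) that normalizes the already-float (N, 1, 28, 28) tensors directly in the dataset:

```python
import torch
from torch.utils import data

# Sketch: the tensors are already (N, 1, 28, 28) floats in [0, 1],
# so normalize in __getitem__ instead of going through ToPILImage.
class kmnistDataset(data.Dataset):
    def __init__(self, images, labels=None):
        self.x = images   # float tensor of shape (N, 1, 28, 28)
        self.y = labels

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        img = (self.x[idx] - 0.5) / 0.5   # same effect as Normalize((0.5,), (0.5,))
        if self.y is not None:
            return img, self.y[idx]
        return img

ds = kmnistDataset(torch.rand(4, 1, 28, 28), torch.zeros(4, dtype=torch.long))
img, label = ds[0]
print(img.shape)  # torch.Size([1, 28, 28]) -- one channel, as conv1 expects
```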

【Discussion】:

【Solution 2】:

I think the problem is in the BatchNorm() layer ==> self.bn1 = nn.BatchNorm2d(16)

The argument to this layer should be the number of channels of its input. If you look at your last conv layer conv3, it produces a feature map with 64 channels, so when you feed that feature map to your BatchNorm() it would need to be 64 as well. You could then simply do:

    self.bn1 = nn.BatchNorm2d(64)
    

【Discussion】:

  • Hi, I only used BatchNorm after the first conv layer (conv1), which is why I defined the size of bn1 as 16. Should I change it, or define a new self.bn2 as 64, even though I don't use batchnorm after the last conv layer (conv3)?
  • Oh right, sorry, I think I missed that. That part is fine, you did it correctly. I think I found it now... Could you change the line x = x.view(-1, 128 * 28 * 28) to x = x.view(-1, 64 * 14 * 14)? I think that will solve the problem; if not, I will need to look more closely at the shapes of your data.
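The arithmetic behind this fix can be checked layer by layer: every conv uses a 3x3 kernel with padding=1, so the spatial size is preserved, and the single 2x2 max-pool halves 28x28 to 14x14. A quick sketch with the question's layer sizes:

```python
import torch
import torch.nn as nn

x = torch.randn(16, 1, 28, 28)
x = nn.MaxPool2d(2, 2)(nn.Conv2d(1, 16, 3, padding=1)(x))  # (16, 16, 14, 14)
x = nn.Conv2d(16, 32, 3, padding=1)(x)                      # (16, 32, 14, 14)
x = nn.Conv2d(32, 64, 3, padding=1)(x)                      # (16, 64, 14, 14)

print(x.shape)       # torch.Size([16, 64, 14, 14])
print(64 * 14 * 14)  # 12544 -- the correct flatten size, not 128 * 28 * 28
```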