【Title】: Change array shape / pytorch preprocessing / python
【Posted】: 2018-12-02 20:19:26
【Question】:

I am experimenting with PyTorch and trying to run it on the GPU, and I get the following error:

ValueError: Target size (torch.Size([4, 256, 1, 320])) must be the same as input size (torch.Size([4, 1, 256, 320]))

This is how I reshape the arrays:

def __getitem__(self, idx):
    img_filename = os.path.join(
        self.images_dir, self.images_name[idx] + '.jpg')
    img = np.array(Image.open(img_filename))
    img = cv2.resize(img, (320, 256))

    if self.target_dir:
        mask_filename = os.path.join(
            self.target_dir, self.images_name[idx] + '.png')
        mask = np.array(Image.open(mask_filename))
        mask = np.resize(mask, (320, 256))
        mask = np.reshape(mask, (1,) + mask.shape)
    else:
        mask = []

    if self.transforms:
        img = self.transforms(img)
        if mask != []:
            mask = transforms.ToTensor()(mask)

    return {'img': img, 'mask': mask}
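As an aside, two different size conventions are in play here: `cv2.resize` takes `(width, height)`, so `cv2.resize(img, (320, 256))` yields a `(256, 320, 3)` array, while `np.resize` takes `(rows, cols)`, so the mask ends up `(320, 256)`. A quick numpy-only trace of the mask path (the raw mask size below is a made-up placeholder, since the real file is not shown):

```python
import numpy as np

# Hypothetical raw mask; the actual on-disk size is unknown.
mask = np.zeros((375, 500), dtype=np.uint8)

# np.resize uses (rows, cols): the result has 320 rows and 256 columns,
# transposed relative to the (256, 320, 3) image from cv2.resize.
mask = np.resize(mask, (320, 256))
print(mask.shape)  # (320, 256)

# Prepending the channel axis gives (1, 320, 256).
mask = np.reshape(mask, (1,) + mask.shape)
print(mask.shape)  # (1, 320, 256)
```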

Here is the full error log. Any suggestions?

------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-14-d3dc126e6038> in <module>()
      6         optimizer.zero_grad()
      7         output = unet(batch['img'].cuda())
----> 8         loss = criterion(output, batch['mask'])
      9         loss.backward()
     10         optimizer.step()

~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
    475             result = self._slow_forward(*input, **kwargs)
    476         else:
--> 477             result = self.forward(*input, **kwargs)
    478         for hook in self._forward_hooks.values():
    479             hook_result = hook(self, input, result)

~\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
    571                                                   self.weight,
    572                                                   pos_weight=self.pos_weight,
--> 573                                                   reduction=self.reduction)
    574 
    575 

~\Anaconda3\lib\site-packages\torch\nn\functional.py in binary_cross_entropy_with_logits(input, target, weight, size_average, reduce, reduction, pos_weight)
   1644         reduction = _Reduction.legacy_get_string(size_average, reduce)
   1645     if not (target.size() == input.size()):
-> 1646         raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size()))
   1647 
   1648     max_val = (-input).clamp(min=0)

ValueError: Target size (torch.Size([4, 256, 1, 320])) must be the same as input size (torch.Size([4, 1, 256, 320]))

Update: here is how the reshaping behaves; everything looks fine, so I don't understand why this error occurs.

mask = np.array(Image.open('data/train_mask/1.png'))
mask = np.resize(mask, (320, 240))
mask = np.reshape(mask, mask.shape + (1,))
img = np.array(Image.open('data/train/1.jpg'))
print(np.shape(mask), np.shape(img))

(320, 240, 1) (320, 240, 3)
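A likely explanation (an assumption inferred from the shapes in the error, not stated in the post): torchvision's `ToTensor` treats a 3-D numpy array as H×W×C and transposes it to C×H×W. The mask built in `__getitem__` has the channel axis *prepended*, `(1, 320, 256)`, which `ToTensor` reads as H=1, W=320, C=256 and turns into `(256, 1, 320)`, exactly the bad target shape. A minimal numpy sketch of that axis arithmetic:

```python
import numpy as np

def to_tensor_shape(a):
    """Mimic torchvision ToTensor's axis reordering on a 3-D array:
    HxWxC -> CxHxW, i.e. transpose(2, 0, 1)."""
    return np.transpose(a, (2, 0, 1))

# Mask as built in __getitem__: channel axis prepended -> (1, 320, 256).
mask_chw = np.zeros((1, 320, 256))
print(to_tensor_shape(mask_chw).shape)  # (256, 1, 320) -- the bad target shape

# Appending the channel axis instead, as in the update snippet, gives
# (320, 256, 1), which ToTensor maps to the expected (1, 320, 256).
mask_hwc = np.zeros((320, 256, 1))
print(to_tensor_shape(mask_hwc).shape)  # (1, 320, 256)
```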

【Comments】:

  • What shape is the mask before reshaping?

标签: python arrays computer-vision pytorch shapes


【Solution 1】:

The problem here is that the tensor dimensions of the target and the input do not match. PyTorch expects dimensions in the form

N C H W

N -- batch size

C -- number of channels

H -- height of the tensor/matrix

W -- width of the tensor/matrix

To fix this, replace

loss = criterion(output, batch['mask'])

with

targets = torch.reshape(batch['mask'], (4, 1, 256, 320))
loss = criterion(output, targets)
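Because the axis being moved has size 1, this reshape is equivalent to a transpose and does not scramble pixel values; a numpy check of that claim (the hardcoded 4 assumes the batch size from the error message):

```python
import numpy as np

# Target-shaped tensor from the error: (N=4, 256, 1, 320).
t = np.arange(4 * 256 * 1 * 320).reshape(4, 256, 1, 320)

# Moving a size-1 axis: reshape and an explicit axis permutation agree
# element-wise, so no pixel values are shuffled.
assert np.array_equal(t.reshape(4, 1, 256, 320), t.transpose(0, 2, 1, 3))
```

A more robust variant that does not hardcode the batch size would be `targets = batch['mask'].permute(0, 2, 1, 3)`, or better, building the channel axis correctly in the `Dataset` (append it as `(H, W, 1)` before `ToTensor`, or prepend it only after conversion to a tensor).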

【Discussion】:
