【Question Title】: TypeError: forward() takes 2 positional arguments but 4 were given, Pytorch
【Posted】: 2021-09-01 18:32:25
【Question】:

I am trying to write a GAN generator based on DenseNet and the deconvolution (transposed convolution) approach. I am new to PyTorch and cannot figure out

TypeError: forward() takes 2 positional arguments but 4 were given. 

I tried the approach suggested in

Pytorch TypeError: forward() takes 2 positional arguments but 4 were given

but I could not work out a solution.

My code:

class DenseLayer(nn.Module):
  def __init__(self, in_size, out_size, drop_rate=0.0):
    super(DenseLayer, self).__init__()
    self.bottleneck = nn.Sequential() # define bottleneck layers
    self.bottleneck.add_module('btch1', nn.BatchNorm2d(in_size))
    self.bottleneck.add_module('relu1', nn.ReLU(inplace=True))
    self.bottleneck.add_module('conv1', nn.ConvTranspose2d(in_size, int(out_size/4), kernel_size=1, stride=1, padding=0, bias=False))

    self.basic = nn.Sequential() # define basic block
    self.basic.add_module('btch2', nn.BatchNorm2d(int(out_size/4)))
    self.basic.add_module('relu2', nn.ReLU(inplace=True))
    self.basic.add_module('conv2', nn.ConvTranspose2d(int(out_size/4), out_size, kernel_size=3, stride=1, padding=1, bias=False))

    self.droprate = drop_rate

  def forward(self, input):
    out = self.bottleneck(input)
    if self.droprate > 0:
      out = F.dropout(out, p=self.droprate, inplace=False, training=self.training)
    
    out = self.basic(out)
    if self.droprate > 0:
      out = F.dropout(out, p=self.droprate, inplace=False, training=self.training)
    return torch.cat((input, out), 1)

class DenseBlock(nn.Module):
  def __init__(self, num_layers, in_size, growth_rate, block, droprate=0.0):
    super(DenseBlock, self).__init__()
    self.layer = self._make_layer(block, in_size, growth_rate, num_layers, droprate)

  def _make_layer(self, block, in_size, growth_rate, num_layers, droprate):
    layers = []
    for i in range(num_layers):
      layers.append(block(in_size, in_size-i*growth_rate, droprate))
    return nn.Sequential(*layers)

  def forward(self, input):
    return self.layer(input)

class MGenDenseNet(nn.Module):
  def __init__(self, ngpu, growth_rate=32, block_config=(16,24,12,6), in_size=1024, drop_rate=0.0):
    super(MGenDenseNet, self).__init__()
    self.ngpu = ngpu
    self.features = nn.Sequential()
    self.features.add_module('btch0', nn.BatchNorm2d(in_size))

    block = DenseLayer
    num_features = in_size
    for i, num_layers in enumerate(block_config):
      block = DenseBlock(num_layers=num_layers, in_size=num_features, growth_rate=growth_rate, block=block, droprate=drop_rate) ### Error thrown on this line
      self.features.add_module('denseblock{}'.format(i+1), block)
      num_features -= num_layers*growth_rate

      if i!=len(block_config)-1:
        trans = TransitionLayer(in_size=num_features, out_size=num_features*2, drop_rate=drop_rate)
        self.features.add_module('transitionblock{}'.format(i+1), trans)
        num_features *= 2

    self.features.add_module('convfinal', nn.ConvTranspose2d(num_features, 3, kernel_size=7, stride=2, padding=3, bias=False))
    self.features.add_module('Tanh', nn.Tanh())

  def forward(self, input):
    return self.features(input)

mGen = MGenDenseNet(ngpu).to(device)
mGen.apply(weights_init)

print(mGen)

【Comments】:

  • Always put the full error message (starting at the word "Traceback") in the question (not a comment), as text (not a screenshot, not a link to an external portal). There is other useful information in it.
  • You defined def forward(self, input):, so it only accepts two values - but it seems PyTorch calls it with 4 values, so it would need a function that accepts 4 values - e.g. def forward(self, input, arg3, arg4):
  • First you could write def forward(self, input, arg3=None, arg4=None): print('arg3:', arg3) print('arg4:', arg4) to see what those arguments actually receive. Maybe PyTorch is sending some useful information.
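The comments above point at the real mechanism: calling an nn.Module instance routes the call arguments into its forward(). The same failure can be reproduced without PyTorch at all, using a plain class whose __call__ stands in for forward - a minimal illustrative sketch, not the asker's code:

```python
class DenseLayer:
    """Stand-in for the nn.Module subclass; __call__ plays the role of forward."""
    def __init__(self, in_size, out_size, drop_rate=0.0):
        self.in_size = in_size

    def __call__(self, input):
        return input

block = DenseLayer             # 'block' is the class: calling it runs __init__
layer = block(1024, 256, 0.0)  # fine: three constructor arguments
block = layer                  # 'block' is now rebound to an *instance*
try:
    block(1024, 256, 0.0)      # now routed to __call__(self, input) with 3 args
except TypeError as e:
    print(e)                   # ... takes 2 positional arguments but 4 were given
```

This is exactly the shape of the error message in the question: the "2 positional arguments" are `self` and `input`, and the "4 given" are `self` plus the three constructor-style arguments.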

Tags: python pytorch typeerror generative-adversarial-network densenet


【Solution 1】:

class MGenDenseNet(nn.Module):
  def __init__(self, ngpu, growth_rate=32, block_config=(16,24,12,6), in_size=1024, drop_rate=0.0):
    super(MGenDenseNet, self).__init__()
    import pdb; pdb.set_trace()
    self.ngpu = ngpu
    self.features = nn.Sequential()
    self.features.add_module('btch0', nn.BatchNorm2d(in_size))
    block_placeholder = DenseLayer <<<<
    num_features = in_size
    for i, num_layers in enumerate(block_config):
      block = DenseBlock(num_layers=num_layers, in_size=num_features, growth_rate=growth_rate, block=block_placeholder, droprate=drop_rate) <<<< look at change
      self.features.add_module('denseblock{}'.format(i+1), block)
      num_features -= num_layers*growth_rate
    self.features.add_module('convfinal', nn.ConvTranspose2d(num_features, 3, kernel_size=7, stride=2, padding=3, bias=False))
    self.features.add_module('Tanh', nn.Tanh())
  def forward(self, input):
    return self.features(input)

This happens because you first define block as the DenseLayer class, then reassign block to an initialized DenseBlock() object, and then pass that along as block=block. So after one iteration of the for loop you are passing a DenseBlock() instance instead of the DenseLayer class, and calling it invokes its forward pass with the wrong arguments.

Simply change block = DenseLayer to block_placeholder = DenseLayer and use that variable instead.

I found this by putting a debugger in your code and noticing that the DenseBlock line only failed on the second call.
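The fix generalizes to a simple rule: keep the class (the layer factory) and the objects built from it in separate names. A torch-free sketch of the corrected loop, with illustrative stand-in names rather than the asker's exact classes:

```python
class Layer:
    """Stand-in for DenseLayer; __call__ plays the role of forward."""
    def __init__(self, in_size, out_size, drop_rate=0.0):
        self.out_size = out_size

    def __call__(self, input):
        return input

class Block:
    """Stand-in for DenseBlock: constructs num_layers fresh Layer objects."""
    def __init__(self, num_layers, in_size, growth_rate, block, droprate=0.0):
        # 'block' must be a class here, so each call builds a new layer
        self.layers = [block(in_size, in_size + i * growth_rate, droprate)
                       for i in range(num_layers)]

block_placeholder = Layer      # the class; never rebound
blocks = []
for num_layers in (2, 3):
    # 'block' holds the built object; 'block_placeholder' stays the class
    block = Block(num_layers, 64, 32, block=block_placeholder)
    blocks.append(block)

print(block_placeholder is Layer)  # True: safe to loop any number of times
```

Because `block_placeholder` is never rebound, every iteration passes the class itself, and the second iteration no longer tries to call an already-built object's forward pass.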

【Comments】:

  • Thanks - I noticed soon afterwards the comments added to the question by @furas. Since I am not using any block other than the bottleneck, I modified the code to instantiate the bottleneck block.