[Posted]: 2020-06-27 05:46:55
[Question]:
I am building a custom autoencoder to train on a dataset. My model is as follows:
import torch
import torch.nn as nn


class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        # Encoder: three stride-1 3x3 convs followed by three stride-2 5x5 convs
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=128, out_channels=256, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=256, out_channels=512, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=5, stride=2),
            nn.ReLU(inplace=True)
        )
        # Decoder: mirrors the encoder with ConvTranspose2d layers
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(in_channels=1024, out_channels=512, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=512, out_channels=256, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=256, out_channels=128, kernel_size=5, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=64, out_channels=32, kernel_size=3, stride=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(in_channels=32, out_channels=3, kernel_size=3, stride=1),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.encoder(x)
        print(x.shape)  # shape of the latent feature map
        x = self.decoder(x)
        return x


def unit_test():
    num_minibatch = 16
    img = torch.randn(num_minibatch, 3, 512, 640).cuda(0)
    model = AutoEncoder().cuda()
    model = nn.DataParallel(model)
    output = model(img)
    print(output.shape)


if __name__ == '__main__':
    unit_test()
As you can see, my input dimensions are (3, 512, 640), but the output after passing through the decoder is (3, 507, 635). Am I missing something when adding the ConvTranspose2d layers?
Any help would be appreciated. Thanks.
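For reference, a quick hand check (my own arithmetic, not part of the model): with padding=0 and dilation=1, as in every layer above, Conv2d gives H_out = (H_in - K) // S + 1 and ConvTranspose2d gives H_out = (H_in - 1) * S + K. Tracing those two formulas through the encoder and then the decoder reproduces exactly the sizes I am seeing:

def conv_out(h, k, s):
    # Conv2d output size with padding=0, dilation=1
    return (h - k) // s + 1

def deconv_out(h, k, s):
    # ConvTranspose2d output size with padding=0, output_padding=0
    return (h - 1) * s + k

h, w = 512, 640
for k, s in [(3, 1), (3, 1), (3, 1), (5, 2), (5, 2), (5, 2)]:  # encoder layers
    h, w = conv_out(h, k, s), conv_out(w, k, s)
print(h, w)  # 60 76 -- spatial size of the encoder output
for k, s in [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1), (3, 1)]:  # decoder layers
    h, w = deconv_out(h, k, s), deconv_out(w, k, s)
print(h, w)  # 507 635 -- each stride-2 conv floors away a remainder that the
             # matching transposed conv does not restore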
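In case it is useful, here is a small debugging sketch (not part of the model itself, run on CPU) that walks the two Sequential stages layer by layer and prints the running spatial size, which makes it easier to see where the dimensions start to drift:

def trace_shapes():
    # Push a single dummy image through each layer and print the shape after it
    model = AutoEncoder()
    x = torch.randn(1, 3, 512, 640)
    with torch.no_grad():
        for stage_name, stage in [("encoder", model.encoder), ("decoder", model.decoder)]:
            for layer in stage:
                x = layer(x)
                print(stage_name, layer.__class__.__name__, tuple(x.shape))

trace_shapes()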
[Question discussion]:
Tags: python computer-vision pytorch autoencoder torchvision