【Question Title】: My DC-GAN on grayscale face images is not training well
【Posted】: 2021-05-14 19:28:08
【Question Description】:

So I trained my DC-GAN (deep convolutional GAN), written in python/pytorch, on grayscale faces for 30 epochs, and it pretty much failed. I added batch normalization and leaky ReLU to both the generator and the discriminator (I've heard these help GANs converge), and used the Adam optimizer. My GAN still only outputs random grayscale pixels (nothing even resembling a face). I don't have any problem with the discriminator; it works fine. I then added a weight decay of 0.01 to the discriminator to balance training (since the discriminator was doing much better than the generator), but to no avail. Finally, I tried training the GAN for more epochs, 60, and it still only generates random pixels, sometimes outputting all black. The training method I'm using worked on the MNIST dataset (but I used a much simpler GAN architecture for that).
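For reference, a minimal sketch of the optimizer setup the question describes (Adam, with weight decay 0.01 on the discriminator only). The learning rate and betas shown are the commonly recommended DCGAN values, not values given in the question, and the one-layer `disc` stands in for the real discriminator:

```python
import torch.nn as nn
import torch.optim as optim

# Placeholder module standing in for the question's Discriminator.
disc = nn.Linear(10, 1)

# weight_decay adds L2 regularization to every parameter update;
# 0.01 is fairly strong and can itself slow discriminator learning.
d_optim = optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999),
                     weight_decay=0.01)
```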

import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 4, 3)
        self.conv2 = nn.Conv2d(4, 8, 3)
        self.bnorm1 = nn.BatchNorm2d(8)
        
        self.conv3 = nn.Conv2d(8, 16, 3)
        self.conv4 = nn.Conv2d(16, 32, 3)
        self.bnorm2 = nn.BatchNorm2d(32)
        
        self.conv5 = nn.Conv2d(32, 4, 3)
        
        self.fc1 = nn.Linear(5776, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 1)
    def forward(self, x):
        # five unpadded 3x3 convs shrink 48x48 down to 38x38, hence 4*38*38 = 5776
        pred = F.leaky_relu(self.conv1(x.reshape(-1,1,48,48)))
        pred = F.leaky_relu(self.bnorm1(self.conv2(pred)))
        pred = F.leaky_relu(self.conv3(pred))
        pred = F.leaky_relu(self.bnorm2(self.conv4(pred)))     
        pred = F.leaky_relu(self.conv5(pred))
        
        pred = pred.reshape(-1, 5776)

        pred = F.leaky_relu(self.fc1(pred))
        pred = F.leaky_relu(self.fc2(pred))
        pred = torch.sigmoid(self.fc3(pred))
        
        return pred
    
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(512, 1024)
        self.fc2 = nn.Linear(1024, 2048)
        self.fc3 = nn.Linear(2048, 5776)

        self.convT1 = nn.ConvTranspose2d(4, 32, 3)       
        self.convT2 = nn.ConvTranspose2d(32, 16, 3)
        self.bnorm1 = nn.BatchNorm2d(16)
        self.convT3 = nn.ConvTranspose2d(16, 8, 3)
        self.convT4 = nn.ConvTranspose2d(8, 4, 3)
        self.bnorm2 = nn.BatchNorm2d(4)
        self.convT5 = nn.ConvTranspose2d(4, 1, 3)
        
    def forward(self, x):
        pred = F.leaky_relu(self.fc1(x))
        pred = F.leaky_relu(self.fc2(pred))
        pred = F.leaky_relu(self.fc3(pred))
        
        pred = pred.reshape(-1, 4, 38, 38)  # five unpadded 3x3 transposed convs grow 38x38 back to 48x48
        
        pred = F.leaky_relu(self.convT1(pred))
        pred = F.leaky_relu(self.bnorm1(self.convT2(pred)))
        pred = F.leaky_relu(self.convT3(pred))
        pred = F.leaky_relu(self.bnorm2(self.convT4(pred)))
        pred = torch.sigmoid(self.convT5(pred))
        
        return pred

import torch.optim as optim

# The original snippet never instantiated the models or defined the
# optimizers; these lines restore that (Adam as described in the question,
# with weight decay 0.01 on the discriminator).
discriminator = Discriminator().to("cuda")
generator = Generator().to("cuda")

d_optim = optim.Adam(discriminator.parameters(), weight_decay=0.01)
g_optim = optim.Adam(generator.parameters())

discriminator_losses = []
generator_losses = []

for epoch in range(30):
    for data,label in tensor_dataset:
        data = data.to("cuda")
        label = label.to("cuda")
        
        batch_size = data.size(0)
        real_labels = torch.ones(batch_size, 1).to("cuda")
        fake_labels = torch.zeros(batch_size, 1).to("cuda")
        
        noise = torch.randn(batch_size, 512).to("cuda")
        
        D_real = discriminator(data)
        # detach so the discriminator update does not backprop into the generator
        D_fake = discriminator(generator(noise).detach())
        
        D_real_loss = F.binary_cross_entropy(D_real, real_labels)
        D_fake_loss = F.binary_cross_entropy(D_fake, fake_labels)
        
        D_loss = D_real_loss+D_fake_loss
        
        d_optim.zero_grad()
        D_loss.backward()
        d_optim.step()
        
        noise = torch.randn(batch_size, 512).to("cuda")
        D_fake = discriminator(generator(noise))
        G_loss = F.binary_cross_entropy(D_fake, real_labels)
        
        g_optim.zero_grad()
        
        G_loss.backward()
        g_optim.step()
        
        # .item() stores plain floats instead of tensors that keep the graph alive
        discriminator_losses.append(D_loss.item())
        generator_losses.append(G_loss.item())

    print(epoch)

【Comments】:

  • My memory of training WGANs is a bit hazy, but I believe you shouldn't train D and G at the same time. You should train D for one epoch, then G for a separate epoch. Another trick is to first train G for several epochs, then D for one epoch. Follow the steps on this page: machinelearningmastery.com/…
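
The unbalanced schedule suggested in the comment (update G every batch, D less often) can be sketched with a toy 1-D "GAN"; the linear `G` and `D`, the `d_every = 2` ratio, and the random data are all illustrative placeholders for the question's conv nets and dataset:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the question's Generator/Discriminator.
G = nn.Linear(4, 4)
D = nn.Sequential(nn.Linear(4, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)

d_steps, g_steps = 0, 0
d_every = 2  # assumption: one D update per two G updates

for step in range(10):
    real = torch.randn(8, 4)   # stands in for a real data batch
    if step % d_every == 0:    # D update, deliberately less frequent
        d_real = D(real)
        d_fake = D(G(torch.randn(8, 4)).detach())  # detach: no grad into G
        d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
                  F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()
        d_steps += 1
    # G update every batch
    g_fake = D(G(torch.randn(8, 4)))
    g_loss = F.binary_cross_entropy(g_fake, torch.ones_like(g_fake))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    g_steps += 1

print(d_steps, g_steps)  # → 5 10
```

Separating the updates this way keeps a discriminator that is "doing much better" (as in the question) from overpowering the generator.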

Tags: machine-learning deep-learning neural-network pytorch generative-adversarial-network


【Solution 1】:

I'm also new to deep learning and GAN models, but this approach fixed a similar problem in my DCGAN project. Use a kernel size of at least 4*4: this is just my guess, but small kernels don't seem able to capture details in the image no matter how deep the network is. The other tips I found are mostly from here (same link as above): https://machinelearningmastery.com/how-to-train-stable-generative-adversarial-networks/
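
Applied to the question's 48x48 output, the 4x4-kernel advice usually comes paired with stride 2 (the standard DCGAN upsampling recipe). A sketch of such a generator head; the channel sizes and the 6x6 starting map are illustrative assumptions, not values from the answer:

```python
import torch
import torch.nn as nn

# Each ConvTranspose2d(kernel=4, stride=2, padding=1) exactly doubles the
# spatial size: 6 -> 12 -> 24 -> 48.
gen = nn.Sequential(
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 6x6  -> 12x12
    nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 12x12 -> 24x24
    nn.BatchNorm2d(16), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 24x24 -> 48x48
    nn.Tanh(),  # outputs in [-1, 1]; scale real images to match
)

z = torch.randn(2, 64, 6, 6)  # latent projected/reshaped to a 6x6 map
out = gen(z)
print(out.shape)  # → torch.Size([2, 1, 48, 48])
```

Compared with the question's stride-1 3x3 transposed convs, stride-2 4x4 layers let the network build the image coarse-to-fine instead of asking the fully connected layers to produce an almost full-resolution 38x38 map.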

【Comments】:
