【Title】: Linear autoencoder using PyTorch
【Posted】: 2021-09-22 13:04:38
【Question】:

How can we build a simple linear autoencoder and train it using a torch.optim optimizer?

How do I use autograd (.backward()) and optimize the MSE loss, and then learn the weight and bias values in the encoder and decoder (i.e., 3 parameters in the encoder and 4 in the decoder)? The data should be random, and each training run should start from random weights and biases, for example:

wEncoder = torch.randn(D,1, requires_grad=True)
wDecoder = torch.randn(1,D, requires_grad=True)
bEncoder = torch.randn(1, requires_grad=True)
bDecoder = torch.randn(1,D, requires_grad=True)

The target optimizer is SGD with learning rate 0.01, no momentum, and 1000 steps (from the random start); how do we then plot the loss versus epochs (steps)?

I tried this, but the loss stays the same at every epoch.

D = 2
x = torch.rand(100,D)
x[:,0] = x[:,0] + x[:,1]
x[:,1] = 0.5*x[:,0] + x[:,1]

loss_fn = nn.MSELoss()
optimizer = optim.SGD([x[:,0],x[:,1]], lr=0.01)
losses = []
for epoch in range(1000):
    running_loss = 0.0
    inputs = x_reconstructed
    targets = x
    loss = loss_fn(inputs, targets)
    loss.backward(retain_graph=True)
    optimizer.step()
    optimizer.zero_grad()
    running_loss += loss.item() 
    epoch_loss = running_loss / len(data)
    losses.append(running_loss)

【Comments】:

    Tags: optimization deep-learning pytorch autoencoder sgd


    【Solution 1】:

    This example should help you. See the code comments for further explanation:

    import torch
    
    
    # Use torch.nn.Module to create models
    class AutoEncoder(torch.nn.Module):
        def __init__(self, features: int, hidden: int):
            # Necessary in order to log C++ API usage and other internals
            super().__init__()
            self.encoder = torch.nn.Linear(features, hidden)
            self.decoder = torch.nn.Linear(hidden, features)
    
        def forward(self, X):
            return self.decoder(self.encoder(X))
    
        def encode(self, X):
            return self.encoder(X)
    
    # Random data
    data = torch.rand(100, 4)
    model = AutoEncoder(4, 10)
    # Pass model.parameters() for increased readability
    # Weights of encoder and decoder will be passed
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()
    
    # Per-epoch losses are gathered
    # Loss is the mean of batch elements, in our case mean of 100 elements
    losses = []
    for epoch in range(1000):
        reconstructed = model(data)
        loss = loss_fn(reconstructed, data)
        # No need to retain_graph=True as you are not performing multiple passes
        # of backpropagation
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    
        losses.append(loss.item())
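
    To plot the loss versus epoch, as the question asks, here is a minimal matplotlib sketch (assuming the losses list gathered in the loop above):

    import matplotlib.pyplot as plt

    # Each entry in losses is the training loss of one epoch
    plt.plot(losses)
    plt.xlabel("Epoch")
    plt.ylabel("MSE loss")
    plt.show()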
    

    Note that a linear autoencoder is roughly equivalent to PCA decomposition, which is more efficient.
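
    For comparison, a rough PCA sketch with scikit-learn (an illustration, not part of the original answer), using the same 4-feature data tensor as above:

    from sklearn.decomposition import PCA

    # Keep 2 of the 4 components and reconstruct; analogous to a
    # linear autoencoder with a 2-dimensional bottleneck
    pca = PCA(n_components=2)
    data_np = data.numpy()
    reconstructed_pca = pca.inverse_transform(pca.fit_transform(data_np))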

    You should probably use a non-linear autoencoder unless it is simply for training purposes.
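
    If you want to stay closer to the question's manually created tensors instead of nn.Module, here is a minimal sketch (assuming D = 2 and a 1-dimensional code). The parameters, not the data, are passed to SGD, and the reconstruction is recomputed on every step; the loop in the question computed it only once and optimized the data itself, which is why the loss never changed:

    import torch

    D = 2
    x = torch.rand(100, D)
    x[:, 0] = x[:, 0] + x[:, 1]
    x[:, 1] = 0.5 * x[:, 0] + x[:, 1]

    # Random starting weights and biases, as in the question
    wEncoder = torch.randn(D, 1, requires_grad=True)
    wDecoder = torch.randn(1, D, requires_grad=True)
    bEncoder = torch.randn(1, requires_grad=True)
    bDecoder = torch.randn(1, D, requires_grad=True)

    # Optimize the weights and biases, not the data
    optimizer = torch.optim.SGD([wEncoder, wDecoder, bEncoder, bDecoder], lr=0.01)
    loss_fn = torch.nn.MSELoss()

    losses = []
    for epoch in range(1000):
        # Recompute the reconstruction from the current parameters
        code = x @ wEncoder + bEncoder
        x_reconstructed = code @ wDecoder + bDecoder
        loss = loss_fn(x_reconstructed, x)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())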

    【Discussion】:

    • Thanks! I vaguely knew a model was missing, but I didn't know how to set one up with random data.
    【Solution 2】:

    We can also simply use nn.Sequential(), e.g., with the following code snippet:

    import torch
    encoded_dim = 32
    encoder = torch.nn.Sequential(
                          torch.nn.Flatten(),
                          torch.nn.Linear(28*28, 256),
                          torch.nn.Sigmoid(),
                          torch.nn.Linear(256, 64),
                          torch.nn.Sigmoid(),
                          torch.nn.Linear(64, encoded_dim)
    )
    decoder = torch.nn.Sequential(
                          torch.nn.Linear(encoded_dim, 64),
                          torch.nn.Sigmoid(),
                          torch.nn.Linear(64, 256),
                          torch.nn.Sigmoid(),
                          torch.nn.Linear(256, 28*28),
                          torch.nn.Sigmoid(),
                          torch.nn.Unflatten(1, (28,28))
    )
    autoencoder = torch.nn.Sequential(encoder, decoder)
    autoencoder 
    # Sequential(
    # (0): Sequential(
    #  (0): Flatten(start_dim=1, end_dim=-1)
    #  (1): Linear(in_features=784, out_features=256, bias=True)
    #  (2): Sigmoid()
    #  (3): Linear(in_features=256, out_features=64, bias=True)
    #  (4): Sigmoid()
    #  (5): Linear(in_features=64, out_features=32, bias=True)
    #  )
    # (1): Sequential(
    #  (0): Linear(in_features=32, out_features=64, bias=True)
    #  (1): Sigmoid()
    #  (2): Linear(in_features=64, out_features=256, bias=True)
    #  (3): Sigmoid()
    #  (4): Linear(in_features=256, out_features=784, bias=True)
    #  (5): Sigmoid()
    #  (6): Unflatten(dim=1, unflattened_size=(28, 28))
    # )
    #)
    

    Example training with MNIST data

    Load the data (MNIST) using torchvision:

    import torchvision

    train_loader = torch.utils.data.DataLoader(
                        torchvision.datasets.MNIST('./data', train=True, download=True,
                                 transform=torchvision.transforms.Compose([
                                   torchvision.transforms.ToTensor(),
                                   # ...
                                 ])),
                        batch_size=64, shuffle=True)
    

    Now let's train the autoencoder model. The optimizer used is Adam, although SGD could be used as well:

    loss_fn = torch.nn.BCELoss()
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3, weight_decay=1e-5)
    for epoch in range(10):
        for idx, (x, _) in enumerate(train_loader):
          x = x.squeeze()
          x = x / x.max()
          x_pred = autoencoder(x) # forward pass
          loss = loss_fn(x_pred, x)
          if idx % 1024 == 0:
            print(epoch, loss.item())
          optimizer.zero_grad()
          loss.backward()         # backward pass
          optimizer.step()
    
    # epoch  loss
    # 0 0.702496349811554
    # 1 0.24611620604991913
    # 2 0.20603498816490173
    # 3 0.1827092468738556
    # 4 0.1805819869041443
    # 5 0.16927748918533325
    # 6 0.17275433242321014
    # 7 0.15827134251594543
    # 8 0.1635081171989441
    # 9 0.15693898499011993
    

    The original answer included an animation (not reproduced here) of the autoencoder's reconstructions of some randomly chosen images at different epochs; the reconstructions of the MNIST digits get visibly better as training progresses.
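
    As a stand-in for the animation, here is a minimal matplotlib sketch (an assumption, not part of the original answer) to compare a few originals with their reconstructions after training:

    import matplotlib.pyplot as plt

    # Grab one batch and reconstruct it with the trained model
    x, _ = next(iter(train_loader))
    x = x.squeeze()
    with torch.no_grad():
        x_pred = autoencoder(x)

    # Top row: originals; bottom row: reconstructions
    fig, axes = plt.subplots(2, 8, figsize=(12, 3))
    for i in range(8):
        axes[0, i].imshow(x[i].numpy(), cmap="gray")
        axes[0, i].axis("off")
        axes[1, i].imshow(x_pred[i].numpy(), cmap="gray")
        axes[1, i].axis("off")
    plt.show()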

    【Discussion】:
