[Posted]: 2020-07-30 07:08:22
[Question]:
This is a simple autoencoder that encodes three 1x3 vectors, [1,2,3], [1,2,3], and [100,200,500], down to a 1x1 representation:
epochs = 1000
from pylab import plt
plt.style.use('seaborn')
import torch.utils.data as data_utils
import torch
import torchvision
import torch.nn as nn
from torch.autograd import Variable

cuda = torch.cuda.is_available()
FloatTensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
import numpy as np
import pandas as pd
import datetime as dt

features = torch.tensor(np.array([[1, 2, 3], [1, 2, 3], [100, 200, 500]]))
print(features)

batch = 1  # size of the 1x1 code (latent dimension)
data_loader = torch.utils.data.DataLoader(features, batch_size=2, shuffle=False)

encoder = nn.Sequential(nn.Linear(3, batch), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(batch, 3), nn.Sigmoid())
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(params=autoencoder.parameters(), lr=0.001)
encoded_images = []
for i in range(epochs):
    for j, images in enumerate(data_loader):
        # images = images.view(images.size(0), -1)
        images = Variable(images).type(FloatTensor)
        optimizer.zero_grad()
        reconstructions = autoencoder(images)
        loss = torch.dist(images, reconstructions)
        loss.backward()
        optimizer.step()
        # encoded_images.append(encoder(images))
# print(decoder(torch.tensor(np.array([1,2,3])).type(FloatTensor)))

# After training, collect the low-dimensional codes for each batch
encoded_images = []
for j, images in enumerate(data_loader):
    images = images.view(images.size(0), -1)
    images = Variable(images).type(FloatTensor)
    encoded_images.append(encoder(images))
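As an aside, when codes are collected only for later inspection (not for further backpropagation), it can help to run the encoder under `torch.no_grad()` so the stored tensors carry no `grad_fn`. A minimal sketch, assuming an untrained encoder with the same 3-to-1 shape as above:

```python
import torch
import torch.nn as nn

# Same 3 -> 1 encoder architecture as in the question (untrained here)
encoder = nn.Sequential(nn.Linear(3, 1), nn.Sigmoid())

features = torch.tensor([[1., 2., 3.], [1., 2., 3.], [100., 200., 500.]])

# no_grad() disables autograd tracking, so the codes are plain tensors
with torch.no_grad():
    codes = encoder(features)

print(codes.shape)  # torch.Size([3, 1])
```

With gradient tracking disabled, the collected codes can be stacked, saved, or compared without holding the computation graph in memory.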
The variable encoded_images is an array covering the 3 rows, where each entry represents the dimensionality reduction of the corresponding rows of the features array:
[tensor([[0.9972],
[0.9972]], grad_fn=<SigmoidBackward>),
tensor([[1.]], grad_fn=<SigmoidBackward>)]
To determine the similarity of a new feature such as [1,1,1], does the network need to be retrained, or can the existing trained configuration/weights be "bootstrapped" so that the new vector can be encoded without retraining the network?
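For what it's worth, once training has finished the encoder is just a fixed function of its learned weights, so encoding a new vector is a single forward pass and needs no retraining. A minimal sketch, assuming an untrained encoder of the same shape as above and using `torch.dist` (the same distance used as the loss in the question) to compare codes:

```python
import torch
import torch.nn as nn

# Same 3 -> 1 encoder architecture as in the question (untrained here;
# in practice this would be the encoder after the training loop)
encoder = nn.Sequential(nn.Linear(3, 1), nn.Sigmoid())

# No retraining needed: encoding a new vector is just a forward pass
with torch.no_grad():
    code_known = encoder(torch.tensor([[1., 2., 3.]]))
    code_new = encoder(torch.tensor([[1., 1., 1.]]))

# Compare the two 1x1 codes with the same distance used as the loss
similarity = torch.dist(code_known, code_new)
print(code_new.shape)  # torch.Size([1, 1])
```

Whether the resulting code is *meaningful* for an out-of-distribution vector is a separate question: a network trained on only three examples may place [1,1,1] anywhere in the code space, so retraining (or training on more data) is about representation quality, not about the mechanics of encoding.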
[Discussion]:
Tags: deep-learning pytorch autoencoder