【Posted】: 2019-10-01 14:58:27
【Problem Description】:
I am trying to use an LSTM autoencoder to embed a dataset of variable-length integer sequences into fixed-length vectors, but the model keeps producing the same constant vector even for different input sequences.
Each sample in the dataset is represented as follows:
[1,3,4,2,1]
Each sequence is one-hot encoded:
[[0,1,0,0,0],[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0],[0,1,0,0,0]]
Shorter sequences are zero-padded at the one-hot level:
[[0,1,0,0,0],[0,0,0,1,0],[0,0,0,0,1],[0,0,1,0,0],[0,1,0,0,0],...,[0,0,0,0,0]]
In the end, my input is a matrix of size
N_SAMPLES x N_INTEGERS(n_timesteps) x ONE_HOT_ENCODING_SIZE(n_features)
and I expect the model's output to be a matrix of size
N_SAMPLES x FIXED_SIZE(latent_dim)
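For concreteness, here is a minimal sketch of the two shapes involved; the concrete sizes are hypothetical placeholders:

import numpy as np

# Hypothetical sizes, purely to illustrate the expected shapes
N_SAMPLES, n_timesteps, n_features, latent_dim = 1000, 20, 5, 128

X_input = np.zeros((N_SAMPLES, n_timesteps, n_features))  # padded one-hot input
embeddings = np.zeros((N_SAMPLES, latent_dim))            # desired encoder output

print(X_input.shape)     # (1000, 20, 5)
print(embeddings.shape)  # (1000, 128)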
import numpy as np
from keras.utils import Sequence

def to_categorical(sequences, n_categories, max_len):
    'One-hot encodes each integer sequence and zero-pads it to max_len'
    categorical_sequences = []
    for s in sequences:
        #ohe = np.full((max_len, n_categories), fill_value=-1)
        ohe = np.zeros((max_len, n_categories))
        for i, item in enumerate(s):
            ohe[i][item] = 1
        categorical_sequences.append(ohe)
    return np.array(categorical_sequences)
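For example, calling this helper on the sample sequence from above (with 5 categories and a hypothetical max_len of 6) should reproduce the padded one-hot matrix shown earlier:

ohe = to_categorical([[1, 3, 4, 2, 1]], n_categories=5, max_len=6)
print(ohe.shape)  # (1, 6, 5): one sample, 6 timesteps, 5 features
print(ohe[0])     # rows 0-4 are one-hot, row 5 is all-zero padding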
class batch_generator(Sequence):
    def __init__(self, X, batch_size, num_classes, max_len, y=None, prediction_only=False, shuffle=True):
        self.X = X
        self.batch_size = batch_size
        self.num_classes = num_classes
        self.max_len = max_len
        self.y = y
        self.prediction_only = prediction_only
        self.shuffle = shuffle
        self.on_epoch_end()

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.X) / self.batch_size))

    def __getitem__(self, index):
        'Generate one batch of data'
        #print("Generating batch with index {}".format(index))
        batch_indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
        return self.__data_generation(batch_indexes)

    def on_epoch_end(self):
        'Updates indexes after each epoch'
        self.indexes = np.arange(len(self.X))
        if self.shuffle:
            np.random.shuffle(self.indexes)

    def __data_generation(self, batch_indexes):
        'Generates data containing batch_size samples'
        result = None
        batch_X = to_categorical(self.X[batch_indexes], self.num_classes, self.max_len)
        if self.prediction_only:
            result = batch_X
        else:
            if self.y is None:
                result = batch_X, batch_X
            else:
                batch_y = self.y[batch_indexes]
                result = batch_X, batch_y
        return result
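As a quick sanity check that the generator itself yields varying, correctly paired batches (the toy data below is purely illustrative):

toy_X = np.array([[1, 3, 4, 2, 1], [2, 2], [4, 1, 3]], dtype=object)
gen = batch_generator(toy_X, batch_size=1, num_classes=5, max_len=6, shuffle=False)
bx, by = gen[0]                 # autoencoder mode: targets equal inputs
print(bx.shape)                 # (1, 6, 5)
print(np.array_equal(bx, by))   # True
print(np.array_equal(gen[0][0], gen[1][0]))  # False: different samples differ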
from keras.layers import Input, RepeatVector, CuDNNGRU
from keras.models import Model
n_timesteps = np.max([x.shape[0] for x in X])
n_features = int(np.max([np.max(x) for x in X]) + 1)
latent_dim = 128
print("N timesteps {}".format(n_timesteps))
print("N features {}".format(n_features))
print("Latent dimension {}".format(latent_dim))
inputs = Input(shape=(n_timesteps, n_features))
encoded = CuDNNGRU(units=latent_dim)(inputs)
decoded = RepeatVector(n=n_timesteps)(encoded)
decoded = CuDNNGRU(units=n_features, return_sequences=True)(decoded)
autoencoder = Model(inputs, decoded)
encoder = Model(inputs, encoded)
autoencoder.compile(loss='mse', optimizer="adam")
autoencoder.summary()
batch_size = 128
train_generator = batch_generator(X_train, batch_size=batch_size, num_classes=n_features, max_len=n_timesteps)
val_generator = batch_generator(X_val, batch_size=batch_size, num_classes=n_features, max_len=n_timesteps)
history = autoencoder.fit_generator(generator=train_generator,
                                    steps_per_epoch=X_train.shape[0]//batch_size,
                                    epochs=2,
                                    #callbacks=[early_stopping, model_checkpoint],
                                    validation_data=val_generator,
                                    validation_steps=X_val.shape[0]//batch_size,
                                    #use_multiprocessing=True,
                                    #workers=n_cpu
                                    )
X_generator = batch_generator(X, batch_size=batch_size, num_classes=n_features, max_len=n_timesteps, prediction_only=True)
compact_representation64 = encoder.predict_generator(generator=X_generator, steps=X.shape[0]//batch_size, verbose=1)
The problem is that every sample is encoded into the same fixed-length vector:
Sample #1
array([-0.00898637,  0.02220072, -0.0095799 ,  0.00655961,  0.00733364,  0.00351852,  0.00088661, -0.00060489, -0.00819919, -0.01798768, -0.02408937, -0.01549,  0.00395884, -0.0124888, -0.00321282, -0.01447861, ...
Sample #100
array([-0.00898637,  0.02220072, -0.0095799 ,  0.00655961,  0.00733364,  0.00351852,  0.00088661, -0.00060489, -0.00819919, -0.01798768, -0.02408937, -0.01549,  0.00395884, -0.0124888, -0.00321282, -0.01447861, ...
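A way to confirm this numerically, instead of eyeballing truncated printouts:

print(compact_representation64.shape)              # (n_predicted, 128)
print(np.allclose(compact_representation64[0],
                  compact_representation64[99]))   # True: rows 0 and 99 match
print(compact_representation64.std(axis=0).max())  # ~0: every row is identical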
【Question Discussion】:
-
It looks odd that with your RepeatVector you take the output of the first GRU layer and repeat that same vector n times, so that the second GRU layer receives a sequence of n identical vectors (a minimal sketch of this behavior follows these comments).
-
Any possible reason for this strange behavior?
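A minimal standalone sketch of the RepeatVector behavior mentioned in the first comment (it tiles its input n times along a new time axis):

from keras.layers import Input, RepeatVector
from keras.models import Model
import numpy as np

inp = Input(shape=(3,))
out = Model(inp, RepeatVector(4)(inp)).predict(np.array([[1., 2., 3.]]))
print(out.shape)  # (1, 4, 3)
print(out[0])     # four identical copies of [1., 2., 3.]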
Tags: keras lstm autoencoder dimensionality-reduction