【Posted】: 2020-03-26 15:45:42
【Problem Description】:
Question
I don't understand how to handle the LSTM hidden state when training with mini-batches: the training data is sent to the network in batches of n sequences, but only 1 sequence at a time is processed during testing.
Code
Specifically, my network is:
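For context, PyTorch's nn.LSTM expects its hidden and cell states shaped (num_layers, batch_size, hidden_size), so the batch dimension of the state you pass in must match the batch dimension of the input. A minimal sketch of those shapes (the sizes 4, 8, 100, and 25 are arbitrary examples, not taken from the question):

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=100)  # default num_layers=1, seq-first layout

# Hidden/cell state: (num_layers, batch_size, hidden_size)
h0 = torch.zeros(1, 8, 100)
c0 = torch.zeros(1, 8, 100)

# Input: (seq_len, batch_size, input_size)
x = torch.randn(25, 8, 4)

out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape)  # torch.Size([25, 8, 100])
print(hn.shape)   # torch.Size([1, 8, 100])
```

If the batch dimension of (h0, c0) disagrees with that of x, nn.LSTM raises exactly the kind of "Expected hidden[0] size ..." error shown below.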
class Pytorch_LSTM(nn.Module):
    def __init__(self, params):
        super(Pytorch_LSTM, self).__init__()
        self.params = params
        self.hidden_layer_size = params['hidden_layer_size']
        # Define layers
        self.lstm = nn.LSTM(input_size=params['in_features'], hidden_size=params['hidden_layer_size'])
        self.linear1 = nn.Linear(params['hidden_layer_size'], params['hidden_layer_size'])
        self.linear2 = nn.Linear(params['hidden_layer_size'], params['out_features'])
        self.hidden_cell = (torch.zeros(1, self.params['batch_size'], self.hidden_layer_size),
                            torch.zeros(1, self.params['batch_size'], self.hidden_layer_size))

    def forward(self, input_seq):
        lstm_out, self.hidden_cell = self.lstm(
            input_seq.view(self.params['time_window'], -1, self.params['in_features']),
            self.hidden_cell)
        linear1_out = self.linear1(lstm_out)
        predictions = self.linear2(linear1_out)
        return predictions[-1]
In my train() method:
def train(self, input_sequence, params, test_idx, final, verbose=True):
    ....
    ....
    # Model
    self.model = Pytorch_LSTM(params)
    # Let's train the model
    for epoch in range(epochs):
        for count_1, seq in enumerate(train_data_batch):
            optimizer.zero_grad()
            self.model.hidden_cell = (torch.zeros(1, params['batch_size'], self.model.hidden_layer_size),
                                      torch.zeros(1, params['batch_size'], self.model.hidden_layer_size))
            y_pred = self.model(seq)                 # seq.shape: (n_batches, 25, 4)
            single_loss = mse_loss(y_pred, y_label)  # y_pred.shape, y_label.shape: (batch_size, 4)
I believe this trains the model in mini-batches.
When I test it, I only have a single sequence at a time rather than a batch. In my test():
for count, seq in enumerate(val_data[j]):
    y_pred = self.model(seq)  # seq.shape: (25, 4)
    single_loss = mse_loss(y_pred, y_label)
This raises the error:
RuntimeError: Expected hidden[0] size (1, 1, 100), got (1, 704, 100)
Here n_batches = 704.
How should I handle the hidden_cell?
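One way to make the model work for any batch size is to not store a fixed-size hidden_cell at all: nn.LSTM zero-initializes both the hidden and cell state for the current input's batch size when no state is passed. A minimal sketch along the lines of the question's model (class and parameter names here are hypothetical simplifications, not the asker's exact code):

```python
import torch
import torch.nn as nn

class LSTMNet(nn.Module):
    """Hypothetical simplified variant of the question's Pytorch_LSTM."""
    def __init__(self, in_features=4, hidden_size=100, out_features=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=in_features, hidden_size=hidden_size)
        self.linear = nn.Linear(hidden_size, out_features)

    def forward(self, input_seq):
        # input_seq: (seq_len, batch, in_features). Passing no initial state
        # makes nn.LSTM create zero hidden/cell states sized to this batch,
        # so the same model handles batched training and single-sequence tests.
        lstm_out, _ = self.lstm(input_seq)
        return self.linear(lstm_out)[-1]  # prediction at the last time step

model = LSTMNet()
batch_pred = model(torch.randn(25, 704, 4))  # training-style mini-batch
single_pred = model(torch.randn(25, 1, 4))   # test-time single sequence
print(batch_pred.shape, single_pred.shape)   # torch.Size([704, 4]) torch.Size([1, 4])
```

If you do want to keep an explicit state (e.g. to carry it across calls), build it inside forward from input_seq's actual batch dimension instead of from a fixed params['batch_size']; at test time a lone (25, 4) sequence would then need an explicit batch dimension of 1, e.g. via seq.unsqueeze(1).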
【Discussion】:
Tags: python machine-learning pytorch lstm recurrent-neural-network