【Question Title】: Multivariate input LSTM in PyTorch
【Posted】: 2022-04-13 10:32:07
【Question】:

I want to implement an LSTM for multivariate input in PyTorch.

Following this article, https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/, which uses Keras, the input data has the shape (number of samples, number of timesteps, number of parallel features):

in_seq1 = array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
. . . 
Input     Output
[[10 15]
 [20 25]
 [30 35]] 65
[[20 25]
 [30 35]
 [40 45]] 85
[[30 35]
 [40 45]
 [50 55]] 105
[[40 45]
 [50 55]
 [60 65]] 125
[[50 55]
 [60 65]
 [70 75]] 145
[[60 65]
 [70 75]
 [80 85]] 165
[[70 75]
 [80 85]
 [90 95]] 185

n_timesteps = 3
n_features = 2

In Keras this seems easy:

model.add(LSTM(50, activation='relu', input_shape=(n_timesteps, n_features)))

Can it be done in some other way than creating n_features LSTMs as the first layer and feeding each separately (imagine multiple streams of sequences) and then flattening their outputs into a linear layer?

I'm not 100% sure, but by the nature of LSTMs the input cannot be flattened and passed as a 1D array, because each sequence "plays by different rules which the LSTM is supposed to learn".

So how do I achieve with PyTorch the same thing as in Keras, given PyTorch's input of shape (seq_len, batch, input_size) (source: https://pytorch.org/docs/stable/nn.html#lstm)?


Edit:

Can it be done in some other way than creating n_features LSTMs as the first layer and feeding each separately (imagine multiple streams of sequences) and then flattening their outputs into a linear layer?

According to the PyTorch docs, the input_size parameter actually means the number of features (if it means the number of parallel sequences).
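To illustrate that point, here is a minimal sketch (my own example, not from the question) showing that input_size plays the role of Keras's n_features, so a single nn.LSTM handles all parallel sequences at once:

```python
import torch

n_timesteps, n_features = 3, 2

# input_size is the number of parallel features per timestep (Keras's
# n_features); batch_first=True makes the input layout
# (batch, seq_len, input_size), matching Keras's (samples, timesteps, features)
lstm = torch.nn.LSTM(input_size=n_features, hidden_size=50, batch_first=True)

x = torch.tensor([[[10., 15.], [20., 25.], [30., 35.]]])  # (1, 3, 2)
out, (h_n, c_n) = lstm(x)
print(out.shape)   # torch.Size([1, 3, 50]) - one hidden vector per timestep
print(h_n.shape)   # torch.Size([1, 1, 50]) - final hidden state
```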

【Discussion】:

    Tags: python pytorch lstm


    【Solution 1】:

    I hope the problematic parts are commented so that they make sense:

    Data preparation

    import random
    import numpy as np
    import torch
    
    # multivariate data preparation
    from numpy import array
    from numpy import hstack
     
    # split a multivariate sequence into samples
    def split_sequences(sequences, n_steps):
        X, y = list(), list()
        for i in range(len(sequences)):
            # find the end of this pattern
            end_ix = i + n_steps
            # check if we are beyond the dataset
            if end_ix > len(sequences):
                break
            # gather input and output parts of the pattern
            seq_x, seq_y = sequences[i:end_ix, :-1], sequences[end_ix-1, -1]
            X.append(seq_x)
            y.append(seq_y)
        return array(X), array(y)
     
    # define input sequence
    in_seq1 = array([x for x in range(0,100,10)])
    in_seq2 = array([x for x in range(5,105,10)])
    out_seq = array([in_seq1[i]+in_seq2[i] for i in range(len(in_seq1))])
    # convert to [rows, columns] structure
    in_seq1 = in_seq1.reshape((len(in_seq1), 1))
    in_seq2 = in_seq2.reshape((len(in_seq2), 1))
    out_seq = out_seq.reshape((len(out_seq), 1))
    # horizontally stack columns
    dataset = hstack((in_seq1, in_seq2, out_seq))
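As a side note (my own addition, not part of the original answer), the same windowing can be done vectorized with NumPy's sliding_window_view, which makes the resulting shapes explicit for the sample data above:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# same sample data as above: two input columns plus their sum as target
in_seq1 = np.arange(0, 100, 10)
in_seq2 = np.arange(5, 105, 10)
dataset = np.stack([in_seq1, in_seq2, in_seq1 + in_seq2], axis=1)  # (10, 3)

# windows[i] has shape (columns, n_steps); transpose to (n_steps, columns)
windows = sliding_window_view(dataset, 3, axis=0)   # (8, 3, 3)
X = windows[:, :-1, :].transpose(0, 2, 1)           # (8, 3, 2): inputs
y = windows[:, -1, -1]                              # (8,): target at last step
print(X.shape, y.shape)     # (8, 3, 2) (8,)
print(X[0].tolist(), y[0])  # [[0, 5], [10, 15], [20, 25]] 45
```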
    

    Multivariate LSTM network

    class MV_LSTM(torch.nn.Module):
        def __init__(self,n_features,seq_length):
            super(MV_LSTM, self).__init__()
            self.n_features = n_features
            self.seq_len = seq_length
            self.n_hidden = 20 # number of hidden states
            self.n_layers = 1 # number of LSTM layers (stacked)
        
            self.l_lstm = torch.nn.LSTM(input_size = n_features, 
                                     hidden_size = self.n_hidden,
                                     num_layers = self.n_layers, 
                                     batch_first = True)
            # according to pytorch docs LSTM output is 
            # (batch_size,seq_len, num_directions * hidden_size)
            # when considering batch_first = True
            self.l_linear = torch.nn.Linear(self.n_hidden*self.seq_len, 1)
            
        
        def init_hidden(self, batch_size):
            # even with batch_first = True this remains same as docs
            hidden_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)
            cell_state = torch.zeros(self.n_layers,batch_size,self.n_hidden)
            self.hidden = (hidden_state, cell_state)
        
        
        def forward(self, x):        
            batch_size, seq_len, _ = x.size()
            
            lstm_out, self.hidden = self.l_lstm(x,self.hidden)
            # lstm_out(with batch_first = True) is 
            # (batch_size,seq_len,num_directions * hidden_size)
            # for following linear layer we want to keep batch_size dimension and merge rest       
            # .contiguous() -> solves tensor compatibility error
            x = lstm_out.contiguous().view(batch_size,-1)
            return self.l_linear(x)
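The shape flow inside the forward pass above can be sketched standalone (sizes taken from the class: n_hidden=20, seq_len=3, n_features=2, plus an assumed batch of 5 random windows):

```python
import torch

batch_size, seq_len, n_features, n_hidden = 5, 3, 2, 20

lstm = torch.nn.LSTM(input_size=n_features, hidden_size=n_hidden,
                     num_layers=1, batch_first=True)
linear = torch.nn.Linear(n_hidden * seq_len, 1)

x = torch.randn(batch_size, seq_len, n_features)
h0 = torch.zeros(1, batch_size, n_hidden)
c0 = torch.zeros(1, batch_size, n_hidden)

lstm_out, _ = lstm(x, (h0, c0))                    # (5, 3, 20): hidden vector per timestep
flat = lstm_out.contiguous().view(batch_size, -1)  # (5, 60): merge seq and hidden dims
out = linear(flat)                                 # (5, 1): one prediction per window
print(lstm_out.shape, flat.shape, out.shape)
```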
    

    Initialization

    n_features = 2 # this is number of parallel inputs
    n_timesteps = 3 # this is number of timesteps
    
    # convert dataset into input/output
    X, y = split_sequences(dataset, n_timesteps)
    print(X.shape, y.shape)
    
    # create NN
    mv_net = MV_LSTM(n_features,n_timesteps)
    criterion = torch.nn.MSELoss() # reduction='sum' created huge loss value
    optimizer = torch.optim.Adam(mv_net.parameters(), lr=1e-1)
    
    train_episodes = 500
    batch_size = 16
    

    Training

    mv_net.train()
    for t in range(train_episodes):
        for b in range(0,len(X),batch_size):
            inpt = X[b:b+batch_size,:,:]
            target = y[b:b+batch_size]    
            
            x_batch = torch.tensor(inpt,dtype=torch.float32)    
            y_batch = torch.tensor(target,dtype=torch.float32)
        
            mv_net.init_hidden(x_batch.size(0))
        #    lstm_out, _ = mv_net.l_lstm(x_batch,nnet.hidden)    
        #    lstm_out.contiguous().view(x_batch.size(0),-1)
            output = mv_net(x_batch) 
            loss = criterion(output.view(-1), y_batch)  
            
            loss.backward()
            optimizer.step()        
            optimizer.zero_grad() 
        print('step : ' , t , 'loss : ' , loss.item())
    

    Results

    step :  499 loss :  0.0010267728939652443 # probably overfitted due to 500 training episodes
    

    【Discussion】:

    • Hi Tomas, thanks for the detailed code, it really helps. My question is, when I try to use the model with seq_length=35, a target_length of 5 and batch_size=49, the forward pass returns a different shape than the target? '''' output.shape = torch.Size([49, 1]) target.shape = torch.Size([49, 5, 1]) ''''
    • Very nice example. Thank you...
    【Solution 2】:

    The input to any RNN cell in PyTorch is a 3D tensor of shape (seq_len, batch, input_size), or (batch, seq_len, input_size) if you prefer the second layout (like me, lol); to get it, init the LSTM layer (or any other RNN layer) with the arg

    batch_first = True
    

    https://discuss.pytorch.org/t/could-someone-explain-batch-first-true-in-lstm/15402
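A minimal sketch of the two layouts side by side (my own example, with assumed sizes seq_len=3, batch=4, input_size=2, hidden=8): the same data works with either setting, only the axis order of input and output changes.

```python
import torch

seq_len, batch, input_size, hidden = 3, 4, 2, 8

rnn_seq_first = torch.nn.RNN(input_size, hidden)                      # default: (seq, batch, feat)
rnn_batch_first = torch.nn.RNN(input_size, hidden, batch_first=True)  # (batch, seq, feat)

x_seq_first = torch.randn(seq_len, batch, input_size)
x_batch_first = x_seq_first.transpose(0, 1)  # same data, axes swapped

out1, _ = rnn_seq_first(x_seq_first)         # (3, 4, 8)
out2, _ = rnn_batch_first(x_batch_first)     # (4, 3, 8)
print(out1.shape, out2.shape)
```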

    You also don't have any recurrent relationship in your setup. If you want to create a many-to-one counter, create the input with size (-1, n, 1), where -1 is whatever size you want and n is the number of digits, one digit per tick, e.g. input [[10][20][30]], output 60; input [[30][70]], output 100, etc. The inputs must have varying lengths from 1 up to some maximum, so that the RNN can learn the relationship.

    import random
    import numpy as np
    import torch


    def rnd_io():
        # random sequence of 1..10 digits, each in [0, 100)
        return np.random.randint(100, size=(random.randint(1, 10), 1))


    class CountRNN(torch.nn.Module):
        def __init__(self):
            super(CountRNN, self).__init__()
            self.rnn = torch.nn.RNN(1, 20, num_layers=1, batch_first=True)
            self.fc = torch.nn.Linear(20, 1)

        def forward(self, x):
            full_out, last_out = self.rnn(x)
            return self.fc(last_out)


    nnet = CountRNN()
    criterion = torch.nn.MSELoss(reduction='sum')
    optimizer = torch.optim.Adam(nnet.parameters(), lr=0.0005)

    batch_size = 100
    batches = 10000 * 1000
    printout = max(batches // (20 * 1000), 1)

    for t in range(batches):
        optimizer.zero_grad()
        x_batch = torch.unsqueeze(torch.from_numpy(rnd_io()).float(), 0)
        y_batch = torch.unsqueeze(torch.sum(x_batch), 0)
        output = nnet.forward(x_batch)
        loss = criterion(output, y_batch)
        if t % printout == 0:
            print('step : ', t, 'loss : ', loss.item())
            torch.save(nnet.state_dict(), './rnn_summ.pth')
        loss.backward()
        optimizer.step()
    

    【Discussion】:

    • So according to the provided example the input is (seq_length == n_timesteps, batch == batch_size, input_size == n_features)?
    • Yes, but the idea of having such input and output is wrong. I've just created a small example in pytorch, will edit my post
    • Hmm, your rnd_io() is completely different from what I'm asking about... a multivariate problem => multiple parallel input sequences, each coming from a different source. That means you know the number of timesteps and features. Like def rnd_io(n_features,n_timesteps): arr = [] for i in range(n_features): arr.append(np.random.randint(100, size=(n_timesteps, 1))) return np.array(arr)
    • Well, whatever you like. Different sources would just be the batch size, unlike the example where it is 1
    【Solution 3】:

    I just want to slightly update the training part..... with a basic early-stopping mechanism and code to save the model.

    # Early stopping
    the_last_loss = float('inf')  # start high; -100 would falsely count the first epoch as a regression
    patience = 4
    trigger_times = 0
    
    mv_net.train()
    for t in range(train_episodes):
        for b in range(0,len(X),batch_size):
            inpt = X[b:b+batch_size,:,:]
            target = y[b:b+batch_size]    
            
            x_batch = torch.tensor(inpt,dtype=torch.float32)    
            y_batch = torch.tensor(target,dtype=torch.float32)
        
            mv_net.init_hidden(x_batch.size(0))
        #    lstm_out, _ = mv_net.l_lstm(x_batch,nnet.hidden)    
        #    lstm_out.contiguous().view(x_batch.size(0),-1)
            output = mv_net(x_batch) 
            loss = criterion(output.view(-1), y_batch)  
            
            loss.backward()
            optimizer.step()        
            optimizer.zero_grad() 
        the_current_loss = loss.item()
        print('step : ', t, 'loss : ', the_current_loss)
        if the_current_loss > the_last_loss:
            trigger_times += 1        
            if trigger_times >= patience:
                print('Early stopping!\nStart to test process.')
                break
        else:
            #print('trigger times: 0')
            trigger_times = 0
        
        the_last_loss = the_current_loss
       
    # Let's assume we are happy with this
    # Save the model
    torch.save(mv_net.state_dict(),'pytorch_dev')
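To use the saved model later, the state dict has to be loaded into a freshly constructed model of the same architecture. A hedged, self-contained sketch (using a plain Linear as a stand-in for MV_LSTM, and a temp-file path of my own choosing):

```python
import os
import tempfile
import torch

path = os.path.join(tempfile.gettempdir(), 'pytorch_dev.pth')

# save a model's parameters ...
model = torch.nn.Linear(4, 1)
torch.save(model.state_dict(), path)

# ... then restore them into a new instance with the same architecture
restored = torch.nn.Linear(4, 1)
restored.load_state_dict(torch.load(path))
restored.eval()  # disable dropout/batch-norm training behaviour for inference

x = torch.randn(2, 4)
assert torch.equal(model(x), restored(x))  # identical weights -> identical output
```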
    

    【Discussion】:
