【Question Title】: Training an RNN in PyTorch
【Posted】: 2018-10-13 09:55:58
【Question】:

I want to build an RNN model and teach it to generate "ihello" from "hihell". I am new to PyTorch and wrote the code following a video tutorial. I wrote two Python files, train.py and model.py. Here is model.py:

#----------------- model for teach rnn hihell to ihello
#-----------------  OUR MODEL ---------------------
import torch
import torch.nn as nn
from torch import autograd

class Model(nn.Module):
    def __init__(self):
        super(Model,self).__init__()
        self.rnn=nn.RNN(input_size=input_size,hidden_size=hidden_size,batch_first=True)
    def forward(self,x,hidden):
        #Reshape input in (batch_size,sequence_length,input_size)
        x=x.view(batch_size,sequence_length,input_size)
        #Propagate input through RNN
        #Input:(batch,seq_len,input_size)
        out,hidden=self.rnn(x,hidden)
        out=out.view(-1,num_classes)
        return hidden,out
    def init_hidden(self):
        #Initialize hidden and cell states
        #(num_layers*num_directions,batch,hidden_size)
        return autograd.Variable(torch.zeros(num_layers,batch_size,hidden_size))

Here is train.py:

"""----------------------train for teach rnn to hihell to ihello--------------------------"""
#-----------------  DATA PREPARATION ---------------------
#Import
import torch
import torch.nn as nn
from torch import autograd
from model import Model
import sys


idx2char=['h','i','e','l','o']
#Teach hihell->ihello
x_data=[0,1,0,2,3,3]#hihell
y_data=[1,0,2,3,3,4]#ihello
one_hot_lookup=[[1,0,0,0,0],#0
                [0,1,0,0,0],#1
                [0,0,1,0,0],#2
                [0,0,0,1,0],#3
                [0,0,0,0,1]]#4
x_one_hot=[one_hot_lookup[x] for x in x_data]
inputs=autograd.Variable(torch.Tensor(x_one_hot))
labels=autograd.Variable(torch.LongTensor(y_data))
""" ----------- Parameters Initialization------------"""
num_classes = 5
input_size = 5  # one hot size
hidden_size = 5  # output from RNN to directly predict one-hot
batch_size = 1  # one sequence
sequence_length = 1  # let's do one by one
num_layers = 1  # one layer RNN
"""-----------------  LOSS AND TRAINING ---------------------"""
#Instantiate RNN model
model=Model()
#Set loss and optimizer function
#CrossEntropyLoss=LogSoftmax+NLLLOSS
criterion=torch.nn.CrossEntropyLoss()
optimizer=torch.optim.Adam(model.parameters(),lr=0.1)

"""----------------Train the model-------------------"""
for epoch in range(100):
    optimizer.zero_grad()
    loss=0
    hidden=model.init_hidden()
    sys.stdout.write("Predicted String:")
    for input,label in zip(inputs,labels):
        #print(input.size(),label.size())
        hidden,output=model(input,hidden)
        val,idx=output.max(1)
        sys.stdout.write(idx2char[idx.data[0]])
        loss+=criterion(output,label)
    print(",epoch:%d,loss:%1.3f"%(epoch+1,loss.data[0]))
    loss.backward()
    optimizer.step()

When I run train.py, I get this error:

self.rnn=nn.RNN(input_size=input_size,hidden_size=hidden_size,batch_first=True)
NameError: name 'input_size' is not defined

I don't know why I get this error, because I set input_size=5 in the code above. Can someone help me? Thanks.

【Question Comments】:

    Tags: neural-network recurrent-neural-network pytorch rnn


    【Solution 1】:

    The variables defined in train.py (num_classes, input_size, ...) are scoped to train.py itself. They are only visible inside that file; model.py knows nothing about them. I suggest passing these parameters to the constructor:

    class Model(nn.Module):
      def __init__(self, hidden_size, input_size):
        # same
    

    and then instantiating the model as:

    model = Model(hidden_size, input_size)
    

    Likewise, for the other variables you define in train.py (and want to use in model.py), you have to pass them as arguments to their respective functions or constructors and store them as attributes.
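
    For illustration, a minimal sketch of what model.py could look like once all the sizes are passed in and stored as attributes. The constructor signature below is my own choice for this example, not necessarily the one from the original tutorial; the reshaping logic is kept exactly as in the question.

    # model.py -- sketch with all sizes passed to the constructor
    import torch
    import torch.nn as nn

    class Model(nn.Module):
        def __init__(self, input_size, hidden_size, num_classes,
                     batch_size, sequence_length, num_layers):
            super(Model, self).__init__()
            # store the configuration so forward() and init_hidden() can use it
            self.input_size = input_size
            self.hidden_size = hidden_size
            self.num_classes = num_classes
            self.batch_size = batch_size
            self.sequence_length = sequence_length
            self.num_layers = num_layers
            self.rnn = nn.RNN(input_size=input_size,
                              hidden_size=hidden_size,
                              batch_first=True)

        def forward(self, x, hidden):
            # reshape input to (batch_size, sequence_length, input_size)
            x = x.view(self.batch_size, self.sequence_length, self.input_size)
            out, hidden = self.rnn(x, hidden)
            out = out.view(-1, self.num_classes)
            return hidden, out

        def init_hidden(self):
            # hidden state of shape (num_layers * num_directions, batch, hidden_size);
            # a plain tensor is enough here, the autograd.Variable wrapper from the
            # question also works on older PyTorch versions
            return torch.zeros(self.num_layers, self.batch_size, self.hidden_size)

    In train.py the model would then be created with the values already defined there, for example:

    model = Model(input_size=input_size, hidden_size=hidden_size,
                  num_classes=num_classes, batch_size=batch_size,
                  sequence_length=sequence_length, num_layers=num_layers)

    After this change nothing in model.py depends on globals defined in another file, which is what caused the NameError.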

    【Comments】:
