[Posted]: 2019-03-28 17:57:33
[Problem description]:
I am running PyTorch code that performs binary addition of two strings. However, while training the model I get the following error:
can't convert np.ndarray of type numpy.object.
The only supported types are: double, float, float16, int64, int32, and uint8.
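For context, this error is typically raised when `torch.from_numpy` receives an array with `dtype=object` (which NumPy produces, for example, from mixed Python types or ragged nested lists). A minimal sketch of the failure and the usual cast-based workaround, assuming the question's `getSample` is returning such an array:

```python
import numpy as np
import torch

# An object-dtype array cannot be converted by torch.from_numpy:
y = np.array([0, 1], dtype=object)
try:
    torch.from_numpy(y)
except TypeError as e:
    print("conversion failed:", e)

# Casting to a supported numeric dtype first works:
t = torch.from_numpy(y.astype(np.float32))
print(t.dtype)  # torch.float32
```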
Can anyone help me? Here is my code:
featDim = 2    # two bits, one from each of the strings
outputDim = 1  # one output node, which outputs a zero or a one
lstmSize = 10

lossFunction = nn.MSELoss()
model = Adder(featDim, lstmSize, outputDim)
print('model initialized')
# optimizer = optim.SGD(model.parameters(), lr=3e-2, momentum=0.8)
optimizer = optim.Adam(model.parameters(), lr=0.001)
epochs = 500

### training loop ###
totalLoss = float("inf")
while totalLoss > 1e-5:
    print(" Avg. loss for last 500 samples = %lf" % totalLoss)
    totalLoss = 0
    for i in range(0, epochs):  # average the loss over 500 samples
        stringLen = 4
        testFlag = 0
        x, y = getSample(stringLen, testFlag)

        model.zero_grad()

        # unsqueeze() adds the batch dimension: the input must have shape
        # (seqLen, batchSize, featDim); you can't drop the batch dimension
        # in PyTorch.
        x_var = autograd.Variable(torch.from_numpy(x).unsqueeze(1).float())
        seqLen = x_var.size(0)
        x_var = x_var.contiguous()
        y_var = autograd.Variable(torch.from_numpy(y).float())  ## ERROR ON THIS LINE
        finalScores = model(x_var)

        loss = lossFunction(finalScores, y_var)
        totalLoss += loss.data[0]
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    totalLoss = totalLoss / epochs
[Comments]:
Tags: pytorch rnn google-colaboratory