[Title]: RuntimeError: Found dtype Double but expected Float
[Posted]: 2021-12-02 21:49:16
[Question]:

I am writing reinforcement learning code with Python 3 and PyTorch 1.9.1.

I am posting because I don't understand this error. It occurs on the line loss.mean().backward().

The error says the dtype should be Float but a Double was found; however, no matter where I print it, the dtype is always float32. What is the problem?

The problematic code is below.

def train_net_ap(self, idx):
    s, a, r, s_prime, done_mask, prob_a = self.make_batch(idx)
    print("a is ", a)

    for i in range(K_epoch):
        # TD target and TD error for the value network
        td_target = r + gamma * self.v_ap(s_prime) * done_mask
        delta = td_target - self.v_ap(s)
        delta = delta.detach().numpy()

        # Generalized Advantage Estimation (GAE), accumulated backwards in time
        advantage_lst = []
        advantage = 0.0
        for delta_t in delta[::-1]:
            advantage = gamma * lmbda * advantage + delta_t[0]
            advantage_lst.append([advantage])
        advantage_lst.reverse()
        advantage = torch.tensor(advantage_lst, dtype=torch.float)

        # PPO clipped surrogate objective plus value loss
        pi = self.pi_ap(s, softmax_dim=1)
        pi_a = pi.gather(1, a)
        ratio = torch.exp(torch.log(pi_a) - torch.log(prob_a))  # a/b == exp(log(a)-log(b))

        surr1 = ratio * advantage
        surr2 = torch.clamp(ratio, 1 - eps_clip, 1 + eps_clip) * advantage
        loss = -torch.min(surr1, surr2) + F.smooth_l1_loss(self.v_ap(s), td_target.detach())

        print("loss is ", loss)
        print("loss dtype is ", loss.dtype)
        print("loss.mean() is ", loss.mean(), loss.mean().dtype)
        self.optimizer.zero_grad()
        loss.mean().backward()
        self.optimizer.step()

The printed output and the error message are as follows.

loss dtype is  torch.float32 
loss.mean() is  tensor(6.1353,   grad_fn=<MeanBackward0>) torch.float32


Traceback (most recent call last):
  main()
  model.train_net_ap(x)
  loss.mean().backward()
    
  torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag

RuntimeError: Found dtype Double but expected Float
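
This error usually means that some tensor in the computation graph is float64 (Double), even when the loss itself prints as float32. A common source is data that passed through NumPy, since NumPy arrays default to float64 and torch.tensor / torch.from_numpy preserve that dtype. A minimal sketch (an assumed setup, not the original code) that reproduces the same failure:

import torch
import torch.nn.functional as F

pred = torch.randn(4, 1, requires_grad=True)     # float32, like a network output
target = torch.randn(4, 1, dtype=torch.float64)  # float64, e.g. built from a NumPy array

loss = F.smooth_l1_loss(pred, target)  # the forward pass runs
loss.backward()  # RuntimeError: Found dtype Double but expected Float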

[Comments]:

Tags: python pytorch tensor reinforcement-learning


[Solution 1]:

The error says it expected a Float but received a Double. What you can do is cast the offending variable to the required dtype. For a PyTorch tensor, the cast is:

double_variable.float()  # or: double_variable.to(torch.float32)

(Python's built-in float(double_variable) only works on zero-dimensional, scalar tensors.)
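
A minimal, self-contained sketch of the fix (assuming, as is typical for this error, that the target tensor is the float64 one):

import torch
import torch.nn.functional as F

pred = torch.randn(4, 1, requires_grad=True)     # float32 model output
target = torch.randn(4, 1, dtype=torch.float64)  # float64 target (Double)

# Casting the target down to float32 before the loss lets backward succeed.
loss = F.smooth_l1_loss(pred, target.float())
loss.backward()  # no dtype error

In the question's train_net_ap, the equivalent change (assuming td_target inherited float64 from r or done_mask) would be F.smooth_l1_loss(self.v_ap(s), td_target.detach().float()), or casting the batch tensors to float32 inside make_batch.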

Or, if you need a more precise floating-point value or a specific number of decimal places, you can go through NumPy:

# Example: converting between 64-bit and 32-bit floats with NumPy
import numpy as np

v1 = 0.00582811585976   # a Python float (stored as a 64-bit double)
v32 = np.float32(v1)    # down-cast to 32-bit precision
v64 = float(v32)        # convert the 32-bit value back to a 64-bit Python float
print('%.14f' % v32)    # printing 14 decimal places shows the float32 rounding
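
To find which tensor is actually the Double one, it helps to print the dtype of everything that enters the loss. A hypothetical diagnostic for the top of the question's train_net_ap:

# Hypothetical diagnostic: locate the float64 tensor among the batch outputs.
s, a, r, s_prime, done_mask, prob_a = self.make_batch(idx)
for name, t in [("s", s), ("a", a), ("r", r), ("s_prime", s_prime),
                ("done_mask", done_mask), ("prob_a", prob_a)]:
    print(name, t.dtype)  # the culprit prints torch.float64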
  

[Discussion]:
