Posted: 2021-10-29 06:54:23
Question:
I created an activation-function class, Threshold, that is meant to operate on a one-hot-encoded image tensor. The function performs min-max feature scaling on each channel, followed by thresholding.
import torch
import torch.nn as nn

class Threshold(nn.Module):
    def __init__(self, threshold=.5):
        super().__init__()
        if threshold < 0.0 or threshold > 1.0:
            raise ValueError("Threshold value must be in [0,1]")
        self.threshold = threshold

    def min_max_fscale(self, input):
        r"""
        Applies min-max feature scaling to the input. Each channel is treated
        individually. The input is assumed to be N x C x H x W
        (a one-hot-encoded prediction).
        """
        for i in range(input.shape[0]):  # N
            for j in range(input.shape[1]):  # C
                min = torch.min(input[i][j])
                max = torch.max(input[i][j])
                input[i][j] = (input[i][j] - min) / (max - min)
        return input

    def forward(self, input):
        assert len(input.shape) == 4, f"input has wrong number of dims. Must have dim = 4 but has shape {input.shape}"
        input = self.min_max_fscale(input)
        return (input >= self.threshold) * 1.0
When I use the function, I get the following error; I assume this is because the gradient is not computed automatically.
Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
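The error can be reproduced with a short, self-contained sketch. The class below is a condensed, vectorized equivalent of the one above, and the input shape (1 x 2 x 4 x 4) is an arbitrary assumption for illustration:

```python
import torch
import torch.nn as nn

class Threshold(nn.Module):
    """Condensed, vectorized equivalent of the class above."""
    def __init__(self, threshold=0.5):
        super().__init__()
        if threshold < 0.0 or threshold > 1.0:
            raise ValueError("Threshold value must be in [0,1]")
        self.threshold = threshold

    def forward(self, input):
        # per-channel min-max feature scaling over the H and W dims
        mins = input.amin(dim=(2, 3), keepdim=True)
        maxs = input.amax(dim=(2, 3), keepdim=True)
        scaled = (input - mins) / (maxs - mins)
        # the comparison yields a bool tensor with no grad_fn,
        # so the autograd graph is cut at this point
        return (scaled >= self.threshold) * 1.0

x = torch.rand(1, 2, 4, 4, requires_grad=True)  # arbitrary test shape
out = Threshold()(x)
print(out.requires_grad)  # False: output is detached from the graph

err = None
try:
    out.sum().backward()
except RuntimeError as e:
    err = e
print(err)  # "element 0 of tensors does not require grad ..."
```

The backward call fails because the hard comparison has no gradient with respect to its input, which is exactly the behavior the question is about.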
I have already looked at How to properly update the weights in PyTorch?, but I don't know how to apply it to my case.
How can I compute the gradient for this function?
Thanks for your help.
Comments:
Tags: pytorch gradient activation