[Posted]: 2020-07-28 10:38:11
[Problem description]:
I am trying to perform static post-training quantization in PyTorch. For this example, I am trying to quantize a Conv2d layer that has a bias:
import torch
import torch.nn as nn
import torch.quantization as tq

def quantize(model, input_shape):
    with torch.no_grad():
        # model = tq.QuantWrapper(model)
        observer = tq.PerChannelMinMaxObserver()
        model.qconfig = torch.quantization.QConfig(
            activation=tq.MinMaxObserver,
            weight=observer.with_args(dtype=torch.qint8,
                                      qscheme=torch.per_channel_affine))
        # model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
        model = tq.QuantWrapper(model)
        tq.prepare(model, inplace=True)
        # Calibrate the observers with constant inputs
        for i in range(1000):
            x = torch.ones(2, *input_shape)
            # x = torch.randn(2, *input_shape)
            tmp = model(x)
        tq.convert(model, inplace=True)
    return model

input_shape = (5, 7, 7)
model_b = nn.Conv2d(input_shape[0], 2, 3, bias=True)
for p in model_b.parameters():
    torch.nn.init.zeros_(p)
model_b.bias.data.fill_(.5)
model_b = quantize(model_b, input_shape)
model_b.eval()
The PyTorch documentation clearly states that the bias is not quantized and is kept as a float tensor. The integer representation of the output yields:
tensor([[[[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255]],
[[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255],
[255, 255, 255, 255, 255]]]], dtype=torch.uint8)
However, the floating-point representation yields:
tensor([[[[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]],
[[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000],
[0.5000, 0.5000, 0.5000, 0.5000, 0.5000]]]], size=(1, 2, 5, 5),
dtype=torch.quint8, quantization_scheme=torch.per_tensor_affine,
scale=0.0019607844296842813, zero_point=0)
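As a side note, these two views of the same quantized tensor come from its `int_repr()` and `dequantize()` methods. A minimal, self-contained sketch (the scale `0.5 / 255` matches the `0.0019607844...` printed above; the tensor shape is illustrative):

```python
import torch

# Quantize a tensor of 0.5s with the same scale/zero_point as in the output above.
t = torch.full((2, 2), 0.5)
q = torch.quantize_per_tensor(t, scale=0.5 / 255, zero_point=0,
                              dtype=torch.quint8)
print(q.int_repr())    # uint8 tensor of 255s: round(0.5 / (0.5/255)) = 255
print(q.dequantize())  # float tensor of 0.5000s: 255 * (0.5/255) = 0.5
```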
I searched for information on this and concluded that the scale and zero point used to requantize the convolution output account for the bias, and that during the GEMM the bias is quantized to int32_t before being added to the int32_t accumulator of the GEMM. From the example above, if the bias were simply cast to int32_t, both the integer and the float outputs would be 0 (since casting 0.5 truncates to 0).
My question is: if the bias is stored unquantized as a float tensor, how does it get converted to int32_t during the computation?
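For what it's worth, my current understanding (an assumption, not confirmed by the PyTorch docs) is that the backend quantizes the bias on the fly with scale `input_scale * weight_scale` and zero point 0, so it can be added directly to the int32 accumulator. A numeric sketch with made-up scales (the values `input_scale`, `weight_scale` below are illustrative, not taken from the example above):

```python
# Hedged sketch of on-the-fly int32 bias quantization.
# Assumed illustrative values, not from the example above:
input_scale = 0.02    # scale of the quantized input activation
weight_scale = 0.1    # (per-channel) weight scale
bias_fp32 = 0.5       # float bias stored in the quantized module

# Assumed rule: bias scale = input_scale * weight_scale, zero_point = 0,
# so the quantized bias lives in the same units as the int32 GEMM accumulator.
bias_scale = input_scale * weight_scale
bias_int32 = int(round(bias_fp32 / bias_scale))
print(bias_int32)               # 250
print(bias_int32 * bias_scale)  # 0.5 — dequantizing recovers the bias
```

Under this rule the bias is never cast to int32 directly (which would give 0 here); it is rescaled first, which is consistent with the float output of 0.5 shown above.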
[Discussion]:
Tags: python pytorch quantization