Title: Convert custom Convolution from PyTorch to Tensorflow (2.2.0)
Posted: 2020-09-19 21:10:17
Question:

I am currently trying to convert a custom convolution from PyTorch to Tensorflow (v2.2.0).

The convolution is defined in PyTorch as:

    self.quantizer = q = nn.Conv1d(1, 2*nq, kernel_size=1, bias=True)
    a = (nq-1) / gap
    #1st half = lines passing to (min+x,1) and (min+x+1/a,0) with x = {nq-1..0}*gap/(nq-1)
    q.weight.data[:nq] = -a
    q.bias.data[:nq] = torch.from_numpy(a*min + np.arange(nq, 0, -1)) # b = 1 + a*(min+x)
    #2nd half = lines passing to (min+x,1) and (min+x-1/a,0) with x = {nq-1..0}*gap/(nq-1)
    q.weight.data[nq:] = a
    q.bias.data[nq:] = torch.from_numpy(np.arange(2-nq, 2, 1) - a*min) # b = 1 - a*(min+x)
    # first and last one are special: just horizontal straight line
    q.weight.data[0] = q.weight.data[-1] = 0
    q.bias.data[0] = q.bias.data[-1] = 1

where `nq = 20`, `min = 0`, `max = 1`.
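For reference, the weight and bias values this initialization produces can be sketched in plain NumPy. Note that `gap` is not shown in the snippet; it is assumed here to be `max - min`, which matches the `(nq-1)/gap` slope formula:

```python
import numpy as np

nq, vmin, vmax = 20, 0.0, 1.0   # values stated in the question
gap = vmax - vmin               # assumption: gap = max - min
a = (nq - 1) / gap

# Weights of the 1x1 Conv1d (one scalar per output channel):
# first half slope -a, second half slope +a
weight = np.empty(2 * nq, dtype=np.float32)
weight[:nq] = -a
weight[nq:] = a

# Biases: b = 1 + a*(min+x) for the first half, b = 1 - a*(min+x) for the second
bias = np.empty(2 * nq, dtype=np.float32)
bias[:nq] = a * vmin + np.arange(nq, 0, -1)
bias[nq:] = np.arange(2 - nq, 2, 1) - a * vmin

# First and last quantization bins are flat horizontal lines
weight[0] = weight[-1] = 0
bias[0] = bias[-1] = 1

print(weight.shape, bias.shape)  # (40,) (40,)
```

These are exactly the 40 scalars the `kernel_size=1` convolution applies independently at every position of the input signal.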

My reimplementation looks like this:

my_weight = my_init_weight((1,1,nq*2))
q = tf.nn.convolution(input_q, my_weight)
q = tf.nn.bias_add(q, my_init_bias((40,1), tf.float32))

using these functions as weight and bias initializers:

def my_init_weight(shape, dtype=None):
    weights = np.zeros(shape, dtype=np.float32)
    weights[:, :, :nq] = -a
    weights[:, :, nq:] = a
    weights[:, :, 0] = weights[:, :, -1] = 0
    return tf.convert_to_tensor(weights, dtype=tf.float32)

def my_init_bias(shape, dtype=None):
    weights = np.zeros(shape[0], dtype=np.float32)
    weights[:nq] = a*min + np.arange(nq, 0, -1)
    weights[nq:] = np.arange(2-nq, 2, 1) - a*min
    weights[0] = weights[-1] = 1
    return weights

The input is a matrix of shape (1681, 1, 1600) for PyTorch (channels-first) and (1681, 1600, 1) for Tensorflow (channels-last); the outputs are (1681, 40, 1600) and (1681, 1600, 40) respectively. So the layouts should be correct, yet the outputs of the two convolutions are different.
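The layout difference by itself cannot explain a numerical mismatch: for a `kernel_size=1` convolution with a single input channel, channels-first and channels-last produce the same numbers up to a transpose. A pure-NumPy sketch (with small illustrative shapes, not the question's actual 1681×1600 data) of that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
x_torch = rng.standard_normal((4, 1, 16)).astype(np.float32)  # (N, C, L), channels-first
x_tf = np.transpose(x_torch, (0, 2, 1))                       # (N, L, C), channels-last

w = rng.standard_normal(40).astype(np.float32)  # 40 output channels, kernel_size=1
b = rng.standard_normal(40).astype(np.float32)

# A kernel_size=1, C_in=1 convolution is just a per-position affine map:
out_torch = w[None, :, None] * x_torch + b[None, :, None]  # (N, 40, L)
out_tf = w[None, None, :] * x_tf + b[None, None, :]        # (N, L, 40)

# Same numbers, only the axis order differs
assert np.allclose(np.transpose(out_torch, (0, 2, 1)), out_tf)
```

So if the shapes line up but the values diverge, the discrepancy must come from a later step in the pipeline, not from the convolution itself.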

Input and output for Tensorflow on a random 100×100 image:

my_weight = my_init_weight((1,1,nq*2))
my_weight = tf.nn.bias_add(my_weight, my_init_bias((40,1), tf.float32))
q = tf.nn.convolution(test_conv, my_weight)

q_left, q_right = tf.split(q, 2, axis=2)
q = tf.math.minimum(q_left, q_right)
nbs = tf.reduce_sum(q, axis=0)

Input and output for PyTorch on a random 100×100 image:

output = q(input_t_t)
output = torch.min(output[:,:nq], output[:,nq:]).clamp(min=0)    
nbs = output.sum(dim=-1)

Tags: python tensorflow pytorch


    Solution 1:

    OK, I found the fix:

    I had forgotten to port the `.clamp(min=0)`.

    Adding `q = tf.clip_by_value(q, 0, tf.keras.backend.max(q))`

    my_weight = my_init_weight((1,1,nq*2))
    my_weight = tf.nn.bias_add(my_weight, my_init_bias((40,1), tf.float32))
    q = tf.nn.convolution(test_conv, my_weight)
    
    q_left, q_right = tf.split(q, 2, axis=2)
    q = tf.math.minimum(q_left, q_right)
    q = tf.clip_by_value(q, 0, tf.keras.backend.max(q))  # <-----------
    nbs = tf.reduce_sum(q, axis=0)
    

    solved the problem.
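As a side note, PyTorch's `.clamp(min=0)` is just an elementwise ReLU, so `tf.nn.relu(q)` (or `tf.maximum(q, 0)`) would be a simpler equivalent than clipping against the tensor's maximum. A pure-NumPy check of the equivalence (mirroring the TF ops, since the logic is purely elementwise):

```python
import numpy as np

q = np.array([-2.0, -0.5, 0.0, 0.3, 5.0], dtype=np.float32)

clamped = np.clip(q, 0, q.max())  # mirrors tf.clip_by_value(q, 0, max(q))
relu = np.maximum(q, 0)           # mirrors tf.nn.relu / torch .clamp(min=0)

assert np.allclose(clamped, relu)
```

One caveat with the `clip_by_value` form: if every value in `q` happened to be negative, the upper bound `max(q)` would fall below the lower bound 0, which `tf.clip_by_value` does not allow; `tf.nn.relu` has no such edge case.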
