【Title】: TensorFlow Sigmoid Cross Entropy with Logits for 1D data
【Posted】: 2023-03-16 19:00:01
【Description】:

Context

Suppose we have some 1D data (e.g. a time series), where every sequence has a fixed length l:

        # [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11] index
example = [ 0,  1,  1,  0, 23, 22, 20, 14,  9,  2,  0,  0] # l = 12

We want to perform semantic segmentation over n classes:

          # [ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11]    index            
labeled = [
            [ 0,  1,  1,  0,  0,  0,  0,  0,  0,  0,  0,  0], # class 1
            [ 0,  0,  0,  0,  1,  1,  1,  1,  0,  0,  0,  0], # class 2
            [ 0,  0,  0,  0,  0,  0,  0,  1,  1,  1,  0,  0], # class 3
           #[                     ...                      ],
            [ 1,  1,  1,  0,  0,  0,  0,  0,  1,  1,  1,  1], # class n
 ]

The output for a single example then has shape [n, l] (i.e. data_format is not "channels_last"), and the batched output has shape [b, n, l], where b is the number of examples in the batch.

The classes are independent, so my understanding is that sigmoid cross entropy, rather than softmax cross entropy, is the applicable loss here.
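As a quick illustration of why sigmoid rather than softmax fits the multi-label setting, a minimal sketch (assuming TF 1.x, as in the Colab below): sigmoid treats every (class, position) entry as its own binary problem, while softmax reduces over the class axis and expects the labels to form a distribution over classes, which multi-label targets generally do not.

import numpy as np
import tensorflow as tf  # TF 1.x

logits = np.array([[ 2.0, -1.0],
                   [ 0.5,  0.5]])  # shape [classes, positions]
labels = np.array([[ 1.0,  0.0],
                   [ 1.0,  1.0]])  # overlapping class labels are allowed

with tf.Session() as sess:
  # one independent binary problem per (class, position) entry
  sig = sess.run(tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
  # softmax instead reduces over the class axis (the last axis after transposing),
  # and expects the labels to sum to 1 across classes -- which they do not here
  soft = sess.run(tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels.T, logits=logits.T))

print(sig.shape)   # (2, 2): one loss per element
print(soft.shape)  # (2,): one loss per position, the class axis is collapsed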


Questions

I have a few small, related questions about the expected format and use of tf.nn.sigmoid_cross_entropy_with_logits:

  1. Since the network outputs a tensor with the same shape as the batched labels, should I train the network under the assumption that it outputs logits, or take the Keras approach (see Keras's binary_crossentropy) and assume it outputs probabilities?

  2. Given the 1D segmentation problem, should I call tf.nn.sigmoid_cross_entropy_with_logits with

    • data_format='channels_first' (as above), or
    • data_format='channels_last' (example.T)

    if I want the labels to be assigned per channel?

  3. Should the loss op passed to the optimizer be:

    • tf.nn.sigmoid_cross_entropy_with_logits(labels, logits),
    • tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels, logits)), or
    • tf.losses.sigmoid_cross_entropy?

    (A short shape sketch follows right after this list.)
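For reference alongside question 3, a small sketch (assuming TF 1.x) of what each candidate actually returns; only the latter two are scalars:

import numpy as np
import tensorflow as tf  # TF 1.x

labels = np.random.randint(0, 2, size=(2, 5, 10)).astype(float)  # [batch, classes, pixels]
logits = np.random.randn(2, 5, 10)

raw    = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits)
mean   = tf.reduce_mean(raw)
losses = tf.losses.sigmoid_cross_entropy(multi_class_labels=labels, logits=logits)

with tf.Session() as sess:
  print(sess.run(raw).shape)     # (2, 5, 10): one loss per element, same shape as logits
  print(sess.run(mean).shape)    # (): a scalar
  print(sess.run(losses).shape)  # (): also a scalar (mean over elements by default)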

Code

This Colab highlights my confusion and demonstrates that data_format does in fact matter..., but the documentation does not make clear what is expected.

Dummy data

import random
import numpy as np
import tensorflow as tf  # TF 1.x (tf.Session is used below)

c = 5  # number of channels (label classes)
p = 10 # number of positions ('pixels')


# data_format = 'channels_first', shape = [classes, pixels]
# 'logits' for 2 examples
pred_1 = np.array([[random.random() for v in range(p)] for n in range(c)]).astype(float)
pred_2 = np.array([[random.random() for v in range(p)] for n in range(c)]).astype(float)

# 'ground truth' for the above 2 examples
targ_1 = np.array([[0 if random.random() < 0.8 else 1 for v in range(p)] for n in range(c)]).astype(float)
targ_2 = np.array([[0 if random.random() < 0.8 else 1 for v in range(p)] for n in range(c)]).astype(float)

# batched form of the above examples
preds = np.array([pred_1, pred_2])
targs = np.array([targ_1, targ_2])


# data_format = 'channels_last', shape = [pixels, classes]
t_pred_1 = pred_1.T
t_pred_2 = pred_2.T
t_targ_1 = targ_1.T
t_targ_2 = targ_2.T

t_preds = np.array([t_pred_1, t_pred_2])
t_targs = np.array([t_targ_1, t_targ_2])
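A small sanity check, run directly after the cell above, just to make the assumed layouts explicit:

assert preds.shape   == (2, c, p)  # [batch, classes, pixels] for 'channels_first'
assert t_preds.shape == (2, p, c)  # [batch, pixels, classes] for 'channels_last'
assert targs.shape == preds.shape and t_targs.shape == t_preds.shape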

Losses

tf.nn

# calculate individual losses for 'channels_first'
loss_1 = tf.nn.sigmoid_cross_entropy_with_logits(labels=targ_1, logits=pred_1)
loss_2 = tf.nn.sigmoid_cross_entropy_with_logits(labels=targ_2, logits=pred_2)
# calculate batch loss for 'channels_first'
b_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targs, logits=preds)

# calculate individual losses for 'channels_last'
t_loss_1 = tf.nn.sigmoid_cross_entropy_with_logits(labels=t_targ_1, logits=t_pred_1)
t_loss_2 = tf.nn.sigmoid_cross_entropy_with_logits(labels=t_targ_2, logits=t_pred_2)
# calculate batch loss for 'channels_last'
t_b_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=t_targs, logits=t_preds)
# get actual tensors
with tf.Session() as sess:
  # loss for 'channels_first'
  l1   = sess.run(loss_1)
  l2   = sess.run(loss_2)
  # batch loss for 'channels_first'
  bl   = sess.run(b_loss)

  # loss for 'channels_last'
  t_l1 = sess.run(t_loss_1)
  t_l2 = sess.run(t_loss_2)

  # batch loss for 'channels_last'
  t_bl = sess.run(t_b_loss)

tf.reduce_mean(tf.nn)

# calculate individual losses for 'channels_first'
rm_loss_1 = tf.reduce_mean(loss_1)
rm_loss_2 = tf.reduce_mean(loss_2)
# calculate batch loss for 'channels_first'
rm_b_loss = tf.reduce_mean(b_loss)

# calculate individual losses for 'channels_last'
rm_t_loss_1 = tf.reduce_mean(t_loss_1)
rm_t_loss_2 = tf.reduce_mean(t_loss_2)
# calculate batch loss for 'channels_last'
rm_t_b_loss = tf.reduce_mean(t_b_loss)
# get actual tensors
with tf.Session() as sess:
  # loss for 'channels_first'
  rm_l1   = sess.run(rm_loss_1)
  rm_l2   = sess.run(rm_loss_2)
  # batch loss for 'channels_first'
  rm_bl   = sess.run(rm_b_loss)

  # loss for 'channels_last'
  rm_t_l1 = sess.run(rm_t_loss_1)
  rm_t_l2 = sess.run(rm_t_loss_2)

  # batch loss for 'channels_last'
  rm_t_bl = sess.run(rm_t_b_loss)
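One aside worth noting (my own check, not from the original post): because both examples contain the same number of elements, the mean over the batched tensor equals the mean of the per-example means; with unequally sized examples the two would differ.

# run after the cell above; rm_l1, rm_l2 and rm_bl are the evaluated means
print(np.isclose(rm_bl, (rm_l1 + rm_l2) / 2))  # True: both examples have equal size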

tf.losses

# calculate individual losses for 'channels_first'
tf_loss_1 = tf.losses.sigmoid_cross_entropy(multi_class_labels=targ_1, logits=pred_1)
tf_loss_2 = tf.losses.sigmoid_cross_entropy(multi_class_labels=targ_2, logits=pred_2)
# calculate batch loss for 'channels_first'
tf_b_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=targs, logits=preds)

# calculate individual losses for 'channels_last'
tf_t_loss_1 = tf.losses.sigmoid_cross_entropy(multi_class_labels=t_targ_1, logits=t_pred_1)
tf_t_loss_2 = tf.losses.sigmoid_cross_entropy(multi_class_labels=t_targ_2, logits=t_pred_2)
# calculate batch loss for 'channels_last'
tf_t_b_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=t_targs, logits=t_preds)
# get actual tensors
with tf.Session() as sess:
  # loss for 'channels_first'
  tf_l1   = sess.run(tf_loss_1)
  tf_l2   = sess.run(tf_loss_2)
  # batch loss for 'channels_first'
  tf_bl   = sess.run(tf_b_loss)

  # loss for 'channels_last'
  tf_t_l1 = sess.run(tf_t_loss_1)
  tf_t_l2 = sess.run(tf_t_loss_2)

  # batch loss for 'channels_last'
  tf_t_bl = sess.run(tf_t_b_loss)

Testing equivalence

data_format equivalence

# loss _should_(?) be the same for 'channels_first' and 'channels_last' data_format
# test example_1
e1 = (l1 == t_l1.T).all()
# test example 2
e2 = (l2 == t_l2.T).all()

# loss calculated for each example and then batched together should be the same 
# as the loss calculated on the batched examples
ea = (np.array([l1, l2]) == bl).all()
t_ea = (np.array([t_l1, t_l2]) == t_bl).all()

# loss calculated on the batched examples for 'channels_first' should be the same
# as loss calculated on the batched examples for 'channels_last'
eb = (bl == np.transpose(t_bl, (0, 2, 1))).all()


e1, e2, ea, t_ea, eb
# (True, False, False, False, True) <- changes every time, so True is happenstance

Equivalence between tf.reduce_mean and tf.losses

l_e1 = tf_l1 == rm_l1
l_e2 = tf_l2 == rm_l2
l_eb = tf_bl == rm_bl

l_t_e1 = tf_t_l1 == rm_t_l1
l_t_e2 = tf_t_l2 == rm_t_l2
l_t_eb = tf_t_bl == rm_t_bl

l_e1, l_e2, l_eb, l_t_e1, l_t_e2, l_t_eb
# (False, False, False, False, False, False)

【Comments】:

  • I think this answer might help you.
  • @today I had read that answer before, but it is still not quite clear to me: the dimension over which independence holds is never explicitly demonstrated, and my results in the Colab differ from what that answer implies.

Tags: python tensorflow machine-learning computer-vision semantic-segmentation


【Solution 1】:

tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(...)) and tf.losses.sigmoid_cross_entropy(...) (with its default arguments) both compute the same thing. The problem lies in your tests, which use == to compare floating-point numbers. Use np.isclose instead to check whether two floats are equal:

# loss _should_(?) be the same for 'channels_first' and 'channels_last' data_format
# test example_1
e1 = np.isclose(l1, t_l1.T).all()
# test example 2
e2 = np.isclose(l2, t_l2.T).all()

# loss calculated for each example and then batched together should be the same 
# as the loss calculated on the batched examples
ea = np.isclose(np.array([l1, l2]), bl).all()
t_ea = np.isclose(np.array([t_l1, t_l2]), t_bl).all()

# loss calculated on the batched examples for 'channels_first' should be the same
# as loss calculated on the batched examples for 'channels_last'
eb = np.isclose(bl, np.transpose(t_bl, (0, 2, 1))).all()


e1, e2, ea, t_ea, eb
# (True, True, True, True, True)

还有:

l_e1 = np.isclose(tf_l1, rm_l1)
l_e2 = np.isclose(tf_l2, rm_l2)
l_eb = np.isclose(tf_bl, rm_bl)

l_t_e1 = np.isclose(tf_t_l1, rm_t_l1)
l_t_e2 = np.isclose(tf_t_l2, rm_t_l2)
l_t_eb = np.isclose(tf_t_bl, rm_t_bl)

l_e1, l_e2, l_eb, l_t_e1, l_t_e2, l_t_eb
# (True, True, True, True, True, True)
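To see why exact == fails here independently of TensorFlow: the two code paths reduce the same numbers in different orders, and floating-point addition is not associative, so the results may differ in the last bits. A standalone illustration:

import numpy as np

x = np.random.randn(1000).astype(np.float32)
a = x.sum()                       # one summation order
b = x[::2].sum() + x[1::2].sum()  # a different order over the same numbers

print(a == b)            # frequently False
print(np.isclose(a, b))  # True: equal up to rounding error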

【Comments】:

  • Ah, that makes sense... So if each class is supposed to be independent, why doesn't data_format matter? Or is it each item within each class that is independent?
  • @SumNeuron The sigmoid and the cross-entropy loss are computed separately for every single element (this follows from your assumption: each element may belong to multiple classes, so the classes are independent). That is why data_format does not matter here. (A numerical check of this is sketched after this thread.)
  • To clarify, should there be an activation function before sigmoid_cross_entropy_loss? Or should the graph have two output nodes: one computing the loss with sigmoid cross entropy, and one returning the sigmoid of the output layer?
  • @SumNeuron tf.nn.sigmoid_cross_entropy_with_logits and tf.losses.sigmoid_cross_entropy first apply the sigmoid (which is why they expect logits as input) and then compute the cross-entropy loss. Therefore you should not apply a sigmoid yourself beforehand.
  • OK, now I'm a bit confused (sorry about that). I was reading maxim's answer, where he says the network's output is considered to be "logits" (whereas Keras assumes probabilities by default). If that is the case, I pass the output layer to the sigmoid C.E. to compute the loss, but can't I then add another output node to the graph that returns probabilities? (A sketch addressing this follows below.)
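To back up the elementwise point in the thread above with the formula from the TensorFlow documentation: per element, the loss is max(x, 0) - x*z + log(1 + exp(-|x|)) for logit x and label z, so the data layout never enters the computation. A quick numerical check (TF 1.x):

import numpy as np
import tensorflow as tf  # TF 1.x

x = np.random.randn(5, 10)                               # logits, any layout
z = np.random.randint(0, 2, size=(5, 10)).astype(float)  # labels

# the documented elementwise formula
manual = np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

with tf.Session() as sess:
  tf_loss = sess.run(tf.nn.sigmoid_cross_entropy_with_logits(labels=z, logits=x))

print(np.isclose(manual, tf_loss).all())  # True: purely elementwise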
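And for the last comment in the thread: one common pattern is two heads off the same logits node, where the loss consumes the raw logits while a separate tf.sigmoid node yields probabilities at inference time. A minimal sketch (TF 1.x; the layer sizes and architecture here are made up purely for illustration):

import tensorflow as tf  # TF 1.x

n, l = 5, 12  # classes and sequence length, matching the toy example above

inputs = tf.placeholder(tf.float32, [None, l])     # [batch, l]
labels = tf.placeholder(tf.float32, [None, n, l])  # [batch, n, l]

hidden = tf.layers.dense(inputs, 64, activation=tf.nn.relu)
logits = tf.reshape(tf.layers.dense(hidden, n * l), [-1, n, l])  # note: no activation

# training head: the loss consumes raw logits
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

# inference head: probabilities from the very same logits node
probs = tf.sigmoid(logits)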