【Posted on】: 2021-06-05 03:55:45
【Problem description】:
I am following the TensorFlow tutorial https://www.tensorflow.org/guide/migrate. Here is an example:
import tensorflow as tf
import tensorflow.compat.v1 as v1  # imports as used in the linked migration guide

def model(x, training, scope='model'):
  with v1.variable_scope(scope, reuse=v1.AUTO_REUSE):
    x = v1.layers.conv2d(x, 32, 3, activation=v1.nn.relu,
          kernel_regularizer=lambda x: 0.004 * tf.reduce_mean(x**2))
    x = v1.layers.max_pooling2d(x, (2, 2), 1)
    x = v1.layers.flatten(x)
    x = v1.layers.dropout(x, 0.1, training=training)
    x = v1.layers.dense(x, 64, activation=v1.nn.relu)
    x = v1.layers.batch_normalization(x, training=training)
    x = v1.layers.dense(x, 10)
    return x
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
print(train_out)
print(test_out)
train_out, where training=True, is
tf.Tensor([[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]], shape=(1, 10), dtype=float32)
while test_out, where training=False, is a random non-zero vector:
tf.Tensor(
[[ 0.379358 -0.55901194 0.48704922 0.11619566 0.23902717 0.01691487
0.07227738 0.14556988 0.2459927 0.2501198 ]], shape=(1, 10), dtype=float32)
I have read the docs at https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization and still don't understand why. Help!
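For what it's worth, my current guess is that this comes from how batch normalization computes its statistics in each mode. A minimal NumPy sketch of the two modes (the epsilon value and the zero/one initialization of the moving statistics are assumptions based on the layer's documented defaults, not read from the model above):

```python
import numpy as np

# One sample with identical features, standing in for the (1, ...) batch above.
x = np.ones((1, 10), dtype=np.float32)
eps = 1e-3  # assumed epsilon, matching the layer's documented default

# training=True: normalize with *batch* statistics.
# With a batch of a single example, mean == x and var == 0,
# so every feature becomes (x - x) / sqrt(0 + eps) = 0.
batch_mean = x.mean(axis=0)
batch_var = x.var(axis=0)
train_out = (x - batch_mean) / np.sqrt(batch_var + eps)

# training=False: normalize with the *moving* statistics,
# which start at mean=0 and var=1, so x passes through nearly unchanged.
moving_mean = np.zeros(10, dtype=np.float32)
moving_var = np.ones(10, dtype=np.float32)
test_out = (x - moving_mean) / np.sqrt(moving_var + eps)

print(train_out)  # all zeros
print(test_out)   # approximately all ones
```

If this is right, the all-zero train_out would just be the zero batch-norm output propagated through the final dense layer, whose bias is initialized to zero.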
【Question discussion】:
Tags: tensorflow batch-normalization