[Question Title]: Rebuild Keras-model in Tensorflow
[Posted]: 2019-11-11 01:01:47
[Question]:

I'm new to Tensorflow, and I'm trying to rebuild a simple network that I built in Keras (TF backend) with Tensorflow's Python API. It is a simple function approximator (z = sin(x + y)).

I've tried different architectures, optimizers and learning rates, but I can't get the new network to train properly. To me, however, the networks look identical. Both get the exact same feature vectors and labels:

# making training data
start = 0
end = 2*np.pi
samp = 1000
num_samp = samp**2
step = end / samp

x_train  = np.arange(start, end, step)
y_train  = np.arange(start, end, step)

data = np.array(np.meshgrid(x_train,y_train)).T.reshape(-1,2)
z_label = np.sin(data[:,0] + data[:,1])
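To make the data shapes concrete, here is the same construction on a smaller grid (samp=10 instead of the 1000 used above, purely for illustration): meshgrid plus transpose/reshape yields one (x, y) pair per row, while the label stays a 1-D vector.

```python
import numpy as np

# Same pipeline as above, but with samp=10 so the shapes are easy to inspect;
# the original uses samp=1000, giving 1,000,000 rows.
start, end, samp = 0, 2 * np.pi, 10
step = end / samp

x_train = np.arange(start, end, step)
y_train = np.arange(start, end, step)

# meshgrid -> (2, 10, 10); .T -> (10, 10, 2); reshape -> one (x, y) pair per row
data = np.array(np.meshgrid(x_train, y_train)).T.reshape(-1, 2)
z_label = np.sin(data[:, 0] + data[:, 1])

print(data.shape)     # (100, 2)
print(z_label.shape)  # (100,) -- note: 1-D, not (100, 1)
```

The (100,) versus (100, 1) distinction turns out to matter later.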

Here is the Keras model:

#start model
model = Sequential()

#stack layers
model.add(Dense(units=128, activation='sigmoid', input_dim=2, name='dense_1'))
model.add(Dense(units=64, activation='sigmoid', input_dim=128, name='dense_2'))
model.add(Dense(units=1, activation='linear', name='output'))

#compile model
model.compile(loss='mean_squared_error',
              optimizer='sgd',
              metrics=['accuracy'])

checkpointer = ModelCheckpoint(filepath='./weights/weights.h5',
                               verbose=1, save_best_only=True)

tensorboard = TensorBoard(log_dir="logs/{}".format(time()))

model.fit(data, z_label, epochs=20, batch_size=32,
          shuffle=True, validation_data=(data_val, z_label_val),
          callbacks=[checkpointer, tensorboard])

And here is the new network built with Tensorflow's Python API:

# hyperparameter
n_inputs = 2
n_hidden1 = 128
n_hidden2 = 64
n_outputs = 1
learning_rate = 0.01

# construction phase
X = tf.placeholder(tf.float32, shape=(None, n_inputs), name='input')
y = tf.placeholder(tf.float32, shape=(None), name="target")

hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", activation=tf.nn.sigmoid)
hidden2 = tf.layers.dense(hidden1, n_hidden2, name="hidden2", activation=tf.nn.sigmoid)
logits = tf.layers.dense(hidden2, n_outputs, activation=None, name='output')  # None = linear activation

loss = tf.reduce_mean(tf.square(logits - y),  name='loss')

optimizer = tf.train.GradientDescentOptimizer(learning_rate)

training_op = optimizer.minimize(loss, name='train')

init = tf.global_variables_initializer()

saver = tf.train.Saver()

# --- execution phase ---
n_epochs = 40
batch_size = 32
n_batches = int(num_samp/batch_size)

with tf.Session() as sess:

    init.run()

    for epoch in range(n_epochs):
        print("Epoch: ", epoch, " Running...")
        loss_arr = np.array([])

        for iteration in range( n_batches ):
            start = iteration * batch_size
            end = start + batch_size

            sess.run(training_op, feed_dict={X: data[start:end], y: z_label[start:end] })
            loss_arr = np.append(loss_arr, loss.eval(feed_dict={X: data[start:end, :], y: z_label[start:end]}))

        mean_loss = np.mean(loss_arr)
        print("Epoch: ", epoch, " Calculated ==> Loss: ", mean_loss)
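One behavioral difference worth noting: `model.fit` shuffles the samples every epoch by default, while the manual loop above always feeds the same ordered slices. A minimal NumPy sketch of per-epoch shuffling (variable names are illustrative, not from the original code):

```python
import numpy as np

# Illustrative sketch: permute the sample indices once per epoch, then slice
# mini-batches from the permuted arrays, mirroring Keras' default shuffling.
rng = np.random.default_rng(0)
num_samp, batch_size = 12, 4
data = np.arange(num_samp * 2, dtype=np.float32).reshape(num_samp, 2)
z_label = np.sin(data[:, 0] + data[:, 1])

perm = rng.permutation(num_samp)              # fresh order each epoch
data_shuf, z_shuf = data[perm], z_label[perm]

for start in range(0, num_samp, batch_size):
    xb = data_shuf[start:start + batch_size]
    yb = z_shuf[start:start + batch_size]
    # sess.run(training_op, feed_dict={X: xb, y: yb.reshape(-1, 1)})
```

For a smooth target like sin(x + y) this is unlikely to be the main problem, but it makes the two training procedures comparable.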

While the Keras model trains correctly, with decreasing loss and sensible test results, the new model converges very quickly and then stops learning. Consequently, the results are completely useless.

Am I building/training the model incorrectly, or is Keras doing something in the background that I'm not aware of?

[Comments]:

    Tags: python tensorflow keras


    [Solution 1]:

    Solved it. The problem was the shape of the label vector: it was a flat vector of shape (1000000,). While Keras apparently can handle output and label vectors of different shapes, in Tensorflow the placeholder was initialized incorrectly, so the loss function

    loss = tf.reduce_mean(tf.square(logits - y),  name='loss')
    

    no longer made sense: broadcasting the (batch, 1) logits against the (batch,) labels silently produces a (batch, batch) matrix, so training failed. Adding

    z_label = z_label.reshape(-1,1)
    

    reshapes the label vector to (1000000, 1) and solves the problem. Alternatively, the shape of the placeholder can be specified more precisely:

    y = tf.placeholder(tf.float32, shape=(None,1), name="target")
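The broadcasting pitfall described above can be demonstrated in plain NumPy (TensorFlow follows the same rules), using a toy batch of 4:

```python
import numpy as np

# (batch, 1) network output minus (batch,) flat labels broadcasts to a
# (batch, batch) matrix of all pairwise differences, so reduce_mean averages
# batch*batch wrong pairings instead of batch per-sample errors.
batch = 4
logits = np.arange(batch, dtype=np.float32).reshape(-1, 1)  # (4, 1)
y_flat = np.arange(batch, dtype=np.float32)                 # (4,)
y_col = y_flat.reshape(-1, 1)                               # (4, 1)

print((logits - y_flat).shape)  # (4, 4) -- silently broadcast, wrong loss
print((logits - y_col).shape)   # (4, 1) -- elementwise, the intended loss
```

Either reshaping the labels or fixing the placeholder shape prevents the silent broadcast, since `tf.placeholder` raises an error if a (batch,) array is fed into a (None, 1) slot.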
    

    [Discussion]:
