【Posted】:2021-11-01 07:33:42
【Problem description】:
I want to access the training points during the training iterations and incorporate a soft constraint into my loss function using data points that are not included in the training set. I am using this post as a reference.
import numpy as np
import keras.backend as K
from keras.layers import Dense, Input
from keras.models import Model
# Some random training data and labels
features = np.random.rand(100, 5)
labels = np.random.rand(100, 2)
# Simple neural net with two outputs (matching the shapes of features and labels)
input_layer = Input((5,))
hidden_layer = Dense(16)(input_layer)
output_layer = Dense(2)(hidden_layer)
# Model
model = Model(inputs=input_layer, outputs=output_layer)
# Each training point is paired with another data point. In the real
# problem each point has multiple supporters, which is why I use a dict.
holder = np.random.rand(100, 5)
supporters = {}
for i, j in enumerate(holder, start=1):  # i indexes the ith training point (1-based)
    supporters[i] = j
# Custom loss: MSE plus a soft-constraint term
def custom_loss(y_true, y_pred):
    # Normal MSE loss
    mse = K.mean(K.square(y_true - y_pred), axis=-1)
    new_constraint = ...  # this is the part I cannot work out
    return mse + new_constraint
model.compile(loss=custom_loss, optimizer='sgd')
model.fit(features, labels, epochs=1, batch_size=1)
For simplicity, let's assume I want to minimize the minimum absolute difference between the predicted values and the predictions, made with fixed network weights, for the paired data stored in supporters. Also assume that I pass one training point per batch. However, I cannot figure out how to do this. I tried the approach below, but it is obviously not correct.
new_constraint = K.sum(y_pred - model.fit(supporters))
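To make the intent more concrete, here is a rough sketch of the direction I have in mind: instead of calling model.fit inside the loss, call the model directly on the supporter points and wrap that forward pass in stop_gradient so the supporter predictions are treated as fixed for the current step. The names make_loss and supporter_batch are placeholders I made up, all supporter points are stacked into a single array for simplicity, and I am not sure this is safe or correct:

supporter_batch = K.constant(holder)  # all supporter points as one (100, 5) tensor

def make_loss(model, supporter_batch):
    def loss_with_constraint(y_true, y_pred):
        # Normal MSE loss
        mse = K.mean(K.square(y_true - y_pred), axis=-1)
        # Predictions on the supporter points with the current weights;
        # stop_gradient keeps them out of the gradient computation
        support_preds = K.stop_gradient(model(supporter_batch))
        # |difference| between each prediction in the batch and each supporter
        # prediction, averaged over output dimensions -> shape (batch, n_supporters)
        diffs = K.mean(K.abs(K.expand_dims(y_pred, 1) - K.expand_dims(support_preds, 0)), axis=-1)
        # Soft constraint: minimum absolute difference over the supporters
        new_constraint = K.min(diffs, axis=-1)
        return mse + new_constraint
    return loss_with_constraint

model.compile(loss=make_loss(model, supporter_batch), optimizer='sgd')
model.fit(features, labels, epochs=1, batch_size=1)

The closure is only one way to give the loss access to the model and the supporter tensor; model.add_loss would presumably be an alternative.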
【Comments】:
Tags: tensorflow keras neural-network tensorflow2.0