【Question Title】: Train each "head" of a multi-output neural network independently
【Posted】: 2019-07-23 20:42:44
【Question】:

I am trying to train a model that uses a shared feature extractor and then splits into n "heads" made of small layers, each producing a different output.

When I train head "a" first, everything works fine, but when I switch to head "b", Python throws an InvalidArgumentError from TensorFlow. The same happens when I start with head "b" and then train head "a".

I tried following different approaches found on Stack Overflow, such as this one, but none of them worked.

I build my model as follows:

from keras.layers import Input, Dense, Flatten, Activation, ZeroPadding2D, LocallyConnected2D, advanced_activations
from keras.models import Model
from keras.optimizers import Adamax

alphaLeaky = 0.3

inputs =Input(shape=(state_shape[0],state_shape[1],state_shape[2]))
outputs=ZeroPadding2D(padding=(1,1))(inputs)
outputs=LocallyConnected2D(1, (6,6), activation='linear', padding='valid')(outputs) 
outputs=Flatten()(outputs) 
outputs=Dense(768,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs)                        
outputs=advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs)

outputs=Dense(512,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs)                  
outputs=advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs)

outputs1=Dense(256,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs)
outputs1=advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs1)
outputs1=Dense(action_number,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs1)     
outputs1=Activation('linear')(outputs1)

outputs2=Dense(256,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs)
outputs2=advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs2)
outputs2=Dense(action_number,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs2)     
outputs2=Activation('linear')(outputs2)

outputs3=Dense(256,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs)                        
outputs3=advanced_activations.LeakyReLU(alpha=alphaLeaky)(outputs3)
outputs3=Dense(action_number,kernel_initializer='lecun_uniform',bias_initializer='zeros')(outputs3)
outputs3=Activation('linear')(outputs3)

model1= Model(inputs=inputs, outputs=outputs1)
model2= Model(inputs=inputs, outputs=outputs2)
model3= Model(inputs=inputs, outputs=outputs3)

model1.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))

model2.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))

model3.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))

Then I train them with the fit method.

For example, model1.fit(...) works, but when I then run model2.fit(...) or model3.fit(...) I get this error message:

W tensorflow/core/framework/op_kernel.cc:993] Invalid argument: You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: dense_5/bias/read/_1075 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_60_dense_5/bias/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


Caused by op 'activation_1_target', defined at:
  File "main.py", line 100, in <module>
    agent.init_brain()
  File "/dds/work/DQL/dql_last_version/8th_code_multi/agent_per.py", line 225, in init_brain
    self.brain = Brain_2D(self.state_shape,self.action_number)
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 141, in __init__
    Brain.__init__(self, action_number)
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 20, in __init__
    self.models, self.full_model = self._create_model()
  File "/dds/work/DQL/dql_last_version/8th_code_multi/brain.py", line 216, in _create_model
    neuralNet1.compile(loss='mse', optimizer=Adamax(lr=PAS_INITIAL, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/keras/engine/training.py", line 755, in compile
    dtype=K.dtype(self.outputs[i]))
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 497, in placeholder
    x = tf.placeholder(dtype, shape=shape, name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1502, in placeholder
    name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2149, in _placeholder
    name=name)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/dds/miniconda/envs/dds/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'activation_1_target' with dtype float
         [[Node: activation_1_target = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
         [[Node: dense_5/bias/read/_1075 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_60_dense_5/bias/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

I only want to optimize the weights of the head I choose, but it seems that once some input has gone through one path of the network, the graph expects me to go through that same head again, even when I want to train the other weights.
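For what it's worth, in recent versions of tf.keras the pattern the question describes (several Model objects sharing the same layer instances, each compiled and fit independently) runs without the placeholder error; training one head updates only the shared trunk and that head. A minimal sketch with assumed shapes and layer sizes, not the original code:

```python
# Minimal sketch (assumed shapes/names): per-head models sharing a trunk,
# trained independently. Fitting model_a leaves head_b's weights untouched.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

state_dim, action_number = 8, 4

inp = Input(shape=(state_dim,))
trunk = Dense(32, activation="relu")(inp)          # shared feature extractor

head_a = Dense(action_number, name="head_a")(trunk)
head_b = Dense(action_number, name="head_b")(trunk)

model_a = Model(inp, head_a)
model_b = Model(inp, head_b)
model_a.compile(loss="mse", optimizer="adam")
model_b.compile(loss="mse", optimizer="adam")

# Snapshot head_b's weights, then train only model_a.
w_before = [w.copy() for w in model_b.get_layer("head_b").get_weights()]
x = np.random.rand(16, state_dim).astype("float32")
y = np.random.rand(16, action_number).astype("float32")
model_a.fit(x, y, epochs=1, verbose=0)
w_after = model_b.get_layer("head_b").get_weights()
```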

I thought about building a single model with several outputs:

model= Model(inputs=inputs, outputs=[outputs1,outputs2,outputs3,outputs4]) 

but I want each head to be trained on a different batch of data (I am working on a reinforcement learning project).

Thanks!

【Question Comments】:

    Tags: python machine-learning keras neural-network


    【Solution 1】:

    I solved my problem.

    I ended up compiling a single model, but with n inputs and n outputs, n being the number of heads. I associate a different batch with each input, so that each head is trained on a different data distribution.

    For the test phase, I simply duplicate the same input n times and feed all the copies to the model. It is probably not the best approach, but it works.
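    The approach described above can be sketched roughly as follows (assumed layer sizes and names, tf.keras functional API): one compiled model with n inputs and n outputs, a shared trunk reused for every input, a distinct batch per head at training time, and the same state duplicated n times at test time.

```python
# Sketch (assumed shapes/names) of one model with n inputs and n outputs,
# where the shared trunk layer instance is reused for every input.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

n_heads, state_dim, action_number = 3, 8, 4

shared = Dense(32, activation="relu")   # shared feature extractor, one instance

inputs, outputs = [], []
for i in range(n_heads):
    inp = Input(shape=(state_dim,), name=f"state_{i}")
    head = Dense(action_number, name=f"head_{i}")(shared(inp))
    inputs.append(inp)
    outputs.append(head)

model = Model(inputs=inputs, outputs=outputs)
model.compile(loss="mse", optimizer="adam")

# Training: one distinct batch (and target) per head.
xs = [np.random.rand(16, state_dim).astype("float32") for _ in range(n_heads)]
ys = [np.random.rand(16, action_number).astype("float32") for _ in range(n_heads)]
model.fit(xs, ys, epochs=1, verbose=0)

# Testing: duplicate the same state n times, one copy per input.
state = np.random.rand(1, state_dim).astype("float32")
preds = model.predict([state] * n_heads, verbose=0)
```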

    If you have any thoughts or ideas about my solution, don't hesitate to share them; I would be glad to see other approaches.

    Thanks

    【Comments】:

    • I have to implement a similar model: one input head and two output heads, with a bias shared between the two output heads. I am using the two output heads for object detection and image segmentation. In your case, do you feed the same dataset to both input heads, or different ones? I am still wondering how to implement two heads and train the model with two datasets. (I am using PyTorch.)