【Question title】: How do I divide a deep network into two separate networks?
【Posted】: 2023-03-22 20:19:02
【Question】:

I trained a network with two inputs. It works as an autoencoder. In the first part of the network, the inputs are fed in and processed, then passed through a Gaussian noise layer into the second part of the network. During training the whole network is trained together, but for testing I need to split it into two parts. The first part has two inputs, and the second network takes a single input, which is the output of the first network. When I try to build two models, one per part, Keras says the second part has no input. Can you tell me what to do? Is it possible to build the same network for the second part but initialized with the weights learned during training? I will post the code shortly. I am working in Keras. Thanks.

My code is:

wt_random=np.random.randint(2, size=(49999,4,4))
w_expand=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_expand=wv_random.astype(np.float32)
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))

#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
w_test=w_test.reshape((1,4,4,1))


#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((4,4,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)


wpad=Kr.layers.Lambda(lambda xy: xy[0] + Kr.backend.spatial_2d_padding(xy[1], padding=((0, 24), (0, 24))))
encoded_merged=wpad([encoded,wtm])


deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl4d')(deconv3)
BNd=BatchNormalization()(deconv4)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd) 

model=Model(inputs=[image,wtm],outputs=decoded)

decoded_noise = GaussianNoise(0.5)(decoded)

convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
convw7 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl7w',dilation_rate=(2,2))(convw6)#4
convw8 = Conv2D(64, (5,5), activation='relu', padding='same',name='conl8w',dilation_rate=(2,2))(convw7)#4
convw9 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl9w',dilation_rate=(2,2))(convw8)#4
convw10 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl10w',dilation_rate=(2,2))(convw9)#4
BNed=BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)  
model2=Model(inputs=decoded_noise,outputs=pred_w)
w_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])

w_extraction.summary()

The error:

Traceback (most recent call last):

  File "", line 55, in <module>
    model2=Model(inputs=decoded_noise,outputs=pred_w)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 93, in __init__
    self._init_graph_network(*args, **kwargs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
    self.inputs, self.outputs)

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1443, in _map_graph_network
    str(layers_with_complete_input))

ValueError: Graph disconnected: cannot get value for tensor Tensor("input_14:0", shape=(?, 28, 28, 1), dtype=float32) at layer "input_14". The following previous layers were accessed without issue: []

New code:

wtm=Input((4,4,1))
image = Input((28, 28, 1))

#your code:
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)


wpad=Kr.layers.Lambda(lambda xy: xy[0] + Kr.backend.spatial_2d_padding(xy[1], padding=((0, 24), (0, 24))))
encoded_merged=wpad([encoded,wtm])
#end of your code

encoder = Model([image, wtm], encoded_merged)

encoded_input = Input((28,28,1))

#your code
deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_input)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl4d')(deconv3)
BNd=BatchNormalization()(deconv4)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd) 

#end of your code

decoder = Model(encoded_input, decoded)

decoded_input = Input((28,28,1))

#your code
decoded_noise = GaussianNoise(0.5)(decoded_input)

convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
convw7 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl7w',dilation_rate=(2,2))(convw6)#4
convw8 = Conv2D(64, (5,5), activation='relu', padding='same',name='conl8w',dilation_rate=(2,2))(convw7)#4
convw9 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl9w',dilation_rate=(2,2))(convw8)#4
convw10 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl10w',dilation_rate=(2,2))(convw9)#4
BNed=BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)  

#end of your code
noiseNet = Model(inputs=decoded_input,outputs=pred_w)

#input for full nets
full_wtm = Input((4,4,1))
full_image = Input((28, 28, 1)) 

#encoded 
full_encoded = encoder([full_image, full_wtm])

#decoded
full_decoded = decoder(full_encoded)

#with noise
full_w = noiseNet(full_decoded)

#autoencoder
autoencoder = Model([full_image,full_wtm], full_decoded)

#full net
w_extraction = Model([full_image, full_wtm], [full_decoded, full_w])

(x_train, _), (x_test, _) = mnist.load_data()
x_validation=x_train[1:10000,:,:]
x_train=x_train[10001:60000,:,:]
#
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1))  # adapt this if using `channels_first` image data format
x_validation = np.reshape(x_validation, (len(x_validation), 28, 28, 1))

#---------------------compile and train the model------------------------------
#opt=SGD(momentum=0.99)
w_extraction.compile(optimizer='adam', loss={'decoder_output':'mse','reconstructed_W':'binary_crossentropy'}, loss_weights={'decoder_output': 0.45, 'reconstructed_W': 1.0},metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20)
#rlrp = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=20, min_delta=1E-4, verbose=1)
mc = ModelCheckpoint('best_model_5x5F_dil_Los751.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
history=w_extraction.fit([x_train,w_expand], [x_train,w_expand],
          epochs=200,
          batch_size=16, 
          validation_data=([x_validation,wv_expand], [x_validation,wv_expand]),
          callbacks=[TensorBoard(log_dir='E:/concatnatenetwork', histogram_freq=0, write_graph=False),es,mc])
w_extraction.summary()

The resulting error:

Traceback (most recent call last):

  File "", line 136, in <module>
    w_extraction.compile(optimizer='adam', loss={'decoder_output':'mse','reconstructed_W':'binary_crossentropy'}, loss_weights={'decoder_output': 0.45, 'reconstructed_W': 1.0},metrics=['mae'])

  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\training.py", line 119, in compile
    str(self.output_names))

ValueError: Unknown entry in loss dictionary: "decoder_output". Only expected the following keys: ['model_17', 'model_18']

【Question comments】:

Tags: python tensorflow keras


【Solution 1】:

Ideally, you would create the sub-models separately from the start:

net1 = createNet1()
net2 = createNet2()

net2OutFrom1 = net2(net1.output)

entireModel = Model(net1.input, net2OutFrom1)

Then you train entireModel, and you can use net1 and net2 on their own afterwards, without any hassle.

When your net was built as a single net, do the following instead.
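A runnable sketch of the pattern above, assuming `tensorflow.keras`; the `Dense` layers and shapes here are placeholders, not the asker's architecture:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

def createNet1():
    inp = Input((8,))
    return Model(inp, Dense(4)(inp))

def createNet2():
    inp = Input((4,))
    return Model(inp, Dense(2)(inp))

net1 = createNet1()
net2 = createNet2()

# Calling net2 on net1's output tensor chains the two graphs into one.
net2OutFrom1 = net2(net1.output)
entireModel = Model(net1.input, net2OutFrom1)
```

Because `entireModel` reuses the layers of `net1` and `net2` (rather than copying them), training `entireModel` updates the sub-models' weights too.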

You need to create a new input:

net2Input = Input(input_shape)

Then pass it through all the layers of the second part of the net:

out = originalNet.layers[firstLayerOfNet2](net2Input)
out = originalNet.layers[secondLayerOfNet2](out)
out = originalNet.layers[thirdLayerOfNet2](out)
....

Then create the second net separately:

net2 = Model(net2Input, out)

The first net can still be created easily:

net1 = Model(originalNet.input, originalNet.layers[lastLayerOfNet1].output)
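As a concrete, runnable illustration of the steps above (a minimal sketch assuming `tensorflow.keras`; the layer names `lastLayerOfNet1` and `net2Layer` are placeholders, and `get_layer` by name stands in for the numeric `layers[...]` indices):

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Stand-in for a network that was built and trained as one piece.
inp = Input((8,))
hidden = Dense(4, name='lastLayerOfNet1')(inp)
out = Dense(2, name='net2Layer')(hidden)
originalNet = Model(inp, out)

# net1: the original input, cut at the last layer of the first part.
net1 = Model(originalNet.input,
             originalNet.get_layer('lastLayerOfNet1').output)

# net2: a fresh Input passed through the second part's existing layers,
# so it shares the already-trained weights instead of copying them.
net2Input = Input((4,))
net2Out = originalNet.get_layer('net2Layer')(net2Input)
net2 = Model(net2Input, net2Out)
```

Since `net2` reuses the very same layer objects, no retraining is needed for it to reflect the trained weights.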

Applied to your example

The individual nets

Encoder
wtm=Input((4,4,1))
image = Input((28, 28, 1))

#your code:
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN=BatchNormalization()(conv3)
encoded =  Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)


wpad=Kr.layers.Lambda(lambda xy: xy[0] + Kr.backend.spatial_2d_padding(xy[1], padding=((0, 24), (0, 24))))
encoded_merged=wpad([encoded,wtm])
#end of your code

encoder = Model([image, wtm], encoded_merged)
Decoder
encoded_input = Input(shape_of_encoded_merged_without_batch_size)

#your code
deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_input)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl4d')(deconv3)
BNd=BatchNormalization()(deconv4)

decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd) 

#end of your code

decoder = Model(encoded_input, decoded)
Noise net
decoded_input = Input(shape_of_decoded_without_batch_size)

#your code
decoded_noise = GaussianNoise(0.5)(decoded_input)

convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
convw7 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl7w',dilation_rate=(2,2))(convw6)#4
convw8 = Conv2D(64, (5,5), activation='relu', padding='same',name='conl8w',dilation_rate=(2,2))(convw7)#4
convw9 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl9w',dilation_rate=(2,2))(convw8)#4
convw10 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl10w',dilation_rate=(2,2))(convw9)#4
BNed=BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)  

#end of your code
noiseNet = Model(inputs=decoded_input, outputs=pred_w)

Joining the nets

#input for full nets
full_wtm = Input((4,4,1))
full_image = Input((28, 28, 1)) 

#encoded 
full_encoded = encoder([full_image, full_wtm])

#decoded
full_decoded = decoder(full_encoded)

#with noise
full_w = noiseNet(full_decoded)

#autoencoder
autoencoder = Model([full_image,full_wtm], full_decoded)

#full net
w_extraction = Model([full_image, full_wtm], [full_decoded, full_w])

You will need to train again with this solution. Whichever net you train will also train all the others, because they share the same layers and weights.
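One side effect of wrapping the parts as sub-models: the outputs of the joined model are named after the sub-models, not after the inner layers, which is exactly why `compile` rejects the keys `'decoder_output'` and `'reconstructed_W'` and expects names like `['model_17', 'model_18']`. A minimal sketch of the workaround, assuming `tensorflow.keras` and tiny placeholder `Dense` layers in place of the real parts:

```python
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inner1 = Input((4,))
decoder = Model(inner1, Dense(4, name='decoder_output')(inner1),
                name='decoder')          # giving the sub-model a name

inner2 = Input((4,))
noiseNet = Model(inner2, Dense(1, name='reconstructed_W')(inner2),
                 name='noiseNet')

full_in = Input((4,))
full_decoded = decoder(full_in)
full_w = noiseNet(full_decoded)
w_extraction = Model(full_in, [full_decoded, full_w])

# The outputs now come from sub-models, so losses keyed by the inner
# layer names are no longer recognized. Passing the losses as lists,
# in output order, sidesteps the naming issue entirely. (Alternatively,
# a dict keyed by the sub-model names 'decoder'/'noiseNet' should work.)
w_extraction.compile(optimizer='adam',
                     loss=['mse', 'binary_crossentropy'],
                     loss_weights=[0.45, 1.0])
```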

【Comments】:

  • I put my code above. For the first part of my network I have a model called model, and for the whole network a model called w_extraction, but for the second part I need a model whose input is decoded_noise and whose output is pred_w. I already trained the network, but now I need a model for the second part. What should I do? Do I have to retrain the network? Because decoded_noise is the output of the GaussianNoise layer, and it should be fed as input to the second part of the network.
  • What are createNet1() and createNet2()? Should I define these networks as functions? I am a beginner and don't understand what you mean.
  • My problem is that I have to send the output of the GaussianNoise layer to the second network, and I don't know how to do it. Can you tell me how?
  • I posted the model and the error it produced. Please help me fix this.
  • Can you help me with this problem?