[Title]: Keras functional API "Graph disconnected" error
[Posted]: 2023-03-27 12:07:01
[Description]:

The code below gives me a graph disconnected error, but I can't tell where it comes from and don't know how to debug it. The last line, decoder = Model(latentInputs, outputs, name="decoder"), throws the error. I have compared it against working code that I modified, but to no avail.

from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.layers import ReLU
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import GlobalAveragePooling2D
from tensorflow.keras.layers import UpSampling2D
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import GaussianNoise
from tensorflow.keras.layers import Reshape
from tensorflow.keras.layers import Add
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K
import tensorflow as tf
import numpy as np

width=256
height=256
depth=3
inputShape = (height, width, depth)
chanDim = -1
filter_size = 3
latentDim = 512

# initialize the input shape to be "channels last" along with
# the channels dimension itself

inputShape = (height, width, depth)
chanDim = -1

# define the input to the encoder
inputs = Input(shape=inputShape)
x = GaussianNoise(0.2)(inputs)

x = Conv2D(128, filter_size, strides=1, padding="same")(x)

x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)
layer_1 = Conv2D(128, filter_size, strides=2, padding="same")(x)


x = BatchNormalization(axis=chanDim)(layer_1)
x = LeakyReLU(alpha=0.2)(x)
layer_2 = Conv2D(128, filter_size, strides=2, padding="same")(x)

x = BatchNormalization(axis=chanDim)(layer_2)
x = LeakyReLU(alpha=0.2)(x)
layer_3 = Conv2D(128, filter_size, strides=2, padding="same")(x)


x = BatchNormalization(axis=chanDim)(layer_3)
x = LeakyReLU(alpha=0.2)(x)
layer_4 = Conv2D(128, filter_size, strides=2, padding="same")(x)


x = BatchNormalization(axis=chanDim)(layer_4)
x = LeakyReLU(alpha=0.2)(x)
layer_5 = Conv2D(128, filter_size, strides=2, padding="same")(x)


x = BatchNormalization(axis=chanDim)(layer_5)
x = LeakyReLU(alpha=0.2)(x)
layer_6 = Conv2D(128, filter_size, strides=2, padding="same")(x)

x = BatchNormalization(axis=chanDim)(layer_6)
x = LeakyReLU(alpha=0.2)(x)
layer_7 = Conv2D(128, filter_size, strides=2, padding="same")(x)

latent = Flatten()(layer_7)
# flatten the network and then construct our latent vector
volumeSize = K.int_shape(layer_7)

# build the encoder model
encoder = Model(inputs, latent, name="encoder")
encoder.summary()   
# start building the decoder model which will accept the
# output of the encoder as its inputs
#%%
latentInputs = Input(shape=(np.prod(volumeSize[1:]),))
x = Reshape((volumeSize[1], volumeSize[2], volumeSize[3]))(latentInputs)

dec_layer_7 = Add()([x, layer_7])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_7)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_6 = Add()([x, layer_6])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_6)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_5 = Add()([x, layer_5])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_5)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_4 = Add()([x, layer_4])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_4)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_3 = Add()([x, layer_3])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_3)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_2 = Add()([x, layer_2])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_2)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)

dec_layer_1 = Add()([x, layer_1])
x = Conv2DTranspose(128, filter_size, strides=2, padding="same")(dec_layer_1)
x = BatchNormalization(axis=chanDim)(x)
x = LeakyReLU(alpha=0.2)(x)
outputs = Conv2DTranspose(depth, filter_size, padding="same")(x)
# apply a single CONV_TRANSPOSE layer used to recover the
# original depth of the image
# =============================================================================
# outputs = ReLU(max_value=1.0)(x)
# =============================================================================

# build the decoder model
decoder = Model(latentInputs, outputs, name="decoder")

The error is:

ValueError: Graph disconnected: cannot obtain value for tensor Tensor("input_37:0", shape=(None, 256, 256, 3), dtype=float32) at layer "input_37". The following previous layers were accessed without issue: []

[Comments]:

    Tags: python-3.x tensorflow keras conv-neural-network autoencoder


    [Solution 1]:

    layer_7 belongs to another model (the encoder), so you must provide an input for layer_7 inside the decoder. One solution is to define your decoder this way:

    decoder = Model([latentInputs, encoder.input], outputs, name="decoder")
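
    For context, "Graph disconnected" is raised whenever a Model's outputs depend on a tensor that cannot be reached from the Model's listed inputs. A minimal, self-contained sketch (unrelated to the code above) that triggers the same ValueError:

    from tensorflow.keras.layers import Input, Dense, Add
    from tensorflow.keras.models import Model

    a = Input(shape=(4,))
    b = Input(shape=(4,))
    y = Add()([Dense(4)(a), b])

    # Model(a, y) raises "Graph disconnected": y also depends on b,
    # which is not listed as an input.
    # Model([a, b], y) builds fine, analogous to passing encoder.input above.
    m = Model([a, b], y)
    m.summary()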
    

    Here is the full example: https://colab.research.google.com/drive/1W8uLy49H_8UuD9DGZvtP7Md1f4ap3u6A?usp=sharing
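
    For illustration, a rough usage sketch assuming the decoder is defined with both inputs as suggested above; the dummy arrays and variable names here are hypothetical, not from the original post:

    import numpy as np

    # Two random RGB images matching the encoder's (256, 256, 3) input shape.
    dummy_images = np.random.rand(2, 256, 256, 3).astype("float32")

    # Encode to the flattened latent vector (shape (2, 512) for this architecture).
    dummy_latent = encoder.predict(dummy_images)

    # The two-input decoder needs both the latent vector and the original image,
    # because its Add() skip connections reuse the encoder's intermediate tensors.
    reconstructions = decoder.predict([dummy_latent, dummy_images])
    print(reconstructions.shape)  # expected: (2, 256, 256, 3)

    Note that with this fix the "decoder" still depends on the full encoder graph, so it cannot be run from a latent code alone.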

    [Discussion]:
