【Question Title】: Why are accuracy and loss staying exactly the same while training?
【Posted】: 2020-01-06 08:41:48
【Question】:

I tried to modify the getting-started tutorial from https://www.tensorflow.org/tutorials/keras/basic_classification to use my own data. The goal is to classify images of dogs and cats. The code is very simple and shown below. The problem is that the network does not seem to learn at all: training loss and accuracy stay exactly the same after every epoch.

The images (X_training) and labels (y_training) seem to have the correct format: X_training.shape returns (18827, 80, 80, 3).

y_training is a one-dimensional list with entries in {0, 1}.

I have checked several times that the "images" in X_training are labeled correctly: if X_training[i,:,:,:] represents a dog, y_training[i] returns 1, and if X_training[i,:,:,:] represents a cat, y_training[i] returns 0.
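A quick sanity check along these lines can be scripted (a sketch; synthetic arrays stand in for the pickled data, variable names match the question):

```python
import numpy as np

# Synthetic stand-ins for the pickled data (shapes match the question)
X_training = np.random.randint(0, 256, size=(100, 80, 80, 3), dtype=np.uint8)
y_training = np.random.randint(0, 2, size=100).tolist()

# Shape and label checks
assert X_training.shape[1:] == (80, 80, 3)
assert set(y_training) <= {0, 1}

# Class balance: a heavily skewed split can also make accuracy look "stuck"
print("fraction of dogs (label 1):", sum(y_training) / len(y_training))
```

Checking the class balance is worthwhile because a constant accuracy can also mean the model is predicting a single class for every sample.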

Shown below is the complete Python file:

#imports:
import pickle
import tensorflow as tf
from tensorflow import keras

#loading the data from 4 pickle files:
pickle_in = open("X_training.pickle","rb")
X_training = pickle.load(pickle_in)

pickle_in = open("X_testing.pickle","rb")
X_testing = pickle.load(pickle_in)

pickle_in = open("y_training.pickle","rb")
y_training = pickle.load(pickle_in)

pickle_in = open("y_testing.pickle","rb")
y_testing = pickle.load(pickle_in)


#normalizing the input data:
X_training = X_training/255.0
X_testing = X_testing/255.0


#building the model:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(80, 80,3)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(1,activation='sigmoid')
])
model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])


#running the model:
model.fit(X_training, y_training, epochs=10)

The code compiles and trains for 10 epochs, but neither the loss nor the accuracy improves; both stay exactly the same after every epoch. The same code works on the Fashion-MNIST dataset used in the tutorial, with only minor changes to account for binary instead of multi-class classification and for the different input shape.

【Comments】:

  • This network is probably too simple for dog/cat classification. A model working on one dataset does not mean it will work on another, especially when the latter is inherently much more complex.
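For a sense of scale, the parameter count of the dense model in the question can be worked out directly (a sketch; the numbers follow from the shapes given above):

```python
# Flatten turns each (80, 80, 3) image into a vector of length 19200
flat = 80 * 80 * 3                      # 19200 inputs

# Dense(128): one weight per input per unit, plus one bias per unit
dense1 = flat * 128 + 128               # 2,457,728 parameters

# Dense(1, sigmoid): 128 weights plus 1 bias
dense2 = 128 * 1 + 1                    # 129 parameters

total = dense1 + dense2
print(total)                            # 2,457,857 in all
```

Nearly all of these parameters sit in one fully connected layer with no notion of spatial structure, which is why convolutional layers tend to do better on images.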

Tags: python tensorflow keras neural-network classification


【Solution 1】:

If you want to train a classification model, you must use binary_crossentropy as your loss function, not mean_squared_error, which is meant for regression tasks.

Replace

model.compile(optimizer='adam',loss='mean_squared_error',metrics=['accuracy'])

with

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
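The difference can be seen numerically: with a sigmoid output, cross-entropy penalizes a confident wrong prediction far more heavily than squared error does, which keeps gradients large enough to learn from (a small illustration in plain Python, independent of Keras):

```python
import math

def mse(y, p):
    # Squared error between label y and predicted probability p
    return (y - p) ** 2

def bce(y, p):
    # Binary cross-entropy for a single sample
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# True label 1 (dog), but the model confidently predicts "cat" (p close to 0)
y, p = 1, 0.01
print(round(mse(y, p), 3))  # 0.98  -- bounded near 1 no matter how wrong
print(round(bce(y, p), 3))  # 4.605 -- grows without bound as p -> 0
```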

Also, I suggest not using a relu activation on the dense layer, but a linear one.

Replace

keras.layers.Dense(128, activation=tf.nn.relu),

with

keras.layers.Dense(128),

And of course, to better exploit the power of neural networks, use some convolutional layers before your flatten layer.

【Discussion】:

    【Solution 2】:

    I found a different implementation with a slightly more complex model. Below is the complete code:

    #imports:
    import pickle
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    #global variables:
    batch_size = 32
    nr_of_epochs = 64
    input_shape = (80,80,3)
    
    
    #loading the data from 4 pickle files:
    pickle_in = open("X_training.pickle","rb")
    X_training = pickle.load(pickle_in)
    
    pickle_in = open("X_testing.pickle","rb")
    X_testing = pickle.load(pickle_in)
    
    pickle_in = open("y_training.pickle","rb")
    y_training = pickle.load(pickle_in)
    
    pickle_in = open("y_testing.pickle","rb")
    y_testing = pickle.load(pickle_in)
    
    
    
    #building the model
    def define_model():
        model = Sequential()
        model.add(Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
        model.add(MaxPooling2D((2, 2)))
        model.add(Flatten())
        model.add(Dense(128, activation='relu'))
        model.add(Dense(1, activation='sigmoid'))
        # compile model
        model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
        return model
    model = define_model()
    
    
    #Possibility for image data augmentation
    train_datagen = ImageDataGenerator(rescale=1.0/255.0)
    val_datagen = ImageDataGenerator(rescale=1./255.) 
    train_generator =train_datagen.flow(X_training,y_training,batch_size=batch_size)
    val_generator = val_datagen.flow(X_testing,y_testing,batch_size= batch_size)
    
    
    
    #running the model (note: in recent Keras versions fit_generator is deprecated
    #and model.fit accepts generators directly)
    history = model.fit_generator(train_generator,steps_per_epoch=len(X_training) //batch_size,
                                  epochs=nr_of_epochs,validation_data=val_generator,
                                  validation_steps=len(X_testing) //batch_size)
    

    【Discussion】:
