【Question Title】: How to decrease the loss of a Keras autoencoder
【Posted】: 2022-01-02 15:24:47
【Question Description】:

I'm new to Keras and I'm trying to use an autoencoder in Keras for denoising, but I don't know why the loss of my model rapidly grows in magnitude (toward ever more negative values)! I applied the autoencoder to this dataset:

https://archive.ics.uci.edu/ml/datasets/Parkinson%27s+Disease+Classification#

So we have 756 instances and 753 features (i.e., x.shape = (756, 753)).

Here is what I have done so far:

# Assumed imports for the snippet below (TensorFlow 2.x Keras API):
from tensorflow import keras
from tensorflow.keras import layers

# This is the size of our encoded representations:
encoding_dim = 64

# This is the input data:
input = keras.Input(shape=(x.shape[1],))

# "encoded" is the encoded representation of the input
encoded = layers.Dense(encoding_dim, activation = 'relu')(input)

# "decoded" is the lossy reconstruction of the input
decoded = layers.Dense(x.shape[1], activation = 'sigmoid')(encoded)

# "decoded" is the lossy reconstruction of the input
autoencoder = keras.Model(input, decoded)

autoencoder.compile(optimizer = 'adam', loss = 'binary_crossentropy')
autoencoder.fit(x, x, epochs = 20, batch_size = 10, shuffle = True, validation_split = 0.2)

But the results are disappointing:

Epoch 1/20
61/61 [==============================] - 1s 4ms/step - loss: -0.1663 - val_loss: -1.5703
Epoch 2/20
61/61 [==============================] - 0s 2ms/step - loss: -5.7013 - val_loss: -10.0048
Epoch 3/20
61/61 [==============================] - 0s 3ms/step - loss: -20.5371 - val_loss: -27.9583
Epoch 4/20
61/61 [==============================] - 0s 2ms/step - loss: -46.5077 - val_loss: -54.0411
Epoch 5/20
61/61 [==============================] - 0s 3ms/step - loss: -83.1050 - val_loss: -90.6973
Epoch 6/20
61/61 [==============================] - 0s 3ms/step - loss: -130.1922 - val_loss: -135.2853
Epoch 7/20
61/61 [==============================] - 0s 3ms/step - loss: -186.8624 - val_loss: -188.3201
Epoch 8/20
61/61 [==============================] - 0s 3ms/step - loss: -252.7997 - val_loss: -250.6024
Epoch 9/20
61/61 [==============================] - 0s 2ms/step - loss: -328.5535 - val_loss: -317.7751
Epoch 10/20
61/61 [==============================] - 0s 2ms/step - loss: -413.2261 - val_loss: -396.6747
Epoch 11/20
61/61 [==============================] - 0s 3ms/step - loss: -508.1084 - val_loss: -479.6847
Epoch 12/20
61/61 [==============================] - 0s 2ms/step - loss: -610.1725 - val_loss: -573.7590
Epoch 13/20
61/61 [==============================] - 0s 2ms/step - loss: -721.8989 - val_loss: -671.3677
Epoch 14/20
61/61 [==============================] - 0s 3ms/step - loss: -840.6516 - val_loss: -780.9920
Epoch 15/20
61/61 [==============================] - 0s 3ms/step - loss: -970.8052 - val_loss: -894.2467
Epoch 16/20
61/61 [==============================] - 0s 3ms/step - loss: -1107.9106 - val_loss: -1015.4778
Epoch 17/20
61/61 [==============================] - 0s 2ms/step - loss: -1252.6410 - val_loss: -1147.4821
Epoch 18/20
61/61 [==============================] - 0s 2ms/step - loss: -1406.9744 - val_loss: -1276.9229
Epoch 19/20
61/61 [==============================] - 0s 2ms/step - loss: -1567.7247 - val_loss: -1421.1270
Epoch 20/20
61/61 [==============================] - 0s 2ms/step - loss: -1734.9993 - val_loss: -1569.7350

How can I improve the results?

I would appreciate any help. Thanks.

Source: https://blog.keras.io/building-autoencoders-in-keras.html

【Question Discussion】:

    Tags: python keras neural-network autoencoder


    【Solution 1】:

    The main problem has nothing to do with the parameters or the model architecture you used; it comes purely from the data. In introductory tutorials, the authors like to work with perfectly preprocessed data so they can skip unnecessary steps. In your case, you presumably dropped the id and class columns, leaving 753 features. Beyond that, I assume you standardized the data without any further exploratory analysis and fed it straight into the autoencoder. A quick fix for the negative loss, which makes no sense for binary cross-entropy, is to normalize the data to [0, 1].

    I used the following code to normalize your data:

    import pandas as pd

    # header=1 uses the file's second line as the column names;
    # iloc[:, 1:-1] drops the id column and the class label,
    # then each remaining feature is min-max scaled to [0, 1]:
    df = pd.read_csv('pd_speech_features.csv', header=1)
    x = df.iloc[:, 1:-1].apply(lambda x: (x - x.min()) / (x.max() - x.min()), axis=0)
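
    As an aside (not part of the original answer), here is a minimal sketch of the same [0, 1] scaling using scikit-learn's MinMaxScaler, assuming scikit-learn is available; it is the drop-in counterpart of the StandardScaler the asker had used:

    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    df = pd.read_csv('pd_speech_features.csv', header=1)
    # Rescale every feature column to [0, 1], which is what a sigmoid
    # output layer trained with binary cross-entropy expects:
    x = MinMaxScaler().fit_transform(df.iloc[:, 1:-1])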
    

    The model's first 20 epochs after normalization:

    Epoch 1/20
    61/61 [==============================] - 1s 9ms/step - loss: 0.4791 - val_loss: 0.4163
    Epoch 2/20
    61/61 [==============================] - 0s 6ms/step - loss: 0.4154 - val_loss: 0.4102
    Epoch 3/20
    61/61 [==============================] - 0s 6ms/step - loss: 0.4090 - val_loss: 0.4052
    Epoch 4/20
    61/61 [==============================] - 0s 6ms/step - loss: 0.4049 - val_loss: 0.4025
    Epoch 5/20
    61/61 [==============================] - 0s 7ms/step - loss: 0.4017 - val_loss: 0.4002
    Epoch 6/20
    61/61 [==============================] - 0s 8ms/step - loss: 0.3993 - val_loss: 0.3985
    Epoch 7/20
    61/61 [==============================] - 1s 9ms/step - loss: 0.3974 - val_loss: 0.3972
    Epoch 8/20
    61/61 [==============================] - 1s 13ms/step - loss: 0.3959 - val_loss: 0.3961
    Epoch 9/20
    61/61 [==============================] - 0s 8ms/step - loss: 0.3946 - val_loss: 0.3950
    Epoch 10/20
    61/61 [==============================] - 0s 6ms/step - loss: 0.3935 - val_loss: 0.3942
    Epoch 11/20
    61/61 [==============================] - 0s 7ms/step - loss: 0.3926 - val_loss: 0.3934
    Epoch 12/20
    61/61 [==============================] - 0s 7ms/step - loss: 0.3917 - val_loss: 0.3928
    Epoch 13/20
    61/61 [==============================] - 1s 9ms/step - loss: 0.3909 - val_loss: 0.3924
    Epoch 14/20
    61/61 [==============================] - 0s 4ms/step - loss: 0.3902 - val_loss: 0.3918
    Epoch 15/20
    61/61 [==============================] - 0s 3ms/step - loss: 0.3895 - val_loss: 0.3913
    Epoch 16/20
    61/61 [==============================] - 0s 3ms/step - loss: 0.3889 - val_loss: 0.3908
    Epoch 17/20
    61/61 [==============================] - 0s 4ms/step - loss: 0.3885 - val_loss: 0.3905
    Epoch 18/20
    61/61 [==============================] - 0s 4ms/step - loss: 0.3879 - val_loss: 0.3903
    Epoch 19/20
    61/61 [==============================] - 0s 4ms/step - loss: 0.3874 - val_loss: 0.3895
    Epoch 20/20
    61/61 [==============================] - 0s 4ms/step - loss: 0.3870 - val_loss: 0.3892
    

    【Discussion】:

    • Dear Slybot, thank you for taking the time. In fact, I had already removed the id and class columns and then standardized the data using StandardScaler. It seems that normalizing the data is absolutely critical in this case. Could you explain a bit more, or point me to a link? Thanks again.
    • I suggest you re-read the autoencoder tutorial for Keras and keep in mind that autoencoders are somewhat different from other models. The absence of a conventional target variable, together with using the training features themselves as the target, complicates the interpretation of the loss function in this setting. Also see binary cross-entropy and why its targets should lie in [0, 1]: peltarion.com/knowledge-center/documentation/modeling-view/… Then you can see why normalization is essential in your case; a worked example follows below.
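
    To make the commenter's point concrete, here is a minimal sketch (plain NumPy, not from the original thread) of why binary cross-entropy can turn negative once targets leave [0, 1], as they do after StandardScaler:

    import numpy as np

    def binary_crossentropy(y_true, y_pred, eps=1e-7):
        # Element-wise binary cross-entropy with the usual clipping.
        y_pred = np.clip(y_pred, eps, 1 - eps)
        return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

    # Target inside [0, 1]: the loss is non-negative, as expected.
    print(binary_crossentropy(0.7, 0.6))   # ~0.632

    # A z-scored target can lie far outside [0, 1]. The (1 - y_true)
    # factor then goes negative, so the loss is unbounded below and the
    # optimizer drives it toward -inf -- exactly the diverging negative
    # losses in the question's training log.
    print(binary_crossentropy(3.0, 0.99))  # ~-9.18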