【Problem Title】: Training problem, val loss and accuracy not changing
【Posted】: 2020-12-13 07:28:27
【Problem Description】:

In the code below, I am trying to map a magnitude signal (10 values) onto the MNIST digit labels (via to_categorical).

So each input is a set of 10 values unique to one digit, and I try to classify that digit.

The problem is that the validation loss and accuracy are not changing. The code is reproducible, and I have attached links to x_train and x_test.

Could someone please tell me what might be going wrong in this case?

from keras.datasets import mnist
from keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

num_classes = 10
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)

# x_train and x_test are the magnitude signals loaded from the links below;
# their shapes are:
#   x_train: (60000, 10, 1, 1)   y_train: (60000, 10)
#   x_test:  (10000, 10, 1, 1)   y_test:  (10000, 10)




from keras.layers import Input, Flatten, Dense, Dropout
from keras.models import Model

input_img = Input(shape=(10, 1, 1))
x = Flatten()(input_img)
x = Dense(100, activation='relu')(x)
x = Dense(200, activation='relu')(x)
x = Dense(500, activation='relu')(x)
x = Dense(200, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(100, activation='relu')(x)
decoded = Dense(10, activation='softmax')(x)


from keras.optimizers import Adam

autoencoder = Model(input_img, decoded)
# a bare float is not a valid optimizer; presumably Adam with lr=0.01 was intended
adam = Adam(lr=0.01)
autoencoder.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])

history=autoencoder.fit(x_train, y_train,
                epochs=30,
                batch_size=32, 
                verbose=1,
                shuffle=True,
                validation_data=(x_test, y_test))

Please suggest what changes could be made.

The x_train data can be found at

The x_test data is available at

The training trace is:

Train on 60000 samples, validate on 10000 samples
Epoch 1/30
60000/60000 [==============================] - 6s 106us/step - loss: 2.2495 - acc: 0.1665 - val_loss: 2.2312 - val_acc: 0.1794
Epoch 2/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2275 - acc: 0.1800 - val_loss: 2.2292 - val_acc: 0.1790
Epoch 3/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2217 - acc: 0.1845 - val_loss: 2.2087 - val_acc: 0.1944
Epoch 4/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2184 - acc: 0.1861 - val_loss: 2.2533 - val_acc: 0.1631
Epoch 5/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2152 - acc: 0.1882 - val_loss: 2.2084 - val_acc: 0.1934
Epoch 6/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2139 - acc: 0.1877 - val_loss: 2.2234 - val_acc: 0.1779
Epoch 7/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2125 - acc: 0.1886 - val_loss: 2.2245 - val_acc: 0.1776
Epoch 8/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2107 - acc: 0.1932 - val_loss: 2.2173 - val_acc: 0.1888
Epoch 9/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2113 - acc: 0.1909 - val_loss: 2.2074 - val_acc: 0.1890
Epoch 10/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2097 - acc: 0.1910 - val_loss: 2.1980 - val_acc: 0.1953
Epoch 11/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2081 - acc: 0.1914 - val_loss: 2.2248 - val_acc: 0.1814
Epoch 12/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2089 - acc: 0.1912 - val_loss: 2.2367 - val_acc: 0.1739
Epoch 13/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2076 - acc: 0.1922 - val_loss: 2.2233 - val_acc: 0.1841
Epoch 14/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2063 - acc: 0.1914 - val_loss: 2.2039 - val_acc: 0.1934
Epoch 15/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2065 - acc: 0.1936 - val_loss: 2.2435 - val_acc: 0.1783
Epoch 16/30
60000/60000 [==============================] - 6s 92us/step - loss: 2.2053 - acc: 0.1957 - val_loss: 2.2050 - val_acc: 0.1958
Epoch 17/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2048 - acc: 0.1943 - val_loss: 2.2285 - val_acc: 0.1796
Epoch 18/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2038 - acc: 0.1958 - val_loss: 2.2069 - val_acc: 0.1954
Epoch 19/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2034 - acc: 0.1945 - val_loss: 2.2001 - val_acc: 0.2020
Epoch 20/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2030 - acc: 0.1938 - val_loss: 2.2140 - val_acc: 0.1894
Epoch 21/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2028 - acc: 0.1949 - val_loss: 2.2047 - val_acc: 0.1953
Epoch 22/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2016 - acc: 0.1954 - val_loss: 2.2338 - val_acc: 0.1748
Epoch 23/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2017 - acc: 0.1956 - val_loss: 2.2158 - val_acc: 0.1862
Epoch 24/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2010 - acc: 0.1944 - val_loss: 2.2195 - val_acc: 0.1915
Epoch 25/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1997 - acc: 0.1949 - val_loss: 2.2128 - val_acc: 0.1893
Epoch 26/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.1994 - acc: 0.1938 - val_loss: 2.2114 - val_acc: 0.1927
Epoch 27/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.1983 - acc: 0.1968 - val_loss: 2.2269 - val_acc: 0.1821
Epoch 28/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1992 - acc: 0.1953 - val_loss: 2.2127 - val_acc: 0.1885
Epoch 29/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1980 - acc: 0.1966 - val_loss: 2.2455 - val_acc: 0.1717
Epoch 30/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1974 - acc: 0.1965 - val_loss: 2.2155 - val_acc: 0.1914
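One way to read the trace above: a categorical cross-entropy stuck near 2.2 is close to ln(10) ≈ 2.303, the loss of a model that predicts a uniform 1/10 for every class, and ~19% accuracy is only modestly above the 10% random-guess baseline, so the network is learning very little. A quick sketch of those baselines (not part of the original post):

```python
import math

num_classes = 10

# Cross-entropy of a model that assigns probability 1/10 to every class
uniform_loss = -math.log(1.0 / num_classes)
print(round(uniform_loss, 4))  # 2.3026

# Accuracy of random guessing over 10 balanced classes
random_accuracy = 1.0 / num_classes
print(random_accuracy)  # 0.1
```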

【Comments】:

    Tags: python tensorflow keras deep-learning neural-network


    【Solution 1】:

    I see a few problems. The first is the Input shape you defined for the model.

    The MNIST dataset contains images of shape (28, 28), but you set the input shape to (10, 1, 1).

    You also use Flatten as the first layer of the model. Flattening a tensor reshapes it to a shape whose size equals the number of elements in the tensor, excluding the batch dimension. You would normally flatten the output of a convolutional layer to create a single long feature vector.
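    To make the flattening concrete, here is a small numpy sketch of what Flatten does to the two shapes in question (the arrays are dummy data, not the poster's signals):

```python
import numpy as np

# A batch of 2 MNIST-style images: (batch, 28, 28, 1)
images = np.zeros((2, 28, 28, 1), dtype=np.float32)
# Flatten keeps the batch dimension and merges the rest: 28 * 28 * 1 = 784
flat_images = images.reshape(images.shape[0], -1)
print(flat_images.shape)  # (2, 784)

# The question's input shape (10, 1, 1) flattens to just 10 features
signals = np.zeros((2, 10, 1, 1), dtype=np.float32)
flat_signals = signals.reshape(signals.shape[0], -1)
print(flat_signals.shape)  # (2, 10)
```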

    Here is a model that should work better for you:

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    model = Sequential()
    model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(28, 28, 1)))
    model.add(MaxPooling2D((2, 2)))
    model.add(Flatten())
    model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
    model.add(Dense(10, activation='softmax'))
    

    You should also normalize the pixel values before feeding the data to the model:

    # convert from integers to floats
    train_norm = train.astype('float32')
    test_norm = test.astype('float32')
    # normalize to range 0-1
    train_norm = train_norm / 255.0
    test_norm = test_norm / 255.0
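    Normalization here simply rescales the 0-255 pixel range into 0-1; a minimal numpy sketch of the same operation (the pixel values are made up):

```python
import numpy as np

# Fake 8-bit pixel data in the 0-255 range
pixels = np.array([0, 128, 255], dtype=np.uint8)

# convert from integers to floats, then normalize to range 0-1
norm = pixels.astype('float32') / 255.0
print(norm.min(), norm.max())  # 0.0 1.0
```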
    

    【Discussion】:

    • Thanks for your comment, but as I mentioned in the post, my input is a signal with 10 values representing each digit in the MNIST set, so (28, 28) is not my input size; I am not using images as input.
    • Sorry, I don't understand what you are doing here. If it isn't images, what are you feeding in?
    • I think the first line of the post states that quite clearly.