【Posted】: 2020-12-13 07:28:27
【Problem description】:
In the code below, I am trying to map a magnitude signal (10 values per sample) to the MNIST digit labels (one-hot encoded with to_categorical). So for each digit I feed in 10 values that are unique to that digit and try to classify it.
The problem is that the validation loss and accuracy do not change. The code is reproducible, and I have attached links to x_train and x_test.
Could someone please tell me what might be going wrong in this situation?
from keras.datasets import mnist
from keras.layers import Input, Flatten, Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam
from keras.utils import to_categorical

(X_train, y_train), (X_test, y_test) = mnist.load_data()
# reshape to be [samples][width][height][channels]
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1))
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1))
# convert from int to float
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
num_classes = 10
y_train = to_categorical(y_train, num_classes)
y_test = to_categorical(y_test, num_classes)
# The magnitude signals are loaded from the linked files and have these shapes:
# x_train: (60000, 10, 1, 1), y_train: (60000, 10)
# x_test:  (10000, 10, 1, 1), y_test:  (10000, 10)
input_img = Input(shape=(10, 1, 1))
x = Flatten()(input_img)
x = Dense(100, activation='relu')(x)
x = Dense(200, activation='relu')(x)
x = Dense(500, activation='relu')(x)
x = Dense(200, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(100, activation='relu')(x)
decoded = Dense(10, activation='softmax')(x)
autoencoder = Model(input_img, decoded)
adam = Adam(lr=0.01)  # was `adam = 0.01`; the optimizer must be an instance (or a name string), not a float
autoencoder.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
history = autoencoder.fit(x_train, y_train,
                          epochs=30,
                          batch_size=32,
                          verbose=1,
                          shuffle=True,
                          validation_data=(x_test, y_test))
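For clarity, `to_categorical` simply one-hot encodes the integer digit labels. A minimal pure-NumPy sketch of the same transformation (the `labels` array here is a hypothetical example, not data from the post):

```python
import numpy as np

def one_hot(labels, num_classes):
    # Build an (n, num_classes) matrix with a 1 in each row at the label's
    # column, matching what keras.utils.to_categorical does for integer labels.
    out = np.zeros((labels.shape[0], num_classes), dtype=np.float32)
    out[np.arange(labels.shape[0]), labels] = 1.0
    return out

labels = np.array([3, 0, 9])        # hypothetical digit labels
encoded = one_hot(labels, 10)
print(encoded.argmax(axis=1))       # recovers the original labels: [3 0 9]
```

Each row sums to 1, so `categorical_crossentropy` can compare it directly against the softmax output of the final Dense(10) layer.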
请建议可以进行哪些更改。
The x_train data can be found at [link]. The x_test data is available at [link].
The training trace is:
Train on 60000 samples, validate on 10000 samples
Epoch 1/30
60000/60000 [==============================] - 6s 106us/step - loss: 2.2495 - acc: 0.1665 - val_loss: 2.2312 - val_acc: 0.1794
Epoch 2/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2275 - acc: 0.1800 - val_loss: 2.2292 - val_acc: 0.1790
Epoch 3/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2217 - acc: 0.1845 - val_loss: 2.2087 - val_acc: 0.1944
Epoch 4/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2184 - acc: 0.1861 - val_loss: 2.2533 - val_acc: 0.1631
Epoch 5/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2152 - acc: 0.1882 - val_loss: 2.2084 - val_acc: 0.1934
Epoch 6/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2139 - acc: 0.1877 - val_loss: 2.2234 - val_acc: 0.1779
Epoch 7/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2125 - acc: 0.1886 - val_loss: 2.2245 - val_acc: 0.1776
Epoch 8/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2107 - acc: 0.1932 - val_loss: 2.2173 - val_acc: 0.1888
Epoch 9/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2113 - acc: 0.1909 - val_loss: 2.2074 - val_acc: 0.1890
Epoch 10/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2097 - acc: 0.1910 - val_loss: 2.1980 - val_acc: 0.1953
Epoch 11/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2081 - acc: 0.1914 - val_loss: 2.2248 - val_acc: 0.1814
Epoch 12/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2089 - acc: 0.1912 - val_loss: 2.2367 - val_acc: 0.1739
Epoch 13/30
60000/60000 [==============================] - 5s 90us/step - loss: 2.2076 - acc: 0.1922 - val_loss: 2.2233 - val_acc: 0.1841
Epoch 14/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2063 - acc: 0.1914 - val_loss: 2.2039 - val_acc: 0.1934
Epoch 15/30
60000/60000 [==============================] - 5s 91us/step - loss: 2.2065 - acc: 0.1936 - val_loss: 2.2435 - val_acc: 0.1783
Epoch 16/30
60000/60000 [==============================] - 6s 92us/step - loss: 2.2053 - acc: 0.1957 - val_loss: 2.2050 - val_acc: 0.1958
Epoch 17/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2048 - acc: 0.1943 - val_loss: 2.2285 - val_acc: 0.1796
Epoch 18/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2038 - acc: 0.1958 - val_loss: 2.2069 - val_acc: 0.1954
Epoch 19/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2034 - acc: 0.1945 - val_loss: 2.2001 - val_acc: 0.2020
Epoch 20/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2030 - acc: 0.1938 - val_loss: 2.2140 - val_acc: 0.1894
Epoch 21/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2028 - acc: 0.1949 - val_loss: 2.2047 - val_acc: 0.1953
Epoch 22/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2016 - acc: 0.1954 - val_loss: 2.2338 - val_acc: 0.1748
Epoch 23/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.2017 - acc: 0.1956 - val_loss: 2.2158 - val_acc: 0.1862
Epoch 24/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.2010 - acc: 0.1944 - val_loss: 2.2195 - val_acc: 0.1915
Epoch 25/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1997 - acc: 0.1949 - val_loss: 2.2128 - val_acc: 0.1893
Epoch 26/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.1994 - acc: 0.1938 - val_loss: 2.2114 - val_acc: 0.1927
Epoch 27/30
60000/60000 [==============================] - 6s 93us/step - loss: 2.1983 - acc: 0.1968 - val_loss: 2.2269 - val_acc: 0.1821
Epoch 28/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1992 - acc: 0.1953 - val_loss: 2.2127 - val_acc: 0.1885
Epoch 29/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1980 - acc: 0.1966 - val_loss: 2.2455 - val_acc: 0.1717
Epoch 30/30
60000/60000 [==============================] - 6s 94us/step - loss: 2.1974 - acc: 0.1965 - val_loss: 2.2155 - val_acc: 0.1914
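For context, a 10-class classifier that learns nothing from its input can do no better than predicting the uniform distribution, whose cross-entropy is -ln(1/10) ≈ 2.30. A small sketch computing that chance-level baseline, which the plateaued loss above sits very close to:

```python
import math

# Cross-entropy of always predicting the uniform distribution over 10 classes:
# -log(1/10) = log(10), the loss a model stuck at chance level hovers near.
baseline = -math.log(1.0 / 10)
print(round(baseline, 4))  # 2.3026
```

Similarly, the reported accuracy of roughly 0.19 is only slightly above the 0.10 expected from random guessing.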
【Comments】:
Tags: python tensorflow keras deep-learning neural-network