【Posted on】: 2017-09-06 22:20:33
【Problem Description】:
I recently implemented backpropagation on the MNIST dataset using the code below and got an overall accuracy of about 95.7%.
My question is: how can I further improve the accuracy of the code given below?
I have tried increasing and decreasing the number of hidden nodes, and changing the learning rate to various values, but the accuracy will not go above 96%.
import numpy as np
import matplotlib.pyplot as plt
import scipy.special
from sklearn.metrics import confusion_matrix

k = list()   # predicted labels
k_ = list()  # true labels

class NeuralNetworks:
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes
        self.lr = learningrate
        # zero-mean Gaussian weight initialisation
        self.wih = np.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = np.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
        # sigmoid activation
        self.activation_function = lambda x: scipy.special.expit(x)

    def train(self, input_list, target_list):
        inputs = np.array(input_list, ndmin=2).T
        targets = np.array(target_list, ndmin=2).T
        # forward pass
        hidden_inputs = np.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = np.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)
        # backpropagate the errors
        output_errors = targets - final_outputs
        hidden_errors = np.dot(self.who.T, output_errors)
        # gradient-descent weight updates (sigmoid derivative: y * (1 - y))
        self.who += self.lr * np.dot(output_errors * final_outputs * (1 - final_outputs),
                                     np.transpose(hidden_outputs))
        self.wih += self.lr * np.dot(hidden_errors * hidden_outputs * (1 - hidden_outputs),
                                     np.transpose(inputs))

    def query(self, input_list):
        inputs = np.array(input_list, ndmin=2).T
        hidden_inputs = np.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = np.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)
        return final_outputs

input_nodes = 784
hidden_nodes = 300
output_nodes = 10
learning_rate = 0.2
n = NeuralNetworks(input_nodes, hidden_nodes, output_nodes, learning_rate)

# train on the MNIST training set (raw strings avoid backslash-escape issues)
train_data_f = open(r"C:\Python27\mnist\mnist_train.csv", 'r')
train_data_all = train_data_f.readlines()
train_data_f.close()
for rec in train_data_all:
    all_val = rec.split(',')
    # scale pixel values from [0, 255] into [0.01, 1.0]
    inputs = (np.asfarray(all_val[1:]) / 255.0 * 0.99) + 0.01
    # one-hot target: 0.99 for the true digit, 0.01 elsewhere
    targets = np.zeros(output_nodes) + 0.01
    targets[int(all_val[0])] = 0.99
    n.train(inputs, targets)

# evaluate on the MNIST test set
test_data_f = open(r"C:\Python27\mnist\mnist_test.csv", 'r')
test_data_all = test_data_f.readlines()
test_data_f.close()
for rec in test_data_all:
    all_val = rec.split(',')
    p = n.query((np.asfarray(all_val[1:]) / 255.0 * 0.99) + 0.01)
    k.append(np.argmax(p))       # predicted digit (index of the largest output)
    k_.append(int(all_val[0]))   # true digit

print confusion_matrix(k_, k)
print np.trace(np.asarray(confusion_matrix(k_, k))) / 10000.0  # 10,000 test examples
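(A note on the updates in train, for readers reconstructing the math: with the sigmoid activation y = sigmoid(x), the derivative is y * (1 - y), so the two += lines are the usual gradient-descent steps lr * error * y * (1 - y) · (previous layer's output)^T, with hidden_errors obtained by propagating output_errors back through who.T.)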
The output of the above code (confusion matrix and overall accuracy) is:
Confusion Matrix-
[[ 965 0 1 0 0 1 9 0 3 1]
[ 0 1126 2 1 0 1 2 0 3 0]
[ 8 4 958 19 1 1 6 10 22 3]
[ 1 0 2 982 0 5 1 4 9 6]
[ 3 0 4 0 923 0 9 0 3 40]
[ 3 3 0 14 1 843 11 0 12 5]
[ 7 3 0 0 3 9 935 0 1 0]
[ 4 16 5 1 3 1 1 952 2 43]
[ 3 3 1 12 6 8 8 5 920 8]
[ 4 7 0 8 8 2 2 3 8 967]]
Overall Accuracy is 0.9571
[Plot of the results omitted]
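One change that is often suggested for this kind of single-hidden-layer network is to train for several epochs instead of the single pass over the data the code above makes, usually together with a somewhat smaller learning rate. Below is a minimal sketch reusing the NeuralNetworks class and the variables from the code above; the epoch count of 5 and the learning rate of 0.1 are illustrative assumptions, not tuned values:

# Sketch (not the original code): repeat the single-pass training loop
# for several epochs. Assumes NeuralNetworks, input_nodes, hidden_nodes,
# output_nodes, and train_data_all from the code above; the epoch count
# and learning rate below are guesses to experiment with, not tuned values.
epochs = 5
n = NeuralNetworks(input_nodes, hidden_nodes, output_nodes, 0.1)
for e in range(epochs):
    for rec in train_data_all:
        all_val = rec.split(',')
        inputs = (np.asfarray(all_val[1:]) / 255.0 * 0.99) + 0.01
        targets = np.zeros(output_nodes) + 0.01
        targets[int(all_val[0])] = 0.99
        n.train(inputs, targets)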
【Discussion】:
-
Do you have a reproducible dataset? Also, why do you think more than 96% accuracy is achievable with this technique?
-
@C8H10N4O2 OK, never mind... the accuracy doesn't get that high, but why not?
Tags: python neural-network backpropagation