【发布时间】:2016-05-25 22:08:11
【问题描述】:
我正在为 ANN(多层反向传播)的学习过程编写此代码,但学习的结果非常糟糕,它在任何时候都不接近 1 我知道我们无法保证学习成功但我想知道我是否在此代码中犯了错误,或者我是否可以使这些步骤具有更高的性能。
步骤:
1- 上传我的数据集
2- 从 225 行中选择 170 行用于学习,其余 50 行用于测试(随机)
3- 在 0 到 1 之间随机创建输入和隐藏层的权重
4- 在 -1 和 1 之间随机创建隐藏层和输出层的 Bias
5-找到每一行的输出
6- 找到每个输出的错误,然后找到每个隐藏层
7- 在每次迭代时更新权重和偏差数组
8- 计算每次迭代的平方误差总和 (MSE)。
每个输出的结果总是介于 .2 和 .5 之间,也不是所需的输出。 我的逻辑或代码中可能出现的错误是什么!
注意事项: 1-(我使用的数据集有 225 行,108 列,25 个结果作为类) 170行学习 55行用于测试
2- 50,000 次迭代
3- 学习率 0.3
4- 动量 = 0.7
5- 隐藏层 ne。否 = 90
代码:
%Initialize the weight matrices with random weights
V = rand(inlayer,hlayer); % Weight matrix from Input to Hidden between [0,1]
W = rand(hlayer,olayer); % Weight matrix from Hidden to Output between [0,1]
%Initialize the theta matrices for hidden and output layers
Thetahidden = randi(1,hlayer);
Thetaoutput = randi(1,olayer);
for i=1:iteration
for j=1:170 % depends on training data set
%This for output between input-hidden
for h=1:hlayer % depends on neuron number at hidden layer
sum = 0;
for k=1:108 % depends on column number
sum = sum + (V(k,h)* trainingdata(j,k));
end
H(h) = sum + Thetahidden(h);
Oh(h) = 1/(1+exp(-H(h)));
end
%This for output between hidden-output
for o=1:olayer % depends on number of output layer
sumO = 0;
for hh=1:hlayer
sumO = sumO+W(hh,o)*Oh(hh);
end
O(o)=sumO + Thetaoutput(o);
OO(o) = 1/(1+exp(-O(o)));
finaloutputforeachrow(j,o)= OO(o);
end
% Store real value of real output
for r=1:170
for o=1:olayer
i=outputtrainingdata(r);
if i == o
RO(r,o)=1;
else
RO(r,o)=0;
end
end
end
sumerror =0;
% Compute Error ( output layer )
for errorout=1:olayer
lamdaout(errorout) = OO(errorout)*(1-OO(errorout))*(RO(j,errorout)-OO(errorout));
errorrate = RO(j,errorout)-OO(errorout);
sumerror = sumerror+(errorrate^2);
FinalError(j,errorout) = errorrate;
% Compute Error ( hidden layer )
ersum=0;
for errorh=1:hlayer
ersum= lamdaout(errorout)*W(errorh,errorout);
lamdahidden(errorh)= Oh(errorh)*(1-Oh(errorh))*ersum;
end
FinalSumError(j) = (1/2)*sumerror;
end
%update weights between input and hidden layer
for h=1:hlayer
for k=1:108
deltaw(k,h) = learningrate*lamdahidden(h)*trainingdata(j,k);
V(k,h) = (m*V(k,h)) + deltaw(k,h);
end
end
%update weights/Theta between hidden and output layer
for h=1:hlayer
for outl=1:olayer
%weight
deltaw2(h,outl) = learningrate * lamdaout(outl)*Oh(h);
W(h,outl)= (m*W(h,outl))+deltaw2(h,outl);
end
end
for h=1:hlayer
%Theta-Hidden
deltaHiddenTh(h) = learningrate * lamdahidden(h);
Thetahidden(h) = (m*Thetahidden(h)) + deltaHiddenTh(h);
end
for outl=1:olayer
%Theta-Output
deltaOutputTh(outl) = learningrate * lamdaout(outl);
Thetaoutput(outl) = (m*Thetaoutput(outl)) + deltaOutputTh(outl);
end
end
end
【问题讨论】:
标签: matlab neural-network backpropagation