[Question Title]: MxNet with R: Simple XOR Neural Network is not learning
[Posted]: 2018-07-24 04:18:53
[Question]:

I wanted to try out the MxNet library and build a simple neural network to learn the XOR function. The problem I am facing is that the model is not learning.

Here is the complete script:

library(mxnet)

train = matrix(c(0,0,0,
                 0,1,1,
                 1,0,1,
                 1,1,0),
               nrow=4,
               ncol=3,
               byrow=TRUE)

train.x = train[,-3]
train.y = train[,3]

data <- mx.symbol.Variable("data")
fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=2)
act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=1)
softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")

mx.set.seed(0)
model <- mx.model.FeedForward.create(
  softmax,
  X = t(train.x),
  y = train.y,
  num.round = 10,
  array.layout = "columnmajor",
  learning.rate = 0.01,
  momentum = 0.4,
  eval.metric = mx.metric.accuracy,
  epoch.end.callback = mx.callback.log.train.metric(100))

predict(model,train.x,array.layout="rowmajor")

This produces the following output:

Start training with 1 devices
[1] Train-accuracy=NaN
[2] Train-accuracy=0.5
[3] Train-accuracy=0.5
[4] Train-accuracy=0.5
[5] Train-accuracy=0.5
[6] Train-accuracy=0.5
[7] Train-accuracy=0.5
[8] Train-accuracy=0.5
[9] Train-accuracy=0.5
[10] Train-accuracy=0.5

> predict(model,train.x,array.layout="rowmajor")
     [,1] [,2] [,3] [,4]
[1,]    1    1    1    1

How should I use mxnet so that this example works correctly?

Regards, Vaka

[Question Discussion]:

    Tags: r neural-network mxnet


    [Solution 1]:

    Usually an activation layer is not placed immediately after the input, because it is meant to activate the result of the first layer's computation. You can still mimic the XOR function with your original code, but it needs a couple of adjustments:

    1. You are right that the weights need to be initialized. Which initial weights are best is a big discussion in the deep learning community, but in my experience Xavier weights work well.

    2. If you want to use softmax, you need to change the number of units in the last hidden layer to 2, because you have 2 classes: 0 and 1.

    After doing these 2 things, plus some minor optimizations such as removing the matrix transposition, use the following code:

    library(mxnet)
    
    train = matrix(c(0,0,0,
                     0,1,1,
                     1,0,1,
                     1,1,0),
                   nrow=4,
                   ncol=3,
                   byrow=TRUE)
    
    train.x = train[,-3]
    train.y = train[,3]
    
    data <- mx.symbol.Variable("data")
    fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=2)
    act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
    fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
    act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
    fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=2)
    softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")
    
    mx.set.seed(0)
    model <- mx.model.FeedForward.create(
      softmax,
      X = train.x,
      y = train.y,
      num.round = 50,
      array.layout = "rowmajor",
      learning.rate = 0.1,
      momentum = 0.99,
      eval.metric = mx.metric.accuracy,
      initializer = mx.init.Xavier(rnd_type = "uniform", factor_type = "avg", magnitude = 3),
      epoch.end.callback = mx.callback.log.train.metric(100))
    
    predict(model,train.x,array.layout="rowmajor")
    

    We get the following result:

    Start training with 1 devices
    [1] Train-accuracy=NaN
    [2] Train-accuracy=0.75
    [3] Train-accuracy=0.5
    [4] Train-accuracy=0.5
    [5] Train-accuracy=0.5
    [6] Train-accuracy=0.5
    [7] Train-accuracy=0.5
    [8] Train-accuracy=0.5
    [9] Train-accuracy=0.5
    [10] Train-accuracy=0.75
    [11] Train-accuracy=0.75
    [12] Train-accuracy=0.75
    [13] Train-accuracy=0.75
    [14] Train-accuracy=0.75
    [15] Train-accuracy=0.75
    [16] Train-accuracy=0.75
    [17] Train-accuracy=0.75
    [18] Train-accuracy=0.75
    [19] Train-accuracy=0.75
    [20] Train-accuracy=0.75
    [21] Train-accuracy=0.75
    [22] Train-accuracy=0.5
    [23] Train-accuracy=0.5
    [24] Train-accuracy=0.5
    [25] Train-accuracy=0.75
    [26] Train-accuracy=0.75
    [27] Train-accuracy=0.75
    [28] Train-accuracy=0.75
    [29] Train-accuracy=0.75
    [30] Train-accuracy=0.75
    [31] Train-accuracy=0.75
    [32] Train-accuracy=0.75
    [33] Train-accuracy=0.75
    [34] Train-accuracy=0.75
    [35] Train-accuracy=0.75
    [36] Train-accuracy=0.75
    [37] Train-accuracy=0.75
    [38] Train-accuracy=0.75
    [39] Train-accuracy=1
    [40] Train-accuracy=1
    [41] Train-accuracy=1
    [42] Train-accuracy=1
    [43] Train-accuracy=1
    [44] Train-accuracy=1
    [45] Train-accuracy=1
    [46] Train-accuracy=1
    [47] Train-accuracy=1
    [48] Train-accuracy=1
    [49] Train-accuracy=1
    [50] Train-accuracy=1
    > 
    > predict(model,train.x,array.layout="rowmajor")
              [,1]         [,2]         [,3]         [,4]
    [1,] 0.9107883 2.618128e-06 6.384078e-07 0.9998743534
    [2,] 0.0892117 9.999974e-01 9.999994e-01 0.0001256234
    

    The output of softmax is interpreted as "the probability of belonging to a class"; it is not a "0" or "1" value obtained from ordinary arithmetic. The answer means the following:

    • For input "0 and 0": probability of class "0" = 0.9107883 and probability of class "1" = 0.0892117, so the result is 0
    • For input "0 and 1": probability of class "0" = 2.618128e-06 and probability of class "1" = 9.999974e-01, which means it is 1 (the probability of 1 is much higher)
    • For input "1 and 0": probability of class "0" = 6.384078e-07 and probability of class "1" = 9.999994e-01 (the probability of 1 is much higher)
    • For input "1 and 1": probability of class "0" = 0.9998743534 and probability of class "1" = 0.0001256234, so the result is 0.
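
    As a minimal sketch of that interpretation (assuming `predict` returns the 2x4 probability matrix shown above, one column per input), the class decisions can be read off programmatically with base R's `max.col`:

    ```r
    # pred: 2 x 4 matrix, row 1 = P(class 0), row 2 = P(class 1)
    pred <- predict(model, train.x, array.layout = "rowmajor")

    # For each column, take the row with the highest probability;
    # subtract 1 to map row index 1/2 to class label 0/1.
    labels <- max.col(t(pred)) - 1
    labels  # should give 0 1 1 0 for the four XOR inputs
    ```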

    [Discussion]:

      [Solution 2]:

      OK, I experimented some more and now I have a working example of XOR with mxnet in R. The tricky part was not the mxnet API but the use of the neural network itself.

      So here is the working R code:

      library(mxnet)
      
      train = matrix(c(0,0,0,
                       0,1,1,
                       1,0,1,
                       1,1,0),
                     nrow=4,
                     ncol=3,
                     byrow=TRUE)
      
      train.x = t(train[,-3])
      train.y = t(train[,3])
      
      data <- mx.symbol.Variable("data")
      act0 <- mx.symbol.Activation(data, name="relu1", act_type="relu")
      fc1 <- mx.symbol.FullyConnected(act0, name="fc1", num_hidden=2)
      act1 <- mx.symbol.Activation(fc1, name="relu2", act_type="tanh")
      fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=3)
      act2 <- mx.symbol.Activation(fc2, name="relu3", act_type="relu")
      fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=1)
      act3 <- mx.symbol.Activation(fc3, name="relu4", act_type="relu")
      softmax <- mx.symbol.LinearRegressionOutput(act3, name="sm")
      
      mx.set.seed(0)
      model <- mx.model.FeedForward.create(
        softmax,
        X = train.x,
        y = train.y,
        num.round = 10000,
        array.layout = "columnmajor",
        learning.rate = 10^-2,
        momentum = 0.95,
        eval.metric = mx.metric.rmse,
        epoch.end.callback = mx.callback.log.train.metric(10),
        lr_scheduler=mx.lr_scheduler.FactorScheduler(1000,factor=0.9),
        initializer=mx.init.uniform(0.5)
        )
      
      predict(model,train.x,array.layout="columnmajor")
      

      There are a few differences from the initial code:

      • I changed the layout of the neural network by putting another activation layer between the data and the first layer. I interpret this as putting weights between the data and the input layer (is that right?)

      • I changed the activation function of the hidden layer (with 3 neurons) to tanh, because I guess XOR needs negative weights

      • I changed SoftmaxOutput to LinearRegressionOutput so that squared loss is optimized

      • I fine-tuned the learning rate and momentum

      • Most importantly: I added a uniform initializer for the weights. I guess the default mode is to set the weights to zero. Using random initial weights really speeds up learning.
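
      The initializer point can be illustrated with a small base-R sketch (no mxnet, just matrix arithmetic; whatever the actual default is, any scheme that makes the units of a layer start identical has this problem): two hidden units with the same initial weights produce the same activation and receive the same gradient, so they can never diverge and the layer effectively has one unit.

      ```r
      x <- c(1, 0)                          # one XOR input
      W <- matrix(0.3, nrow = 2, ncol = 2)  # both hidden units start identical
      h <- tanh(W %*% x)                    # identical activations
      g <- (1 - h^2) %*% t(x)               # identical weight gradients
      identical(h[1], h[2])                 # TRUE: symmetry is never broken
      identical(g[1, ], g[2, ])             # TRUE
      ```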

      Output:

      Start training with 1 devices
      [1] Train-rmse=NaN
      [2] Train-rmse=0.706823888574888
      [3] Train-rmse=0.705537411582449
      [4] Train-rmse=0.701298592443344
      [5] Train-rmse=0.691897326795625
      ...
      [9999] Train-rmse=1.07453801496744e-07
      [10000] Train-rmse=1.07453801496744e-07
      > predict(model,train.x,array.layout="columnmajor")
           [,1]      [,2] [,3] [,4]
      [1,]    0 0.9999998    1    0
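
      Since `LinearRegressionOutput` produces raw regression values rather than class probabilities, a short follow-up sketch (assuming the trained `model` from above): the predictions can be snapped to 0/1 labels by rounding.

      ```r
      pred <- predict(model, train.x, array.layout = "columnmajor")
      round(pred)  # should give 0 1 1 0 for the four XOR inputs
      ```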
      

      [Discussion]:
