【Question Title】: Keras: learning rate schedule
【Posted】: 2024-01-22 02:03:01
【Question Description】:

I am implementing an MLP in Keras and tuning the hyperparameters. One object of experimentation is the learning rate. I am trying to use two schedules, both described in this tutorial. One is defined explicitly as learning rate / epochs, and one uses a separately defined step-decay function. The necessary code is below.

The error is 'The output of the "schedule" function should be float.' I explicitly cast the learning rate to float, so I'm not sure where I'm going wrong?

Edit: the original code was not an MWE, my apologies. To reproduce this error, you can save the data snippets below and run this code.

import numpy as np
import sys, argparse, keras, string
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.callbacks import LearningRateScheduler, EarlyStopping, History
from keras.optimizers import SGD
from keras.constraints import maxnorm

def load_data(data_file, test_file):
    dataset = np.loadtxt(data_file, delimiter=",")

    # split into input (X) and output (Y) variables
    X = dataset[:, 0:(dataset.shape[1]-2)]
    Y = dataset[:, dataset.shape[1]-1]
    Y = Y - 1

    testset = np.loadtxt(test_file, delimiter=",")

    X_test = testset[:, 0:(testset.shape[1]-2)]
    Y_test = testset[:, testset.shape[1]-1]
    Y_test = Y_test - 1

    return (X, Y, X_test, Y_test)

def mlp_keras(data_file, test_file, save_file, num_layers, num_units_per_layer, learning_rate_, epochs_, batch_size_):

        history = History()
        seed = 7
        np.random.seed(seed)

        X, y_binary, X_test, ytest = load_data(data_file, test_file)

        d1 = True

        ### create model  
        model = Sequential()
        model.add(Dense(num_units_per_layer[0], input_dim=X.shape[1], init='uniform', activation='relu', W_constraint=maxnorm(3)))
        model.add(Dropout(0.2))
        model.add(Dense(num_units_per_layer[1], init='uniform', activation = 'relu', W_constraint=maxnorm(3))) #W_constraint for dropout
        model.add(Dropout(0.2))
        model.add(Dense(1, init='uniform', activation='sigmoid')) 

        def step_decay(epoch):
                drop_every = 10
                decay_rate = (learning_rate_*np.power(0.5, np.floor((1+drop_every)/drop_every))).astype('float32')
                return decay_rate

        earlyStopping = EarlyStopping(monitor='val_loss', patience=2)

        sgd = SGD(lr = 0.0, momentum = 0.8, decay = 0.0, nesterov=False)
        model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
        if d1 == True:
                lrate = LearningRateScheduler(step_decay)
        else:
                lrate = (learning_rate_/epochs).astype('float32')

        callbacks_list = [lrate, earlyStopping]

        ## Fit the model
        hist = model.fit(X, y_binary, validation_data=(X_test, ytest), nb_epoch=epochs_, batch_size=batch_size_, callbacks=callbacks_list) #48 batch_size, 2 epochs
        scores = model.evaluate(X, y_binary)
        print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
if __name__ == '__main__':

        m1 = mlp_keras('train_ex.csv', 'test_ex.csv', 'res1.csv', 2, [100, 100], 0.001,  10, 10)

Error message:

  File "/user/pkgs/anaconda2/lib/python2.7/site-packages/keras/callbacks.py", line 435, in on_epoch_begin
    assert type(lr) == float, 'The output of the "schedule" function should be float.'
AssertionError: The output of the "schedule" function should be float.
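The assertion can be reproduced without Keras: the schedule in the question returns a `numpy.float32`, and Keras checks `type(lr) == float`, which is `False` for any NumPy scalar.

```python
import numpy as np

# The value step_decay returns, computed exactly as in the question
lr = (0.001 * np.power(0.5, np.floor((1 + 10) / 10))).astype('float32')

print(type(lr))           # <class 'numpy.float32'>
print(type(lr) == float)  # False, so Keras's assert fires
```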

Data snippet (train_ex.csv):

1,21,38,33,20,8,8,6,4,0,1,1,1,2,1,1,0,2,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
1,19,29,26,28,13,6,7,3,2,4,4,3,2,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
1,22,21,22,15,11,12,9,4,6,4,5,4,2,1,0,4,1,0,0,1,2,2,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
1,18,24,14,17,6,14,10,5,7,4,2,4,1,4,2,0,3,4,1,3,3,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2

Data snippet (test_ex.csv):

1,16,30,40,44,8,7,1,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
1,19,32,16,18,32,5,7,4,6,1,1,0,2,1,0,1,0,1,0,2,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1
1,29,55,21,11,6,6,7,8,5,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2
1,23,18,11,16,10,7,5,7,9,3,7,8,5,3,4,0,3,3,3,0,1,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2

Edit 2:

Based on @sascha's comments, I tried some modifications (the relevant part is below). Same error.

def step_decay(epoch):
        drop_every = 10
        decay_rate = (learning_rate_*np.power(0.5, np.floor((1+drop_every)/drop_every))).astype('float32')
        return decay_rate

def step_exp_decay(epoch):
        return (learning_rate_/epochs).astype('float32')

earlyStopping = EarlyStopping(monitor='val_loss', patience=2)

sgd = SGD(lr = 0.0, momentum = 0.8, decay = 0.0, nesterov=False)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
if d1 == True:
        lrate = LearningRateScheduler(step_decay)
else:   
        lrate = LearningRateScheduler(step_exp_decay)

【Question Discussion】:

  • lr = 0.0 makes no sense at all. decay=0.0 is also bad. Do you understand the basics of how neural networks learn? (Judging only from the code here.)
  • Good question: this is what is done in the tutorial I linked to. It doesn't make sense to me either, but I'm no expert. See the section "Drop-Based Learning Rate Schedule".
  • I see. He is overriding this with his callback. But his callback calls a function. Yours doesn't. This is more of a basic Python programming question!
  • Thanks, do you know specifically how to do this?
  • In his callbacks list lrate is executed and is a function. In your callbacks, lrate is a variable, and the function that sets it is never called (because the function's name is not part of the callbacks list; so your learning_rate stays 0, and the same goes for the decay). I strongly recommend some Python study! (The basic problem is your renaming of various things; make sure you understand the code!)
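The last comment can be illustrated with a tiny stand-in for Keras's callback (a hypothetical mock, not Keras's actual implementation): the callbacks list needs an object wrapping the schedule *function*, which the callback then calls once per epoch; a bare float in the list is never called.

```python
# Hypothetical mock of LearningRateScheduler, for illustration only
class MockLearningRateScheduler:
    def __init__(self, schedule):
        # Store the schedule *function*; it is called later, once per epoch
        self.schedule = schedule

    def on_epoch_begin(self, epoch):
        lr = self.schedule(epoch)
        # The same check Keras 1 performs on the schedule's return value
        assert type(lr) == float, 'The output of the "schedule" function should be float.'
        return lr

def step_exp_decay(epoch):
    return 0.001 / 10  # learning_rate_ / epochs_, a plain Python float

# Correct usage: wrap the function, don't call it yourself
lrate = MockLearningRateScheduler(step_exp_decay)
print(type(lrate.on_epoch_begin(0)))  # <class 'float'>
```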

Tags: python machine-learning keras


【Solution 1】:

You can also try the ReduceLROnPlateau callback, which reduces the learning rate by a predefined factor when a monitored value has not improved for a set number of epochs. For example, the following halves the learning rate if validation accuracy has not improved for 5 epochs:

learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', 
                                            patience=5, 
                                            verbose=1, 
                                            factor=0.5, 
                                            min_lr=0.0001)
model.fit_generator(..., callbacks=[learning_rate_reduction], ...)

【Discussion】:

【Solution 2】:

First of all: I misread your code earlier, and my comments are obsolete! Sorry!

The error message leads us to the real problem here!

You can define the scheduler like this:

def step_decay(epoch):
    drop_every = 10
    decay_rate = (learning_rate_*np.power(0.5, np.floor((1+drop_every)/drop_every))).astype('float32')
    return decay_rate

Check the type it returns! It is <class 'numpy.float32'>. (Try it yourself with Python's type() function.)

For some reason Keras does not check these types very generally and expects <class 'float'> (Python's native float).

Just convert your numpy float to a native Python float:

Replace: decay_rate = (learning_rate_*np.power(0.5, np.floor((1+drop_every)/drop_every))).astype('float32')

with: decay_rate = (learning_rate_*np.power(0.5, np.floor((1+drop_every)/drop_every))).astype('float32').item()

Read the docs of numpy.ndarray.item (especially the note about the reason for this behaviour).
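A quick check confirms the conversion: `.item()` turns the NumPy scalar into a native Python float, which passes Keras's type check.

```python
import numpy as np

lr = (0.001 * np.power(0.5, 1.0)).astype('float32')

print(type(lr))                  # <class 'numpy.float32'>
print(type(lr.item()))           # <class 'float'>
print(type(lr.item()) == float)  # True
```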

The blog author didn't run into this problem because he didn't use numpy in his scheduler, but Python's math functions. Those produce a native float!
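For comparison, here is a sketch of a step-decay schedule using only Python's math module (the parameter names are illustrative, not from the tutorial). Note that, unlike the question's version, it actually uses the epoch argument, and it returns a native float with no conversion needed:

```python
import math

def step_decay(epoch, initial_lr=0.001, drop=0.5, epochs_drop=10.0):
    # math.pow and the multiplication produce a native Python float
    return initial_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))

print(step_decay(0))        # 0.001  (epochs 0-8 keep the initial rate)
print(step_decay(9))        # 0.0005 (halved after epochs_drop epochs)
print(type(step_decay(0)))  # <class 'float'>
```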

【Discussion】: