【Question Title】: Optimizing four parameters with Python scipy.optimize.fmin_l_bfgs_b raises an error
【Posted】: 2015-10-29 05:16:32
【Question Description】:

I am writing an active-learning algorithm that uses the L-BFGS routine from scipy.optimize. I need to optimize four parameters: alpha, beta, W and gamma.

However, it does not work; it fails with the error

optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0=np.array(alpha,beta,W,gamma), fprime=func_grad)
ValueError: only 2 non-keyword arguments accepted   

Note that in the last line of the code, x0 is the initial guess for the four parameters. If I change it to x0=np.array((alpha,beta,W,gamma),dtype=float), I get the error

ValueError: setting an array element with a sequence.

I am not sure why these errors occur.
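For context, both errors can be reproduced in isolation. np.array accepts at most the object and dtype as positional arguments, and a tuple of pieces with different shapes is "ragged" and cannot be coerced into a single float array. A minimal sketch, assuming the same shapes as in the code below:

```python
import numpy as np

# Same shapes as in the question: alpha is length-4, beta a scalar,
# W a 3x4 matrix, gamma length-5.
alpha = np.ones(4)
beta = 1
W = np.ones((3, 4))
gamma = np.ones(5)

# np.array takes at most two positional arguments (object, dtype),
# so passing four raises an error.
try:
    np.array(alpha, beta, W, gamma)
except (TypeError, ValueError) as e:
    print("positional-argument error:", e)

# A tuple of mixed-shape pieces cannot be coerced into one float array.
try:
    np.array((alpha, beta, W, gamma), dtype=float)
except ValueError as e:
    print("ragged-sequence error:", e)
```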

from sys import argv
import numpy as np
import scipy as sp
import pandas as pd
import scipy.stats as sps

num_labeler = 3
num_instance = 5

X = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5]])
Z = np.array([1,0,1,0,1])
Y = np.array([[1,0,1],[0,1,0],[0,0,0],[1,1,1],[1,0,0]])

W = np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
gamma = np.ones(5)
alpha = np.ones(4)
beta = 1


def log_p_y_xz(yit,zi,sigmati): #log P(y_it|x_i,z_i)
    return np.log(sps.norm(zi,sigmati).pdf(yit))#tested

def log_p_z_x(alpha,beta,xi): #log P(z_i=1|x_i)
    return -np.log(1+np.exp(-np.dot(alpha,xi)-beta))#tested

def sigma_eta_ti(xi, w_t, gamma_t): # 1+exp(-w_t x_i -gamma_t)^-1
    return 1/(1+np.exp(-np.dot(xi,w_t)-gamma_t)) #tested

def df_alpha(X,Y,Z,W,alpha,beta,gamma):#df/dalpha
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)*X[i]/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance))

def df_beta(X,Y,Z,W,alpha,beta,gamma):#df/dbeta
    return np.sum((2/(1+np.exp(-np.dot(alpha,X[i])-beta))-1)*np.exp(-np.dot(alpha,X[i])-beta)/(1+np.exp(-np.dot(alpha,X[i])-beta))**2 for i in range (num_instance))

def df_w(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dw
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))*X[i]for t in range(num_labeler)) for i in range (num_instance))

def df_gamma(X,Y,Z,W,alpha,beta,gamma):#df/sigma * sigma/dgamma
    return np.sum(np.sum((-3)*(Y[i][t]**2-(-np.log(1+np.exp(-np.dot(alpha,X[i])-beta)))*(2*Y[i][t]-1))*(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**4)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))+(1/(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))**2)*(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t])))*(1-(1/(1+np.exp(-np.dot(X[i],W[t])-gamma[t]))))for t in range(num_labeler)) for i in range (num_instance))

def func(para, *args):
    #the function to minimize
    #parameters
    alpha = para[0]#alpha should be an array
    beta = para[1]
    W = para[2]
    gamma = para[3]
    #args
    X = args[0]
    Y = args[1]
    Z = args[2]        
    return  np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i],W[t],gamma[t]))+log_p_z_x(alpha, beta, X[i]) for t in range(num_labeler)) for i in range (num_instance))


def func_grad(para, *args):
    #para have 4 values
    alpha = para[0]#alpha should be an array
    beta = para[1]
    W = para[2]
    gamma = para[3]
    #args
    X = args[0]
    Y = args[1]
    Z = args[2]
    #gradients
    d_f_a = df_alpha(X,Y,Z,W,alpha,beta,gamma)
    d_f_b = df_beta(X,Y,Z,W,alpha,beta,gamma)
    d_f_w = df_w(X,Y,Z,W,alpha,beta,gamma)
    d_f_g = df_gamma(X,Y,Z,W,alpha,beta,gamma)
    return np.array([d_f_a, d_f_b,d_f_w,d_f_g])


optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0 =np.array(alpha,beta,W,gamma), fprime = func_grad)

【Question Discussion】:

  • np.array(alpha,beta,W,gamma) on the last line is invalid.

Tags: python scipy mathematical-optimization


【Solution 1】:

The scipy optimization routines can only optimize a one-dimensional vector of parameters. It looks like you are trying to optimize a tuple of mixed values containing a scalar, vectors, and a matrix.

What you want to do is flatten all the relevant parameter values into a one-dimensional array, and then have your func unpack and use those values appropriately.


Edit: I'll go ahead and create a convenience function that extracts the parameters; for example:

def get_params(para):
    # extract parameters from 1D parameter vector
    assert len(para) == 22
    alpha = para[0:4]
    beta = para[4]
    W = para[5:17].reshape(3, 4)
    gamma = para[17:]
    return alpha, beta, gamma, W

def func(para, *args):
    #the function to minimize
    #parameters
    alpha, beta, gamma, W = get_params(para)

    #args
    X = args[0]
    Y = args[1]
    Z = args[2]        
    return  np.sum(np.sum(log_p_y_xz(Y[i][t], Z[i], sigma_eta_ti(X[i],W[t],gamma[t]))+log_p_z_x(alpha, beta, X[i]) for t in range(num_labeler)) for i in range (num_instance))


def func_grad(para, *args):
    #para have 4 values
    alpha, beta, gamma, W = get_params(para)

    #args
    X = args[0]
    Y = args[1]
    Z = args[2]
    #gradients
    d_f_a = df_alpha(X,Y,Z,W,alpha,beta,gamma)
    d_f_b = df_beta(X,Y,Z,W,alpha,beta,gamma)
    d_f_w = df_w(X,Y,Z,W,alpha,beta,gamma)
    d_f_g = df_gamma(X,Y,Z,W,alpha,beta,gamma)
    return np.array([d_f_a, d_f_b,d_f_w,d_f_g])


x0 = np.concatenate([np.ravel(alpha), np.ravel(beta), np.ravel(W), np.ravel(gamma)])
optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0=x0, fprime=func_grad)

It cannot run as-is, because your func() and func_grad() expect additional arguments that are not specified in your code snippet, but this change addresses the specific issue you asked about.
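To sanity-check the packing, a quick round trip through np.concatenate and get_params (reproduced here so the sketch is self-contained, with arbitrary example values) should recover each piece with its original shape:

```python
import numpy as np

def get_params(para):
    # Unpack the 22-element parameter vector: 4 (alpha) + 1 (beta)
    # + 12 (W as 3x4) + 5 (gamma).
    assert len(para) == 22
    alpha = para[0:4]
    beta = para[4]
    W = para[5:17].reshape(3, 4)
    gamma = para[17:]
    return alpha, beta, gamma, W

alpha = np.ones(4)
beta = 1.0
W = np.arange(12, dtype=float).reshape(3, 4)
gamma = np.ones(5)

# Flatten everything into one 1D vector, then unpack it again.
x0 = np.concatenate([np.ravel(alpha), np.ravel(beta),
                     np.ravel(W), np.ravel(gamma)])
a, b, g, w = get_params(x0)
print(np.allclose(a, alpha), b == beta,
      np.allclose(w, W), np.allclose(g, gamma))  # True True True True
```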

【Discussion】:

  • Your comment is right... but I don't know how to flatten all the parameters into a one-dimensional array...
  • So could you give me some hints? Thank you very much~
  • Thank you very much for your code. It is very helpful. The last line of the code should be: optimLogitLBFGS = sp.optimize.fmin_l_bfgs_b(func, x0=x0, args=(X,Y,Z), fprime=func_grad). I still cannot run my program; I get the error _lbfgsb.error: failed in converting 7th argument 'g' of _lbfgsb.setulb to C/Fortran array: 0-th dimension must be fixed to 22 but got 4. I don't know why.
  • func_grad needs to return a vector of the same length as x0, which means the outputs of your df functions must be the same size as their inputs. That is not the case here: I would focus on making sure the df functions are defined correctly.
  • Oh, I see you asked your question again and got a similar answer. Good luck!
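Following the last two comments, the gradient side needs the same treatment as x0: flatten each partial derivative and concatenate them, so func_grad returns one length-22 vector rather than an array of 4 objects. A minimal sketch, where fake_gradients is a hypothetical stand-in for the df_alpha/df_beta/df_w/df_gamma functions in the question (assuming their outputs have the same shapes as the corresponding parameters):

```python
import numpy as np

def fake_gradients():
    # Stand-ins with the same shapes as the parameters themselves.
    d_f_a = np.zeros(4)        # shape of alpha
    d_f_b = 0.0                # scalar, like beta
    d_f_w = np.zeros((3, 4))   # shape of W
    d_f_g = np.zeros(5)        # shape of gamma
    return d_f_a, d_f_b, d_f_w, d_f_g

def func_grad(para, *args):
    d_f_a, d_f_b, d_f_w, d_f_g = fake_gradients()
    # Flatten each piece and concatenate in the same order used to
    # build x0, giving the length-22 vector L-BFGS-B expects.
    return np.concatenate([np.ravel(d_f_a), np.ravel(d_f_b),
                           np.ravel(d_f_w), np.ravel(d_f_g)])

print(func_grad(np.zeros(22)).shape)  # (22,)
```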