quri_parts.algo.optimizer.lbfgs module#

class OptimizerStateLBFGS(params, cost=0.0, status=OptimizerStatus.SUCCESS, niter=0, funcalls=0, gradcalls=0, grad=<factory>, p=<factory>, s=<factory>, y=<factory>, rho=<factory>, cost_prev=0.0, ind=0, a=<factory>)#

Bases: OptimizerState

Optimizer state for LBFGS.

Parameters:
  • params (algo.optimizer.interface.Params) –

  • cost (float) –

  • status (OptimizerStatus) –

  • niter (int) –

  • funcalls (int) –

  • gradcalls (int) –

  • grad (algo.optimizer.interface.Params) –

  • p (algo.optimizer.interface.Params) –

  • s (algo.optimizer.interface.Params) –

  • y (algo.optimizer.interface.Params) –

  • rho (algo.optimizer.interface.Params) –

  • cost_prev (float) –

  • ind (int) –

  • a (algo.optimizer.interface.Params) –

grad: Params#
p: Params#
s: Params#
y: Params#
rho: Params#
cost_prev: float = 0.0#
ind: int = 0#
a: Params#
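
The fields s, y, and rho hold the limited-memory curvature history, ind indexes into that history, and a stores the first-loop coefficients; this naming suggests the standard L-BFGS two-loop recursion. As a rough, hedged illustration of how such a history yields a search direction (this is not this class's internal code, and plain Python lists stand in for the buffer indexed by ind), here is a NumPy sketch of the two-loop recursion (Nocedal & Wright, Numerical Optimization, Algorithm 7.4):

import numpy as np

def two_loop_recursion(grad, s_list, y_list, rho_list):
    # Compute p = -H_k @ grad from the last m curvature pairs
    # (s_i, y_i) with rho_i = 1 / (y_i @ s_i), without ever forming
    # the inverse Hessian approximation H_k explicitly.
    q = grad.copy()
    a = np.empty(len(s_list))
    # First loop: newest pair to oldest.
    for i in reversed(range(len(s_list))):
        a[i] = rho_list[i] * np.dot(s_list[i], q)
        q -= a[i] * y_list[i]
    # Scale by gamma_k = (s_{k-1} @ y_{k-1}) / (y_{k-1} @ y_{k-1}),
    # the usual initial inverse-Hessian guess.
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for i in range(len(s_list)):
        b = rho_list[i] * np.dot(y_list[i], r)
        r += (a[i] - b) * s_list[i]
    return -r  # descent direction
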
class LBFGS(c1=0.0001, c2=0.4, amin=1e-100, amax=1e+100, maxiter_linesearch=20, rho_const=1000.0, m=5, gtol=1e-06)#

Bases: Optimizer

L-BFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno) optimizer. Partially inspired by the SciPy implementation of the BFGS optimizer [1]. For the details of the algorithm, see [2].

Parameters:
  • c1 (float) – coefficient in the strong Wolfe conditions. It bounds the acceptable decrease of the cost function in the line search (the sufficient-decrease condition; see the conditions stated after this list).

  • c2 (float) – coefficient in the strong Wolfe conditions. It bounds the magnitude of the directional derivative of the cost function in the line search (the curvature condition).

  • amin (float) – lower bound on the step size computed in the line search.

  • amax (float) – upper bound on the step size computed in the line search.

  • maxiter_linesearch (int) – the maximum number of iterations used in the line search.

  • rho_const (float) – when computing \(1/x\), where \(x\) is a scalar, a zero division error can occur. In that case \(1/x\) is replaced by rho_const.

  • m (int) – each parameter update uses the information from the last m steps.

  • gtol (Optional[float]) – If not None, it is used to determine whether the optimization has terminated successfully: when the infinity norm of the gradient of the cost function falls below gtol, the optimization is regarded as having converged. The infinity norm of the gradient \(g\) is defined as \(||g||_{\infty} = \max\{|g_1|, |g_2|, \ldots, |g_n|\}\).
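
For reference, c1 and c2 enter the strong Wolfe conditions on the step size \(\alpha\) along the search direction \(p_k\); the statement below is the standard textbook form (see [2]), not a transcription of this library's line-search code:

\[f(x_k + \alpha p_k) \le f(x_k) + c_1 \alpha \nabla f_k^\top p_k, \qquad |\nabla f(x_k + \alpha p_k)^\top p_k| \le c_2 |\nabla f_k^\top p_k|\]

The first inequality is the sufficient-decrease condition governed by c1, the second is the curvature condition governed by c2, and \(0 < c_1 < c_2 < 1\) is required for the line search to be well defined.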

Refs:

[1] https://github.com/scipy/scipy/blob/master/scipy/optimize/optimize.py

[2] Jorge Nocedal and Stephen J. Wright, Numerical Optimization (Springer, New York, 2006).

get_init_state(init_params)#

Returns an initial state for optimization.

Parameters:

init_params (algo.optimizer.interface.Params) –

Return type:

OptimizerStateLBFGS

step(state, cost_function, grad_function=None)#

Runs a single optimization step and returns a new state.

Parameters:
  • state (OptimizerState) –

  • cost_function (algo.optimizer.interface.CostFunction) –

  • grad_function (algo.optimizer.interface.GradientFunction | None) –

Return type:

OptimizerStateLBFGS
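
As a usage sketch: the quadratic cost function and its gradient below are illustrative placeholders, and the top-level import path and the OptimizerStatus members checked in the loop are assumptions based on the interface documented here, not guaranteed API.

import numpy as np
from quri_parts.algo.optimizer import LBFGS, OptimizerStatus

# Illustrative cost function and its analytic gradient (placeholders).
def cost_fn(params):
    return float(np.sum((params - 1.0) ** 2))

def grad_fn(params):
    return 2.0 * (params - 1.0)

optimizer = LBFGS(m=5, gtol=1e-6)
state = optimizer.get_init_state(np.zeros(4))

# Drive single steps until the optimizer reports convergence or failure.
while True:
    state = optimizer.step(state, cost_fn, grad_fn)
    if state.status in (OptimizerStatus.CONVERGED, OptimizerStatus.FAILED):
        break

print(state.status, state.niter, state.cost, state.params)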