quri_parts.algo.optimizer.spsa module#

class quri_parts.algo.optimizer.spsa.OptimizerStateSPSA(params: 'npt.NDArray[np.float_]', cost: float = 0.0, status: quri_parts.algo.optimizer.interface.OptimizerStatus = <OptimizerStatus.SUCCESS: 1>, niter: int = 0, funcalls: int = 0, gradcalls: int = 0, rng: numpy.random._generator.Generator = Generator(PCG64))#

Bases: OptimizerState

rng: Generator = Generator(PCG64)#
class quri_parts.algo.optimizer.spsa.SPSA(a: float = 0.6283185307179586, c: float = 0.1, alpha: float = 0.602, gamma: float = 0.101, A: float = 0.0, ftol: float | None = 1e-05, rng_seed: int | None = None)#

Bases: Optimizer

Simultaneous perturbation stochastic approximation (SPSA) optimizer. The implementation is heavily inspired by [1]. Given the parameters \(\theta_k\) at an iteration \(k\), the updated parameters \(\theta_{k+1}\) are given by

\[\begin{aligned}
\theta_{k+1} &= \theta_k - a_k g_k(\theta_k), \\
g_k(\theta_k) &= \frac{f(\theta_k + c_k \Delta_k) - f(\theta_k - c_k \Delta_k)}{2 c_k} \Delta_k^{-1}, \\
a_k &= a / (A + k + 1)^\alpha, \quad c_k = c / (k + 1)^\gamma,
\end{aligned}\]

where \(f\) is the cost function to be minimized, and \(\Delta_k\) is a vector generated randomly. \(\Delta_k^{-1}\) is defined as the element-wise inverse of \(\Delta_k\). The dimension of \(\Delta_k\) is the same as that of \(\theta_k\). In this optimizer, \(\Delta_k\) is generated from a Bernoulli distribution. Note that \(g_k(\theta_k)\) works as an estimate of the first order gradient of the cost function \(f(\theta_k)\).

The main advantage of the SPSA optimizer is that it requires only 2 function evaluations to estimate the gradient in each iteration, whereas standard gradient-based optimizers require \(2p\) or more function evaluations per iteration (\(p\): the number of parameters) to compute the gradient. Hence SPSA can be useful when performing VQE with sampling.
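As a concrete illustration, the two-evaluation gradient estimate \(g_k(\theta_k)\) can be sketched in NumPy. The function name `spsa_gradient_estimate` is hypothetical and not part of the quri-parts API:

```python
import numpy as np


def spsa_gradient_estimate(f, theta, c_k, rng):
    """Estimate the gradient of f at theta from 2 function evaluations.

    A minimal sketch of the SPSA gradient formula above; not the
    quri-parts implementation.
    """
    # Delta_k: Bernoulli +/-1 perturbation with the same shape as theta
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Only two evaluations of the cost function, regardless of dimension
    f_plus = f(theta + c_k * delta)
    f_minus = f(theta - c_k * delta)
    # Element-wise inverse of delta (for +/-1 entries, 1/delta == delta)
    return (f_plus - f_minus) / (2.0 * c_k) * (1.0 / delta)
```

A single estimate is noisy, but it is unbiased: averaging many estimates recovers the true gradient, which is why SPSA tolerates the sampling noise in VQE.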

Parameters:

  • a – \(a\) in the parameter update rule defined above.

  • c – \(c\) in the parameter update rule defined above.

  • alpha – \(\alpha\) in the parameter update rule defined above.

  • gamma – \(\gamma\) in the parameter update rule defined above.

  • A – \(A\) in the parameter update rule defined above. A recommended choice is 10 (or 100) times the maximum number of iterations that the optimizer runs.

  • ftol – If not None, judge convergence by cost function tolerance. See ftol() for details.

  • rng_seed – Seed for the pseudo-random number generator used to sample \(\Delta_k\).
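The ftol criterion can be sketched as a relative-tolerance check on successive cost values. This is an assumption about the criterion for illustration only; see ftol() for the actual definition used by quri-parts:

```python
def converged(cost, prev_cost, ftol=1e-5):
    """Hypothetical sketch of a relative cost-tolerance check.

    Declares convergence when the change in cost is small relative
    to the magnitude of the cost itself.
    """
    # Small constant in the denominator guards against division-free
    # comparison when both costs are ~0
    return 2.0 * abs(cost - prev_cost) <= ftol * (abs(cost) + abs(prev_cost) + 1e-20)
```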

[1]: J. C. Spall, “Implementation of the simultaneous perturbation algorithm for stochastic optimization,” IEEE Transactions on Aerospace and Electronic Systems, vol. 34, no. 3, pp. 817-823, July 1998, doi: 10.1109/7.705889.

get_init_state(init_params: npt.NDArray[np.float_]) OptimizerStateSPSA#

Returns an initial state for optimization.

step(state: OptimizerState, cost_function: Callable[[npt.NDArray[np.float_]], float], grad_function: Callable[[npt.NDArray[np.float_]], npt.NDArray[np.float_]] | None = None) OptimizerStateSPSA#

Runs a single optimization step and returns a new state.
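Putting the gain schedules and the gradient estimate together, a complete SPSA iteration loop can be sketched as follows. This is a self-contained toy implementing the update rule above, not the quri-parts implementation; the step size `a` is tuned for the toy problem (good values are problem-dependent):

```python
import numpy as np


def run_spsa(f, theta0, n_iter=500, a=0.2, c=0.1,
             alpha=0.602, gamma=0.101, A=0.0, seed=0):
    """Minimal SPSA loop following the update rule above (a sketch)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(n_iter):
        # Decaying gain sequences a_k and c_k
        a_k = a / (A + k + 1) ** alpha
        c_k = c / (k + 1) ** gamma
        # Bernoulli +/-1 perturbation Delta_k
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two-evaluation gradient estimate g_k(theta_k)
        g = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2.0 * c_k) * (1.0 / delta)
        # Parameter update
        theta = theta - a_k * g
    return theta
```

For example, minimizing \(f(\theta) = \sum_i (\theta_i - 1)^2\) from \(\theta_0 = (0, 0)\) drives the parameters toward the minimum at \((1, 1)\) using only two cost evaluations per iteration.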