quri_parts.algo.optimizer.spsa module#
- class OptimizerStateSPSA(params: 'Params', cost: 'float' = 0.0, status: 'OptimizerStatus' = <OptimizerStatus.SUCCESS: 1>, niter: 'int' = 0, funcalls: 'int' = 0, gradcalls: 'int' = 0, rng: 'np.random.Generator' = Generator(PCG64))#
Bases:
OptimizerState
- Parameters:
params (algo.optimizer.interface.Params) –
cost (float) –
status (OptimizerStatus) –
niter (int) –
funcalls (int) –
gradcalls (int) –
rng (Generator) –
- rng: np.random.Generator = Generator(PCG64)#
- class SPSA(a=0.6283185307179586, c=0.1, alpha=0.602, gamma=0.101, A=0.0, ftol=1e-05, rng_seed=None)#
Bases:
Optimizer
Simultaneous perturbation stochastic approximation (SPSA) optimizer. The implementation is heavily inspired by [1]. Given the parameters \(\theta_k\) at an iteration \(k\), the updated parameters \(\theta_{k+1}\) are given by
\[\begin{split}\theta_{k+1} &= \theta_k - a_k g_k(\theta_k), \\ g_k(\theta_k) &= \frac{f(\theta_k + c_k \Delta_k) - f(\theta_k - c_k \Delta_k)}{2 c_k} \Delta_k^{-1}, \\ a_k &= a / (A + k + 1)^\alpha, \\ c_k &= c / (k + 1)^\gamma,\end{split}\]
where \(f\) is the cost function to be minimized and \(\Delta_k\) is a randomly generated vector of the same dimension as \(\theta_k\); \(\Delta_k^{-1}\) denotes the element-wise inverse of \(\Delta_k\). In this optimizer, the entries of \(\Delta_k\) are drawn from a Bernoulli distribution. Note that \(g_k(\theta_k)\) serves as an estimate of the first-order gradient of the cost function at \(\theta_k\).
The main advantage of the SPSA optimizer is that it requires only 2 function evaluations per iteration to estimate the gradient, whereas standard gradient-based optimizers require \(2p\) or more function evaluations per iteration (\(p\): the number of parameters) to compute the gradient. Hence SPSA can be useful when performing VQE with sampling.
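To make the update rule concrete, the following is a minimal NumPy sketch of a single SPSA iteration on a toy quadratic cost. It illustrates the formulas above only; the function spsa_step and all variable names are made up for this example and are not part of this class.

```python
import numpy as np

def spsa_step(f, theta, k, a=0.6283185307179586, c=0.1,
              alpha=0.602, gamma=0.101, A=0.0, rng=None):
    # One SPSA update following the rule above (illustrative only).
    rng = np.random.default_rng() if rng is None else rng
    a_k = a / (A + k + 1) ** alpha
    c_k = c / (k + 1) ** gamma
    # Bernoulli +/-1 perturbation; its element-wise inverse is itself.
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two cost evaluations estimate the gradient, whatever the dimension.
    g_k = (f(theta + c_k * delta) - f(theta - c_k * delta)) / (2 * c_k) / delta
    return theta - a_k * g_k

rng = np.random.default_rng(0)
theta = np.array([1.0, -2.0, 0.5])
for k in range(200):
    theta = spsa_step(lambda x: float(np.sum(x**2)), theta, k, rng=rng)
# theta is now close to the minimizer (0, 0, 0) of the quadratic cost.
```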
- Parameters:
a (float) – \(a\) in the parameter update rule that is defined above.
c (float) – \(c\) in the parameter update rule that is defined above.
alpha (float) – \(\alpha\) in the parameter update rule that is defined above.
gamma (float) – \(\gamma\) in the parameter update rule that is defined above.
A (float) – \(A\) in the parameter update rule that is defined above. A recommended choice is A = (10 or 100) multiplied by the maximum number of iterations that the optimizer runs.
ftol (Optional[float]) – If not None, convergence is judged by the cost function tolerance. See
ftol()
for details.
rng_seed (Optional[int]) – Seed for the random number generator used to sample \(\Delta_k\).
- Ref:
- [1]: J. C. Spall, "Implementation of the simultaneous perturbation algorithm for stochastic optimization," IEEE Transactions on Aerospace and Electronic Systems, vol. 34, no. 3, pp. 817-823, July 1998, doi: 10.1109/7.705889.
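For illustration, constructing the optimizer might look like the sketch below. The values are arbitrary choices for the example, not recommendations; only the keyword names come from the signature above.

```python
from quri_parts.algo.optimizer.spsa import SPSA

optimizer = SPSA(
    c=0.1,         # perturbation magnitude scale
    alpha=0.602,   # decay exponent for the update gain a_k
    gamma=0.101,   # decay exponent for the perturbation size c_k
    A=100.0,       # stability constant, scaled to the planned iteration count
    ftol=1e-5,     # judge convergence by cost function tolerance
    rng_seed=42,   # fix the Bernoulli perturbations for reproducibility
)
```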
- get_init_state(init_params)#
Returns an initial state for optimization.
- Parameters:
init_params (algo.optimizer.interface.Params) –
- Return type:
OptimizerStateSPSA
- step(state, cost_function, grad_function=None)#
Run a single optimization step and return a new state.
- Parameters:
state (OptimizerState) –
cost_function (algo.optimizer.interface.CostFunction) –
grad_function (algo.optimizer.interface.GradientFunction | None) –
- Return type:
OptimizerStateSPSA
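As a usage sketch, a driver loop over get_init_state() and step() might look like the following. It assumes that OptimizerStatus is importable from the interface module referenced above and uses a toy cost function as a stand-in for, e.g., a sampled VQE cost.

```python
import numpy as np

from quri_parts.algo.optimizer.interface import OptimizerStatus
from quri_parts.algo.optimizer.spsa import SPSA

def cost_fn(params):
    # Toy stand-in for a sampled VQE cost: any Params -> float callable works.
    return float(np.sum(np.asarray(params) ** 2))

optimizer = SPSA(ftol=1e-5, rng_seed=0)
state = optimizer.get_init_state(np.array([0.3, -0.2, 0.1]))

for _ in range(1000):  # cap iterations in case convergence is never reached
    # SPSA estimates the gradient itself, so no grad_function is passed.
    state = optimizer.step(state, cost_fn)
    if state.status != OptimizerStatus.SUCCESS:
        break

print(state.niter, state.cost, state.status)
```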