
pypose.optim.LevenbergMarquardt

class pypose.optim.LevenbergMarquardt(model, solver=None, strategy=None, kernel=None, corrector=None, weight=None, reject=16, min=1e-06, max=1e+32, vectorize=True)[source]

The Levenberg-Marquardt (LM) algorithm for solving non-linear least squares problems, also known as the damped least squares (DLS) method. This implementation optimizes the model parameters to approximate the target, which can be a Tensor/LieTensor or a tuple of Tensors/LieTensors.

\[\bm{\theta}^* = \arg\min_{\bm{\theta}} \sum_i \rho\left((\bm{f}(\bm{\theta},\bm{x}_i)-\bm{y}_i)^T \mathbf{W}_i (\bm{f}(\bm{\theta},\bm{x}_i)-\bm{y}_i)\right), \]

where \(\bm{f}()\) is the model, \(\bm{\theta}\) are the parameters to be optimized, \(\bm{x}\) is the model input, \(\mathbf{W}_i\) is a square positive-definite weight matrix, and \(\rho\) is a robust kernel function that reduces the effect of outliers. \(\rho(x) = x\) is used by default.
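
As a concrete reading of this objective, the sketch below evaluates \(\sum_i \rho(\bm{r}_i^T \mathbf{W} \bm{r}_i)\) for a batch of pre-computed residuals with a hand-written Huber-style kernel. This is a plain PyTorch illustration of the formula only, not how PyPose evaluates the loss internally, and the kernel shown is just one possible choice of \(\rho\).

>>> import torch
>>> residual = torch.randn(5, 2)      # r_i = f(theta, x_i) - y_i, residual dimension d = 2
>>> W = torch.eye(2)                  # a shared positive-definite weight matrix W_i
>>> s = torch.einsum('nd,de,ne->n', residual, W, residual)   # s_i = r_i^T W r_i
>>> def huber(s, delta=1.0):          # an example robust kernel rho (rho(x) = x is the default)
...     return torch.where(s <= delta**2, s, 2 * delta * s.sqrt() - delta**2)
...
>>> objective = huber(s).sum()        # the value minimized over theta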

\[\begin{aligned}
&\rule{113mm}{0.4pt} \\
&\textbf{input}: \lambda~\text{(damping)}, \bm{\theta}_0~\text{(params)}, \bm{f}~\text{(model)}, \bm{x}~(\text{input}), \bm{y}~(\text{target}) \\
&\hspace{12mm} \rho~(\text{kernel}), \epsilon_{s}~(\text{min}), \epsilon_{l}~(\text{max}) \\
&\rule{113mm}{0.4pt} \\
&\textbf{for} \: t=1 \: \textbf{to} \: \ldots \: \textbf{do} \\
&\hspace{5mm} \mathbf{J} \leftarrow {\dfrac{\partial}{\partial \bm{\theta}_{t-1}}} \left(\sqrt{\mathbf{W}}\bm{f}\right)~(\sqrt{\cdot}~\text{is the Cholesky decomposition}) \\
&\hspace{5mm} \mathbf{A} \leftarrow (\mathbf{J}^T \mathbf{J}).\mathrm{diagonal\_clamp}(\epsilon_{s}, \epsilon_{l}) \\
&\hspace{5mm} \mathbf{R} = \sqrt{\mathbf{W}} (\bm{f}(\bm{\theta}_{t-1}, \bm{x})-\bm{y}) \\
&\hspace{5mm} \mathbf{R}, \mathbf{J} = \mathrm{corrector}(\rho, \mathbf{R}, \mathbf{J}) \\
&\hspace{5mm} \textbf{while}~\text{first iteration}~\textbf{or}~\text{loss not decreasing} \\
&\hspace{10mm} \mathbf{A} \leftarrow \mathbf{A} + \lambda \mathrm{diag}(\mathbf{A}) \\
&\hspace{10mm} \bm{\delta} = \mathrm{solver}(\mathbf{A}, -\mathbf{J}^T\mathbf{R}) \\
&\hspace{10mm} \lambda \leftarrow \mathrm{strategy}(\lambda, \text{model information}) \\
&\hspace{10mm} \bm{\theta}_t \leftarrow \bm{\theta}_{t-1} + \bm{\delta} \\
&\hspace{10mm} \textbf{if}~\text{loss not decreasing}~\textbf{and}~\text{maximum reject step not reached} \\
&\hspace{15mm} \bm{\theta}_t \leftarrow \bm{\theta}_t - \bm{\delta}~(\text{reject step, revert to}~\bm{\theta}_{t-1}) \\
&\rule{113mm}{0.4pt} \\[-1.ex]
&\textbf{return} \: \bm{\theta}_t \\[-1.ex]
&\rule{113mm}{0.4pt} \\[-1.ex]
\end{aligned}\]
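
To make the loop above concrete, here is a minimal, self-contained sketch of one damped step in plain PyTorch, assuming an identity weight, the default kernel \(\rho(x)=x\) (so no corrector), ordinary tensor parameters, and a dense Cholesky solve. It only illustrates the damped normal equations; the actual optimizer additionally handles LieTensor parameters, kernels, correctors, the damping strategy, and step rejection.

>>> import torch
>>> def lm_step(residual_fn, theta, damping=1e-6, min=1e-6, max=1e32):
...     J = torch.autograd.functional.jacobian(residual_fn, theta)   # model Jacobian
...     R = residual_fn(theta)                                       # model residual
...     A = J.T @ J                                                  # approximate Hessian
...     A.diagonal().clamp_(min, max)                                # diagonal_clamp(min, max)
...     A = A + damping * torch.diag(A.diagonal())                   # LM damping term
...     L = torch.linalg.cholesky(A)
...     delta = torch.cholesky_solve(-J.T @ R.unsqueeze(-1), L)      # solve A @ delta = -J^T R
...     return theta + delta.squeeze(-1)
...
>>> theta = torch.randn(3)
>>> residual_fn = lambda t: t**2 - torch.tensor([1., 2., 3.])        # toy residual f(theta) - y
>>> theta = lm_step(residual_fn, theta)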
Parameters
  • model (nn.Module) – a module containing learnable parameters.

  • solver (nn.Module, optional) – a linear solver. If None, solver.Cholesky() is used. Default: None.

  • strategy (object, optional) – strategy for adjusting the damping factor. If None, strategy.TrustRegion() is used. Default: None.

  • kernel (nn.Module, optional) – a robust kernel function. Default: None.

  • corrector (nn.Module, optional) – a Jacobian and model residual corrector to fit the kernel function. If a kernel is given but a corrector is not specified, auto correction is used. Auto correction can be unstable when the robust model has an indefinite Hessian. Default: None.

  • weight (Tensor, optional) – a square positive-definite matrix defining the weight of the model residual. Use this only when all inputs share the same weight matrix. It is ignored if a weight is given when calling the step() or optimize() method. Default: None.

  • reject (integer, optional) – the maximum number of rejections of unsuccessful steps. Default: 16.

  • min (float, optional) – the lower-bound of the Hessian diagonal. Default: 1e-6.

  • max (float, optional) – the upper-bound of the Hessian diagonal. Default: 1e32.

  • vectorize (bool, optional) – the method of computing the Jacobian. If True, the gradient of each scalar in the output with respect to the model parameters is computed in parallel in reverse mode. See pypose.optim.functional.modjac() for more details. Default: True.

Available solvers: solver.PINV(), solver.LSTSQ(), solver.Cholesky().

Available kernels: kernel.Huber(), kernel.PseudoHuber(), kernel.Cauchy().

Available correctors: corrector.FastTriggs(), corrector.Triggs().

Available strategies: strategy.Constant(), strategy.Adaptive(), strategy.TrustRegion().
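
As an illustration, the components above can be combined when constructing the optimizer. The sketch below assumes each component's default constructor arguments and the PoseInv module defined in the example further down; since no corrector is passed, auto correction is applied to fit the kernel.

>>> import pypose as pp
>>> posinv = PoseInv(2, 2)     # the module defined in the example below
>>> optimizer = pp.optim.LM(posinv,
...                         solver=pp.optim.solver.Cholesky(),
...                         kernel=pp.optim.kernel.Huber(),
...                         strategy=pp.optim.strategy.TrustRegion())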

Warning

The output of the model \(\bm{f}(\bm{\theta},\bm{x}_i)\) and the target \(\bm{y}_i\) can have any shape; their last dimension \(d\) is always taken as the dimension of the model residual, and the (weighted) inner product of the residual is the input to the kernel function. This is useful for residuals like the re-projection error, whose last dimension is 2.

Note that auto correction is equivalent to the method of ‘square-rooting the kernel’ mentioned in Section 3.3 of the following paper. It replaces the \(d\)-dimensional residual with a one-dimensional one, which loses residual-level structural information.

Therefore, users need to keep a last dimension of size 1 in the model output and target even when the model residual is a scalar. If the model output has only one dimension, the model Jacobian becomes a row vector instead of a matrix, which loses sample-level structural information, although computing a Jacobian vector is faster.
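
For instance, a model whose per-sample residual is a scalar should still return a tensor whose last dimension has size 1, e.g. by unsqueezing it. The module below is a hypothetical sketch that only demonstrates this shape handling:

>>> import torch
>>> from torch import nn
>>> class ScalarResidual(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.theta = nn.Parameter(torch.randn(3))
...
...     def forward(self, input):
...         r = input @ self.theta       # shape (N,): one scalar residual per sample
...         return r.unsqueeze(-1)       # shape (N, 1): keep the residual dimension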

step(input, target=None, weight=None)[source]

Performs a single optimization step.

Parameters
  • input (Tensor/LieTensor or tuple of Tensors/LieTensors) – the input to the model.

  • target (Tensor/LieTensor) – the model target to optimize. If not given, the squared model output is minimized. Default: None.

  • weight (Tensor, optional) – a square positive definite matrix defining the weight of model residual. Default: None.

Returns

the minimized model loss.

Return type

Tensor
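
A possible call passing an explicit weight is sketched below, assuming the PoseInv module from the example further down, whose residual dimension is 6, so the shared weight is a 6-by-6 positive-definite matrix:

>>> import torch, pypose as pp
>>> posinv = PoseInv(2, 2)                   # the module defined in the example below
>>> optimizer = pp.optim.LM(posinv)
>>> input = pp.randn_SE3(2, 2)
>>> weight = torch.eye(6)                    # shared positive-definite weight for the 6-D residual
>>> loss = optimizer.step(input, weight=weight)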

Note

The (non-negative) damping factor \(\lambda\) can be adjusted at each iteration. If the residual reduces rapidly, a smaller value can be used, bringing the algorithm closer to the Gauss-Newton algorithm, whereas if an iteration gives insufficient residual reduction, \(\lambda\) can be increased, giving a step closer to the gradient descent direction.

See more details of the Levenberg-Marquardt (LM) algorithm on Wikipedia.
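
In terms of the damped system solved above, \((\mathbf{J}^T\mathbf{J} + \lambda\,\mathrm{diag}(\mathbf{J}^T\mathbf{J}))\,\bm{\delta} = -\mathbf{J}^T\mathbf{R}\), the two limits are:

\[\lambda \to 0: \quad \bm{\delta} \to -(\mathbf{J}^T\mathbf{J})^{-1}\mathbf{J}^T\mathbf{R} \quad (\text{Gauss-Newton step}), \qquad \lambda \to \infty: \quad \delta_j \approx -\frac{[\mathbf{J}^T\mathbf{R}]_j}{\lambda\,[\mathbf{J}^T\mathbf{J}]_{jj}} \quad (\text{small, diagonally scaled gradient-descent step}).\]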

Note

Different from PyTorch optimizers like SGD, where the model error has to be a scalar, the output of model \(\bm{f}\) can be a Tensor/LieTensor or a tuple of Tensors/LieTensors.

Example

Optimizing a simple module to approximate pose inversion.

>>> class PoseInv(nn.Module):
...     def __init__(self, *dim):
...         super().__init__()
...         self.pose = pp.Parameter(pp.randn_se3(*dim))
...
...     def forward(self, input):
...         # the last dimension of the output is 6,
...         # which will be the residual dimension.
...         return (self.pose.Exp() @ input).Log()
...
>>> posinv = PoseInv(2, 2)
>>> input = pp.randn_SE3(2, 2)
>>> strategy = pp.optim.strategy.Constant(damping=1e-6)
>>> optimizer = pp.optim.LM(posinv, strategy=strategy)
...
>>> for idx in range(10):
...     loss = optimizer.step(input)
...     print('Pose Inversion loss %.7f @ %d it'%(loss, idx))
...     if loss < 1e-5:
...         print('Early Stopping with loss:', loss.item())
...         break
...
Pose Inversion loss 1.6600330 @ 0 it
Pose Inversion loss 0.1296970 @ 1 it
Pose Inversion loss 0.0008593 @ 2 it
Pose Inversion loss 0.0000004 @ 3 it
Early Stopping with loss: 4.443569991963159e-07

Note

More practical examples, e.g., pose graph optimization (PGO), can be found at examples/module/pgo.
