class pypose.optim.corrector.FastTriggs(kernel)[source]

A faster yet numerically stable version of the Triggs correction of the model residual and Jacobian.

\[\begin{align*} \mathbf{R}_i^\rho &= \sqrt{\rho'(c_i)} \mathbf{R}_i\\ \mathbf{J}_i^\rho &= \sqrt{\rho'(c_i)} \mathbf{J}_i \end{align*}, \]

where \(\mathbf{R}_i\) and \(\mathbf{J}_i\) are the \(i\)-th item of the model residual and Jacobian, respectively. \(\rho()\) is the kernel function and \(c_i = \mathbf{R}_i^T\mathbf{R}_i\) is the point to compute the gradient.
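As a concrete illustration (independent of PyPose), the correction above can be sketched in numpy with a Huber kernel; the function names and the `delta` parameter here are assumptions for this example only:

```python
import numpy as np

def huber_prime(c, delta=1.0):
    # Derivative rho'(c) of the Huber kernel on c = R^T R:
    # rho(c) = c for c <= delta^2, 2*delta*sqrt(c) - delta^2 otherwise.
    return np.where(c <= delta**2, 1.0, delta / np.sqrt(c))

def fast_triggs(R, J, delta=1.0):
    # R: (N, d) residuals, J: (N, d, p) Jacobians.
    c = np.sum(R * R, axis=1)            # c_i = R_i^T R_i
    s = np.sqrt(huber_prime(c, delta))   # sqrt(rho'(c_i))
    return s[:, None] * R, s[:, None, None] * J

R = np.array([[0.1, 0.2],                # inlier: c = 0.05 <= delta^2
              [3.0, 4.0]])               # outlier: c = 25
J = np.ones((2, 2, 3))
Rc, Jc = fast_triggs(R, J)
# The inlier is unchanged; the outlier is down-weighted by sqrt(1/5).
```

Residuals inside the quadratic region pass through untouched, while large residuals are scaled down, which is exactly how the robust kernel reduces outlier influence.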


Parameters

kernel (nn.Module) – the robust kernel (cost) function.


This implementation is faster and more numerically stable than Triggs(). It drops the kernel's 2nd-order derivatives (often negative), which can make a 2nd-order optimizer unstable. It aims to solve

\[\bm{\theta}^* = \arg\min_{\bm{\theta}} \mathbf{g}(\bm{x}) = \arg\min_{\bm{\theta}} \sum_i \rho(\mathbf{R}_i^T \mathbf{R}_i), \]

where \(\mathbf{R}_i = \bm{f}(\bm{\theta},\bm{x}_i) - \bm{y}_i\), \(\bm{f}(\bm{\theta}, \bm{x})\) is the model, \(\bm{\theta}\) are the parameters to be optimized, \(\bm{x}\) are the model inputs, and \(\bm{y}\) are the model targets. Consider the 1st-order Taylor expansion of the model \(\bm{f}(\bm{\theta}+\bm{\delta})\approx\bm{f}(\bm{\theta})+\mathbf{J}_i\bm{\delta}\). If we take \(c_i = \mathbf{R}_i^T \mathbf{R}_i\) and set the first derivative of \(\mathbf{g}(\bm{\delta})\) with respect to \(\bm{\delta}\) to zero, we have

\[\frac{\partial \bm{g}}{\partial \bm{\delta}} = \sum_i \frac{\partial \rho}{\partial c_i} \frac{\partial c_i}{\partial \bm{\delta}} = \bm{0} \]

This leads to

\[\sum_i \frac{\partial \rho}{\partial c_i} \mathbf{J}_i^T \mathbf{J}_i \bm{\delta} = - \sum_i \frac{\partial \rho}{\partial c_i} \mathbf{J}_i^T \mathbf{R}_i \]

Rearranging the gradient of \(\rho\), we have

\[\sum_i \left(\sqrt{\frac{\partial \rho}{\partial c_i}} \mathbf{J}_i\right)^T \left(\sqrt{\frac{\partial \rho}{\partial c_i}} \mathbf{J}_i\right) \bm{\delta} = - \sum_i \left(\sqrt{\frac{\partial \rho}{\partial c_i}} \mathbf{J}_i\right)^T \left(\sqrt{\frac{\partial \rho}{\partial c_i}} \mathbf{R}_i\right) \]

This gives us the corrected model residual \(\mathbf{R}_i^\rho\) and Jacobian \(\mathbf{J}_i^\rho\), which ends with the same problem formulation as the standard 2nd order optimizers such as pypose.optim.GN() and pypose.optim.LM().

\[\sum_i {\mathbf{J}_i^\rho}^T \mathbf{J}_i^\rho \bm{\delta} = - \sum_i {\mathbf{J}_i^\rho}^T \mathbf{R}_i^\rho \]
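The equivalence of the weighted normal equations and the corrected ones can be checked numerically. In this sketch a Cauchy-style weight \(\rho'(c) = 1/(1+c)\) stands in for the kernel derivative (an assumption chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, p = 5, 2, 3
R = rng.normal(size=(N, d))            # residuals R_i
J = rng.normal(size=(N, d, p))         # Jacobians J_i
c = np.sum(R * R, axis=1)              # c_i = R_i^T R_i
w = 1.0 / (1.0 + c)                    # stand-in for rho'(c_i), e.g. Cauchy kernel

# Left: robust weighted normal equations, sum_i rho' J_i^T J_i and -sum_i rho' J_i^T R_i.
A = sum(w[i] * J[i].T @ J[i] for i in range(N))
b = -sum(w[i] * J[i].T @ R[i] for i in range(N))

# Right: plain normal equations built from the corrected residual and Jacobian.
Rc = np.sqrt(w)[:, None] * R           # R_i^rho = sqrt(rho') R_i
Jc = np.sqrt(w)[:, None, None] * J     # J_i^rho = sqrt(rho') J_i
A2 = sum(Jc[i].T @ Jc[i] for i in range(N))
b2 = -sum(Jc[i].T @ Rc[i] for i in range(N))
```

Because \(\sqrt{\rho'}\) appears once on each side of every product, the two systems are identical term by term, so a standard GN/LM solver can be reused unchanged.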
forward(R, J)[source]

Parameters

  • R (Tensor) – the model residual.

  • J (Tensor) – the model Jacobian.

Returns

the corrected model residual and model Jacobian.

Return type

tuple of Tensors


Users only need to call the constructor; the forward() function is not supposed to be called directly by PyPose users. It is called internally by optimizers such as pypose.optim.GN() and pypose.optim.LM().
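To see the corrector's role inside such an optimizer, here is a hand-rolled single Gauss-Newton step on a toy one-parameter model \(y = \theta x\) with one injected outlier, applying the same \(\sqrt{\rho'}\) scaling before the solve (pure numpy, independent of PyPose; the Cauchy-like weight is an assumption for this sketch):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 8)
y = 2.0 * x                            # ground truth: theta = 2
y[-1] += 10.0                          # inject one outlier

theta = 0.0                            # initial guess
R = (theta * x - y)[:, None]           # residuals r_i = f(theta, x_i) - y_i, shape (N, 1)
J = x[:, None, None]                   # Jacobians d r_i / d theta, shape (N, 1, 1)

c = np.sum(R * R, axis=1)              # c_i = R_i^T R_i
s = np.sqrt(1.0 / (1.0 + c))           # sqrt(rho'(c_i)), Cauchy-like kernel (assumption)
Rc = s[:, None] * R                    # corrected residual R_i^rho
Jc = s[:, None, None] * J              # corrected Jacobian J_i^rho

A = sum(Jc[i].T @ Jc[i] for i in range(len(x)))
b = -sum(Jc[i].T @ Rc[i] for i in range(len(x)))
theta += np.linalg.solve(A, b).item()  # one Gauss-Newton update

theta_plain = (x @ y) / (x @ x)        # unweighted least squares, for comparison
```

With the correction, a single step already lands near the true slope, whereas the unweighted least-squares fit is pulled far off by the outlier.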

