pypose.optim.solver.LSTSQ

class pypose.optim.solver.LSTSQ(rcond=None, driver=None)[source]

The batched linear solver with fast pseudo inversion. It solves linear systems of the form

\[\mathbf{A}_i \bm{x}_i = \mathbf{b}_i, \]

where \(\mathbf{A}_i \in \mathbb{C}^{M \times N}\) and \(\bm{b}_i \in \mathbb{C}^{M \times 1}\) form the \(i\)-th system in the batch of linear equations.

The solution is given by

\[\bm{x}_i = \mathrm{lstsq}(\mathbf{A}_i, \mathbf{b}_i), \]

where \(\mathrm{lstsq}()\) computes a solution to the least-squares problem of a system of linear equations. See torch.linalg.lstsq for more details.
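
For reference, the same per-item computation can be sketched directly with torch.linalg.lstsq; the shapes below are illustrative only:

>>> import torch
>>> A = torch.randn(2, 5, 3)  # batch of 2 overdetermined systems, each 5 x 3
>>> b = torch.randn(2, 5, 1)  # matching right-hand sides
>>> x = torch.linalg.lstsq(A, b).solution  # minimizes ||A_i x_i - b_i|| for each item
>>> x.shape
torch.Size([2, 3, 1])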

Parameters
  • rcond (float, optional) – Cut-off ratio for small singular values. For the purposes of rank determination, singular values are treated as zero if they are smaller than rcond times the largest singular value. It is used only when the fast mode is enabled. If None, rcond is set to the machine precision of the dtype of \(\mathbf{A}\). Default: None.

  • driver (string, optional) –

    chooses the LAPACK/MAGMA function that will be used. It is used only when the fast mode is enabled. For CPU users, the valid values are gels, gelsy, gelsd, and gelss. For CUDA users, the only valid driver is gels, which assumes that the input matrices (\(\mathbf{A}\)) are full-rank. If None, gelsy is used for CPU inputs and gels for CUDA inputs. Default: None. To choose the best driver on CPU, consider the following (a device-based selection sketch follows this list):

    • If input matrices (\(\mathbf{A}\)) are well-conditioned (condition number is not too large), or you do not mind some precision loss.

      • For a general matrix: gelsy (QR with pivoting) (default)

      • If \(\mathbf{A}\) is full-rank: gels (QR)

    • If input matrices (\(\mathbf{A}\)) are not well-conditioned.

      • gelsd (tridiagonal reduction and SVD)

      • But if you run into memory issues: gelss (full SVD).

    See the LAPACK documentation for a full description of these drivers.
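
A minimal driver-selection sketch following the guidance above; the device check is illustrative, not part of the class API:

>>> import torch
>>> import pypose.optim.solver as ppos
>>> A, b = torch.randn(2, 4, 3), torch.randn(2, 4, 1)
>>> driver = 'gels' if A.is_cuda else 'gelsd'  # gels is the only CUDA option; gelsd is robust on CPU
>>> solver = ppos.LSTSQ(driver=driver)
>>> x = solver(A, b)  # shape (2, 3, 1)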

Note

This solver is faster and more numerically stable than PINV().

It is also preferred to use Cholesky() if input matrices (\(\mathbf{A}\)) are guaranteed to be complex Hermitian or real symmetric positive-definite.
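
A short sketch of that positive-definite case, assuming pypose.optim.solver.Cholesky exposes the same forward(A, b) interface:

>>> import torch
>>> import pypose.optim.solver as ppos
>>> M = torch.randn(2, 3, 3)
>>> A = M @ M.mT + 0.1 * torch.eye(3)  # batched symmetric positive-definite matrices
>>> b = torch.randn(2, 3, 1)
>>> solver = ppos.Cholesky()
>>> x = solver(A, b)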

Examples

>>> import torch
>>> import pypose.optim.solver as ppos
>>> A, b = torch.randn(2, 3, 3), torch.randn(2, 3, 1)
>>> solver = ppos.LSTSQ(driver='gels')
>>> x = solver(A, b)
>>> x
tensor([[[ 0.9997],
         [-1.3288],
         [-1.6327]],
        [[ 3.1639],
         [-0.5379],
         [-1.2872]]])
forward(A, b)[source]
Parameters
  • A (Tensor) – the batched coefficient matrices.

  • b (Tensor) – the batched right-hand-side tensor.

Returns

the batched solution tensor.

Return type

Tensor
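
A usage sketch of forward() on an overdetermined batch (more equations than unknowns); the residual check at the end is illustrative only:

>>> import torch
>>> import pypose.optim.solver as ppos
>>> A, b = torch.randn(4, 6, 2), torch.randn(4, 6, 1)  # 4 systems, each 6 equations in 2 unknowns
>>> solver = ppos.LSTSQ()  # defaults: gelsy on CPU, gels on CUDA
>>> x = solver.forward(A, b)  # same as solver(A, b); shape (4, 2, 1)
>>> residual = (A @ x - b).norm(dim=(1, 2))  # per-item residual norms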
