LinOp tutorial#
This tutorial follows tutorials/4_LinOp.ipynb. It introduces LinOp as
the abstraction for linear maps between spaces.
The currently implemented operator types are:
DenseLinOp
SparseLinOp
BlockDiagonalLinOp
StackedLinOp
SumToSingleLinOp
What a LinOp signifies#
Let \(A : X \to Y\) be a linear map from a domain space \(X\) to a codomain space \(Y\).
LinOp encodes this map together with the numerical policy under which it is
applied. It is not just an array or matrix. It is a map equipped with:
a domain space;
a codomain space;
a context;
methods for forward and adjoint application.
The geometry is explicit:
\[
A : X \to Y, \qquad A^* : Y \to X.
\]
Core ingredients#
A linear operator is organized around (X, Y, A):
op.dom is the domain \(X\);
op.cod is the codomain \(Y\);
apply(x) computes the forward action;
rapply(y) computes the adjoint action.
The expected geometry is:
\[
\mathtt{apply} : X \to Y, \qquad \mathtt{rapply} : Y \to X.
\]
In finite-dimensional real or complex spaces, rapply represents the adjoint
operator \(A^* : Y \to X\), satisfying
\[
\langle A x, y \rangle_Y = \langle x, A^* y \rangle_X
\quad \text{for all } x \in X,\ y \in Y.
\]
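The adjoint identity can be verified numerically with plain NumPy (a sketch of the math only, independent of the spacecore API); for a real matrix the adjoint is the transpose:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # A : R^2 -> R^3
x = rng.standard_normal(2)        # x in the domain X
y = rng.standard_normal(3)        # y in the codomain Y

# <A x, y>_Y must equal <x, A^T y>_X for every x and y.
lhs = np.dot(A @ x, y)
rhs = np.dot(x, A.T @ y)
assert np.isclose(lhs, rhs)
```

This is the property that enable_checks-style validation can test for any concrete operator: forward and adjoint actions must agree under the inner products of the two spaces.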
DenseLinOp#
DenseLinOp is the standard dense linear operator. If \(A : X \to Y\),
then dense storage is a tensor compatible with domain and codomain shapes.
import numpy as np
import spacecore as sc
ctx = sc.Context(sc.NumpyOps(), dtype=np.float64, enable_checks=True)
X = sc.VectorSpace((2,), ctx=ctx)
Y = sc.VectorSpace((3,), ctx=ctx)
A = np.array([
[1.0, 2.0],
[0.0, -1.0],
[3.0, 1.0],
])
op = sc.DenseLinOp(A, X, Y, ctx=ctx)
Forward application computes \(x \mapsto Ax\); adjoint application computes \(y \mapsto A^*y\).
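In NumPy terms, with the matrix A defined above, both actions reduce to matrix-vector products (a sketch of the math, not the spacecore internals):

```python
import numpy as np

A = np.array([
    [1.0, 2.0],
    [0.0, -1.0],
    [3.0, 1.0],
])
x = np.array([1.0, 1.0])        # element of the domain X, shape (2,)
y = np.array([1.0, 0.0, 1.0])   # element of the codomain Y, shape (3,)

forward = A @ x     # what apply(x) computes: [3., -1., 4.]
adjoint = A.T @ y   # what rapply(y) computes: [4., 3.]
```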
SparseLinOp#
SparseLinOp stores a sparse backend matrix and uses it for forward and
adjoint actions. It represents the same kind of map, \(A : X \to Y\), but
the internal storage is sparse rather than dense. Use it when sparsity is part
of the operator structure.
import scipy.sparse as sp
A_sparse = sp.csr_matrix(np.array([
[1.0, 0.0],
[0.0, -1.0],
[3.0, 0.0],
]))
op_sparse = sc.SparseLinOp(A_sparse, X, Y, ctx=ctx)
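As a quick sanity check (a sketch, not part of the tutorial's API): the sparse storage encodes exactly the same map as its dense counterpart, so forward and adjoint matvecs must agree:

```python
import numpy as np
import scipy.sparse as sp

A_dense = np.array([
    [1.0, 0.0],
    [0.0, -1.0],
    [3.0, 0.0],
])
A_sparse = sp.csr_matrix(A_dense)

x = np.array([2.0, 5.0])
y = np.array([1.0, 1.0, 1.0])

# Sparse forward and adjoint actions match the dense ones.
assert np.allclose(A_sparse @ x, A_dense @ x)
assert np.allclose(A_sparse.T @ y, A_dense.T @ y)
```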
Product operators#
Product operators compose linear maps over product spaces:
Product LinOp classes
BlockDiagonalLinOp applies independent component operators.
StackedLinOp maps one domain into a product codomain.
SumToSingleLinOp maps a product domain into one codomain by summing component outputs.
BlockDiagonalLinOp is built from operators \(A_i : X_i \to Y_i\) and
acts componentwise:
\[
(x_1, \dots, x_k) \mapsto (A_1 x_1, \dots, A_k x_k).
\]
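The componentwise action corresponds to a block-diagonal matrix on the concatenated vector; a NumPy sketch of the equivalence (hypothetical matrices, not the BlockDiagonalLinOp API):

```python
import numpy as np

# Two component operators A1 : R^2 -> R^2 and A2 : R^1 -> R^1.
A1 = np.array([[1.0, 2.0], [0.0, 1.0]])
A2 = np.array([[3.0]])
x1 = np.array([1.0, 1.0])
x2 = np.array([2.0])

# Componentwise application: (x1, x2) -> (A1 x1, A2 x2).
y1, y2 = A1 @ x1, A2 @ x2

# Equivalent block-diagonal matvec on the concatenated vector.
B = np.block([
    [A1, np.zeros((2, 1))],
    [np.zeros((1, 2)), A2],
])
assert np.allclose(B @ np.concatenate([x1, x2]), np.concatenate([y1, y2]))
```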
StackedLinOp stacks operators with the same domain,
\(A_i : X \to Y_i\), into one operator
\(X \to Y_1 \times \cdots \times Y_k\):
\[
x \mapsto (A_1 x, \dots, A_k x).
\]
Its adjoint combines contributions:
\[
(y_1, \dots, y_k) \mapsto \sum_{i=1}^{k} A_i^* y_i.
\]
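In matrix terms, stacking is a vertical concatenation and the adjoint is a sum of component adjoints; a NumPy sketch (hypothetical matrices, not the StackedLinOp API):

```python
import numpy as np

# Two operators with a common domain R^2.
A1 = np.array([[1.0, 0.0]])               # X -> Y1 = R^1
A2 = np.array([[0.0, 1.0], [1.0, 1.0]])   # X -> Y2 = R^2
x = np.array([2.0, 3.0])

# Forward: x -> (A1 x, A2 x), a vertical stack of outputs.
y1, y2 = A1 @ x, A2 @ x

# Adjoint: (y1, y2) -> A1^T y1 + A2^T y2, summing contributions into X.
adj = A1.T @ y1 + A2.T @ y2
assert np.allclose(adj, np.vstack([A1, A2]).T @ np.concatenate([y1, y2]))
```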
SumToSingleLinOp combines operators with a common codomain,
\(A_i : X_i \to Y\), into one operator
\(X_1 \times \cdots \times X_k \to Y\):
\[
(x_1, \dots, x_k) \mapsto \sum_{i=1}^{k} A_i x_i.
\]
Its adjoint sends one output element back to a tuple:
\[
y \mapsto (A_1^* y, \dots, A_k^* y).
\]
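In matrix terms this is a horizontal concatenation, with the adjoint splitting back into one component per domain factor; a NumPy sketch (hypothetical matrices, not the SumToSingleLinOp API):

```python
import numpy as np

# Two operators into a common codomain R^2.
A1 = np.array([[1.0], [0.0]])             # X1 = R^1 -> Y
A2 = np.array([[1.0, 1.0], [0.0, 2.0]])   # X2 = R^2 -> Y
x1 = np.array([3.0])
x2 = np.array([1.0, 2.0])
y = np.array([1.0, 1.0])

# Forward: (x1, x2) -> A1 x1 + A2 x2, summing component outputs.
out = A1 @ x1 + A2 @ x2
assert np.allclose(out, np.hstack([A1, A2]) @ np.concatenate([x1, x2]))

# Adjoint: y -> (A1^T y, A2^T y), one tuple entry per component domain.
adj1, adj2 = A1.T @ y, A2.T @ y
```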
Defining a custom operator#
Use existing concrete operator classes when you have dense or sparse storage.
Subclass LinOp when the map has special structure and you do not want to
materialize a matrix.
from spacecore.linop import LinOp

class ScaleLinOp(LinOp):
    def __init__(self, alpha, dom, cod=None, ctx=None):
        if cod is None:
            cod = dom
        super().__init__(dom=dom, cod=cod, ctx=ctx)
        self.alpha = alpha

    def apply(self, x):
        # Forward action: x -> alpha * x, produced in the codomain.
        return self.cod.scale(self.alpha, x)

    def rapply(self, y):
        # Adjoint action: y -> conj(alpha) * y; for real alpha this
        # reduces to alpha * y.
        return self.dom.scale(self.alpha.conjugate(), y)
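Because scaling by a real \(\alpha\) is self-adjoint, the adjoint identity for this operator is easy to verify numerically (a plain NumPy sketch, independent of the LinOp subclass above):

```python
import numpy as np

alpha = 2.5
x = np.array([1.0, -2.0, 0.5])
y = np.array([3.0, 0.0, 4.0])

# For real alpha, <alpha*x, y> == <x, alpha*y>, so apply and rapply
# both scale by the same factor.
assert np.isclose(np.dot(alpha * x, y), np.dot(x, alpha * y))
```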
Summary#
LinOp turns raw matrix-like storage and structured block maps into explicit
geometric linear operators. It stores a domain, codomain, context, and forward
and adjoint actions.