SpaceCore
=========

SpaceCore exists for writing numerical algorithms once, independently of the
array backend. For example, the same algorithm can run with NumPy for
debugging, JAX for JIT/autodiff, and Torch for tensor workflows, while
preserving the same mathematical spaces and linear operators.

What problem does SpaceCore solve?
----------------------------------

Numerical algorithms often start as clear NumPy code and later need to move
to JAX, Torch, or another array system. Without a backend boundary, that
migration usually leaks through the whole implementation: array constructors,
dtype handling, inner products, sparse support, and linear-operator
conventions all become backend-specific.

SpaceCore keeps those choices in a ``Context``, while algorithms work with
mathematical objects:

* a ``Space`` knows the structure and geometry of its elements;
* a ``LinOp`` maps one space to another;
* backend-specific array creation and operations live behind ``BackendOps``.

The result is ordinary Python code whose core numerical logic is not tied to
one array library.

Mental model:

.. code-block:: text

    BackendOps -> Context -> Space/LinOp -> Algorithm

Write once, run three times
---------------------------

This gradient descent loop uses only the ``Space`` and ``LinOp`` APIs. It
does not know whether the arrays are NumPy arrays, JAX arrays, or Torch
tensors.

.. code-block:: python

    import numpy as np

    import spacecore as sc


    def as_numpy(x):
        if hasattr(x, "detach"):
            return x.detach().cpu().numpy()
        return np.asarray(x)


    def make_problem(ctx):
        X = sc.VectorSpace((3,), ctx)
        Y = sc.VectorSpace((2,), ctx)
        A = sc.DenseLinOp(
            ctx.asarray([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]),
            dom=X,
            cod=Y,
            ctx=ctx,
        )
        x = ctx.asarray([1.0, 0.0, -1.0])
        b = ctx.asarray([0.5, 0.25])
        return X, Y, A, x, b


    def gradient_step(X, A, x, b, eta):
        r = A.apply(x) - b
        grad = A.rapply(r)
        return X.axpy(-eta, grad, x)


    def run_gradient_descent(X, A, x, b, eta, steps):
        for _ in range(steps):
            x = gradient_step(X, A, x, b, eta)
        return x

Run it with NumPy:

.. code-block:: python

    np_ctx = sc.Context(sc.NumpyOps(), dtype="float64")
    X, Y, A, x, b = make_problem(np_ctx)
    x_numpy = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
    print(as_numpy(x_numpy))

Later, run the same problem and the same ``run_gradient_descent`` with JAX:

.. code-block:: python

    import jax

    jax.config.update("jax_enable_x64", True)

    jax_ctx = sc.Context(sc.JaxOps(), dtype="float64")
    X, Y, A, x, b = make_problem(jax_ctx)
    x_jax = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
    print(as_numpy(x_jax))
    print(np.allclose(as_numpy(x_numpy), as_numpy(x_jax)))

Run it the same way with Torch:

.. code-block:: python

    torch_ctx = sc.Context(sc.TorchOps(), dtype="float64")
    X, Y, A, x, b = make_problem(torch_ctx)
    x_torch = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)
    print(as_numpy(x_torch))
    print(np.allclose(as_numpy(x_numpy), as_numpy(x_torch)))

All three backends produce the same result:

.. code-block:: text

    [ 1.184125 0.3411875 -0.447625 ]
    [ 1.184125 0.3411875 -0.447625 ]
    True
    [ 1.184125 0.3411875 -0.447625 ]
    True

If you do not want to enable JAX 64-bit mode, use a supported dtype such as
``"float32"``; a sketch of that variant appears at the end of this overview.

What SpaceCore is not
---------------------

SpaceCore is not an optimizer and not a NumPy/JAX/Torch replacement. It
provides backend-aware spaces, operators, and context handling so you can
write your own algorithms without wiring them to one array library.
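Returning to the backend runs above: if you prefer not to enable JAX 64-bit
mode, here is a minimal sketch of the ``float32`` variant mentioned earlier.
It reuses ``make_problem``, ``run_gradient_descent``, ``as_numpy``, and
``x_numpy`` from the examples above; the explicit tolerance is an assumption
chosen to account for single precision:

.. code-block:: python

    # Same problem and loop as before, but with a 32-bit dtype, so
    # jax_enable_x64 is not needed.
    jax32_ctx = sc.Context(sc.JaxOps(), dtype="float32")
    X, Y, A, x, b = make_problem(jax32_ctx)
    x_jax32 = run_gradient_descent(X, A, x, b, eta=0.1, steps=5)

    # Compare against the float64 NumPy result with a single-precision
    # tolerance instead of the defaults.
    print(np.allclose(as_numpy(x_numpy), as_numpy(x_jax32), atol=1e-6))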
Core concepts
-------------

``Context``
~~~~~~~~~~~

A ``Context`` specifies how objects are represented:

* backend operations (``NumpyOps``, ``JaxOps``, ``TorchOps``, etc.);
* default dtype;
* runtime validation behavior.

Constructors resolve contexts in priority order: explicit ``ctx=...``, then
contexts inferred from inputs, then the global default context. Advanced code
that needs this resolution step directly can call
``spacecore.resolve_context_priority(...)``.

``Space``
~~~~~~~~~

A ``Space`` describes the structure and geometry of values:

* ``VectorSpace`` for Euclidean vectors and tensors;
* ``HermitianSpace`` for Hermitian or symmetric matrices;
* ``ProductSpace`` for Cartesian products of spaces.

Algorithms should use space methods such as ``zeros``, ``add``, ``scale``,
``axpy``, ``inner``, ``norm``, ``flatten``, and ``unflatten`` instead of
hard-coding backend array operations.

``LinOp``
~~~~~~~~~

A ``LinOp`` represents a linear operator between spaces:

* ``DenseLinOp`` for dense matrix or tensor operators;
* ``SparseLinOp`` for sparse operators;
* ``BlockDiagonalLinOp`` for block-diagonal product-space operators;
* ``StackedLinOp`` for operators from one space into a product space;
* ``SumToSingleLinOp`` for operators from a product space into one space.

Operators expose ``apply`` and ``rapply``, so algorithms can use a linear map
and its adjoint without depending on the storage format. A short sketch
exercising these APIs appears at the end of this page.

Who should use this?
--------------------

SpaceCore is aimed at people writing optimization, inverse-problem,
optimal-transport, semidefinite-programming, or scientific-ML algorithms that
should not be tied to one backend. It is most useful when you want the
mathematical model to stay stable while the execution backend changes.

Installation
------------

Base install:

.. code-block:: bash

    pip install spacecore

With JAX support:

.. code-block:: bash

    pip install "spacecore[jax]"

With PyTorch support:

.. code-block:: bash

    pip install "spacecore[torch]"

* ``spacecore[jax]`` installs optional JAX support.
* GPU users should install the appropriate CUDA-enabled JAX build first,
  following the official JAX installation guide.
* ``spacecore[torch]`` installs optional PyTorch support for ``torch.Tensor``
  backends.
* GPU users should install the appropriate CUDA-enabled PyTorch build first,
  following the official PyTorch installation guide.

For local development:

.. code-block:: bash

    python -m pip install -e ".[dev]"

Full example
------------

For a complete example of a regularized optimal transport problem, in which
the model is written once and solved with both the NumPy and JAX backends,
`see `_ the example and its accompanying `notebook `_.

Documentation
-------------

The hosted documentation is available `here `_. The documentation website is
built with Sphinx from ``docs/source``.

Install the documentation dependencies:

.. code-block:: bash

    python -m pip install -e ".[docs]"

Build the local HTML documentation:

.. code-block:: bash

    sphinx-build -b html docs/source docs/build/html

Status
------

SpaceCore is currently experimental and under active development. The public
API may still evolve.

License
-------

Apache License 2.0

.. toctree::
    :maxdepth: 2
    :hidden:

    tutorials/index
    design/index
    api/index
    release_notes
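Quick API sketch
----------------

As referenced in *Core concepts*, a compact sketch that exercises the
documented ``Space`` and ``LinOp`` methods with the NumPy backend. The
``axpy`` argument order and the zero-argument ``zeros`` call are assumptions
inferred from the gradient-descent example above, not from a full API
reference:

.. code-block:: python

    import spacecore as sc

    ctx = sc.Context(sc.NumpyOps(), dtype="float64")

    # Spaces describe structure and geometry.
    X = sc.VectorSpace((3,), ctx)
    Y = sc.VectorSpace((2,), ctx)

    x0 = X.zeros()              # origin of the space (arity assumed)
    y = ctx.asarray([1.0, 2.0, 2.0])

    z = X.axpy(2.0, y, x0)      # z = 2*y + x0, as in gradient_step above
    print(X.inner(z, y))        # Euclidean inner product -> 18.0
    print(X.norm(y))            # Euclidean norm          -> 3.0

    # Operators expose the forward map and its adjoint.
    A = sc.DenseLinOp(
        ctx.asarray([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]),
        dom=X,
        cod=Y,
        ctx=ctx,
    )
    print(A.apply(y))           # forward map              -> [1. 4.]
    print(A.rapply(A.apply(y))) # adjoint of that image    -> [1. 4. 4.]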