Special Functions¶
numerax.special ¶
erfcinv ¶
Inverse complementary error function.
Overview¶
Computes the inverse of the complementary error function, finding \(y\) such that \(\text{erfc}(y) = x\) for given \(x \in (0, 2)\).
Mathematical Background¶
The inverse complementary error function is related to the inverse error function by:

\[
\text{erfcinv}(x) = \text{erfinv}(1 - x)
\]

This relationship allows erfcinv to be implemented as a simple wrapper around the existing JAX implementation of erfinv.
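The wrapper relationship can be sketched in a few lines (a minimal illustration using `jax.scipy.special.erfinv`; `erfcinv_sketch` is a hypothetical name, not the numerax implementation):

```python
import jax.numpy as jnp
from jax.scipy.special import erfc, erfinv

def erfcinv_sketch(x):
    # erfc(y) = x  <=>  erf(y) = 1 - x  <=>  y = erfinv(1 - x)
    return erfinv(1.0 - x)

y = erfcinv_sketch(0.5)  # ~0.4769
```

A round trip through `erfc` recovers the input, which is a convenient sanity check for the identity.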
Args¶
- x: Input values in \((0, 2)\). Can be scalar or array.
Returns¶
Values \(y\) where \(\text{erfc}(y) = x\).
Example¶
import jax
import jax.numpy as jnp
import numerax
# Single value
y = numerax.special.erfcinv(0.5) # ≈ 0.4769
# Array input
x_vals = jnp.array([0.1, 0.5, 1.0, 1.5, 1.9])
y_vals = numerax.special.erfcinv(x_vals)
# Differentiable for optimization
grad_fn = jax.grad(numerax.special.erfcinv)
sensitivity = grad_fn(0.5)
Notes¶
- Differentiable: Full automatic differentiation support through JAX
- Broadcasting: Supports JAX array broadcasting
- Domain: Input must be in \((0, 2)\) for real outputs
- Performance: Compatible with JIT compilation
gammap_inverse ¶
Inverse of the regularized incomplete gamma function.
Overview¶
Computes the inverse of the regularized incomplete gamma function, finding \(x\) such that \(P(a, x) = p\), where \(P(a, x)\) is the regularized incomplete gamma function. This is equivalent to computing quantiles of the \(\text{Gamma}(a, 1)\) distribution. The general strategy and the initial guess are based on the methods described in Numerical Recipes (Press et al., 2007).
Mathematical Background¶
The regularized incomplete gamma function is defined as:

\[
P(a, x) = \frac{\gamma(a, x)}{\Gamma(a)} = \frac{1}{\Gamma(a)} \int_0^x t^{a-1} e^{-t} \, dt
\]

This function solves the inverse problem: given \(p \in [0, 1]\) and \(a > 0\), find \(x\) such that

\[
P(a, x) = p
\]

For a random variable \(X \sim \text{Gamma}(a, 1)\), this gives the quantile function:

\[
x = F_X^{-1}(p), \quad \text{where } F_X(x) = P(a, x)
\]
Numerical Method¶
Uses Halley's method for fast cubic convergence:

\[
x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}
\]

where \(f(x) = P(a, x) - p\).
Initial guess based on Numerical Recipes (Press et al., 2007):

- For \(a > 1\): Wilson-Hilferty approximation
- For \(a \leq 1\): Asymptotic expansions
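The Halley update can be sketched as a fixed-iteration JAX loop. This is an illustrative solver, not the numerax implementation; `halley_gammap_inverse` and the crude starting point are assumptions, and it uses \(f'(x) = x^{a-1} e^{-x} / \Gamma(a)\) together with \(f''/f' = (a-1)/x - 1\):

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import gammainc, gammaln

def halley_gammap_inverse(p, a, x0, n_iter=8):
    """Fixed-iteration Halley solve of P(a, x) = p (illustrative only)."""
    def step(x, _):
        f = gammainc(a, x) - p
        # f'(x) = x^(a-1) e^(-x) / Gamma(a), evaluated in log space
        fp = jnp.exp((a - 1.0) * jnp.log(x) - x - gammaln(a))
        # Halley update, using f''(x) / f'(x) = (a - 1)/x - 1
        r = f / fp
        x_new = x - r / (1.0 - 0.5 * r * ((a - 1.0) / x - 1.0))
        return jnp.maximum(x_new, 1e-12), None  # keep the iterate positive
    x, _ = jax.lax.scan(step, x0, None, length=n_iter)
    return x

# Median of Gamma(2, 1), starting from the mean as a crude initial guess
x_med = halley_gammap_inverse(0.5, 2.0, x0=2.0)
```

With a reasonable starting point the iteration settles well within the 3-8 iterations quoted in the Notes.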
Args¶
- p: Probability values in \([0, 1]\). Can be scalar or array.
- a: Shape parameter (must be positive). Scalar value.
Returns¶
Quantiles \(x\) where \(P(a, x) = p\). Same shape as input p.
Example¶
import jax
import jax.numpy as jnp
import numerax
# Single quantile
x = numerax.special.gammap_inverse(0.5, 2.0) # Median of Gamma(2, 1)
# Multiple quantiles
p_vals = jnp.array([0.1, 0.25, 0.5, 0.75, 0.9])
x_vals = numerax.special.gammap_inverse(p_vals, 3.0)
# Verify inverse relationship
from jax.scipy.special import gammainc
p_recovered = gammainc(2.0, x) # Should equal original p
# Differentiable for optimization
grad_fn = jax.grad(numerax.special.gammap_inverse)
sensitivity = grad_fn(0.5, 2.0) # ∂x/∂p at median
Notes¶
- Convergence: Typically converges in 3-8 iterations
- Differentiable: Custom JVP implementation using implicit function theorem
- Domain: \(p \in [0, 1]\) and \(a > 0\)
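The implicit-function-theorem gradient mentioned in the Notes has a closed form: differentiating \(P(a, x(p)) = p\) gives \(dx/dp = 1 / \partial_x P(a, x) = \Gamma(a)\, x^{1-a} e^{x}\). A sketch of that derivative (the `dxdp` helper is hypothetical, not part of numerax):

```python
import jax.numpy as jnp
from jax.scipy.special import gammaln

def dxdp(a, x):
    # dx/dp = 1 / P_x(a, x) = Gamma(a) * x^(1 - a) * exp(x), in log space
    return jnp.exp(gammaln(a) + x - (a - 1.0) * jnp.log(x))
```

By construction, `dxdp(a, x)` times \(\partial_x P(a, x)\) should be exactly 1.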
References¶
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2007). Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press.
iv ¶
Modified Bessel function of the first kind, real order.
Computes \(I_v(z)\) for real \(v\) and real \(z \ge 0\). This is a thin wrapper over ive:

\[
I_v(z) = e^{z} \, \text{ive}(v, z)
\]
For large \(z\) (\(z \gtrsim 709\) in float64), \(\exp(z)\) overflows and
this function returns +inf. Use ive directly when working
with large arguments.
Matches the signature of scipy.special.iv, except that the
out= parameter is not supported (JAX arrays are immutable).
Differentiation w.r.t. z is provided by the underlying ive
custom JVP; w.r.t. v is not supported.
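The scaling relationship and the overflow behavior can be checked numerically (using SciPy as a reference here, since the identity itself is library-independent):

```python
import numpy as np
from scipy.special import iv, ive

v, z = 1.5, 5.0
# iv(v, z) = exp(z) * ive(v, z): the scaled form defers the exp(z) factor
lhs = iv(v, z)
rhs = np.exp(z) * ive(v, z)

# For z well beyond ~709 the unscaled exp(z) factor overflows float64,
# while the scaled form remains finite.
scaled_large = ive(1.0, 710.0)
```

This is why the scaled form is the preferred primitive for large arguments.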
ive ¶
Exponentially scaled modified Bessel function of the first kind.
Computes \(ive(v, z) = e^{-z} I_v(z)\) for real order \(v\) and real \(z \ge 0\). Unlike \(I_v(z)\), this scaled form stays finite for large \(z\) (decays like \(1/\sqrt{2\pi z}\)), so it is the numerically preferred primitive in likelihoods and probability densities.
Three regimes are evaluated in parallel and selected per element
via jnp.where:
- \(z < 30\): log-space power series (DLMF 10.25.2).
- \(z \ge 30\) and \(v \ge 10\): Olver uniform asymptotic for large order (DLMF 10.41.3).
- \(z \ge 30\) and \(v < 10\): Hankel large-argument asymptotic for fixed order (DLMF 10.40.1).
The thresholds (_SERIES_Z_THRESHOLD = 30.0,
_OLVER_V_THRESHOLD = 10.0) are calibrated empirically against
scipy.special.ive on a \((v, z)\) grid; the procedure lives in
scripts/calibrate_bessel.py.
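The per-element selection can be sketched with nested `jnp.where` (the three branch functions here are placeholders standing in for the series, Olver, and Hankel evaluations; `dispatch` is a hypothetical name):

```python
import jax.numpy as jnp

_SERIES_Z_THRESHOLD = 30.0
_OLVER_V_THRESHOLD = 10.0

def dispatch(v, z, series_fn, olver_fn, hankel_fn):
    # All three branches are evaluated; jnp.where picks per element,
    # so the code stays jit- and vmap-compatible (no Python branching on data).
    large_z = z >= _SERIES_Z_THRESHOLD
    large_v = v >= _OLVER_V_THRESHOLD
    return jnp.where(
        large_z,
        jnp.where(large_v, olver_fn(v, z), hankel_fn(v, z)),
        series_fn(v, z),
    )
```

Evaluating every branch and selecting afterwards trades some redundant work for traceability, which is the usual pattern for data-dependent regime selection under `jit`.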
Best regime per cell, with current thresholds overlaid¶
(figure: best regime per \((v, z)\) grid cell, with the current thresholds overlaid)
Regime-validity overlay¶
Where each regime has relative error below 1e-7 vs. scipy.special.ive:
(figure: per-regime validity regions on the \((v, z)\) grid)
Combined ive relative error¶
The dispatched result, with threshold lines overlaid for direct comparison against the data-derived regions above:
(figure: combined ive relative-error heatmap)
Args¶
- v: Real order. Scalar or array.
- z: Argument; must be real and >= 0. Scalar or array.
Returns¶
Scaled Bessel values; broadcast of the inputs.
Notes¶
- Differentiable: Custom JVP w.r.t. z only, using the recurrence \(I_v'(z) = (I_{v-1}(z) + I_{v+1}(z))/2\) (DLMF 10.29.1). Gradient w.r.t. v is not implemented and is silently ignored.
- JIT/vmap: Regime selection is via jnp.where, with no Python control flow on data.
- Accuracy target: ~1e-6 relative vs. scipy.special.ive for v >= 0, z > 0.