Datasets

btorch.datasets

Dataset utilities and noise generation for neuromorphic simulations.

This module provides noise generators (functional and layer-based) commonly used for simulating background activity, synaptic noise, and input currents in spiking neural networks.

Noise Types

Ornstein-Uhlenbeck (OU): Temporally correlated Gaussian noise with configurable time constant (tau) and standard deviation (sigma). Useful for modeling synaptic noise and membrane potential fluctuations.

Poisson: Discrete event noise with configurable rate. Suitable for spike train generation and stochastic synaptic inputs.

Pink (1/f): Colored noise with power spectral density proportional to 1/frequency. Generated via causal FIR filtering of white noise. Useful for modeling naturalistic temporal correlations.
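
As a framework-free sketch of the "causal FIR filtering of white noise" idea, the classic fractional-integration recursion (Kasdin-style) gives taps h[0] = 1, h[k] = h[k-1] * (k - 1 + alpha/2) / k, with alpha = 1 for pink noise. This is only illustrative; the module's actual kernel length, taps, and normalization may differ:

```python
import random

def pink_fir_kernel(order: int, alpha: float = 1.0) -> list[float]:
    """FIR taps for 1/f^alpha noise via the fractional-integration recursion."""
    h = [1.0]
    for k in range(1, order):
        h.append(h[-1] * (k - 1 + alpha / 2.0) / k)
    return h

def pink_sample(white: list[float], h: list[float]) -> float:
    # Causal convolution: the newest white sample pairs with h[0].
    return sum(hk * w for hk, w in zip(h, reversed(white[-len(h):])))

rng = random.Random(0)
h = pink_fir_kernel(64)
white = [rng.gauss(0.0, 1.0) for _ in range(256)]
pink = [pink_sample(white[: t + 1], h) for t in range(len(white))]
assert len(pink) == 256
```

The taps decay monotonically, so each output sample is dominated by recent white noise but retains long-range memory, which is what produces the 1/f spectrum.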

Functional API

- [`ou_noise`](btorch/datasets/noise.py:25): Generate an OU noise sequence
- [`ou_noise_like`](btorch/datasets/noise.py:123): OU noise matching a
  reference tensor
- [`poisson_noise`](btorch/datasets/noise.py:156): Generate Poisson
  events
- [`poisson_noise_like`](btorch/datasets/noise.py:192): Poisson events
  matching a reference tensor
- [`pink_noise`](btorch/datasets/noise.py:284): Generate pink noise
- [`pink_noise_like`](btorch/datasets/noise.py:344): Pink noise matching
  a reference tensor

Layer API

- [`OUNoiseLayer`](btorch/datasets/noise.py:367): Stateful OU noise
  module with single/multi-step modes
- [`PoissonNoiseLayer`](btorch/datasets/noise.py:510): Stateless Poisson
  encoder/generator module
- [`PinkNoiseLayer`](btorch/datasets/noise.py:603): Stateful pink noise
  module with FIR history

All noise functions support:
  • Per-neuron or scalar parameters (broadcastable)
  • Deterministic sampling via torch.Generator
  • GPU/CPU device placement
  • torch.compile compatibility
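
The determinism point can be exercised with plain torch calls (a minimal sketch, independent of btorch):

```python
import torch

# Identically seeded generators produce identical draws, so noise
# sequences are reproducible without touching the global RNG state.
g1 = torch.Generator().manual_seed(42)
g2 = torch.Generator().manual_seed(42)

a = torch.randn(5, 3, generator=g1)
b = torch.randn(5, 3, generator=g2)
assert torch.equal(a, b)  # same seed, same sequence
```

Passing a `generator=` keyword is the same mechanism the noise functions and layers expose for reproducible sampling.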

Attributes

__all__ = ['OUNoiseLayer', 'PinkNoiseLayer', 'PoissonNoiseLayer', 'ou_noise', 'ou_noise_like', 'pink_noise', 'pink_noise_like', 'poisson_noise', 'poisson_noise_like']

Classes

OUNoiseLayer

Bases: _BaseNoiseLayer

Ornstein-Uhlenbeck (OU) noise layer for temporally correlated noise.

Implements exact discretization of the OU process where sigma is the stationary standard deviation:

n_{t+1} = alpha * n_t + beta * eps_t
alpha = exp(-dt/tau)
beta  = sigma * sqrt(1 - exp(-2*dt/tau))
eps_t ~ N(0, 1)

The layer supports both single-step (stateful) and multi-step (vectorized) modes. In stateful mode, the noise state persists across forward calls.

Scale and bias are handled internally by btorch.models.linear.LearnableScale.

Learnable parameters
  • scale: Multiplicative scaling (default: 1.0)
  • bias: Additive offset (default: 0.0)
  • sigma: Stationary standard deviation
  • tau: Time constant
Tensor Conventions
  • Multi-step input: Output shape is (T, *batch_dims, *n_neuron) where neuron dims are trailing.
  • Single-step input: Output shape is (*batch_dims, *n_neuron).
  • Internal state self.noise stores the state BEFORE the current step/sequence (i.e., the initial condition).
Multi-step Backend
  • Scalar tau/sigma: Uses single conv1d (fast)
  • Per-neuron tau/sigma: Uses grouped conv1d, O(T^2 * D) complexity
Determinism Note

Multi-step uses vectorized RNG and convolution, so exact equality with repeated single-step updates is not guaranteed even with a generator. Use single-step loops if you need step-by-step equivalence.

Parameters:

- n_neuron (int | Sequence[int]): Number of neurons or shape of trailing neuron dims. Required.
- sigma (float | Tensor): Stationary standard deviation (scalar or per-neuron). Default: 0.5.
- tau (float | Tensor): Time constant in the same units as dt (scalar or per-neuron). Default: 10.0.
- step_mode (Literal['s', 'm']): 's' for single-step, 'm' for multi-step. Default: 'm'.
- trainable_param (bool | set[str]): Set of parameter names to make trainable, or True/False for all/none. Options: {"scale", "bias", "sigma", "tau"}. Default: False.
- trainable_shape (str): Shape policy for trainable values: "scalar" (default) stores a scalar broadcast to neurons; "full" stores a full per-neuron tensor. Default: 'scalar'.
- stateful (bool): If True, maintain noise state between calls (required for single-step mode). Default: False.
- tau_min (float): Minimum value for tau (clamped for numerical stability). Default: 1e-06.
- scale (float | Tensor): Initial multiplicative scaling. Default: 1.0.
- bias (float | Tensor): Initial additive offset. Default: 0.0.

Attributes:

- noise (Tensor): Current noise state tensor (only if stateful=True).
- scale (Tensor): Output scaling (via LearnableScale).
- bias (Tensor): Output offset (via LearnableScale).
- sigma (Tensor): Stationary std dev (Parameter if trainable, else buffer).
- tau (Tensor): Time constant (Parameter if trainable, else buffer).

Source code in btorch/datasets/noise.py
class OUNoiseLayer(_BaseNoiseLayer):
    """Ornstein-Uhlenbeck (OU) noise layer for temporally correlated noise.

    Implements exact discretization of the OU process where ``sigma`` is the
    stationary standard deviation:

        n_{t+1} = alpha * n_t + beta * eps_t
        alpha = exp(-dt/tau)
        beta  = sigma * sqrt(1 - exp(-2*dt/tau))
        eps_t ~ N(0, 1)

    The layer supports both single-step (stateful) and multi-step (vectorized)
    modes. In stateful mode, the noise state persists across forward calls.

    Scale and bias are handled internally by
    :class:`btorch.models.linear.LearnableScale`.

    Learnable parameters:
        - ``scale``: Multiplicative scaling (default: 1.0)
        - ``bias``: Additive offset (default: 0.0)
        - ``sigma``: Stationary standard deviation
        - ``tau``: Time constant

    Tensor Conventions:
        - Multi-step input: Output shape is ``(T, *batch_dims, *n_neuron)``
          where neuron dims are trailing.
        - Single-step input: Output shape is ``(*batch_dims, *n_neuron)``.
        - Internal state ``self.noise`` stores the state BEFORE the current
          step/sequence (i.e., the initial condition).

    Multi-step Backend:
        - Scalar tau/sigma: Uses single conv1d (fast)
        - Per-neuron tau/sigma: Uses grouped conv1d, O(T^2 * D) complexity

    Determinism Note:
        Multi-step uses vectorized RNG and convolution, so exact equality with
        repeated single-step updates is not guaranteed even with a generator.
        Use single-step loops if you need step-by-step equivalence.

    Args:
        n_neuron: Number of neurons or shape of trailing neuron dims.
        sigma: Stationary standard deviation (scalar or per-neuron).
        tau: Time constant in same units as dt (scalar or per-neuron).
        step_mode: ``'s'`` for single-step, ``'m'`` for multi-step.
        trainable_param: Set of parameter names to make trainable, or True/False
            for all/none. Options: {"scale", "bias", "sigma", "tau"}.
        trainable_shape: Shape policy for trainable values:
            - ``"scalar"`` (default): Store as scalar, broadcast to neurons
            - ``"full"``: Store as full per-neuron tensor
        stateful: If True, maintain noise state between calls (required for
            single-step mode).
        tau_min: Minimum value for tau (clamped for numerical stability).
        scale: Initial multiplicative scaling.
        bias: Initial additive offset.

    Attributes:
        noise: Current noise state tensor (only if ``stateful=True``).
        scale: Output scaling (via LearnableScale).
        bias: Output offset (via LearnableScale).
        sigma: Stationary std dev (Parameter if trainable, else buffer).
        tau: Time constant (Parameter if trainable, else buffer).
    """

    noise: Tensor

    def __init__(
        self,
        n_neuron: int | Sequence[int],
        sigma: float | Tensor = 0.5,
        tau: float | Tensor = 10.0,
        step_mode: Literal["s", "m"] = "m",
        trainable_param: bool | set[str] = False,
        *,
        trainable_shape: str = "scalar",
        stateful: bool = False,
        tau_min: float = 1e-6,
        scale: float | Tensor = 1.0,
        bias: float | Tensor = 0.0,
    ):
        super().__init__(
            n_neuron,
            scale=scale,
            bias=bias,
            trainable_param=trainable_param,
            trainable_shape=trainable_shape,
            step_mode=step_mode,
            stateful=stateful,
        )
        if not stateful and step_mode == "s":
            raise ValueError("stateful must be True for single-step mode.")
        self.tau_min = float(tau_min)

        # Memory state: stored as "n_0" (state BEFORE current step/sequence).
        if stateful:
            self.register_memory("noise", 0.0, self.n_neuron)

        # OU-specific parameters
        self.def_param(
            "sigma",
            sigma,
            trainable_param=self.trainable_param,
            trainable_shape=trainable_shape,
        )
        self.def_param(
            "tau",
            tau,
            trainable_param=self.trainable_param,
            trainable_shape=trainable_shape,
        )

    def single_step_forward(
        self, dt: float | None = None, *, generator: torch.Generator | None = None
    ) -> Tensor:
        """Single-step update of OU noise.

        Args:
            dt: Timestep (defaults to ``environ.get("dt")``).
            generator: Optional RNG generator.

        Returns:
            Updated noise tensor with same shape as ``self.noise``.
        """
        assert self.stateful, "single_step_forward requires stateful=True"

        sigma, tau = self.sigma, self.tau
        dt: float = dt if dt is not None else environ.get("dt")

        alpha = torch.exp(-dt / tau.clamp(min=self.tau_min))
        beta = sigma * torch.sqrt(
            1.0 - torch.exp(-2.0 * dt / tau.clamp(min=self.tau_min))
        )

        self.noise = alpha * self.noise + beta * randn_like(
            self.noise, generator=generator
        )
        return self._apply_scale_bias(self.noise)

    def multi_step_forward(
        self,
        T: int,
        dt: float | None = None,
        *,
        generator: torch.Generator | None = None,
    ) -> Tensor:
        """Generate a multi-step OU noise sequence.

        Args:
            T: Number of timesteps.
            dt: Timestep (defaults to ``environ.get("dt")``).
            generator: Optional RNG generator.

        Returns:
            Noise sequence of shape ``(T, *noise_shape)`` where ``noise_shape``
            matches ``self.noise.shape``.

        Raises:
            RuntimeError: If stateful but noise buffer not initialized.
        """
        if T == 0:
            return torch.empty(
                (0,) + tuple(self.noise.shape),
                device=self.noise.device,
                dtype=self.noise.dtype,
            )

        if not hasattr(self, "noise"):
            raise RuntimeError(
                "OUNoiseLayer: noise buffer is not initialized. Call reset(...) first."
            )

        dt = dt if dt is not None else environ.get("dt")
        out = ou_noise(
            sigma=self.sigma,
            tau=self.tau.clamp(min=self.tau_min),
            T=T,
            dt=dt,
            noise0=self.noise,
            generator=generator,
        )

        if self.stateful:
            # Update self.noise to the final state after the sequence.
            self.noise = out[-1]
        return self._apply_scale_bias(out)

    def extra_repr(self) -> str:
        return (
            f"n_neuron={self.n_neuron}, sigma={self._format_repr_value(self.sigma)}, "
            f"tau={self._format_repr_value(self.tau)}, "
            f"scale={self._format_repr_value(self.scale)}, "
            f"bias={self._format_repr_value(self.bias)}, "
            f"step_mode={self.step_mode}, tau_min={self.tau_min}"
        )
Functions
multi_step_forward(T, dt=None, *, generator=None)

Generate a multi-step OU noise sequence.

Parameters:

- T (int): Number of timesteps. Required.
- dt (float | None): Timestep (defaults to environ.get("dt")). Default: None.
- generator (Generator | None): Optional RNG generator. Default: None.

Returns:

- Tensor: Noise sequence of shape (T, *noise_shape) where noise_shape matches self.noise.shape.

Raises:

- RuntimeError: If stateful but noise buffer not initialized.

Source code in btorch/datasets/noise.py
def multi_step_forward(
    self,
    T: int,
    dt: float | None = None,
    *,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Generate a multi-step OU noise sequence.

    Args:
        T: Number of timesteps.
        dt: Timestep (defaults to ``environ.get("dt")``).
        generator: Optional RNG generator.

    Returns:
        Noise sequence of shape ``(T, *noise_shape)`` where ``noise_shape``
        matches ``self.noise.shape``.

    Raises:
        RuntimeError: If stateful but noise buffer not initialized.
    """
    if T == 0:
        return torch.empty(
            (0,) + tuple(self.noise.shape),
            device=self.noise.device,
            dtype=self.noise.dtype,
        )

    if not hasattr(self, "noise"):
        raise RuntimeError(
            "OUNoiseLayer: noise buffer is not initialized. Call reset(...) first."
        )

    dt = dt if dt is not None else environ.get("dt")
    out = ou_noise(
        sigma=self.sigma,
        tau=self.tau.clamp(min=self.tau_min),
        T=T,
        dt=dt,
        noise0=self.noise,
        generator=generator,
    )

    if self.stateful:
        # Update self.noise to the final state after the sequence.
        self.noise = out[-1]
    return self._apply_scale_bias(out)
single_step_forward(dt=None, *, generator=None)

Single-step update of OU noise.

Parameters:

- dt (float | None): Timestep (defaults to environ.get("dt")). Default: None.
- generator (Generator | None): Optional RNG generator. Default: None.

Returns:

- Tensor: Updated noise tensor with the same shape as self.noise.

Source code in btorch/datasets/noise.py
def single_step_forward(
    self, dt: float | None = None, *, generator: torch.Generator | None = None
) -> Tensor:
    """Single-step update of OU noise.

    Args:
        dt: Timestep (defaults to ``environ.get("dt")``).
        generator: Optional RNG generator.

    Returns:
        Updated noise tensor with same shape as ``self.noise``.
    """
    assert self.stateful, "single_step_forward requires stateful=True"

    sigma, tau = self.sigma, self.tau
    dt: float = dt if dt is not None else environ.get("dt")

    alpha = torch.exp(-dt / tau.clamp(min=self.tau_min))
    beta = sigma * torch.sqrt(
        1.0 - torch.exp(-2.0 * dt / tau.clamp(min=self.tau_min))
    )

    self.noise = alpha * self.noise + beta * randn_like(
        self.noise, generator=generator
    )
    return self._apply_scale_bias(self.noise)

PinkNoiseLayer

Bases: _BaseNoiseLayer

Pink (1/f) noise layer using causal FIR filtering.

Generates colored noise with PSD ~ 1/frequency by filtering white noise through a fractional integration FIR kernel. Supports both single-step (stateful, with history preservation) and multi-step (vectorized) modes.

In stateful mode, the FIR history is preserved across calls for seamless continuation of noise sequences.

Scale and bias are handled internally by btorch.models.linear.LearnableScale.

Learnable parameters
  • scale: Multiplicative scaling (default: 1.0)
  • bias: Additive offset (default: 0.0)

Parameters:

- n_neuron (int | Sequence[int]): Number of neurons or shape of trailing neuron dims. Required.
- fir_order (int): Length of the FIR filter kernel. Default: 64.
- step_mode (Literal['s', 'm']): 's' for single-step, 'm' for multi-step. Default: 'm'.
- trainable_param (bool | set[str]): Set of parameter names to make trainable, or True/False for all/none. Options: {"scale", "bias"}. Default: False.
- trainable_shape (str): Shape policy for trainable values: "scalar" (default) stores a scalar broadcast to neurons; "full" stores a full per-neuron tensor. Default: 'scalar'.
- stateful (bool): If True, maintain FIR history between calls (required for single-step mode). Default: False.
- scale (float | Tensor): Initial multiplicative scaling. Default: 1.0.
- bias (float | Tensor): Initial additive offset. Default: 0.0.

Attributes:

- noise (Tensor): Current noise value (only if stateful=True).
- white_history (Tensor): Previous white noise samples for FIR continuity (shape (*n_neuron, fir_order-1)).
- fir_order: Length of the FIR kernel.
- scale (Tensor): Output scaling (via LearnableScale).
- bias (Tensor): Output offset (via LearnableScale).

Source code in btorch/datasets/noise.py
class PinkNoiseLayer(_BaseNoiseLayer):
    """Pink (1/f) noise layer using causal FIR filtering.

    Generates colored noise with PSD ~ 1/frequency by filtering white noise
    through a fractional integration FIR kernel. Supports both single-step
    (stateful, with history preservation) and multi-step (vectorized) modes.

    In stateful mode, the FIR history is preserved across calls for seamless
    continuation of noise sequences.

    Scale and bias are handled internally by
    :class:`btorch.models.linear.LearnableScale`.

    Learnable parameters:
        - ``scale``: Multiplicative scaling (default: 1.0)
        - ``bias``: Additive offset (default: 0.0)

    Args:
        n_neuron: Number of neurons or shape of trailing neuron dims.
        fir_order: Length of the FIR filter kernel (default 64).
        step_mode: ``'s'`` for single-step, ``'m'`` for multi-step.
        trainable_param: Set of parameter names to make trainable, or True/False
            for all/none. Options: {"scale", "bias"}.
        trainable_shape: Shape policy for trainable values:
            - ``"scalar"`` (default): Store as scalar, broadcast to neurons
            - ``"full"``: Store as full per-neuron tensor
        stateful: If True, maintain FIR history between calls (required for
            single-step mode).
        scale: Initial multiplicative scaling.
        bias: Initial additive offset.

    Attributes:
        noise: Current noise value (only if ``stateful=True``).
        white_history: Previous white noise samples for FIR continuity
            (shape ``(*n_neuron, fir_order-1)``).
        fir_order: Length of the FIR kernel.
        scale: Output scaling (via LearnableScale).
        bias: Output offset (via LearnableScale).
    """

    noise: Tensor
    white_history: Tensor

    def __init__(
        self,
        n_neuron: int | Sequence[int],
        fir_order: int = 64,
        step_mode: Literal["s", "m"] = "m",
        trainable_param: bool | set[str] = False,
        *,
        trainable_shape: str = "scalar",
        stateful: bool = False,
        scale: float | Tensor = 1.0,
        bias: float | Tensor = 0.0,
    ):
        super().__init__(
            n_neuron,
            scale=scale,
            bias=bias,
            trainable_param=trainable_param,
            trainable_shape=trainable_shape,
            step_mode=step_mode,
            stateful=stateful,
        )
        if not stateful and step_mode == "s":
            raise ValueError("stateful must be True for single-step mode.")
        if fir_order < 1:
            raise ValueError(f"fir_order must be >= 1, got {fir_order}.")

        self.fir_order = int(fir_order)

        if stateful:
            self.register_memory("noise", 0.0, self.n_neuron)
            self.register_memory(
                "white_history", 0.0, self.n_neuron + (self.fir_order - 1,)
            )

    def single_step_forward(
        self, *, generator: torch.Generator | None = None
    ) -> Tensor:
        """Single-step pink-noise update using FIR history.

        Args:
            generator: Optional RNG generator.

        Returns:
            Single noise sample with shape ``(*n_neuron)``.
        """
        assert self.stateful, "single_step_forward requires stateful=True"
        out, new_hist = pink_noise(
            T=1,
            fir_order=self.fir_order,
            white_history=self.white_history,
            generator=generator,
            return_white_history=True,
        )
        self.noise = out[0]
        self.white_history = new_hist
        return self._apply_scale_bias(self.noise)

    def multi_step_forward(
        self,
        T: int,
        *,
        generator: torch.Generator | None = None,
    ) -> Tensor:
        """Vectorized multi-step pink-noise generation.

        Args:
            T: Number of timesteps.
            generator: Optional RNG generator.

        Returns:
            Noise sequence of shape ``(T, *n_neuron)``.
        """
        if T == 0:
            return torch.empty(
                (0,) + tuple(self.noise.shape),
                device=self.noise.device,
                dtype=self.noise.dtype,
            )
        if not hasattr(self, "noise") or not hasattr(self, "white_history"):
            raise RuntimeError(
                "PinkNoiseLayer memories are not initialized. "
                "Call init_state(...) first."
            )

        out, new_hist = pink_noise(
            T=T,
            fir_order=self.fir_order,
            white_history=self.white_history,
            generator=generator,
            return_white_history=True,
        )
        if self.stateful:
            self.noise = out[-1]
            self.white_history = new_hist
        return self._apply_scale_bias(out)

    def extra_repr(self) -> str:
        return (
            f"n_neuron={self.n_neuron}, fir_order={self.fir_order}, "
            f"scale={self._format_repr_value(self.scale)}, "
            f"bias={self._format_repr_value(self.bias)}, step_mode={self.step_mode}"
        )
Functions
multi_step_forward(T, *, generator=None)

Vectorized multi-step pink-noise generation.

Parameters:

- T (int): Number of timesteps. Required.
- generator (Generator | None): Optional RNG generator. Default: None.

Returns:

- Tensor: Noise sequence of shape (T, *n_neuron).

Source code in btorch/datasets/noise.py
def multi_step_forward(
    self,
    T: int,
    *,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Vectorized multi-step pink-noise generation.

    Args:
        T: Number of timesteps.
        generator: Optional RNG generator.

    Returns:
        Noise sequence of shape ``(T, *n_neuron)``.
    """
    if T == 0:
        return torch.empty(
            (0,) + tuple(self.noise.shape),
            device=self.noise.device,
            dtype=self.noise.dtype,
        )
    if not hasattr(self, "noise") or not hasattr(self, "white_history"):
        raise RuntimeError(
            "PinkNoiseLayer memories are not initialized. "
            "Call init_state(...) first."
        )

    out, new_hist = pink_noise(
        T=T,
        fir_order=self.fir_order,
        white_history=self.white_history,
        generator=generator,
        return_white_history=True,
    )
    if self.stateful:
        self.noise = out[-1]
        self.white_history = new_hist
    return self._apply_scale_bias(out)
single_step_forward(*, generator=None)

Single-step pink-noise update using FIR history.

Parameters:

- generator (Generator | None): Optional RNG generator. Default: None.

Returns:

- Tensor: Single noise sample with shape (*n_neuron).

Source code in btorch/datasets/noise.py
def single_step_forward(
    self, *, generator: torch.Generator | None = None
) -> Tensor:
    """Single-step pink-noise update using FIR history.

    Args:
        generator: Optional RNG generator.

    Returns:
        Single noise sample with shape ``(*n_neuron)``.
    """
    assert self.stateful, "single_step_forward requires stateful=True"
    out, new_hist = pink_noise(
        T=1,
        fir_order=self.fir_order,
        white_history=self.white_history,
        generator=generator,
        return_white_history=True,
    )
    self.noise = out[0]
    self.white_history = new_hist
    return self._apply_scale_bias(self.noise)

PoissonNoiseLayer

Bases: _BaseNoiseLayer

Poisson noise layer serving as both generator and encoder.

Generates Poisson-distributed event counts with rate scaled by dt: lambda = rate * dt per timestep. Because Poisson processes are memoryless, this layer does not maintain internal state.

The layer can operate in two modes:
  • Generator mode: uses the rate provided at construction.
  • Encoder mode: accepts an external rate tensor via forward() or multi_step_forward(), similar to SpikingJelly's encoder pattern.
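
The lambda = rate * dt sampling described above can be sketched directly with torch.poisson (illustrative rates; not the layer itself):

```python
import torch

rate = torch.tensor([0.0, 5.0, 50.0])   # events per unit time, per neuron
dt = 0.1
g = torch.Generator().manual_seed(0)

lam = rate * dt                          # expected events per timestep
# Expand the per-neuron rates across T timesteps; memorylessness means
# every timestep is an independent draw.
counts = torch.poisson(lam.expand(1000, 3), generator=g)  # shape (T, n_neuron)

assert counts.shape == (1000, 3)
assert counts[:, 0].sum() == 0           # a zero rate never produces events
```

With lam = 5.0 for the last neuron, the empirical mean count per step concentrates near 5 over 1000 steps, matching the Poisson mean.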

Scale and bias are handled internally by btorch.models.linear.LearnableScale.

Learnable parameters
  • scale: Multiplicative scaling (default: 1.0)
  • bias: Additive offset (default: 0.0)
  • rate: Events per unit time (default: 1.0)

Parameters:

- n_neuron (int | Sequence[int]): Number of neurons or shape of trailing neuron dims. Required.
- rate (float | Tensor): Default events per unit time (scalar or per-neuron, broadcastable). Used when rate is not provided to forward. Default: 1.0.
- step_mode (Literal['s', 'm']): 's' for single-step, 'm' for multi-step. Default: 'm'.
- trainable_param (bool | set[str]): Set of parameter names to make trainable, or True/False for all/none. Options: {"scale", "bias", "rate"}. Default: False.
- trainable_shape (str): Shape policy for trainable values: "scalar" (default) stores a scalar broadcast to neurons; "full" stores a full per-neuron tensor. Default: 'scalar'.
- stateful (bool): Kept for API compatibility but ignored (Poisson is memoryless). Default: False.
- scale (float | Tensor): Initial multiplicative scaling. Default: 1.0.
- bias (float | Tensor): Initial additive offset. Default: 0.0.

Attributes:

- scale (Tensor): Output scaling (via LearnableScale).
- bias (Tensor): Output offset (via LearnableScale).
- rate (Tensor): Event rate (Parameter if trainable, else buffer).

Source code in btorch/datasets/noise.py
class PoissonNoiseLayer(_BaseNoiseLayer):
    """Poisson noise layer serving as both generator and encoder.

    Generates Poisson-distributed event counts with rate scaled by ``dt``:
    ``lambda = rate * dt`` per timestep. Because Poisson processes are
    memoryless, this layer does not maintain internal state.

    The layer can operate in two modes:
    - **Generator mode**: uses the ``rate`` provided at construction.
    - **Encoder mode**: accepts an external ``rate`` tensor via ``forward()``
      or ``multi_step_forward()``, similar to SpikingJelly's encoder pattern.

    Scale and bias are handled internally by
    :class:`btorch.models.linear.LearnableScale`.

    Learnable parameters:
        - ``scale``: Multiplicative scaling (default: 1.0)
        - ``bias``: Additive offset (default: 0.0)
        - ``rate``: Events per unit time (default: 1.0)

    Args:
        n_neuron: Number of neurons or shape of trailing neuron dims.
        rate: Default events per unit time (scalar or per-neuron,
            broadcastable). Used when ``rate`` is not provided to ``forward``.
        step_mode: ``'s'`` for single-step, ``'m'`` for multi-step.
        trainable_param: Set of parameter names to make trainable, or True/False
            for all/none. Options: {"scale", "bias", "rate"}.
        trainable_shape: Shape policy for trainable values:
            - ``"scalar"`` (default): Store as scalar, broadcast to neurons
            - ``"full"``: Store as full per-neuron tensor
        stateful: Kept for API compatibility but ignored (Poisson is
            memoryless).
        scale: Initial multiplicative scaling.
        bias: Initial additive offset.

    Attributes:
        scale: Output scaling (via LearnableScale).
        bias: Output offset (via LearnableScale).
        rate: Event rate (Parameter if trainable, else buffer).
    """

    def __init__(
        self,
        n_neuron: int | Sequence[int],
        rate: float | Tensor = 1.0,
        step_mode: Literal["s", "m"] = "m",
        trainable_param: bool | set[str] = False,
        *,
        trainable_shape: str = "scalar",
        stateful: bool = False,
        scale: float | Tensor = 1.0,
        bias: float | Tensor = 0.0,
    ):
        super().__init__(
            n_neuron,
            scale=scale,
            bias=bias,
            trainable_param=trainable_param,
            trainable_shape=trainable_shape,
            step_mode=step_mode,
            stateful=stateful,
        )
        # stateful is a no-op for Poisson (memoryless), kept for compatibility.

        self.def_param(
            "rate",
            rate,
            trainable_param=self.trainable_param,
            trainable_shape=trainable_shape,
        )

    def forward(
        self,
        rate: Tensor | None = None,
        dt: float | None = None,
        *,
        generator: torch.Generator | None = None,
    ) -> Tensor:
        """Single-step Poisson sampling.

        Args:
            rate: Optional rate tensor with shape ``(*n_neuron)`` (or any
                batch-prefixed variant). If None, uses ``self.rate``.
            dt: Timestep (defaults to ``environ.get("dt")``).
            generator: Optional RNG generator.

        Returns:
            Event counts tensor broadcastable to the rate shape.
        """
        dt = dt if dt is not None else environ.get("dt")
        rate_tensor = self.rate if rate is None else rate
        rate_tensor = torch.as_tensor(rate_tensor)

        lam = rate_tensor * float(dt)
        if torch.any(lam < 0):
            raise ValueError("Poisson rate * dt must be non-negative.")

        # Broadcast to n_neuron so shape is always well-defined.
        base = torch.zeros(self.n_neuron, dtype=lam.dtype, device=lam.device)
        lam = base + lam

        out = torch.poisson(lam, generator=generator)
        return self._apply_scale_bias(out)

    def multi_step_forward(
        self,
        T: int,
        rate: Tensor | None = None,
        dt: float | None = None,
        *,
        generator: torch.Generator | None = None,
    ) -> Tensor:
        """Vectorized multi-step Poisson sampling.

        Args:
            T: Number of timesteps.
            rate: Optional rate tensor. If None, uses ``self.rate``.
                For shape ``(*n_neuron)``, the same rate is used at every
                timestep. For shape ``(T, *n_neuron)``, per-timestep rates
                are used.
            dt: Timestep (defaults to ``environ.get("dt")``).
            generator: Optional RNG generator.

        Returns:
            Event counts of shape ``(T, *n_neuron)``.
        """
        if T < 0:
            raise ValueError(f"T must be non-negative, got {T}.")
        if T == 0:
            return torch.empty((0,) + self.n_neuron)

        dt = dt if dt is not None else environ.get("dt")
        rate_tensor = self.rate if rate is None else rate
        rate_tensor = torch.as_tensor(rate_tensor)

        lam = rate_tensor * float(dt)
        if torch.any(lam < 0):
            raise ValueError("Poisson rate * dt must be non-negative.")

        if lam.ndim >= len(self.n_neuron) + 1 and lam.shape[0] == T:
            # Per-timestep rates provided directly.
            out = torch.poisson(lam, generator=generator)
        else:
            # Scalar, per-neuron, or batch-prefixed rates: replicate across T.
            base = torch.zeros(self.n_neuron, dtype=lam.dtype, device=lam.device)
            lam_full = base + lam
            lam_seq = lam_full.unsqueeze(0).expand((T,) + lam_full.shape)
            out = torch.poisson(lam_seq, generator=generator)

        return self._apply_scale_bias(out)

    def extra_repr(self) -> str:
        return (
            f"n_neuron={self.n_neuron}, rate={self._format_repr_value(self.rate)}, "
            f"scale={self._format_repr_value(self.scale)}, "
            f"bias={self._format_repr_value(self.bias)}, step_mode={self.step_mode}"
        )
Functions
forward(rate=None, dt=None, *, generator=None)

Single-step Poisson sampling.

Parameters:

- rate (Tensor | None): Optional rate tensor with shape (*n_neuron) (or any batch-prefixed variant). If None, uses self.rate. Default: None.
- dt (float | None): Timestep (defaults to environ.get("dt")). Default: None.
- generator (Generator | None): Optional RNG generator. Default: None.

Returns:

- Tensor: Event counts tensor broadcastable to the rate shape.

Source code in btorch/datasets/noise.py
def forward(
    self,
    rate: Tensor | None = None,
    dt: float | None = None,
    *,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Single-step Poisson sampling.

    Args:
        rate: Optional rate tensor with shape ``(*n_neuron)`` (or any
            batch-prefixed variant). If None, uses ``self.rate``.
        dt: Timestep (defaults to ``environ.get("dt")``).
        generator: Optional RNG generator.

    Returns:
        Event counts tensor broadcastable to the rate shape.
    """
    dt = dt if dt is not None else environ.get("dt")
    rate_tensor = self.rate if rate is None else rate
    rate_tensor = torch.as_tensor(rate_tensor)

    lam = rate_tensor * float(dt)
    if torch.any(lam < 0):
        raise ValueError("Poisson rate * dt must be non-negative.")

    # Broadcast to n_neuron so shape is always well-defined.
    base = torch.zeros(self.n_neuron, dtype=lam.dtype, device=lam.device)
    lam = base + lam

    out = torch.poisson(lam, generator=generator)
    return self._apply_scale_bias(out)
multi_step_forward(T, rate=None, dt=None, *, generator=None)

Vectorized multi-step Poisson sampling.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `T` | `int` | Number of timesteps. | required |
| `rate` | `Tensor \| None` | Optional rate tensor. If None, uses `self.rate`. For shape `(*n_neuron)`, the same rate is used at every timestep. For shape `(T, *n_neuron)`, per-timestep rates are used. | `None` |
| `dt` | `float \| None` | Timestep (defaults to `environ.get("dt")`). | `None` |
| `generator` | `Generator \| None` | Optional RNG generator. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Event counts of shape `(T, *n_neuron)`. |

Source code in btorch/datasets/noise.py
def multi_step_forward(
    self,
    T: int,
    rate: Tensor | None = None,
    dt: float | None = None,
    *,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Vectorized multi-step Poisson sampling.

    Args:
        T: Number of timesteps.
        rate: Optional rate tensor. If None, uses ``self.rate``.
            For shape ``(*n_neuron)``, the same rate is used at every
            timestep. For shape ``(T, *n_neuron)``, per-timestep rates
            are used.
        dt: Timestep (defaults to ``environ.get("dt")``).
        generator: Optional RNG generator.

    Returns:
        Event counts of shape ``(T, *n_neuron)``.
    """
    if T < 0:
        raise ValueError(f"T must be non-negative, got {T}.")
    if T == 0:
        return torch.empty((0,) + self.n_neuron)

    dt = dt if dt is not None else environ.get("dt")
    rate_tensor = self.rate if rate is None else rate
    rate_tensor = torch.as_tensor(rate_tensor)

    lam = rate_tensor * float(dt)
    if torch.any(lam < 0):
        raise ValueError("Poisson rate * dt must be non-negative.")

    if lam.ndim >= len(self.n_neuron) + 1 and lam.shape[0] == T:
        # Per-timestep rates provided directly.
        out = torch.poisson(lam, generator=generator)
    else:
        # Scalar, per-neuron, or batch-prefixed rates: replicate across T.
        base = torch.zeros(self.n_neuron, dtype=lam.dtype, device=lam.device)
        lam_full = base + lam
        lam_seq = lam_full.unsqueeze(0).expand((T,) + lam_full.shape)
        out = torch.poisson(lam_seq, generator=generator)

    return self._apply_scale_bias(out)
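The two branches above (per-timestep rates passed through directly, versus constant rates replicated across `T`) can be sketched in plain PyTorch with hypothetical shapes:

```python
import torch

T, N = 5, 3
dt = 1e-3
gen = torch.Generator().manual_seed(0)

# Constant per-neuron rates, replicated across T (the "else" branch).
rate = torch.tensor([20.0, 50.0, 100.0])
lam_seq = (rate * dt).unsqueeze(0).expand(T, N)
counts = torch.poisson(lam_seq, generator=gen)
assert counts.shape == (T, N)

# Per-timestep rates of shape (T, N) go to torch.poisson directly.
rate_t = torch.linspace(10.0, 100.0, T).unsqueeze(1).expand(T, N)
counts_t = torch.poisson(rate_t * dt, generator=gen)
assert counts_t.shape == (T, N)
```

Because `torch.poisson` samples element-wise, the expanded view costs no extra memory for the constant-rate case.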

Functions

ou_noise(*size, sigma, tau, T, dt, device=None, dtype=None, noise0=None, generator=None)

Generate Ornstein-Uhlenbeck (OU) noise sequence.

OU noise follows the stochastic differential equation

dx = -x/tau * dt + sigma * sqrt(2/tau) * dW

The exact discretization used is:

    n_{t+1} = alpha * n_t + beta * eps_t
    alpha   = exp(-dt/tau)
    beta    = sigma * sqrt(1 - exp(-2*dt/tau))
    eps_t   ~ N(0, 1)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*size` | `int` | Shape of the noise per timestep (e.g., `B, N` for batch, neurons). Output will be `(T, *size)`. | `()` |
| `sigma` | `Tensor` | Standard deviation of the stationary distribution. Can be scalar or per-element (broadcastable to `size`). | required |
| `tau` | `Tensor` | Time constant controlling correlation length. Can be scalar or per-element (broadcastable to `size`). Same units as `dt`. | required |
| `T` | `int` | Number of timesteps to generate. | required |
| `dt` | `float` | Simulation timestep (same units as `tau`). | required |
| `device` | `device \| None` | Device for the output tensor (if `noise0` not provided). | `None` |
| `dtype` | `dtype \| None` | Dtype for the output tensor (if `noise0` not provided). | `None` |
| `noise0` | `Tensor \| None` | Initial noise state with shape `size`. If provided, `size` must match or be empty. Defaults to N(0,1) sample if None. | `None` |
| `generator` | `Generator \| None` | Optional RNG generator for deterministic sampling. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Tensor of shape `(T, *size)` containing the OU noise sequence. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If neither `size` nor `noise0` is provided. |
| `RuntimeError` | If `sigma` or `tau` cannot broadcast to `size`. |

Source code in btorch/datasets/noise.py
def ou_noise(
    *size: int,
    sigma: Tensor,
    tau: Tensor,
    T: int,
    dt: float,
    device: torch.device | None = None,
    dtype: torch.dtype | None = None,
    noise0: Tensor | None = None,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Generate Ornstein-Uhlenbeck (OU) noise sequence.

    OU noise follows the stochastic differential equation:
        dx = -x/tau * dt + sigma * sqrt(2/tau) * dW

    The exact discretization used is:
        n_{t+1} = alpha * n_t + beta * eps_t
        alpha = exp(-dt/tau)
        beta = sigma * sqrt(1 - exp(-2*dt/tau))
        eps_t ~ N(0, 1)

    Args:
        *size: Shape of the noise per timestep (e.g., ``B, N`` for batch,
            neurons). Output will be ``(T, *size)``.
        sigma: Standard deviation of the stationary distribution. Can be
            scalar or per-element (broadcastable to ``size``).
        tau: Time constant controlling correlation length. Can be scalar or
            per-element (broadcastable to ``size``). Same units as ``dt``.
        T: Number of timesteps to generate.
        dt: Simulation timestep (same units as ``tau``).
        device: Device for the output tensor (if ``noise0`` not provided).
        dtype: Dtype for the output tensor (if ``noise0`` not provided).
        noise0: Initial noise state with shape ``size``. If provided, ``size``
            must match or be empty. Defaults to N(0,1) sample if None.
        generator: Optional RNG generator for deterministic sampling.

    Returns:
        Tensor of shape ``(T, *size)`` containing the OU noise sequence.

    Raises:
        ValueError: If neither ``size`` nor ``noise0`` is provided.
        RuntimeError: If ``sigma`` or ``tau`` cannot broadcast to ``size``.
    """
    if noise0 is None:
        if len(size) == 0:
            raise ValueError("Provide size or noise0.")
        noise0 = torch.randn(size, device=device, dtype=dtype, generator=generator)
    elif len(size) != 0:
        if tuple(size) != tuple(noise0.shape):
            raise ValueError(f"size={size} does not match noise0.shape={noise0.shape}.")

    alpha = torch.exp(-dt / tau)
    beta = sigma * torch.sqrt(1.0 - torch.exp(-2.0 * dt / tau))

    rest_shape = noise0.shape
    D = noise0.numel()

    device = noise0.device
    dtype = noise0.dtype

    noise0_flat = noise0.reshape(-1)
    eps = torch.randn((D, T), device=device, dtype=dtype, generator=generator)

    beta_flat = beta.reshape(-1)
    if beta_flat.numel() == 1:
        beta_flat = beta_flat.expand(D)
    elif beta_flat.numel() != D:
        msg = (
            f"beta has {beta_flat.numel()} elems but D={D}; "
            "sigma must be broadcastable to noise."
        )
        raise RuntimeError(msg)

    u = beta_flat[:, None] * eps  # [D,T]

    if alpha.numel() == 1:
        a = alpha.reshape(1)

        powers = torch.arange(T - 1, -1, -1, device=device, dtype=dtype)
        w = torch.pow(a.to(dtype=dtype), powers).view(1, 1, T)  # [1,1,T]

        u_pad = F.pad(u.unsqueeze(1), (T - 1, 0))
        n_from_u = F.conv1d(u_pad, w).squeeze(1)

        factors = torch.pow(
            a.to(dtype=dtype),
            torch.arange(1, T + 1, device=device, dtype=dtype),
        )
        n_seq = n_from_u + noise0_flat[:, None] * factors[None, :]

    else:
        alpha_flat = alpha.reshape(-1)
        if alpha_flat.numel() == 1:
            alpha_flat = alpha_flat.expand(D)
        elif alpha_flat.numel() != D:
            msg = (
                f"alpha has {alpha_flat.numel()} elems but D={D}; "
                "tau must be broadcastable to noise."
            )
            raise RuntimeError(msg)

        u_ch = u.unsqueeze(0)
        u_pad = F.pad(u_ch, (T - 1, 0))

        powers = torch.arange(T - 1, -1, -1, device=device, dtype=dtype)
        w = torch.pow(alpha_flat[:, None], powers[None, :])  # [D,T]
        w = w[:, None, :]

        n_from_u = F.conv1d(u_pad, w, groups=D).squeeze(0)

        tpow = torch.arange(1, T + 1, device=device, dtype=dtype)
        factors = torch.pow(alpha_flat[:, None], tpow[None, :])
        n_seq = n_from_u + noise0_flat[:, None] * factors

    out = _unflatten_td(n_seq, rest_shape)
    return out
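The vectorized `conv1d` path above is algebraically equivalent to running the discretized recurrence step by step. A sequential pure-PyTorch reference (scalar `sigma`/`tau`, hypothetical values) that can be used to sanity-check the closed-form coefficients:

```python
import math
import torch

def ou_noise_loop(T, size, sigma, tau, dt, generator=None):
    """Sequential reference for the exact OU update:
    n[t+1] = alpha * n[t] + beta * eps[t]."""
    alpha = math.exp(-dt / tau)
    beta = sigma * math.sqrt(1.0 - math.exp(-2.0 * dt / tau))
    n = torch.randn(size, generator=generator)  # noise0 ~ N(0, 1)
    steps = []
    for _ in range(T):
        eps = torch.randn(size, generator=generator)
        n = alpha * n + beta * eps
        steps.append(n)
    return torch.stack(steps)  # shape (T, *size)

gen = torch.Generator().manual_seed(0)
x = ou_noise_loop(T=1000, size=(4,), sigma=0.5, tau=10.0, dt=0.1, generator=gen)
assert x.shape == (1000, 4)
```

For `T >> tau/dt` the empirical standard deviation should settle near `sigma`, since `beta` is chosen so the stationary variance of the recurrence is exactly `sigma**2`.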

ou_noise_like(like, sigma, tau, *, T, dt, noise0=None, generator=None)

Generate OU noise matching a reference tensor's shape and device.

Convenience wrapper around ou_noise that infers size, device, and dtype from a reference tensor.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `like` | `Tensor` | Reference tensor with shape `(*batch, *neuron)` that defines the per-timestep shape. Output will be `(T, *like.shape)`. | required |
| `sigma` | `Tensor` | Standard deviation (scalar or broadcastable to `like.shape`). | required |
| `tau` | `Tensor` | Time constant (scalar or broadcastable to `like.shape`). | required |
| `T` | `int` | Number of timesteps. | required |
| `dt` | `float` | Simulation timestep. | required |
| `noise0` | `Tensor \| None` | Optional initial state with shape `like.shape`. | `None` |
| `generator` | `Generator \| None` | Optional RNG generator. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | OU noise tensor of shape `(T, *like.shape)`. |

Source code in btorch/datasets/noise.py
def ou_noise_like(
    like: Tensor,
    sigma: Tensor,
    tau: Tensor,
    *,
    T: int,
    dt: float,
    noise0: Tensor | None = None,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Generate OU noise matching a reference tensor's shape and device.

    Convenience wrapper around ``ou_noise`` that infers ``size``, ``device``,
    and ``dtype`` from a reference tensor.

    Args:
        like: Reference tensor with shape ``(*batch, *neuron)`` that defines
            the per-timestep shape. Output will be ``(T, *like.shape)``.
        sigma: Standard deviation (scalar or broadcastable to ``like.shape``).
        tau: Time constant (scalar or broadcastable to ``like.shape``).
        T: Number of timesteps.
        dt: Simulation timestep.
        noise0: Optional initial state with shape ``like.shape``.
        generator: Optional RNG generator.

    Returns:
        OU noise tensor of shape ``(T, *like.shape)``.
    """
    if noise0 is None:
        noise0 = randn_like(like, generator=generator)
    return ou_noise(
        sigma=sigma,
        tau=tau,
        T=T,
        dt=dt,
        noise0=noise0,
        generator=generator,
    )
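The wrapper only adds metadata inference on top of `ou_noise`. The "`_like`" pattern it implements can be sketched in plain PyTorch (hypothetical tensors, no `btorch` import):

```python
import torch

# The "_like" pattern: a reference tensor supplies shape, device, and dtype,
# so the initial noise state lines up with an existing state tensor.
v = torch.zeros(2, 8, dtype=torch.float64)  # e.g., membrane potentials (B, N)
gen = torch.Generator().manual_seed(0)
noise0 = torch.randn(v.shape, device=v.device, dtype=v.dtype, generator=gen)
assert noise0.shape == v.shape and noise0.dtype == v.dtype
```

This avoids manual shape/device bookkeeping when attaching noise to state tensors that may live on GPU or use non-default dtypes.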

pink_noise(*size, T, fir_order=64, device=None, dtype=None, white_history=None, generator=None, return_white_history=False)

Generate pink (1/f) noise using a causal FIR filter.

Pink noise has power spectral density proportional to 1/frequency, creating naturalistic temporal correlations. Generated by filtering white noise through a fractional integration FIR kernel.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*size` | `int` | Shape per timestep. Output will be `(T, *size)`. | `()` |
| `T` | `int` | Number of timesteps to generate. | required |
| `fir_order` | `int` | Length of the FIR filter kernel (default 64). Higher values give better low-frequency approximation but more state. | `64` |
| `device` | `device \| None` | Device for the output tensor (if `white_history` not given). | `None` |
| `dtype` | `dtype \| None` | Dtype for the output (must be floating point). | `None` |
| `white_history` | `Tensor \| None` | Optional previous white noise history with shape `(*size, fir_order-1)` for continuity across calls. | `None` |
| `generator` | `Generator \| None` | Optional RNG generator for deterministic sampling. | `None` |
| `return_white_history` | `bool` | If True, also return the updated history tensor for stateful usage. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor \| tuple[Tensor, Tensor]` | Pink noise tensor of shape `(T, *size)`. If `return_white_history` is True, returns `(noise, history)` where history has shape `(*size, fir_order-1)`. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `T < 0`, `fir_order < 1`, or dtype not floating. |
| `ValueError` | If `size` conflicts with `white_history` shape. |

Source code in btorch/datasets/noise.py
def pink_noise(
    *size: int,
    T: int,
    fir_order: int = 64,
    device: torch.device | None = None,
    dtype: torch.dtype | None = None,
    white_history: Tensor | None = None,
    generator: torch.Generator | None = None,
    return_white_history: bool = False,
) -> Tensor | tuple[Tensor, Tensor]:
    """Generate pink (1/f) noise using a causal FIR filter.

    Pink noise has power spectral density proportional to 1/frequency,
    creating naturalistic temporal correlations. Generated by filtering
    white noise through a fractional integration FIR kernel.

    Args:
        *size: Shape per timestep. Output will be ``(T, *size)``.
        T: Number of timesteps to generate.
        fir_order: Length of the FIR filter kernel (default 64). Higher
            values give better low-frequency approximation but more state.
        device: Device for the output tensor (if ``white_history`` not given).
        dtype: Dtype for the output (must be floating point).
        white_history: Optional previous white noise history with shape
            ``(*size, fir_order-1)`` for continuity across calls.
        generator: Optional RNG generator for deterministic sampling.
        return_white_history: If True, also return the updated history tensor
            for stateful usage.

    Returns:
        Pink noise tensor of shape ``(T, *size)``. If ``return_white_history``
        is True, returns ``(noise, history)`` where history has shape
        ``(*size, fir_order-1)``.

    Raises:
        ValueError: If ``T < 0``, ``fir_order < 1``, or dtype not floating.
        ValueError: If ``size`` conflicts with ``white_history`` shape.
    """
    if T < 0:
        raise ValueError(f"T must be non-negative, got {T}.")
    if fir_order < 1:
        raise ValueError(f"fir_order must be >= 1, got {fir_order}.")

    hist_len = fir_order - 1
    if white_history is not None:
        if white_history.ndim < 1:
            raise ValueError("white_history must have at least one dimension.")
        if white_history.shape[-1] != hist_len:
            raise ValueError(
                f"white_history last dim must be {hist_len}, "
                f"got {white_history.shape[-1]}."
            )
        rest_shape = tuple(white_history.shape[:-1])
        sample_device = white_history.device
        sample_dtype = white_history.dtype
        history_flat = white_history.reshape(-1, hist_len)
        if len(size) != 0 and tuple(size) != rest_shape:
            raise ValueError(
                f"size={size} does not match white_history shape {rest_shape}."
            )
    else:
        if len(size) == 0:
            raise ValueError("Provide size or white_history for pink_noise.")
        rest_shape = tuple(size)
        sample_device = device or torch.device("cpu")
        sample_dtype = dtype or torch.get_default_dtype()
        history_flat = None

    if not torch.empty((), dtype=sample_dtype).is_floating_point():
        raise ValueError("pink_noise requires a floating dtype.")

    template = torch.empty(rest_shape, device=sample_device, dtype=sample_dtype)
    white = _white_noise_2d(template, T=T, generator=generator)
    kernel = _pink_fir_kernel(fir_order, device=sample_device, dtype=sample_dtype)
    out_flat, new_hist_flat = _apply_fir_2d(white, kernel, history_flat)
    out = _unflatten_td(out_flat, rest_shape)

    if not return_white_history:
        return out
    new_history = new_hist_flat.reshape(rest_shape + (hist_len,))
    return out, new_history
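The internal `_pink_fir_kernel` is not shown here; a common choice for 1/f noise is Kasdin's fractional-integration (order 1/2) recursion `h[k] = h[k-1] * (k - 0.5) / k`, and the library's actual taps may differ. A self-contained sketch of the causal-FIR approach under that assumption:

```python
from math import prod

import torch
import torch.nn.functional as F

def pink_fir_kernel(order: int) -> torch.Tensor:
    """Kasdin fractional-integration taps (assumed kernel, may
    differ from btorch's internal _pink_fir_kernel)."""
    h = torch.empty(order)
    h[0] = 1.0
    for k in range(1, order):
        h[k] = h[k - 1] * (k - 0.5) / k
    return h

def pink_noise_sketch(T, size, fir_order=64, generator=None):
    D = prod(size)  # flattened element count per timestep
    # Extra leading samples play the role of the white-noise history.
    white = torch.randn(D, T + fir_order - 1, generator=generator)
    # conv1d computes cross-correlation, so flip the taps to convolve.
    w = pink_fir_kernel(fir_order).flip(0).view(1, 1, -1)
    out = F.conv1d(white.unsqueeze(1), w).squeeze(1)  # (D, T), causal
    return out.T.reshape((T, *size))

gen = torch.Generator().manual_seed(0)
x = pink_noise_sketch(T=256, size=(4,), generator=gen)
assert x.shape == (256, 4)
```

Carrying the last `fir_order - 1` white samples forward (as `white_history` does) makes consecutive calls produce one continuous filtered stream instead of restarting the filter.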

pink_noise_like(like, *, T, fir_order=64, white_history=None, generator=None, return_white_history=False)

Generate pink noise matching a reference tensor's metadata.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `like` | `Tensor` | Reference tensor with shape `(*size)`. Output will be `(T, *like.shape)`. | required |
| `T` | `int` | Number of timesteps. | required |
| `fir_order` | `int` | FIR filter length. | `64` |
| `white_history` | `Tensor \| None` | Optional history tensor with shape `(*like.shape, fir_order-1)`. | `None` |
| `generator` | `Generator \| None` | Optional RNG generator. | `None` |
| `return_white_history` | `bool` | If True, also return updated history. | `False` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor \| tuple[Tensor, Tensor]` | Pink noise of shape `(T, *like.shape)`, or a `(noise, history)` tuple if `return_white_history=True`. |

Source code in btorch/datasets/noise.py
def pink_noise_like(
    like: Tensor,
    *,
    T: int,
    fir_order: int = 64,
    white_history: Tensor | None = None,
    generator: torch.Generator | None = None,
    return_white_history: bool = False,
) -> Tensor | tuple[Tensor, Tensor]:
    """Generate pink noise matching a reference tensor's metadata.

    Args:
        like: Reference tensor with shape ``(*size)``. Output will be
            ``(T, *like.shape)``.
        T: Number of timesteps.
        fir_order: FIR filter length.
        white_history: Optional history tensor with shape
            ``(*like.shape, fir_order-1)``.
        generator: Optional RNG generator.
        return_white_history: If True, also return updated history.

    Returns:
        Pink noise of shape ``(T, *like.shape)``, or ``(noise, history)``
        tuple if ``return_white_history=True``.
    """
    dtype = like.dtype if like.is_floating_point() else torch.get_default_dtype()
    return pink_noise(
        *like.shape,
        T=T,
        fir_order=fir_order,
        device=like.device,
        dtype=dtype,
        white_history=white_history,
        generator=generator,
        return_white_history=return_white_history,
    )

poisson_noise(*size, rate, T, dt=1.0, device=None, dtype=None, generator=None)

Generate Poisson noise (discrete event counts).

Samples are drawn from a Poisson distribution with lambda = rate * dt for each timestep and element. The output represents event counts per timestep (0, 1, 2, ...).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*size` | `int` | Shape per timestep (e.g., `B, N`). Output is `(T, *size)`. | `()` |
| `rate` | `float \| Tensor` | Event rate per unit time. Can be scalar or per-element (broadcastable to `size`). | required |
| `T` | `int` | Number of timesteps. | required |
| `dt` | `float` | Simulation timestep (scales the rate: lambda = rate * dt). | `1.0` |
| `device` | `device \| None` | Device for the output tensor. | `None` |
| `dtype` | `dtype \| None` | Dtype for the output (must be floating point). | `None` |
| `generator` | `Generator \| None` | Optional RNG generator for deterministic sampling. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Event count tensor of shape `(T, *size)` with dtype float. |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If `size` is empty, `T < 0`, or dtype is not floating. |
| `ValueError` | If `rate * dt` is negative. |

Source code in btorch/datasets/noise.py
def poisson_noise(
    *size: int,
    rate: float | Tensor,
    T: int,
    dt: float = 1.0,
    device: torch.device | None = None,
    dtype: torch.dtype | None = None,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Generate Poisson noise (discrete event counts).

    Samples are drawn from a Poisson distribution with lambda = rate * dt
    for each timestep and element. The output represents event counts per
    timestep (0, 1, 2, ...).

    Args:
        *size: Shape per timestep (e.g., ``B, N``). Output is ``(T, *size)``.
        rate: Event rate per unit time. Can be scalar or per-element
            (broadcastable to ``size``).
        T: Number of timesteps.
        dt: Simulation timestep (scales the rate: lambda = rate * dt).
        device: Device for the output tensor.
        dtype: Dtype for the output (must be floating point).
        generator: Optional RNG generator for deterministic sampling.

    Returns:
        Event count tensor of shape ``(T, *size)`` with dtype ``float``.

    Raises:
        ValueError: If ``size`` is empty, ``T < 0``, or dtype is not floating.
        ValueError: If ``rate * dt`` is negative.
    """
    if len(size) == 0:
        raise ValueError("Provide output size for poisson_noise.")
    if T < 0:
        raise ValueError(f"T must be non-negative, got {T}.")

    if dtype is None:
        sample_dtype = torch.get_default_dtype()
    else:
        sample_dtype = dtype
    if not torch.empty((), dtype=sample_dtype).is_floating_point():
        raise ValueError("poisson_noise requires a floating dtype.")

    base = torch.zeros(size, device=device, dtype=sample_dtype)
    lam = torch.as_tensor(rate, device=base.device, dtype=sample_dtype) * float(dt)
    if torch.any(lam < 0):
        raise ValueError("Poisson rate * dt must be non-negative.")
    lam_full = base + lam
    lam_seq = lam_full.unsqueeze(0).expand((T,) + tuple(size))
    return torch.poisson(lam_seq, generator=generator)
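The zeros-plus-add step above is a broadcasting idiom worth noting: adding `lam` to a zeros tensor of the target shape both validates that `rate` is broadcastable and materializes the full per-element rate tensor. A sketch with hypothetical shapes:

```python
import torch

size = (2, 4)  # e.g., (batch, neurons)
T, dt = 3, 1e-3
rate = torch.tensor([10.0, 20.0, 40.0, 80.0])  # per-neuron, broadcast over batch

base = torch.zeros(size)
lam_full = base + rate * dt                    # raises if rate can't broadcast
lam_seq = lam_full.unsqueeze(0).expand((T,) + size)
gen = torch.Generator().manual_seed(0)
counts = torch.poisson(lam_seq, generator=gen)
assert counts.shape == (T,) + size
```

A scalar `rate` works the same way, since any scalar broadcasts against the zeros base.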

poisson_noise_like(like, rate, *, T, dt=1.0, generator=None)

Generate Poisson noise matching a reference tensor's metadata.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `like` | `Tensor` | Reference tensor defining per-timestep shape `(*size)`. Output will be `(T, *like.shape)`. | required |
| `rate` | `float \| Tensor` | Event rate per unit time (scalar or broadcastable). | required |
| `T` | `int` | Number of timesteps. | required |
| `dt` | `float` | Simulation timestep. | `1.0` |
| `generator` | `Generator \| None` | Optional RNG generator. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `Tensor` | Poisson event counts of shape `(T, *like.shape)`. |

Source code in btorch/datasets/noise.py
def poisson_noise_like(
    like: Tensor,
    rate: float | Tensor,
    *,
    T: int,
    dt: float = 1.0,
    generator: torch.Generator | None = None,
) -> Tensor:
    """Generate Poisson noise matching a reference tensor's metadata.

    Args:
        like: Reference tensor defining per-timestep shape ``(*size)``.
            Output will be ``(T, *like.shape)``.
        rate: Event rate per unit time (scalar or broadcastable).
        T: Number of timesteps.
        dt: Simulation timestep.
        generator: Optional RNG generator.

    Returns:
        Poisson event counts of shape ``(T, *like.shape)``.
    """
    dtype = like.dtype if like.is_floating_point() else torch.get_default_dtype()
    return poisson_noise(
        *like.shape,
        rate=rate,
        T=T,
        dt=dt,
        device=like.device,
        dtype=dtype,
        generator=generator,
    )