FAQ / Common Pitfalls¶
This page collects the most common mistakes when working with btorch and how to fix them. Content is drawn from the btorch-snn-modelling skill and the test suite.
1. Forgetting the dt Context¶
Symptom: KeyError: 'dt is not found in the context.'
Fix: Wrap every forward pass in environ.context(dt=...):
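A minimal sketch, assuming a module `net` and an input batch `x` already exist, and that `environ` is importable from the package root (the import path is an assumption; adjust it to your install):

```python
from btorch import environ  # import path assumed

# `net` is your SNN module, `x` an input batch
with environ.context(dt=1.0):  # dt in simulation time units, e.g. ms
    out = net(x)
```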
See The dt Environment for details.
2. Not Resetting State Between Batches¶
Symptom: State from the previous batch leaks into the current one, causing unstable training or validation results.
Fix: Call reset_net before each new batch:
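A sketch of the per-batch loop, assuming `reset_net` is exposed via `btorch.functional` (the same module that hosts the checkpoint helpers below) and that `net`, `loader`, and the `dt` value match your setup:

```python
from btorch import environ, functional  # import paths assumed

for x, y in loader:
    functional.reset_net(net)      # drop state carried over from the last batch
    with environ.context(dt=1.0):  # every forward pass still needs dt
        out = net(x)
```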
For deterministic reset, initialize random voltages first:
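One hedged way to do this, assuming the random initial voltages are drawn from torch's global RNG:

```python
import torch
from btorch import functional  # import path assumed

torch.manual_seed(0)       # fix the RNG so the randomized initial voltages repeat
functional.reset_net(net)  # reset now yields the same initial state on every run
```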
3. Wrong State Names (Dot Notation)¶
Symptom: KeyError when accessing states, or update_state_names not recording the variable you expected.
Fix: Use dotted names that match the module hierarchy:
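For example, for a hypothetical hierarchy `net.layer1.neuron` with a voltage state `v`, the dotted name is simply the attribute path through the module tree:

```python
# submodule path plus state name, joined with dots (hierarchy is illustrative)
name = "layer1.neuron.v"
assert name.split(".") == ["layer1", "neuron", "v"]  # submodules, then the state
```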
You can inspect valid names with:
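One option is to reuse `named_memory_reset_values` from the checkpoint section below, assuming it yields the dotted names alongside their values:

```python
from btorch import functional  # import path assumed

# Print every dotted state name the model exposes
for entry in functional.named_memory_reset_values(net):
    print(entry)
```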
4. Missing Memory Reset Values in Checkpoints¶
Symptom: After loading a checkpoint, neurons reset to factory defaults instead of the trained initialization values.
Fix: Save and restore _memories_rv explicitly:
# Save
checkpoint = {
    "model_state_dict": model.state_dict(),
    "memories_rv": functional.named_memory_reset_values(model),
}
torch.save(checkpoint, "checkpoint.pt")

# Load
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model_state_dict"], strict=False)
functional.set_memory_reset_values(model, ckpt["memories_rv"])
See Tutorial 2: Training an SNN for a complete example.
5. OmegaConf _type_ Usage and CLI Syntax¶
Symptom: TypeError or ValidationError when passing variant configs on the CLI.
Fix: Use the _type_ key to switch union variants:
Or with nested keys:
See the Configuration Guide for the full pattern.
6. torch.compile and Dynamic Buffers¶
Symptom: torch.compile fails on models with circular-buffer-based history.
Fix: Set use_circular_buffer=False in SpikeHistory, DelayedPSC, or HeterSynapsePSC when compiling for training:
import torch.nn as nn
from btorch.models.synapse import DelayedPSC, ExponentialPSC

linear = nn.Linear(100, 100)  # example projection; substitute your model's own layer
psc = ExponentialPSC(n_neuron=100, tau_syn=5.0, linear=linear)
delayed = DelayedPSC(psc, max_delay_steps=5, use_circular_buffer=False)
This trades memory efficiency for full torch.compile compatibility.