Deep learning with spiking neural networks (SNNs) in PyTorch.
This release features many quality-of-life improvements. Most notably, we started testing for torch.compile support, which gives us a significant speedup. The gains from moving to torch.compile meant that we could safely remove the C++ code, so Norse is now a Python-only module and installation should be significantly faster.
We also added initialization methods for spatial and temporal receptive fields, added support for NIR, cleaned up the docs, restructured the imports, removed unnecessary (and slow) try-catch clauses, and cleaned up dependencies.
We also added tentative support for a new StateTuple implementation based on PyTorch's pytrees, which makes it easier to operate on parameters. For example, it allows us to cast parameters to devices or data types:
```python
p = LIFParameters(...)  # Create a parameter tuple
p.to("cuda:0")          # Cast the parameters to a device
p.float()               # Cast the parameters to floats
```
Note that this is currently only implemented for LIFParameters and LIFBoxParameters. Let us know how it works!
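The pytree idea behind the StateTuple can be sketched in plain Python: a parameter tuple whose leaves can all be transformed by mapping one function over them, returning a new tuple. The names below (`SimpleParameters`, `_map`, `scale`) are illustrative stand-ins, not Norse's actual implementation.

```python
from typing import Callable, NamedTuple


class SimpleParameters(NamedTuple):
    """Illustrative stand-in for a pytree-style parameter tuple."""

    tau_mem_inv: float
    tau_syn_inv: float
    v_th: float

    def _map(self, fn: Callable[[float], float]) -> "SimpleParameters":
        # Apply fn to every leaf and build a fresh tuple
        # (pytree-style structures are treated as immutable).
        return SimpleParameters(*(fn(leaf) for leaf in self))

    def scale(self, factor: float) -> "SimpleParameters":
        # Analogous to a cast: one elementwise operation mapped over all leaves.
        return self._map(lambda leaf: leaf * factor)


p = SimpleParameters(tau_mem_inv=100.0, tau_syn_inv=200.0, v_th=1.0)
print(p.scale(0.5))  # every leaf halved, structure preserved
```

Registering a tuple like this as a pytree node is what lets generic operations such as device or dtype casts traverse all leaves without per-field code.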
Full Changelog: https://github.com/norse/norse/compare/v1.0.0...v1.1.0
This is the first stable release of Norse. After close to four years of development, we feel it is time to take this step. The API has stabilised somewhat, and while we anticipate some changes in the future, we will try to make them in ways that are easy for users to accommodate. Since the last release we mostly focussed on bugs and performance, and we also landed some nice additions.
We also worked on improvements to documentation, our continuous integration and build tooling.
This release includes prototypical sparse and adjoint equations, neuron models, utilities, and various stability fixes. Specifically, we included LConv2d.
RC3 for the sparse and adjoint code. Aims to resolve builds for Windows and Linux.
RC2 for the sparse and adjoint code. Aims to resolve builds for Windows and Linux
This release candidate drafts code for sparse activations and adjoint-based optimizations, as described in https://arxiv.org/abs/2009.08378.
This release features our shiny new module API. It unifies all spiking neuron modules under one common base class, thereby eliminating redundant code. From a user perspective, it also means that the API is now consistent across all neuron types.
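The benefit of a shared base class can be sketched in plain Python: the common forward logic (including default state handling) lives once in the base, while each neuron type only supplies its own state initialization and dynamics. The class names and the toy dynamics below are illustrative assumptions, not Norse's actual classes.

```python
from typing import Optional, Tuple


class SNNCellBase:
    """Illustrative sketch of a shared base class for spiking cells.

    Subclasses define only their state initialization and dynamics;
    the forward logic is written once here instead of in every neuron type.
    """

    def initial_state(self) -> float:
        raise NotImplementedError

    def dynamics(self, x: float, state: float) -> Tuple[int, float]:
        raise NotImplementedError

    def forward(self, x: float, state: Optional[float] = None) -> Tuple[int, float]:
        if state is None:  # shared default-state handling for all cells
            state = self.initial_state()
        return self.dynamics(x, state)


class ToyLIFCell(SNNCellBase):
    """Leaky integrator with a hard threshold (toy dynamics, not Norse's)."""

    def __init__(self, leak: float = 0.9, threshold: float = 1.0):
        self.leak = leak
        self.threshold = threshold

    def initial_state(self) -> float:
        return 0.0  # membrane potential starts at rest

    def dynamics(self, x: float, v: float) -> Tuple[int, float]:
        v = self.leak * v + x
        if v >= self.threshold:
            return 1, 0.0  # spike and reset
        return 0, v


cell = ToyLIFCell()
spike, v = cell.forward(0.6)       # state defaults to initial_state()
spike2, v2 = cell.forward(0.6, v)  # pass the returned state back in
```

Because every cell shares the same `forward` signature, user code can swap neuron types without changing how it calls them.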
This release brings numerous improvements in terms of speed, usability, specializations, documentation, and more. In general, we tried to make Norse more user-friendly and applicable both for die-hard deep-learning experts and for neuroscience enthusiasts new to Python. Specifically, this release includes:
- An MNIST task.
- A SequentialState module, which works similarly to PyTorch's Sequential layers in that it allows for seamless composition of PyTorch and Norse modules. Together with the Lift module, this is an important step towards powerful and simple tools for developing spiking neural networks. See the norse.torch.models package for more information.

As always, we welcome feedback and are looking forward to hearing how you are using Norse! Happy hacking :partying_face:
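The core idea behind lifting, applying a per-timestep transform across a whole sequence, can be sketched without PyTorch. The `lift` function and names below are an illustrative stand-in, not Norse's actual API, which operates on tensors with a leading time dimension.

```python
from typing import Callable, List, TypeVar

T = TypeVar("T")
U = TypeVar("U")


def lift(fn: Callable[[T], U]) -> Callable[[List[T]], List[U]]:
    """Turn a single-step function into one that maps over a time sequence."""

    def lifted(sequence: List[T]) -> List[U]:
        # Apply the wrapped function independently at every timestep.
        return [fn(x) for x in sequence]

    return lifted


double = lift(lambda x: 2 * x)
print(double([1, 2, 3]))  # [2, 4, 6]
```

In the same spirit, wrapping a stateless layer this way lets it slot into a time-stepped network next to stateful spiking modules.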
This release contains a number of functionality and model additions, as well as improved PyTorch compatibility through the Lift module. Most notably, we