BindsNET Versions

Simulation of spiking neural networks (SNNs) using PyTorch.

0.3.1

2 years ago

This release summarizes the latest changes and improvements in 0.3.1:

  1. Fixed the WeightDependentPostPre post-synaptic update #534
  2. Added Conv1D and Conv3D connections and improved Conv2D #526 (see the sketch after this list)
  3. Fixed the MaxPool batch-size issue #526
  4. Added three local connection classes (1D, 2D, and 3D) supporting multi-channel inputs, along with MNIST example files #536
  5. Changed the execution order in the Izhikevich neuron and added membrane-voltage (Vmem) injection in the network forward pass #530
  6. Improved the installation scripts with Poetry #518, #520
  7. Code improvements #515
  8. Documentation improvements #517, #532
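
Item 2 introduces 1D and 3D convolutional connections. Below is a minimal, hedged sketch of wiring the new Conv1dConnection between two layers; the argument names are assumed to mirror the existing Conv2dConnection API and should be checked against the 0.3.1 code.

```python
# Hedged sketch: a 1D convolutional connection between an input layer and a
# LIF layer. Argument names are assumed to mirror Conv2dConnection.
from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Conv1dConnection

kernel_size, n_filters, in_length = 5, 8, 28
out_length = in_length - kernel_size + 1  # stride 1, no padding

network = Network()
network.add_layer(Input(n=in_length, shape=(1, in_length)), name="X")
network.add_layer(
    LIFNodes(n=n_filters * out_length, shape=(n_filters, out_length)), name="Y"
)
network.add_connection(
    Conv1dConnection(
        source=network.layers["X"],
        target=network.layers["Y"],
        kernel_size=kernel_size,
        stride=1,
    ),
    source="X",
    target="Y",
)
```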

Thanks to everyone involved with this release! @danielgafni, @ArefAz, @hafezgh, @amirHossein-Ebrahimi

0.3.0

2 years ago

This release summarizes the latest changes and improvements in 0.3.0.

Changes:

  1. New environment for RL experiments: dot tracing #507
  2. Improved encoding performance #484
  3. Improved node trace values and the assertion on the network input spike format #501
  4. Added the ability to use tensors for wmin/wmax on synaptic connections #509 (see the sketch after this list)
  5. Fixed issues with the demos:
    • Fixed an issue with accuracy reporting #482
    • Fixed dimension issues for layers with different shapes #488
    • Fixed a dimension-size issue in the Breakout baseline network #489
    • Improved the reservoir documentation #492
    • Fixed typos #501, #492
  6. Updated to PyTorch 1.9 #499
  7. Switched to Poetry for installation #513, #517
  8. Added isort and autoflake to the commit workflow #518
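
Item 4 means weight bounds no longer have to be scalars. A minimal sketch, assuming a fully connected Connection and per-synapse bound tensors shaped like the weight matrix (the layer sizes and bound values here are illustrative only):

```python
# Hedged sketch: per-synapse weight bounds passed as tensors (#509).
import torch

from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

source, target = Input(n=100), LIFNodes(n=50)

# One (wmin, wmax) pair per synapse, shaped like the weight matrix.
wmin = torch.full((source.n, target.n), -0.5)
wmax = torch.rand(source.n, target.n)  # heterogeneous upper bounds

connection = Connection(source=source, target=target, wmin=wmin, wmax=wmax)
```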

Thanks to everyone involved with this release! @het-25, @mahbodnr, @petermarathas, @cearlUmass, @kamue1a, @SimonInParis, @danielgafni

0.2.9

3 years ago

This release summarizes the latest changes and improvements in 0.2.9.

Changes:

  1. Performance optimization of the Monitor object (#446)
  2. Optimized variables in connection and neuron objects (#428)
  3. Performance improvements to the PostPre update rule and BoostedLIF (#429)
  4. Implemented Cumulative Spike Response Model nodes (#443)
  5. Imported code from a sister project (#438)
  6. Updated to PyTorch 1.8.1 (#477, #478)
  7. Fixed miscellaneous issues with the BindsNET examples (#437, #457, #458, #478, #474)

Thanks to all the contributors!

0.2.8

3 years ago

This release summarizes the latest changes and improvements in 0.2.8.

Changes:

  1. Runtime speed-ups in core functions (#384).
  2. Installation scripts: added Python 3.8 and PyTorch 1.6 (#392, #400, #404).
  3. Examples: improved code readability, graphs, and reproducibility (#386, #387, #396, #411).
  4. When using the GPU, some variables (Gym, reward STDP, and graph-related) accidentally stayed on the CPU; they are now moved to the GPU (#388, #403, #406, #409, #412, #420).
  5. More flexibility when building networks: added the ability to build a Network without a designated input layer (#416); every layer can now receive external input via voltage or spike injection (see the sketch after this list).
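
A minimal sketch of item 5, assuming the `inputs` dict of `Network.run` can be keyed by any layer name (the keyword name and tensor shapes shown are assumptions and may differ slightly in 0.2.8):

```python
# Hedged sketch: drive a network with no dedicated Input layer (#416) by
# injecting external spikes directly into an ordinary LIF layer.
import torch

from bindsnet.network import Network
from bindsnet.network.nodes import LIFNodes
from bindsnet.network.topology import Connection

time = 100

network = Network(dt=1.0)
network.add_layer(LIFNodes(n=64), name="A")
network.add_layer(LIFNodes(n=32), name="B")
network.add_connection(
    Connection(source=network.layers["A"], target=network.layers["B"]),
    source="A",
    target="B",
)

# Random external spikes for layer "A", one row per timestep.
external_spikes = torch.bernoulli(0.1 * torch.ones(time, 64))
network.run(inputs={"A": external_spikes}, time=time)
```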

We know we still have some open issues; feel free to lend a hand.

0.2.7

3 years ago

This release emphasizes performance enhancements, a reorganization of the examples, and several bug fixes.

0.2.1

5 years ago

This release accompanies our draft submission to Frontiers in Neuroinformatics. It features a number of bug fixes and example scripts used in drafting the paper.

0.1.4

5 years ago

This small release features:

  • A current-based leaky integrate-and-fire neuron model (CurrentLIFNodes); see the sketch after this list
  • Lots of code refactoring to conform (a little bit closer) to PEP standards
  • Making things look and read nicer
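
A one-line sketch of instantiating the new node type (the `traces` argument is shown only as an illustration and follows the other LIF variants):

```python
# Hedged sketch: 100 current-based LIF neurons with spike traces enabled.
from bindsnet.network.nodes import CurrentLIFNodes

layer = CurrentLIFNodes(n=100, traces=True)
```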

0.1.3

5 years ago

Notes

After a few missteps in the PyPI distribution process, we are proud to announce the release of BindsNET v0.1! We will likely follow up with a series of incremental releases (v0.1.x) to address bugs found by users or to add small-scale features that we may have missed.

Features

This release features the core network functionality of the package, which enables the construction and simulation of spiking neural networks (SNNs). The Network object may be composed of any number of Nodes, Connections, and/or Monitors, of which there are several varieties. Learning on Connection objects is implemented by specifying functions from the learning module. Popular machine learning (ML) datasets may be loaded with the datasets module and converted into spike trains (like any other numerical data) with the encoding module.
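
A minimal sketch of these pieces working together, loosely following the documented BindsNET API (argument names such as `inputs` changed across releases, so treat this as illustrative rather than exact):

```python
# Hedged sketch: build a two-layer SNN, attach STDP learning and a monitor,
# encode random data as Poisson spikes, and simulate for 250 ms.
import torch

from bindsnet.encoding import poisson
from bindsnet.learning import PostPre
from bindsnet.network import Network
from bindsnet.network.monitors import Monitor
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

time = 250  # simulation time (ms)

network = Network(dt=1.0)
network.add_layer(Input(n=100, traces=True), name="X")
network.add_layer(LIFNodes(n=50, traces=True), name="Y")
network.add_connection(
    Connection(
        source=network.layers["X"],
        target=network.layers["Y"],
        update_rule=PostPre,  # STDP-like rule from the learning module
        nu=(1e-4, 1e-2),      # (pre, post) learning rates
    ),
    source="X",
    target="Y",
)
network.add_monitor(
    Monitor(network.layers["Y"], state_vars=("s",), time=time), name="Y_spikes"
)

# Encode arbitrary numerical data as Poisson spike trains and run.
spikes = poisson(datum=255 * torch.rand(100), time=time)
network.run(inputs={"X": spikes}, time=time)
print(network.monitors["Y_spikes"].get("s").shape)
```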

An interface to the OpenAI Gym reinforcement learning (RL) library is implemented in the environments module, allowing, for the first time, easy experimentation with SNNs on RL problems.
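
For example, wrapping an Atari game might look roughly like the following; the environment name and the exact return signature of `step` are assumptions based on the standard Gym interface and BindsNET example scripts:

```python
# Hedged sketch: wrap an OpenAI Gym task so its observations can be fed to
# an SNN (requires the Gym Atari dependencies to be installed).
from bindsnet.environment import GymEnvironment

environment = GymEnvironment("BreakoutDeterministic-v4")
environment.reset()

# Take one random step; the wrapper returns the observation as a tensor.
obs, reward, done, info = environment.step(environment.action_space.sample())
print(obs.shape, reward, done)
```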

To eliminate messy implementation details, a Pipeline object is provided (in the pipeline module) that orchestrates the interaction between a spiking neural network and a dataset or environment. This saves users from having to write long scripts to run experiments on supported datasets or RL environments.
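
Continuing with the objects from the sketches above, a training loop might look roughly like this; the constructor arguments shown (encoding, time, output) are assumptions drawn from BindsNET example scripts, and later releases split the class into dataset- and environment-specific pipelines:

```python
# Hedged sketch: let the Pipeline handle observe -> encode -> simulate -> act.
from bindsnet.encoding import bernoulli
from bindsnet.pipeline import Pipeline

pipeline = Pipeline(
    network,             # the SNN built in the first sketch
    environment,         # the GymEnvironment wrapped above
    encoding=bernoulli,  # how observations become spike trains
    time=100,            # simulation time per environment step
    output="Y",          # layer whose spikes determine the action
)

for _ in range(100):
    pipeline.step()
```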

Plotting functionality is available in the analysis.plotting and analysis.visualization modules. The former is typically used for plotting "online" during simulation, and the latter, "offline", for studying long-term network behavior or making figures.

Other modules exist in a developmental or low-use / low-priority state.

Future work?

This depends largely on the users and, in particular, the needs of the BINDS lab. Some things we would personally like to see include:

  • Tighter integration with PyTorch. This likely means using more functionality from the torch.nn.functional module (e.g., convolution, pooling, activation functions), or conforming our network API more closely to torch's neural network API.
  • Automatic smoothing of SNNs: Recent work has shown that it's possible to convert trained deep learning NNs to SNNs without much loss in accuracy. Conversion of PyTorch models or models specified in the ONNX format may be supported in BindsNET in the future!
  • More features! Nodes (neuron) types, Connection types, Datasets, learning functions, and more. In particular, we want to take steps towards making SNNs robust for ML / RL.

Cheers, @djsaunde