MemTorch

A Simulation Framework for Memristive Deep Learning Systems

v1.1.6

2 years ago

Added

  1. The random_crossbar_init argument to memtorch.bh.Crossbar. If True, crossbars are initialized with random device conductances between 1/Ron and 1/Roff.
  2. CUDA_device_idx to setup.py, to allow users to specify the CUDA device to use when installing MemTorch from source.
  3. Implementations of CUDA-accelerated passive crossbar programming routines for the 2021 Data-Driven model.
  4. A BibTeX entry, which can be used to cite the corresponding OSP paper.
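The random initialization above can be pictured as drawing each device's conductance uniformly between the two conductance extremes. The sketch below is a conceptual stand-in only: the function name is hypothetical, and the real flag operates on memtorch.bh.Crossbar's internal conductance matrix rather than a plain list of lists.

```python
import random

def random_crossbar_conductances(rows, cols, r_on, r_off, seed=None):
    """Draw each device conductance uniformly between 1/r_off and 1/r_on.

    Conceptual sketch of what the random_crossbar_init flag does; the
    actual implementation lives inside memtorch.bh.Crossbar.
    """
    rng = random.Random(seed)
    # r_off >= r_on, so g_min = 1/r_off is the lower conductance bound
    g_min, g_max = 1.0 / r_off, 1.0 / r_on
    return [[rng.uniform(g_min, g_max) for _ in range(cols)]
            for _ in range(rows)]
```

Every sampled value lands strictly inside the physically attainable conductance range of the modeled devices, which is the point of the flag.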

Fixed

  1. In the getting started tutorial, Section 4.1 was a code cell. This has since been converted to a markdown cell.
  2. OOM errors encountered when modeling passive inference routines of crossbars.

Enhanced

  1. Templated the quantize bindings and fixed a semantic error in memtorch.bh.nonideality.FiniteConductanceStates.
  2. Reduced memory consumption when modeling passive inference routines.
  3. The sparse factorization method used to solve sparse linear matrix systems.
  4. The naive_program routine for crossbar programming; the maximum number of crossbar programming iterations is now configurable.
  5. Updated ReadTheDocs documentation for memtorch.bh.Crossbar.
  6. Updated the version of PyTorch used to build Python wheels from 1.9.0 to 1.10.0.
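The now-configurable iteration cap in naive_program can be illustrated with a generic program-and-verify loop: read the device, compare against the target conductance, pulse, and give up after a fixed number of attempts. The function below is a hypothetical sketch of that pattern, not MemTorch's actual routine, which operates on its memristor device models.

```python
def naive_program(read_conductance, adjust, target_g, tol=0.01,
                  max_iterations=100):
    """Pulse a device until its conductance is within tol (relative) of
    target_g, giving up after max_iterations attempts.

    read_conductance() returns the current conductance; adjust(error)
    applies a corrective pulse in the direction of the error.
    """
    for iteration in range(max_iterations):
        g = read_conductance()
        error = target_g - g
        if abs(error) <= tol * target_g:
            return g, iteration  # converged within tolerance
        adjust(error)
    return read_conductance(), max_iterations  # hit the iteration cap
```

Exposing max_iterations matters because stuck or slow devices otherwise keep such loops running indefinitely.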

v1.1.5

2 years ago

Added

  1. Partial support for the groups argument for convolutional layers.
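The groups argument follows torch.nn.Conv2d semantics: input and output channels are split into equal groups, and group i's outputs are computed only from group i's inputs. A patched layer must replicate this partitioning when mapping weights onto crossbars. The helper below is a hypothetical sketch of that channel bookkeeping, not MemTorch code.

```python
def group_channel_slices(in_channels, out_channels, groups):
    """Return (input_range, output_range) index pairs per group,
    mirroring the channel partitioning of torch.nn.Conv2d(groups=...).
    """
    # Both channel counts must divide evenly into the groups
    assert in_channels % groups == 0 and out_channels % groups == 0
    ic, oc = in_channels // groups, out_channels // groups
    return [((g * ic, (g + 1) * ic), (g * oc, (g + 1) * oc))
            for g in range(groups)]
```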

Fixed

  1. Patching procedure in memtorch.mn.module.patch_model and memtorch.bh.nonideality.apply_nonidealities, fixing a semantic error in Tutorial.ipynb.
  2. Import statement in Exemplar_Simulations.ipynb.

Enhanced

  1. Further modularized patching logic in memtorch.bh.nonideality.NonIdeality and memtorch.mn.Module.
  2. Modified the default number of workers in memtorch.utils from 2 to 1.

v1.1.4

2 years ago

Added

  1. Patching support for torch.nn.Sequential containers.
  2. Support for modeling source and line resistances for passive crossbars/tiles.
  3. C++ and CUDA bindings for modeling source and line resistances for passive crossbars/tiles*.
  4. A new MemTorch logo to README.md.
  5. The set_cuda_malloc_heap_size routine to patched torch.mn modules.
  6. Unit tests for source and line resistance modeling.
  7. Relaxed requirements for programming passive crossbars/tiles.

*Note: it is strongly suggested to set cuda_malloc_heap_size manually using m.set_cuda_malloc_heap_size when simulating source and line resistances using the CUDA bindings.
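To first order, source and line resistances degrade each device's effective conductance by adding the driver resistance plus the wire segments traversed along its word and bit lines in series with the device. The sketch below shows only this first-order effect under that simplifying assumption; it deliberately ignores sneak paths, which MemTorch's actual bindings resolve with a full nodal-analysis solve.

```python
def effective_conductances(g, r_source, r_line):
    """First-order view of source/line resistance degradation: the
    device at (i, j) sees r_source plus (i + j) wire segments of
    resistance r_line in series with its own resistance. Sketch only;
    sneak-path coupling is not modeled here.
    """
    rows, cols = len(g), len(g[0])
    g_eff = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            r_device = 1.0 / g[i][j]
            r_wire = r_source + r_line * (i + j)  # grows with distance
            g_eff[i][j] = 1.0 / (r_device + r_wire)
    return g_eff
```

Even this crude model reproduces the qualitative effect the unit tests target: devices farthest from the drivers are degraded the most.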

Enhanced

  1. Modularized patching logic in memtorch.bh.nonideality.NonIdeality and memtorch.mn.Module.
  2. Updated ReadTheDocs documentation.
  3. Transitioned from Gitter to GitHub Discussions for general discussion.

v1.1.3

2 years ago

Added

  1. Another version of the Data-Driven model, defined using memtorch.bh.memristor.Data_Driven2021.
  2. CPU- and GPU-bound C++ bindings for gen_tiles.
  3. Exposed use_bindings.
  4. Unit tests for use_bindings.
  5. The exemptAssignees tag to scale.yml.
  6. memtorch.map.Input, created to encapsulate customizable input scaling methods.
  7. The force_scale input argument to the default scaling method, to specify whether inputs are force-scaled if they do not exceed max_input_voltage.
  8. CPU and GPU bindings for tiled_inference.

Enhanced

  1. Modularized input scaling logic for all layer types.
  2. Modularized tile_inference for all layer types.
  3. Updated ReadTheDocs documentation.

Fixed

  1. Fixed GitHub Action Workflows for external pull requests.
  2. Fixed error raised by memtorch.map.Parameter when p_l is defined.
  3. Fixed semantic error in memtorch.cpp.gen_tiles.

v1.1.2

2 years ago

Added

  1. C++ and CUDA bindings for memtorch.bh.crossbar.Tile.tile_matmul.

Using an NVIDIA GeForce GTX 1080, a tile shape of (25, 25), and two tensors of size (500, 500), the runtime of tile_matmul without quantization support is reduced by 2.45x and 5.48x, for CPU-bound and GPU-bound operation, respectively. With an ADC resolution of 4 bits and an overflow rate of 0.0, the runtime of tile_matmul with quantization support is reduced by 2.30x and 105.27x, for CPU-bound and GPU-bound operation, respectively.

| Implementation         | Runtime Without Quantization Support (s) | Runtime With Quantization Support (s) |
| ---------------------- | ---------------------------------------- | ------------------------------------- |
| Pure Python (Previous) | 6.917784                                 | 27.099764                             |
| C++ (CPU-bound)        | 2.822265                                 | 11.736974                             |
| CUDA (GPU-bound)       | 1.262861                                 | 0.2574267                             |

  2. Eigen integration with C++ and CUDA bindings.
  3. Additional unit tests.
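The tiling scheme behind tile_matmul partitions a large matrix product into fixed-size blocks, matching how a weight matrix is split across crossbar tiles whose partial products are then accumulated. The pure-Python sketch below illustrates only the blocking pattern, not the C++/CUDA bindings or any crossbar physics.

```python
def tile_matmul(a, b, tile=2):
    """Blocked (tiled) matrix multiply: compute c = a @ b tile by tile,
    accumulating each tile's partial product into the result, the way
    fixed-size crossbar tiles contribute partial sums.
    """
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # accumulate this tile's contribution into c
                for i in range(i0, min(i0 + tile, n)):
                    for j in range(j0, min(j0 + tile, m)):
                        for kk in range(k0, min(k0 + tile, k)):
                            c[i][j] += a[i][kk] * b[kk][j]
    return c
```

The result is identical to an untiled multiply; the speedups reported above come from moving this loop nest into C++/CUDA, not from changing the arithmetic.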

Enhanced

  1. Modularized the C++ and CUDA quantize bindings.
  2. Enhanced the functionality of naive_program and added additional input arguments to dictate the logic for stuck devices.
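The quantize bindings model an ADC reading crossbar outputs: values are clipped to the ADC's range and snapped to the nearest of 2^bits uniformly spaced codes. The function below is a conceptual Python stand-in for the C++/CUDA bindings, assuming a simple uniform quantizer with hard clipping.

```python
def quantize(values, bits, v_min, v_max):
    """Uniformly quantize values to 2**bits levels over [v_min, v_max],
    clipping anything outside the range first, as an ideal ADC would.
    Conceptual sketch of the quantize bindings only.
    """
    levels = 2 ** bits - 1
    step = (v_max - v_min) / levels
    out = []
    for v in values:
        v = min(max(v, v_min), v_max)     # clip overflow to the rails
        code = round((v - v_min) / step)  # nearest ADC code
        out.append(v_min + code * step)
    return out
```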

Fixed

  1. Removed debugging code from naive_program.

v1.1.0

3 years ago

Added

  1. Unit tests and removed system CUDA dependency;
  2. Support for Conv1d and Conv3d Layers;
  3. Legacy support;
  4. MANIFEST.in and resolved header dependency;
  5. Native toggle for forward_legacy and size arguments to tune;
  6. codecov integration;
  7. Support for all torch.distributions;
  8. 1R programming routine and non-linear device simulation during inference;
  9. Stanford-PKU and Data-Driven Verilog-A ReRAM memristor models;
  10. Modular crossbar tile support;
  11. ADC and variable input voltage range support, and modularized all memtorch.mn modules;
  12. cibuildwheel integration to automatically generate build wheels.

Enhanced

  1. Mapping functionality;
  2. Reduced pooling memory usage with maxtasksperchild;
  3. Programming routine;
  4. set_conductance;
  5. apply_cycle_variability.

Fixed

  1. Dimension mismatch error for convolutional layers with non-zero padding;
  2. reg.coef_ and reg.intercept_ extraction process for N-dimensional arrays;
  3. Various semantic errors.

v1.0.0

4 years ago