Brain Dynamics Programming in Python
This release provides several new features, including:
- MLIR registered operator customization interface in `brainpy.math.XLACustomOp` (see `operator customized with taichi.ipynb`) by @Routhleck in https://github.com/brainpy/BrainPy/pull/612
- `brainpylib>=0.2.6` for `jax>=0.4.24` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/622
- `brainpy.tools.compose` and `brainpy.tools.pipe` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/624
- `taichi` and `numba` by @Routhleck in https://github.com/brainpy/BrainPy/pull/635
- `clear_buffer_memory()` supports clearing `array`, `compilation`, and `names` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/639
- `brainpy.math.surrogate.Surrogate` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/638
- `jax.jit` etc. directly by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/625
- `brainpylib>=0.3.0` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/645
- `cupy` by @Routhleck in https://github.com/brainpy/BrainPy/pull/653
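The `brainpy.tools.compose` and `brainpy.tools.pipe` helpers listed above are function-composition utilities. A minimal plain-Python sketch of the usual compose/pipe semantics (the exact argument conventions of the BrainPy versions may differ):

```python
from functools import reduce

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))) -- applies right-to-left."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

def pipe(*funcs):
    """pipe(f, g, h)(x) == h(g(f(x))) -- applies left-to-right."""
    return lambda x: reduce(lambda acc, f: f(acc), funcs, x)

inc = lambda x: x + 1
dbl = lambda x: x * 2
```

With these definitions, `compose(dbl, inc)(3)` evaluates `dbl(inc(3)) == 8`, while `pipe(dbl, inc)(3)` evaluates `inc(dbl(3)) == 7`.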
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.5.0...V2.6.0
This release contains many new features and fixes. It is the first release with a mature solution for Brain Dynamics Operator Customization on both CPU and GPU platforms.
- `brainpy.dyn.HalfProjDelta` and `brainpy.dyn.FullProjDelta`
- `brainpy.math.exprel`, with updated code in the corresponding HH neuron models to improve numerical computation accuracy. These changes can significantly improve the numerical integration accuracy of HH-like models under x32 computation.
- `brainpy.reset_level()` decorator so that the state resetting order can be customized by users.
- `brainpy.math.ein_rearrange`, `brainpy.math.ein_reduce`, and `brainpy.math.ein_repeat` functions
- `brainpy.math.scan` transformation.

`brainpylib>=0.2.4` has been released, supporting operator customization through the Taichi compiler. The supported backends include Linux, Windows, macOS Intel, and macOS M1 platforms. For tutorials, see https://brainpy.readthedocs.io/en/latest/tutorial_advanced/operator_custom_with_taichi.html
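The `brainpy.math.exprel` change mentioned above targets the function exprel(x) = (e^x - 1)/x, which appears in exponential integration of HH-style models and loses precision near x = 0 when computed naively. A plain-`math` sketch of the stable formulation (illustrative, not BrainPy's actual implementation):

```python
import math

def exprel_naive(x):
    # direct (e^x - 1) / x: suffers catastrophic cancellation for small |x|
    return (math.exp(x) - 1.0) / x

def exprel_stable(x):
    # math.expm1 computes e^x - 1 without cancellation;
    # the x -> 0 limit of exprel is 1 (Taylor series 1 + x/2 + x^2/6 + ...)
    return math.expm1(x) / x if x != 0.0 else 1.0
```

For tiny x (around 1e-12 in float64, or around 1e-5 in x32 computation) the naive form drifts away from the correct value 1.0, while the `expm1`-based form stays accurate; this is why the HH models benefit most under x32.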
- `operator_custom_with_taichi.ipynb` in the documentation by @Routhleck in https://github.com/brainpy/BrainPy/pull/546
- `brainpy.math.defjvp` and `brainpy.math.XLACustomOp.defjvp` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/554
- `brainpy.math.ifelse` bugs by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/556
- `brainpy.math.exprel`, with updated code in the corresponding HH neuron models to improve numerical computation accuracy, by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/557
- `brainpy.math.functional_vector_grad` and the `brainpy.reset_level()` decorator by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/561
- `brainpy.math.random.truncated_normal` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/574
- `brainpy.math.softplus` and `brainpy.dnn.SoftPlus` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/581
- `TruncatedNormal` to `initialize.py` by @charlielam0615 in https://github.com/brainpy/BrainPy/pull/583
- `_format_shape` in `random_inits.py` by @charlielam0615 in https://github.com/brainpy/BrainPy/pull/584
- `truncated_normal` by @charlielam0615 in https://github.com/brainpy/BrainPy/pull/585
- `brainpy.math.unflatten` and `brainpy.dnn.Unflatten` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/588
- `ein_rearrange`, `ein_reduce`, and `ein_repeat` functions by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/590
- `clear_input` in the `step_run` function by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/601
- `brainpy.math.scan` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/604
- `disable_jit` support in `brainpy.math.scan` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/606
- `taichi.init()` print by @Routhleck in https://github.com/brainpy/BrainPy/pull/609
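The `brainpy.math.scan` transformation mentioned above follows the `(carry, y) = f(carry, x)` convention of `jax.lax.scan`: a carry is threaded through a sequence while the per-step outputs are stacked. A plain-Python sketch of the semantics (BrainPy's version additionally compiles the loop and tracks its `Variable`s):

```python
def scan(f, init, xs):
    # f maps (carry, x) -> (new_carry, y); returns the final carry and all ys
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# running sum expressed as a scan: carry is the sum so far, y is the cumulative value
final, cumsum = scan(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
```

Here `final` is `10` and `cumsum` is `[1, 3, 6, 10]`.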
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.6...V2.5.0
This release contains more than 130 commits and provides several new features.
New instances can be used to compute the surrogate gradients. For example:
```python
import brainpy.math as bm

membrane_potential = bm.random.randn(100)  # example membrane potential values

fun = bm.surrogate.Sigmoid()

# forward function
spk = fun(membrane_potential)

# backward function
dV = fun.surrogate_grad(1., membrane_potential)

# surrogate forward function
surro_spk = fun.surrogate_fun(membrane_potential)
```
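Conceptually, a surrogate such as `Sigmoid` keeps the non-differentiable Heaviside step in the forward pass and substitutes a smooth sigmoid derivative in the backward pass. A plain-Python sketch of the idea (illustrative only; `alpha` here is a hypothetical smoothness parameter, not necessarily BrainPy's):

```python
import math

def spike_forward(v):
    # forward: Heaviside step on the threshold-shifted membrane potential
    return 1.0 if v >= 0.0 else 0.0

def sigmoid(v, alpha=4.0):
    return 1.0 / (1.0 + math.exp(-alpha * v))

def spike_surrogate_grad(v, alpha=4.0):
    # backward: the sigmoid derivative stands in for the step's zero/undefined gradient
    s = sigmoid(v, alpha)
    return alpha * s * (1.0 - s)
```

The surrogate gradient peaks at the threshold (v = 0) and decays on either side, so learning signals concentrate on neurons near spiking.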
`brainpy.math.eval_shape` for evaluating all the dynamical variables used in the target function. This function is similar to `jax.eval_shape`, which incurs no FLOPs, while it can extract all variables used in the target function. For example:
```python
net = ...     # any dynamical system
inputs = ...  # inputs to the dynamical system
variables, outputs = bm.eval_shape(net, inputs)
# "variables" are all variables used in the target "net"
```
In the future, this function will be used everywhere to transform all JAX transformations into BrainPy's OO transformations.
For a single object:
- `.reset_state()` defines the state resetting of all local variables in this node.
- `.load_state()` defines the state loading from external disks (typically, a dict is passed into this `.load_state()` function).
- `.save_state()` defines the state saving to external disks (typically, the `.save_state()` function generates a dict containing all variable values).

Here is an example of defining a full class of `brainpy.DynamicalSystem`:
```python
import brainpy as bp
import brainpy.math as bm

class YourDynSys(bp.DynamicalSystem):
  def __init__(self):  # define parameters
    super().__init__()
    self.par1 = ...
    self.num = ...

  def reset_state(self, batch_or_mode=None):  # define variables
    self.a = bp.init.variable_(bm.zeros, (self.num,), batch_or_mode)

  def load_state(self, state_dict):  # load states from an external dict
    self.a.value = bm.as_jax(state_dict['a'])

  def save_state(self):  # save states as an external dict
    return {'a': self.a.value}
```
For a complex network model, BrainPy provides a unified state management interface for initializing, saving, and loading states.
- `brainpy.reset_state()` defines the state resetting of all variables in this node and its child nodes.
- `brainpy.load_state()` defines the state loading from external disks of all variables in the node and its children.
- `brainpy.save_state()` defines the state saving to external disks of all variables in the node and its children.
- `brainpy.clear_input()` defines the clearing of all input variables in the node and its children.

The same model used in brain simulation can be easily transformed into one used for brain-inspired computing and training. For example:
```python
import brainpy as bp
import brainpy.math as bm

class EINet(bp.DynSysGroup):
  def __init__(self):
    super().__init__()
    self.N = bp.dyn.LifRefLTC(4000, V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                              V_initializer=bp.init.Normal(-55., 2.))
    self.delay = bp.VarDelay(self.N.spike, entries={'I': None})
    self.E = bp.dyn.ProjAlignPost1(
      comm=bp.dnn.EventCSRLinear(bp.conn.FixedProb(0.02, pre=3200, post=4000), weight=bp.init.Normal(0.6, 0.01)),
      syn=bp.dyn.Expon(size=4000, tau=5.),
      out=bp.dyn.COBA(E=0.),
      post=self.N
    )
    self.I = bp.dyn.ProjAlignPost1(
      comm=bp.dnn.EventCSRLinear(bp.conn.FixedProb(0.02, pre=800, post=4000), weight=bp.init.Normal(6.7, 0.01)),
      syn=bp.dyn.Expon(size=4000, tau=10.),
      out=bp.dyn.COBA(E=-80.),
      post=self.N
    )

  def update(self, input):
    spk = self.delay.at('I')
    self.E(spk[:3200])
    self.I(spk[3200:])
    self.delay(self.N(input))
    return self.N.spike.value

# used for brain simulation
with bm.environment(mode=bm.nonbatching_mode):
  net = EINet()

# used for brain-inspired computing
# define the `membrane_scaling` parameter
with bm.environment(mode=bm.TrainingMode(128), membrane_scaling=bm.Scaling.transform([-60., -50.])):
  net = EINet()
```
`brainpy.math.XLACustomOp`.

Starting from this release, BrainPy introduces Taichi for operator customization. Now, users can write CPU and GPU operators through Numba and Taichi syntax on CPU devices, and through Taichi syntax on GPU devices. In particular, to define an operator, users can use:
```python
import numba as nb
import taichi as ti
import numpy as np
import jax
import brainpy.math as bm

@nb.njit
def numba_cpu_fun(a, b, out_a, out_b):
  out_a[:] = a
  out_b[:] = b

@ti.kernel
def taichi_gpu_fun(a: ti.types.ndarray(), b: ti.types.ndarray(),
                   out_a: ti.types.ndarray(), out_b: ti.types.ndarray()):
  for i in range(a.shape[0]):
    out_a[i] = a[i]
  for i in range(b.shape[0]):
    out_b[i] = b[i]

prim = bm.XLACustomOp(cpu_kernel=numba_cpu_fun, gpu_kernel=taichi_gpu_fun)
a2, b2 = prim(np.random.random(1000), np.random.random(1000),
              outs=[jax.ShapeDtypeStruct((1000,), dtype=np.float32),
                    jax.ShapeDtypeStruct((1000,), dtype=np.float32)])
```
See https://github.com/brainpy/BrainPy/blob/master/brainpy/_src/dyn/projections/tests/test_STDP.py for examples.
- `jax==0.4.16` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/511
- `taichi` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/531
- `brainpy.reset_state()` and `brainpy.clear_input()` for more consistent and flexible state management by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/538
- `save_state`, `load_state`, `reset_state`, and `clear_input` helpers by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/542
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.5...V2.4.6
- `brainpylib==0.1.10` has been released. In this release, we have fixed some bugs in BrainPy's dedicated GPU operators. Users can freely use them in any application.
- The `brainpy.math` module has been refined.
- `.tracing_variable()` has been created to support tracing `Variable`s during computations and compilations. For example usage, see #472.
- `brainpy.math.random.split_keys()`.
- `brainpy.dnn.AllToAll` module.
- `brainpy.math.cond` and `brainpy.math.while_loop` when variables are used in both branches.
- `brainpylib>=0.1.10` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/468
- `Variable` during computation and compilation by using the `tracing_variable()` function by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/472
- simulating a brain dynamics model with new APIs by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/483
- `brainpy.mixin.SupportOnline` and `brainpy.mixin.SupportOffline` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/489
- `brainpy.dyn` module & add synaptic plasticity module by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/492
- `cond` and `while_loop` when the same variables are used in both branches by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/494
- `DynamicalSystem` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/501
- `brainpy.dyn` module by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/506
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.4...V2.4.5
This release has fixed several bugs and updated the documentation.
- `brainpy.mixin.ReceiveInputProj` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/428
- `jax==0.4.14` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/431
- `FixedTotalNum` class and fix bugs by @Routhleck in https://github.com/brainpy/BrainPy/pull/434
- `brainpy.dyn.Alpha` synapse model by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/459
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.3...V2.4.4
This release has standardized the modeling of DNN and SNN models through two interrelated packages: `brainpy.dnn` and `brainpy.dyn`.
Overall, the modeling of brain dynamics in this release has the following advantages:
- `DynamicalSystem` interface
```python
import brainpy as bp

class HH(bp.dyn.CondNeuGroup):
  def __init__(self, size):
    super().__init__(size)
    self.k = bp.dyn.PotassiumFixed(size)
    self.ca = bp.dyn.CalciumFirstOrder(size)
    self.kca = bp.dyn.mix_ions(self.k, self.ca)  # ion that mixes potassium and calcium
    self.kca.add_elem(ahp=bp.dyn.IAHP_De1994v2(size))  # channel that relies on both potassium and calcium
```
The `.update()` function in `brainpy.DynamicalSystem`, which resolves all compatibility issues. Starting from this version, `update()` functions no longer need to receive a global shared argument such as `tdi`.
```python
class YourDynSys(bp.DynamicalSystem):
  def update(self, x):
    t = bp.share['t']
    dt = bp.share['dt']
    i = bp.share['i']
    ...
```
- Optimize the connection-building process when using `brainpy.conn.ScaleFreeBA`, `brainpy.conn.ScaleFreeBADual`, and `brainpy.conn.PowerLaw`.
- New dual exponential model `brainpy.dyn.DualExponV2`, which can be aligned with the post dimension.
More synaptic projection abstractions, including:

- `brainpy.dyn.VanillaProj`
- `brainpy.dyn.ProjAlignPostMg1`
- `brainpy.dyn.ProjAlignPostMg2`
- `brainpy.dyn.ProjAlignPost1`
- `brainpy.dyn.ProjAlignPost2`
- `brainpy.dyn.ProjAlignPreMg1`
- `brainpy.dyn.ProjAlignPreMg2`
Fix compatibility issues, fix unexpected bugs, and improve the model tests.
- `connect.base.py`'s `require` function by @Routhleck in https://github.com/brainpy/BrainPy/pull/413
- `DynamicalSystem.update()` function by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/422
- `lif` model bugs and support for two kinds of spike reset, `soft` and `hard`, by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/423
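The `soft` and `hard` spike resets mentioned in the `lif` fix differ in how the membrane potential is treated after a spike: a hard reset clamps it to the reset potential, while a soft reset subtracts the threshold and preserves any overshoot. A minimal plain-Python sketch (illustrative parameter names, not BrainPy's actual LIF signature):

```python
def lif_step(v, inp, v_th=1.0, v_reset=0.0, tau=10.0, dt=1.0, reset='hard'):
    # leaky integration: dv/dt = (-v + inp) / tau, one forward-Euler step
    v = v + dt / tau * (-v + inp)
    spike = v >= v_th
    if spike:
        if reset == 'hard':
            v = v_reset               # clamp: overshoot above threshold is discarded
        else:                         # 'soft'
            v = v - (v_th - v_reset)  # subtract: overshoot is carried over
    return v, spike
```

Soft reset keeps information about how far above threshold the neuron went, which can matter at coarse time steps; hard reset is the classical textbook behavior.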
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.2...V2.4.3
We are very excited to release this new version, BrainPy V2.4.2. This update covers several exciting features:
- `brainpy.dyn` for dynamics models and `brainpy.dnn` for ANN layers and connection structures.
- `brainpy.pnn` for auto-parallelization of brain models by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/385
- `jax.disable_jit()` for debugging by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/389
- `brainpy.init.DOGDecay` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/390
- "how to debug" and "common gotchas" by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/397
- `VariableDelay` and `DataDelay` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/409
- `brainpy.dyn` module by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/410
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.1...V2.4.2
- `brainpy.math.Array` during compilation
- `brainpy.math.event`, `brainpy.math.sparse`, and `brainpy.math.jitconn` modules, requiring `brainpylib >= 0.1.9`
- `brainpy.layers.FromFlax` and `brainpy.layer.ToFlaxRNNCell`
- `brainpy.connect.FixedProb` bug
- `brainpy.layers.FromFlax` and `brainpy.layer.ToFlaxRNNCell` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/374
- `brainpy.connect.FixedProb` bug by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/376
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.4.0...V2.4.1
This branch of releases (`brainpy==2.4.x`) is going to support large-scale modeling of brain dynamics.

As a start, this release provides support for automatic object-oriented (OO) transformations. Automatic OO transformations no longer need to take `dyn_vars` or `child_objs` information; these transformations are capable of automatically inferring the underlying dynamical variables.

Specifically, they include:
- `brainpy.math.grad` and other autograd functionalities
- `brainpy.math.jit`
- `brainpy.math.for_loop`
- `brainpy.math.while_loop`
- `brainpy.math.ifelse`
- `brainpy.math.cond`
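How can a transformation find the dynamical variables without being given `dyn_vars`? Conceptually, it traces one execution of the function and records every variable it touches. A toy registry-based sketch of this idea (conceptual only; BrainPy's real tracing is built on JAX and its `Variable` class, not this mechanism):

```python
class Variable:
    # toy stand-in for a traced dynamical variable
    _registry = []

    def __init__(self, value):
        self.value = value
        self._read = False
        Variable._registry.append(self)

    def read(self):
        self._read = True
        return self.value

def infer_variables(fn, *args):
    # run fn once, then report which registered variables it actually read
    for v in Variable._registry:
        v._read = False
    fn(*args)
    return [v for v in Variable._registry if v._read]

a = Variable(1.0)
b = Variable(2.0)
used = infer_variables(lambda x: a.read() + x, 10.0)
```

Only `a` is reported, even though `b` also exists; a transform can then treat exactly those discovered variables as mutable state.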
Update documentation
Fix several bugs
- `brainpy.math` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/357
- `brainpy.math.Variable` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/358
- `jax.disable_jit` mode by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/359
- `static_argnums` and `static_argnames` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/360
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.3.8...V2.4.0
This release continues to add support for improving the usability of BrainPy.
- `NodeList` and `NodeDict` for a list/tuple/dict of `BrainPyObject` instances.
- `ListVar` and `DictVar` for a list/tuple/dict of brainpy data.
- `Clip` transformation for brainpy initializers.
- `brainpylib` operators are accessible in the `brainpy.math` module. In particular, there are some dedicated operators for scaling up million-neuron networks. For an example, see "Simulating 1-million-neuron networks with 1GB GPU memory".
- `DSRunner(..., memory_efficient=True)`. This setting can usually reduce memory usage substantially.
- `brainpylib` wheels on the Linux platform support the GPU operators. Users can install the GPU version of `brainpylib` (requires `brainpylib>=0.1.7`) directly by `pip install brainpylib`. @ztqakita
- `ListVar` and `DictVar` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/345
- `brainpylib` call bug by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/354
- `DSRunner` by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/355
- `Array` transform bug by @chaoming0625 in https://github.com/brainpy/BrainPy/pull/356
Full Changelog: https://github.com/brainpy/BrainPy/compare/V2.3.7...V2.3.8