A higher-level Neural Network library for microcontrollers.
Notes
It has been a year since the last release. Thanks to the people who contributed to this project, many bug fixes and changes have accumulated. It is now time to make a final release on version 0.4.x. The main branch will become stable-0.4.x.
This does not mean that the project ends here. On the contrary, the new 0.5.x will come, and there will be major changes in interfaces. This is how the version tag works: 0.5.x will not be fully compatible with 0.4.x, so we can drop deprecated code and clean up the lib.
Major updates:
- Support static memory instead of `malloc()`: `nnom_set_static_buf()` is to be called before any model initialization; `NNOM_USING_STATIC_MEMORY` in `nnom_port.h` controls the memory type.

Bug fixed:
- `ROUND()` performed incorrectly, see https://github.com/ARM-software/CMSIS_5/issues/1047

Minor:
- Added `nnom_memcpy()` and `nnom_memset()` in `nnom_port.h`.
Major updates:
- Calibration changes because of the RNN layers.
- Support for models with multiple outputs.
- Added an RNNoise-like voice enhancement example.
- Depthwise Conv layers now support the depth_multiplier argument (see the Keras sketch after this list).
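A minimal Keras-side sketch of both features; the layer sizes and input shape are illustrative, not taken from the release:

```python
from tensorflow.keras.layers import Input, DepthwiseConv2D, ReLU, Flatten, Dense
from tensorflow.keras.models import Model

x_in = Input(shape=(28, 28, 1))
# depth_multiplier controls how many output channels are produced per input channel
x = DepthwiseConv2D(kernel_size=(3, 3), depth_multiplier=2, padding='same')(x_in)
x = ReLU()(x)
x = Flatten()(x)
out_a = Dense(10)(x)  # first output head
out_b = Dense(2)(x)   # second output head
model = Model(inputs=x_in, outputs=[out_a, out_b])  # a model with multiple outputs
```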
Bugs fixed:
Minors:
The major change in this version is support for the RNN layers.
- New RNN layers, with `return_sequence`, `stateful` and `go_backwards` options.
- New activations, with `slope`, `threshold` and `max` arguments, including predefined ReLUs such as `ReLU6`.
- New example: `uci-har-rnn`, demonstrating the usage of the new RNN layers (see the Keras sketch after this list).
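A Keras-side sketch of the new options, loosely shaped like the uci-har-rnn example (128 timesteps, 9 channels, 6 classes); the exact architecture is illustrative:

```python
from tensorflow.keras.layers import Input, LSTM, Dense, ReLU, Softmax
from tensorflow.keras.models import Model

x_in = Input(shape=(128, 9))                 # UCI-HAR-like input: 128 timesteps, 9 channels
# RNN options mirrored by NNoM: return the full sequence, run backwards;
# stateful would additionally require a fixed batch_input_shape in Keras
x = LSTM(32, return_sequences=True, go_backwards=False)(x_in)
x = LSTM(32, return_sequences=False)(x)
x = Dense(32)(x)
# configurable ReLU: negative slope, threshold and max value (ReLU6 is max_value=6)
x = ReLU(max_value=6.0, negative_slope=0.0, threshold=0.0)(x)
out = Softmax()(Dense(6)(x))                 # explicit activation layer
model = Model(x_in, out)
```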
Minor:
- `model_io_format()` to print the layer I/O info.

Bugs:
Important: This update will not support your previous `weights.h` generated by the old scripts, because an argument, `dilation`, was added to Conv2D and DW_Conv2D. You shall use the new script to regenerate your old model.
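A minimal regeneration sketch; it assumes the script's `generate_model()` entry point and uses a toy model in place of your trained network (check the script for the exact signature and options):

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model
from nnom_utils import generate_model   # NNoM's Keras-to-C script; name assumed here

# a toy model standing in for your trained network
x_in = Input(shape=(28, 28, 1))
x = Conv2D(8, kernel_size=(3, 3), dilation_rate=(1, 1))(x_in)  # dilation is now exported
out = Dense(10)(Flatten()(x))
model = Model(x_in, out)

# calibration data used by the script for quantisation
x_test = np.random.rand(100, 28, 28, 1).astype('float32')
generate_model(model, x_test, name='weights.h')  # writes a fresh weights.h
```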
Major changes:
- New structured interface, marked by the `_s` suffix. This is a set of APIs taking a C structure as the arguments for a layer.
- New `nnom.py` to generate the NNoM model based on the NNoM structured API.
- Switched from `Keras` to `Tensorflow.Keras`. Supports TF1.14+ and TF2.1+.

Layer updates:

Others:
- Renamed `nnom_shape_t` to `nnom_3d_shape_t`.
- Renamed `LAYER_BUF_*` macros to `NNOM_TENSOR_BUF_*` for clarity.

Bug fixed:
- `tensor_chw2hwc()`
- `upsampling_build()`
- `global_pooling_build()`
- `nnom_out_shape.c` and all `*_out_shape()` methods are now changed to `*_build()` and placed in each layer's own file.
- `nnom_predict()` now provides the probability as an output. It currently supports a single neural output.
- Added `auto-test` for Travis CI and all PC platforms.

Known issues:
Batch normalisation after depthwise convolution is not working properly. Temporary solution: use batch normalisation after the pointwise convolution (in a depthwise-pointwise structure).
The script does not support implicitly defined activations, e.g. `Dense(32, activation='relu')`. Temporary solution: use an explicit activation layer, e.g. `Dense(32)` followed by `ReLU()`, as in the sketch below.
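A minimal sketch of the workaround (layer sizes are illustrative):

```python
from tensorflow.keras.layers import Dense, ReLU
from tensorflow.keras.models import Sequential

model = Sequential()
# not supported by the script: Dense(32, activation='relu')
model.add(Dense(32, input_shape=(64,)))  # keep the layer linear...
model.add(ReLU())                        # ...and declare the activation explicitly
```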
Note: KLD quantisation. Ref: http://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf
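For reference, KLD calibration (as described in the TensorRT slides linked above) picks the saturation threshold whose quantised distribution $Q$ stays closest to the reference activation distribution $P$ under the Kullback-Leibler divergence:

$$D_{KL}(P \parallel Q) = \sum_i P(i)\,\log\frac{P(i)}{Q(i)}$$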
Update Log
- Renamed `LOG` to `NNOM_LOG`.
- Added `nnom_utils.py` to convert from Keras directly.
- Added the `layer_callback()` interface, which is called after every layer has finished its operation.
- Fixed memory buffer calculation mistakes.