:twisted_rightwards_arrows: Neural Network (NN) Streamer, Stream Processing Paradigm for Neural Network Apps/Devices.
Pre-release for Tizen 9.0: Tizen 9.0 M1 release (synced with Tizen 7.0 and 8.0)
This is LTS release 2.0.1, which includes bug fixes for the 2.0 LTS release.
Full Changelog: https://github.com/nnstreamer/nnstreamer/compare/v2.0.0...v2.0.1
2.1.0 -> 2.1.1 - Tizen 7.0 M1 RCx preparation and NNStreamer Mini Summit 2022-04 release.
- NNStreamer-Edge refactoring (module for Among-Device AI (a.k.a. Edge-AI))
- Ongoing effort of nnstreamer-edge separation from nnstreamer.
- In the future, nnstreamer-edge will provide among-device AI functions, and nnstreamer will provide gstreamer plugins for such functions. Non-gstreamer systems may connect as clients to nnstreamer-edge-based pipelines without gstreamer.
- NNStreamer-Edge will be using AITT as its default backend, leaving protocol issues to AITT.
- In the future, nnstreamer-edge will be compatible with non-Linux ultra-lightweight systems (e.g., Tizen-RT)
- ML-Service API preparation is going on at api.git.
- Major features
- MQTT timestamping w/ NTP. (later will be migrated to nnstreamer-edge & aitt)
- Query (later will be migrated to nnstreamer-edge & aitt): robustness support, mqtt-hybrid protocol, performance fixes for multi-clients.
- More coverage for SNPE support: quantized model support, SNPE dimension bug workaround, fixes from/for production team.
- Flexible tensor support w/ decoder, converter, flatbuffer.
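As a hedged illustration of the flexible-tensor path, the sketch below converts a static tensor stream to the flexible format and back with tensor_converter. The `format=flexible`/`format=static` caps fields are assumed from the NNStreamer 2.x documentation; verify the exact caps on your build with `gst-inspect-1.0 tensor_converter`.

```shell
# Sketch only: static -> flexible -> static round trip via tensor_converter.
# The "format" caps field values are assumptions from NNStreamer 2.x docs.
gst-launch-1.0 videotestsrc num-buffers=10 ! \
  video/x-raw,format=RGB,width=320,height=240,framerate=30/1 ! \
  tensor_converter ! "other/tensors,format=flexible" ! \
  tensor_converter ! "other/tensors,format=static" ! tensor_sink
```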
- Minor features
- MQTT unittest basis, generic stream support, Android support, timeout handling, and many more!
- Utility functions exported for plugin writers.
- Tensorflow-lite delegation refactored for generality: may use XNNPACK more easily.
- Tensorflow-lite multi-lib support.
- PyTorch: support complex output tensor formats.
- NNStreamer multi-lib support.
- Decoder: boundingbox-yolov5
- Filter: TRIx-Engine support. (NPUs of Samsung 2022 TVs)
- Docker support refactored and cleaned up.
- Fixes
- ARMNN build errors.
- Android errors
- Build errors with recent compiler updates. (gcc 11)
- Fixes upstreamed from production teams
- Errors w/ library updates: Lua >= 5.3, GLib >= 2.68
- Regression fixes: openvino, edgetpu, tensorrt
- Memory leaks in C++ subplugin infra.
- Known issues: PPA/Launchpad build broken!
2.0.0 -> 2.1.0 - 2.1.0 is a devel version for 2.2.0 release, which is planned to be the LTS release of 2022.
- g_socket_listener_set_backlog by @anyj0527 in https://github.com/nnstreamer/nnstreamer/pull/3581
Full Changelog: https://github.com/nnstreamer/nnstreamer/compare/v2.0.0...v2.1.1
This is the LTS release of 2022, version 2.0.0.
The key features of 2.0 release include:
- `other/tensor` (single tensor) will be obsoleted. Please use `other/tensors` with `num_tensors=1` instead. For more information, please refer to https://github.com/nnstreamer/nnstreamer/wiki/Release-Note-v2.0.0
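The caps migration can be shown in a minimal pipeline sketch; videotestsrc stands in for any real source, and only the caps string after tensor_converter changes.

```shell
# Deprecated: single-tensor caps "other/tensor".
gst-launch-1.0 videotestsrc num-buffers=10 ! video/x-raw,format=RGB,width=640,height=480 ! \
  tensor_converter ! "other/tensor" ! tensor_sink

# Preferred: "other/tensors" with num_tensors=1.
gst-launch-1.0 videotestsrc num-buffers=10 ! video/x-raw,format=RGB,width=640,height=480 ! \
  tensor_converter ! "other/tensors,num_tensors=1" ! tensor_sink
```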
1.7.2 is the second devel-unstable release for 1.8 RC. Note that 1.7.1+a is released with Tizen 6.5 M1.
1.7.1 -> 1.7.2 (includes a huge amount of changes)
- NNStreamer for Edge-AI project started.
- Main features of the 1.8.0 release and its immediate successors will be "Edge-AI", which allows distributed on-device AI inference.
- The new stream type, "Flex-Tensor", is introduced. Dimensions and types of a tensor stream may vary per frame without caps renegotiation.
- Many of nnstreamer's tensor-* elements support Flex-Tensor.
- You may use tensor-converter to convert between flex-tensor and (static) tensor.
- MQTT-SINK and MQTT-SRC elements are added for edge-AI systems with MQTT pub/sub streams.
- MQTT streams support "ANY" capabilities.
- Assuming that clocks of nodes are synchronized by NTP or other mechanisms, pipeline users may send timestamp related info via MQTT streams for multi-source synchronization.
- Tensor-crop, a new nnstreamer-gstreamer element.
- Basic feature only (cropping a tensor stream with information of another tensor stream)
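The MQTT elements above can be sketched as a two-pipeline pub/sub pair. This is a sketch only: it assumes an MQTT broker on localhost:1883, and the property names (host, port, pub-topic, sub-topic) follow the element descriptions and should be verified with `gst-inspect-1.0 mqttsink` / `gst-inspect-1.0 mqttsrc` on your build.

```shell
# Publisher (device A): convert video frames to tensors, publish over MQTT.
# Broker address and property names are assumptions; verify with gst-inspect-1.0.
gst-launch-1.0 videotestsrc ! video/x-raw,format=RGB,width=320,height=240 ! \
  tensor_converter ! mqttsink host=localhost port=1883 pub-topic=example/tensors

# Subscriber (device B): receive the tensor stream from the broker.
gst-launch-1.0 mqttsrc host=localhost port=1883 sub-topic=example/tensors ! \
  "other/tensors,num_tensors=1" ! tensor_sink
```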
- Major features
- GSTPipeline to PBTXT parser. You can use PBTXT-pipeline visualization tools with the parsed results.
- FlexBuffers support.
- TVM support
- Tensor-IF with custom (user code plugged at run-time) conditions
- Tensorflow-lite delegation designation is generalized.
- Tensorflow2-lite XNNPACK delegation
- NNTrainer-inference can be attached as a filter along with both API sets.
- CAPI: updated documentation, added new enums for recent nnstreamer features, ...
- API interface and implementation are separated into another git repository for better architecture.
- Tensor-converter and Tensor-decoder support custom ops.
- Minor features
- Filter subplugin priority with ini file configuration.
- Decoder/Bounding-Box improved: output tensor mapping, clamp bounding box locations, labeling issues, more options.
- Decoder/Pose-Estimation improved: proper labeling.
- Testcases added for gRPC, Android, Tensor-rate, ...
- Refactoring (reduce complexity, remove duplicity, build options, ...)
- Android build & release upgraded.
- Converter usability upgrade: property to list subplugins, subplugin naming/install rules.
- Pytorch: exception handling, Android build
- gRPC: per-IDL packaging, interface updates, common-code revise, async mode, ...
- Support Tensorflow 2.4 API (TF has broken backward compatibility again)
- Tensor-transform: may operate on chosen tensor or channel only.
- Fixes
- Android resource leak.
- CAPI timing, header issues, seg-faults, memory leaks, ...
- MacOS build errors.
- TensorRT dependency bugs
- Edge-TPU compatibility issues.
- Unit test fixes (memory leaks, resource leaks, skip disabled features, ...)
- Fixed reported issues (security, memory leaks, query-caps, ...)
- Extra
- Support for Python 2.x is dropped.
- Automated doc-page generation with Hotdoc.
- Android build now includes GST-Shark for performance profiling.
1.7.1 is the first devel-unstable release after 1.6.0 LTS release.
1.7.0 -> 1.7.1
- Major features
- Tensor-IF, a new element. It allows creating conditional branches based on tensor values.
- Join, a new element. It merges streams with the same GstCaps from the src pads of different elements into a single stream.
- Tensor-rate, a new element. It allows throttling by generating QoS messages.
- TensorRT support
- TF1-lite and TF2-lite coexistence
- TFx-lite NNAPI, GPU Delegation
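The Tensor-IF element above can be illustrated with a minimal sketch. The property names (compared-value, compared-value-option, supplied-value, operator, then, else) and the `src_0` pad name are assumed from the element documentation; verify them with `gst-inspect-1.0 tensor_if`.

```shell
# Sketch: pass frames downstream only when the tensor's average value >= 100;
# drop (SKIP) them otherwise. Property and pad names are assumptions from docs.
gst-launch-1.0 videotestsrc ! video/x-raw,format=RGB,width=320,height=240 ! \
  tensor_converter ! \
  tensor_if name=tif compared-value=TENSOR_AVERAGE_VALUE compared-value-option=0 \
            supplied-value=100 operator=GE then=PASSTHROUGH else=SKIP \
  tif.src_0 ! tensor_sink
```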
- Minor features
- hw-accel options for tensor-filters are refactored
- python3-embed enabled if python3 >= 3.8
- Subplugin initialization optimization.
- Docker scripts for Ubuntu developers.
- Fixes
- flatbuf dependency issues related to tensorflow-lite.
- tensor-decoder now configures framerate properly.
- Dynamic dimension related API issues fixed.
- MacOS, Yocto compatibility issues fixed. (A few Yocto known issues are still remaining.)
- License mismatches resolved.
- A few test cases fixed.
- Packaging issues fixed and style cleaned up.
- Extra
- A lot of interesting sample applications are added.
Linux Foundation AI Announcement
NNStreamer 1.6.0 is the next LTS version.
NNStreamer 1.6.0 targets Tizen 6.0 M2 release and next-year Android products.
Release Note of NNStreamer 1.6.0
We will attach binary packages as soon as the CD system publishes them.
For Tizen 5.5 Mx long-term stable maintenance, we release NNStreamer 1.0.y LTS v1.0.1. Commits for 1.0.y LTS are managed in review.tizen.org (tizen_5.5 branch) and will be mirrored back to github.com/nnstreamer/nnstreamer.
In 1.0.y series, we will add critical hotfixes for 1.0 and additional requirements for Tizen 5.5 Mx only.
Changes from 1.0.0 to 1.0.1
RPM binaries are from download.tizen.org (reference build of Tizen 5.5 M3)
Release of NNStreamer 1.3.0
1.3.0 (odd-mid-version) is a development version.