OAID Tengine Releases

Tengine is a lightweight, high-performance, modular inference engine for embedded devices

lite-v1.2-pre

3 years ago
  • Tengine-Lite is opening its first NPU release for trial by the wider developer community
    • This pre-release targets the Amlogic A311D, an SoC with an NPU; in cooperation with Khadas, the trial version comes preinstalled on the Khadas VIM3 Single Board Computer (SBC) (link);
    • To accompany the preinstalled release, the corresponding model conversion tool and model quantization tool are being made available to the open-source community;
    • Due to third-party intellectual-property constraints, the related source code cannot be open-sourced yet;
    • Everyone is welcome to try the latest NPU features supported by Tengine Lite on the Khadas VIM3 (A311D);
    • Community contributor 闲来 built a true heterogeneous-computing demo (and even made his own 3D-printed case for it; link);
    • Suggestions are very welcome; come discuss with us in our QQ group!
  • We are also working with more open-source development board and SBC vendors; support for other boards is on the way, so stay tuned;
  • We are also collaborating with more NPU vendors; anyone interested is welcome to get in touch or join us in building the Tengine open-source ecosystem!

lite-v1.0

3 years ago

Release v1.0

New feature

  • Dynamic graph segmentation

  • C++ API (experimental)

  • Python API (experimental)

  • Support for Arm Mali GPUs via the Arm Compute Library (ACL)

  • Support for other GPUs via Vulkan (experimental)

  • FP16 inference on Armv8.2 (experimental)
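Dynamic graph segmentation assigns each part of a model to the backend that can run it, falling back to the CPU for unsupported operators. The sketch below is a hypothetical illustration of the idea in Python (the function and operator names are illustrative, not Tengine's actual implementation):

```python
def segment_graph(nodes, accel_supported):
    """Split a topologically ordered op list into (backend, ops) segments.

    nodes: op names in topological order
    accel_supported: set of op names the accelerator backend can run;
                     everything else falls back to the CPU
    """
    segments = []
    for op in nodes:
        backend = "npu" if op in accel_supported else "cpu"
        if segments and segments[-1][0] == backend:
            # Same backend as the previous op: extend the current segment
            segments[-1][1].append(op)
        else:
            # Backend changed: start a new subgraph
            segments.append((backend, [op]))
    return segments

# Example: a conv-heavy model where Softmax is CPU-only
nodes = ["Conv", "Relu", "Conv", "Softmax", "Conv"]
npu_ops = {"Conv", "Relu"}
print(segment_graph(nodes, npu_ops))
# → [('npu', ['Conv', 'Relu', 'Conv']), ('cpu', ['Softmax']), ('npu', ['Conv'])]
```

Doing this split at load time, based on what the resolved backends report they support, is what makes the segmentation "dynamic" rather than baked into the model.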

New network support

  • landmark
  • YOLACT
  • OpenPose
  • YOLOv4

New operator support

  • uint8 reference ops (experimental)

  • Mish activation op
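For reference, the Mish activation is defined as mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x). A minimal Python sketch of the formula (illustrative only, not Tengine's operator code):

```python
import math

def mish(x):
    # mish(x) = x * tanh(softplus(x)), with softplus(x) = ln(1 + e^x)
    return x * math.tanh(math.log1p(math.exp(x)))

print(mish(1.0))  # ≈ 0.8651
```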

Performance

  • Improved OpenMP performance

lite-v0.1

3 years ago

Initial Tengine Lite release v0.1

v1.12.0

4 years ago

v1.9.0

4 years ago

v1.3.2

5 years ago

Split the CPU operator implementations and the framework into two shared libraries (.so). Added a serializer for TFLite and reference implementations of TFLite operators. Added RNN/GRU/LSTM reference implementations.

v1.0.0

5 years ago

Introduces the new API 2.0, along with several new features and bug fixes.

v0.8.0

5 years ago

The Android build can now run the ACL MSSD example with GPU acceleration. The Android build now links against c++_shared instead of gnustl_shared.

v0.7.2

5 years ago

Support for GPU FP16 (works only with ACL 18.05). Support for more TensorFlow and ONNX models.

v0.5.0

5 years ago

This is the first version, implementing many of the basic features of an inference engine.