Deep Learning and Reinforcement Learning Library for Scientists and Engineers
Dear all,
It is our great honour to pre-release TensorLayer 3.0.0-alpha. It supports TensorFlow and MindSpore backends, as well as some PaddlePaddle operator backends, allowing users to run the same code on different hardware such as Nvidia GPUs and Huawei Ascend.
In the next step, we will support TensorFlow, MindSpore, PaddlePaddle, and PyTorch backends. Feel free to use it and make suggestions.
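As an illustration of how multi-backend use might look, here is a minimal sketch of selecting the backend before importing the library. The `TL_BACKEND` environment variable is an assumption here; check the 3.0 documentation for the exact mechanism.

```python
import os

# Assumed mechanism: the backend is chosen via a TL_BACKEND environment
# variable, set before the library is imported. Verify against the 3.0 docs.
os.environ["TL_BACKEND"] = "tensorflow"  # or "mindspore", "paddle"

# import tensorlayer as tl  # layers would now run on the selected backend
print(os.environ["TL_BACKEND"])
```

The same model code would then run unchanged on Nvidia GPUs or Huawei Ascend, with the backend doing the hardware-specific work.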
TensorLayer 3.0.0-alpha is a maintenance release.
TensorLayer 2.2.3 is a maintenance release. It contains numerous bug fixes.
TensorLayer 2.2.0 is a maintenance release. It contains numerous API improvements and bug fixes. This release is compatible with TensorFlow 2 RC1.
- `SpatialTransform2dAffine` auto `in_channels`
- `tf.models.Model._construct_graph` for list of outputs, e.g. STN case (PR #1010)
- `in_channels` exception raise (PR #1015)
- `private_method` decorator (PR #1025)
- `trainable_weights` and `nontrainable_weights` when initializing `ModelLayer` (PR #1026)
- `trainable_weights` and `nontrainable_weights` when initializing `LayerList` (PR #1029)
- `model.all_layers` (PR #1029)
- `tf.image.resize_image_with_crop_or_pad` with `tf.image.resize_with_crop_or_pad` (PR #1032)
- `ResNet50` static model (PR #1041)

Dear all,
Three things need to be mentioned for this release.
`model.conf` is almost stable; the AIoT team from Sipeed is now working hard to support TL models on AI chips. Enjoy!
TensorLayer Team
Hello, we want to share some GOOD NEWS. Today, AI chips are everywhere, from our phones to our cars; however, it is still hard for us to have our own AI chip. To that end, the TensorLayer team has started to work on AIoT and will soon support running TensorLayer models on low-cost AI chips (e.g., K210) and microcontrollers (e.g., STM32). Details in the following:
If you are interested in AIoT, feel free to discuss it with us on Slack.
TensorLayer, Sipeed, NNoM teams
Maintenance release; updating is recommended.
Maintenance release; updating is recommended.
- `tl.layers.initialize_global_variables(sess)` (PR #931)
- `trainable_weights` (PR #966)
- `InstanceNorm`, `InstanceNorm1d`, `InstanceNorm2d`, `InstanceNorm3d` (PR #963)
- `tl.layers.initialize_global_variables(sess)` (PR #931)
- `tl.layers.core`, `tl.models.core` (PR #966)
- `weights` into `all_weights`, `trainable_weights`, `nontrainable_weights`
- `BatchNorm`, keep dimensions of mean and variance to suit channels first (PR #963)

Dear all,
It is our great honour to release TensorLayer 2.0.0. In the past few months, we have refactored all layers to support TensorFlow 2.0.0-alpha0 and the dynamic mode! The new API designs allow you to customize layers easily, compared with other libraries.
We would like to thank all contributors, especially our core members from Peking University and Imperial College London: @zsdonghao, @JingqingZ, @ChrisWu1997, and @warshallrho. All contributions are listed in the following.
In the next step, we are interested in supporting more advanced features for 3D Vision, such as PointCNN and GraphCNN. Also, we still have some remaining examples that need to be updated, such as A3C and distributed training. If you are interested in joining the development team, feel free to contact us: [email protected]
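The "customize layers easily" claim above refers to the subclassing pattern of the 2.0 API. The `build`/`forward` method names below mirror TL 2.0's custom-layer interface, but the tiny base class is an illustrative, library-free stand-in, not the real API:

```python
import numpy as np

class Layer:
    """Illustrative stand-in for a TL2-style layer base class (not the real API)."""
    def __init__(self, name=None):
        self.name = name
        self.trainable_weights = []

    def __call__(self, inputs):
        if not self.trainable_weights:
            self.build(inputs.shape)  # lazy weight creation on first call
        return self.forward(inputs)

class Dense(Layer):
    """A custom layer: define build() for weights, forward() for computation."""
    def __init__(self, n_units):
        super().__init__(name="dense")
        self.n_units = n_units

    def build(self, inputs_shape):
        in_dim = inputs_shape[-1]
        self.W = np.random.randn(in_dim, self.n_units) * 0.01
        self.b = np.zeros(self.n_units)
        self.trainable_weights = [self.W, self.b]

    def forward(self, inputs):
        return inputs @ self.W + self.b

x = np.ones((4, 8))
y = Dense(16)(x)
print(y.shape)  # (4, 16)
```

Because the layer is a plain class with an explicit `forward`, it works naturally in the dynamic (eager) mode mentioned above.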
Enjoy coding!
TensorLayer Team
All contributions can be found as follows:
- `dilation_rate` instead. (🀄️ remember to change CN docs)
- `tutorial_mnist_simple.py` @ChrisWu1997 2019/04/17
- Some testing codes can be removed.
- All save/load methods are also wrapped as class methods in the model core.