A flexible framework of neural networks for deep learning
This is the release note of v6.6.0. See here for the complete list of solved issues and merged PRs.
- `max_pooling_2d` (#8329)
- `optimizer_hooks.GradientHardClipping` for scalar array (#8372)
- `F.negative_sampling` in fp32 for fp16 inputs (#8309)
- `optimizer_hooks.GradientHardClipping` for ChainerX (#8377, thanks @kshitij12345!)
- `/examples/seq2seq/README.md` (#8404, thanks @tanaken0515!)
- `type_check` errors (#8456)
- `LinkTestCase` for `L.GroupNormalization` (#8355)
- `CHAINER_CI` in Travis CI (#8373)
- `CHAINER_CI` in ChainerX tests in Jenkins (#8375)
- `CHAINER_CI` in Chainer tests in FlexCI (#8381)
- `FunctionTest` modified input error (#8388)
- `TestTriplet` (#8396)
- `fix_random` in xfail backward tests (#8457)
- `Convolution2D` tests for older numpy versions (#8478)
- `_modified_xlogx` (#8486)

This is the release note of v7.0.0rc1. See here for the complete list of solved issues and merged PRs.
This time, we will keep the current branches for active development (`master` for v7.x, `v6` for v6.x) after the RC. We will maintain the v6.x series until the Python 2 EOL, so we are not cutting a new development version for now, to avoid increasing the number of branches to maintain. New features will be included directly into v7 for a while, and maintenance changes will be backported to v6.
ONNX-Chainer, which used to be a separate project, has now been integrated into the Chainer repository and made more accessible to existing Chainer users (#8229). You can easily export a Chainer model in ONNX format like this:

```python
import onnx_chainer
onnx_chainer.export(chainer_model, pseudo_input, filename='model.onnx')
```

For a more detailed description of how to get started, please refer to the ONNX-Chainer section in the official documentation.
ChainerMN now works with ChainerX. In this release, the MNIST example has also been updated to demonstrate the usage. (#7844)
- `UpsamplingDeconvFilter` and `DownsamplingConvFilter` initializer (#5290, thanks @knorth55!)
- `chainerx.meshgrid` (#6668, thanks @kshitij12345!)
- `chainerx.hsplit` (#7030, thanks @ishanrai05!)
- `linalg.cholesky` to ChainerX (#7329, thanks @IvanYashchuk!)
- `linalg.eigh`, `linalg.eigvalsh` to ChainerX (#7503, thanks @IvanYashchuk!)
- `force_equal_length=False` (#8071)
- `RandomState` instance (#8081, thanks @mr4msm!)
- `chainerx.hinge` (#8168)
- `chainerx::SoftmaxCrossEntropy` and `chainerx.softmax_cross_entropy` (#8250)
- `chainermn.testing.to_device` function (#8279)
- `chainerx.copyto` (#8314, thanks @kshitij12345!)
- `TabularDataset.as_tuple/as_dict` to `TabularDataset.astuple/asdict` (#7788)
- `DeviceResident.to_gpu`/`to_cpu`/`to_intel64` (#8058)
- `generate_matrix` (#8167)
- `chainerx.take` (#8197)
- `*GradState` classes (#8224)
- `gradient_check` (#8236)
- `F.batch_normalization` (#8266)
- `device` argument from `chainerx.diag` and `chainerx.diagflat` (#8275)
- `gradient_check` (#8290)
- `output_grad` support on `fake_as_funcnode` (#8298)
- `F.negative_sampling` in fp32 for fp16 inputs (#8300)
- `mode` and `align_corners` arguments in `F.resize_image` keyword-only (#8009)
- `weights` and `keepdims` arguments in `Variable.mean` keyword-only (#8010)
- `WeightStandardization` keyword-only (#8011)
- `call_before_training` argument of `Trainer.extend` keyword-only (#8064)
- `ObservationAggregator` and `MultiNodeEarlyStoppingTrigger` keyword-only (#8065)
- `force_equal_length` argument in `scatter_dataset` and `scatter_index` keyword-only (#8066)
- `size` argument of `tabular.from_data` keyword-only (#8067)
- `chainerx::Take` faster (#8295)
- `F.batch_normalization` with mixed dtype (#8149)
- `__str__` of parameterized class (#8169)
- `x` and `gamma`/`beta` have different dtypes in `F.batch_normalization` (#8175)
- `copy` to `__deepcopy__` in ChainerMN `batch_normalization` and replace `to_gpu` (#8185)
- `Allocator` (#8215)
- `chainerx.ascontiguousarray` (#8262)
- `global_kernel_registry` (#8265)
- `gpu_id=0` in ChainerMN testing `get_device` (#8304)
- `setup.cfg` (#8180)
- `AveragePoolPadMode` enum (#8214)
- `setup.py` (#8218)
- `{Max,Average}PoolForwardBackward` (#8223)
- `readability-avoid-const-params-in-decls` (#8225)
- `gradient_check` (#8238)
- `F.softmax_cross_entropy` (#8253)
- `CreateSubgraph` (#8310)
- `resize_images` documentation to reflect recent code changes (#8221, thanks @zu3st!)
- `chainerx.ravel` (#8233)
- `chainerx.sigmoid_cross_entropy` (#8249)
- `libchainerx_base.a` to link ChainerX statically (#8247)
- `generate.py` in `examples/wavenet` (#8172, thanks @dhgrs!)
- `F.scale` test (#6969, thanks @ishanrai05!)
- `test_n_step_rnn` (#7483)
- `TestAccuracy`: randomly reduce testing parameters (#7820)
- `chx.linalg.solve` (#7997)
- `TestQR` (#8114)
- `pytest.skip()` in combination with `testing.repeat`/`retry` (#8174)
- `DummySerializer` and `DummyDeserializer` from `iterators_tests` (#8176)
- `BatchNormalization` backward test tolerances (#8189)
- `protobuf>=3.8` (#8190)
- `CHAINER_TEST_PAIRWISE_PARAMETERIZATION` and enable it only in Travis CI (#8211)
- `attrs` package version (#8219)
- `HDF5Serializer` test for h5py<2.9 (#8220)
- `TestBatchNormalization` (#8230)
- `"jenkins"` extras (#8241)
- `clang-format-6.0` if possible and track the version of `clang-format` (#8242)
- `DeprecationWarning` filter from `test_multi_node_chain_list` (#8246)
- `chainerx_tests/unit_tests/routines_tests/test_linalg.py::Inverse` (#8255)
- `TestHuberLoss` (#8271)
- `ImportWarning` just a warning in tests (#8291)
- `gtest` linkage (#8292, thanks @cloudhan!)
- `test_average` is slow in FlexCI (#8303)
- `test_mnist` in `chainermn_tests` (#8305)
- `communicator_test` for ChainerX+ChainerMN (#8313)
- `ImportWarning` ignore entry (#8186)
- `WIN32_LEAN_AND_MEAN` definition (#8205, thanks @cloudhan!)

This is the release note of v6.5.0. See here for the complete list of solved issues and merged PRs.
- `print_runtime_info` (#7860)
- `__str__` of parameterized class (#8184)
- `BatchNormalization` backward test tolerances (#8196)
- `L.BatchRenormalization` and adjust tolerances (#8200)
- `TestConvolution2DFunction::test_double_backward` fp16 tolerance (#8201)
- `attrs` version (#8222)
- `HDF5Serializer` test for h5py<2.9 (#8256)

This is the release note of v7.0.0b4. See here for the complete list of solved issues and merged PRs.
Many updates to ChainerX including new routines and support for loss scaling.
- `F.n_step_rnn` and `F.n_step_birnn` (#5808)
- `chainerx.vsplit` to ChainerX (#7032, thanks @ishanrai05!)
- `chainerx.linalg.qr` to ChainerX (#7379, thanks @IvanYashchuk!)
- `chainerx.accuracy` (#7526, thanks @aksub99!)
- `chainerx.{remainder/mod}` (#7675, thanks @sky58!)
- `F.zeta` (#8059, thanks @UmashankarTriforce!)
- `testing.generate_matrix` to get matrices of given singular values (#8077)
- `chainerx.fmod` (#8110)
- `chainerx.nonzero` (#8124)
- `chainerx::ArrayRepr` for large inputs (#7708)
- `FutureWarning` on GPU-to-GPU transfer in `StandardUpdater` (#7952)
- `typeid` of kernels in `libchainerx` (#7970)
- `variable.Parameter` objects (#8022)
- `ScanKernel` (#8103)
- `chainerx::Absolute` device implementation (#7319)
- `MultiprocessIterator` and `MultiprocessParallelUpdater` (#7511)
- `mixed16`/`float16` `GroupNormalization` (#7965)
- `chx::Device` object on `ndarray` pickling (#7988)
- `chainerx::Dot` edge cases with empty arrays (#8020)
- `AddAt` implementation for float16 arrays (#8055)
- `fill_value` in constant initializer (#8089)
- `ArrayReprImpl` (#7699)
- `F.batch_normalization` and ChainerMN backend implementations (#8039)
- `-Wabsolute-value` for clang (#8045)
- `NativeCumsumKernel` (#8053)
- `-Wbraced-scalar-init` for clang (#8076)
- `arithmetic.{h,cc}` (#8128)
- `backend.copyto` (#7832)
- `chainerx.to_numpy` (#7984)
- `chainerx.take` indices dtype (#7998)
- `CHAINERX_ENABLE_{BLAS,LAPACK}` (#8099)
- `chainerx.minimum` (#8146)
- `chainerx.maximum` doc (#8147)
- `cblas.h` and modified `CMakeLists.txt` (#8052, thanks @okdshin!)
- `CHAINERX_ENABLE_LAPACK=0` causes error (#8086, thanks @cloudhan!)
- `DeprecationWarning` in `test_manipulation.py` (#7824)
- `F.max_pooling_2d` test (#7924)
- `negative_sampling` (#7975)
- `F.lstm` test parameterization (#7987)
- `gradient_check` test (#7989)
- `TrueDiv` tolerances (#8047)
- `L.BatchRenormalization` and adjust tolerances (#8080)
- `h5py.File` `mode` (#8090)
- `np.empty` (#8096)
- `PseudoInverse` test (#8102)
- `test_normal.py` (#8111)
- `ignore::ImportWarning` to `setup.cfg` (#8131)
- `fix_random` decorator to be used with `OpTest` (#8136)
- `NStepRNN` and `NStepBiRNN` (#8142)
- `empty` in `F.cast` test that can cause overflow warning (#8152)
- `TestConvolution2DFunction::test_double_backward` fp16 tolerance (#8163)
- `setup.cfg` (#8154)

This is the release note of v6.4.0. See here for the complete list of solved issues and merged PRs.
- `GroupNormalization` (#8113)
- `MultiprocessIterator` and `MultiprocessParallelUpdater` (#8126)
- `deepcopy` for chain parameters (#8150)
- `backend.copyto` (#8056)
- `DecorrelatedBatchNormalizationTest` and add stable input (#7940)
- `F.batch_inv` test (#7981)
- `F.squared_error` test (#8012)
- `negative_sampling` (#8019)
- `gradient_check` test (#8021)
- `h5py.File` `mode` (#8107)
- `Contrastive.backward` (#8108)
- `test_normal.py` (#8117)
- `im2col` test (#8135)

This is the release note of v7.0.0b3. See here for the complete list of solved issues and merged PRs.
Due to the end-of-life (EOL) of Python 2 in January 2020, Python 2 support has been dropped in this release. Chainer v6.x continues to support Python 2. See the blog post for details.
`F.max_pooling_2d` refactoring: the implementation of `F.max_pooling_2d` has been merged into `F.max_pooling_nd`. The behavior is unchanged, so ordinary users should not be affected by this change. However, the `FunctionNode` class recorded in the computational graph for `F.max_pooling_2d` has changed from `MaxPooling2D` to `MaxPoolingND`. Code explicitly depending on this class will need a fix.
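To see why the 2-D case folds naturally into an N-D implementation, here is a minimal NumPy sketch of N-dimensional max pooling (stride equal to the window, no padding). This is an illustration only, not Chainer's actual `MaxPoolingND` code:

```python
import numpy as np


def max_pooling_nd(x, ksize):
    """Max pooling over the trailing spatial axes of ``x``.

    Illustrative sketch only (stride == ksize, no padding); each spatial
    axis is folded into (blocks, k) and reduced with max, so the 2-D
    case is just the N-D loop run twice.
    """
    offset = x.ndim - len(ksize)  # leading (e.g. batch) axes untouched
    for i, k in enumerate(ksize):
        axis = offset + i
        shape = x.shape
        new_shape = shape[:axis] + (shape[axis] // k, k) + shape[axis + 1:]
        x = x.reshape(new_shape).max(axis=axis + 1)
    return x


x = np.arange(16.0).reshape(4, 4)
print(max_pooling_nd(x, ksize=(2, 2)))  # 2-D pooling via the N-D loop
```

Applying it to a 4×4 `arange` with `ksize=(2, 2)` gives the familiar 2-D result `[[5, 7], [13, 15]]`.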
- `chainerx.repeat` (#7223, thanks @durswd!)
- `TabularDataset.slice` (#7251)
- `chainer.dataset.tabular.DelegateDataset` (#7276)
- `ObservationAggregator` extension to ChainerMN (#7302)
- `scatter_dataset` as well as `scatter_index` (#7327)
- `chainer.dataset.tabular.from_data` (#7361)
- `linalg.svd`, `linalg.pinv` to ChainerX (#7411, thanks @IvanYashchuk!)
- `TabularDataset.convert/with_converter` (#7428)
- `linalg.solve`, `linalg.inv` to ChainerX (#7474, thanks @IvanYashchuk!)
- `Converter` class (#7489)
- `chainerx.sigmoid_cross_entropy` (#7524, thanks @aksub99!)
- `chainerx.cumsum` (#7558, thanks @aksub99!)
- `chainerx.nansum` (#7719, thanks @aksub99!)
- `chainerx.nanargmax` and `chainerx.nanargmin` (#7755, thanks @aksub99!)
- `tri*` routines to ChainerX (#7791, thanks @IvanYashchuk!)
- `CommunicatorBase` class (#7814)
- `numerical_grad_dtype` to `FunctionTestCase` and `LinkTestCase` (#7817)
- `tabular.from_data` (#7847)
- `chainerx.count_nonzero` (#7852, thanks @aksub99!)
- `chainerx.flatten` (#7901, thanks @aksub99!)
- `chainerx.ravel` (#7904, thanks @aksub99!)
- `roi_{average|max}_{pooling|align}_2d.py` (#5636, thanks @knorth55!)
- `Link.to_gpu` unless compatible with `to_device` (#5762)
- `F.dropout` to use cuDNN by default (#7185, thanks @crcrpar!)
- `F.average` as accurate as backend (#7758)
- `PureNcclCommunicator` (#7793)
- `type_check` error message on evaluating bool expression (#7795)
- `type_check` (#7803)
- `chx.leaky_relu`/`elu` (#7816)
- `None` inputs to gradient check and generating `None` gradients in `FunctionTestCase` (#7831)
- `print_runtime_info` (#7833)
- `F.clip` for NumPy 1.17 (#7843)
- `rtol * abs(b)` in `allclose` output (#7848)
- `TypeError` in `max_pooling_2d` (#6835, thanks @ishanrai05!)
- `PureNcclCommunicator` (#7600)
- `create_mnbn_model()` bug (#7718)
- `optimizer_hooks.GradientHardClipping` for scalar array (#7760)
- `backends.copyto` from chainerx to non-chainerx (#7835)
- `split_axis` for intel64 when `grad_outputs` contains `None` (#7836)
- `CommunicatorBase` (#7888)
- `DeprecationWarning` to initializer of `BuildingBlock` (#7909)
- `Link.serialize` and `optimizers.Adam` (#7918)
- `F.max_pooling_2d` (#7922)
- `_fallback_workarounds` in `SpectralNormalization` (#7539)
- `links.rnn` and `functions.rnn` (#7725)
- `batched_copy` to all `Communicators` (#7761)
- `axis` (#7799)
- `linalg.svd` python bindings layer in ChainerX (#7866, thanks @IvanYashchuk!)
- `n_layer` with `n_layers` for consistency (#7871)
- `pooling_nd` functions (#7938)
- `F.max_pooling_2d` into `F.max_pooling_nd` (#7939)
- `static_graph` docs code examples (#7875)
- `scatter` to doc (#7897)
- `F.max_pooling_2d` test (#6836, thanks @ishanrai05!)
- `F.lstm` test (#7808, thanks @dido1998!)
- `F.slstm` test (#7805, thanks @dido1998!)
- `F.n_step_rnn` test (#7804, thanks @dido1998!)
- `F.n_step_lstm` test (#7807, thanks @dido1998!)
- `F.n_step_gru` test (#7806, thanks @dido1998!)
- `F.embed_id` test (#7903, thanks @dido1998!)
- `point_to_point` communications (#7637)
- `pseudo_connect` (#7638)
- `TestConv*TensorCore` (#7710)
- `chx.reshape` (#7762)
- `TestHuberLoss` (#7837)
- `F.average_pooling_2d` test (#7841)
- `F.clipped_relu` test for NumPy 1.17 (#7842)
- `test_accuracy.py` to the list of slow test files (#7851)
- `BatchNorm` flaky of ChainerX (#7857)
- `test_TrilTriu` (#7865)
- `chainerx.logsumexp` test tolerance (#7867)
- `F.tree_lstm` test for ChainerX (#7881, thanks @dido1998!)
- `ndarray.data` access and fix wrong test (#7890)
- `TrueDiv` test (#7917)
- `F.cast` from negative floating-point to unsigned (#7920)
- `L.CRF1d` test (#7926)
- `DecorrelatedBatchNormalizationTest` and add stable input (#7932)
- `chainerx.power` test (#7950)
- `TestContrastive` (#7953)
- `F.batch_inv` test (#7971)

This is the release note of v6.3.0. See here for the complete list of solved issues and merged PRs.
- `F.average` as accurate as backend (#7782)
- `type_check` error message on evaluating bool expression (#7801)
- `type_check` (#7810)
- `F.clip` for NumPy 1.17 (#7855)
- `Parameter.dtype` for uninitialized parameter (#7749)
- `UpdateRule.use_fp32_update` for uninitialized parameter (#7751)
- `PureNcclCommunicator` (#7787)
- `TypeError` in `max_pooling_2d` (#7789, thanks @ishanrai05!)
- `create_mnbn_model()` bug (#7846)
- `split_axis` for intel64 when `grad_outputs` contains `None` (#7931)
- `F.max_pooling_2d` (#7933)
- `backends.copyto` from/to chainerx (#7934)
- `Link.serialize` and `optimizers.Adam` (#7941)
- `static_graph` docs code examples (#7884)
- `chx.reshape` (#7792)
- `test_communicator` (#7822)
- `F.clipped_relu` test for NumPy 1.17 (#7854)
- `TestHuberLoss` (#7869)
- `F.average_pooling_2d` test (#7870)
- `chainerx.logsumexp` test tolerance (#7889)
- `ndarray.data` access and fix wrong test (#7913)
- `F.cast` from negative floating-point to unsigned (#7944)
- `TestContrastive` (#7959)
- `TrueDiv` test (#7972)
- `L.CRF1d` test (#7977)

This is the release note of v7.0.0b2. See here for the complete list of solved issues and merged PRs.
ChainerX has several new backproppable ops, such as the ELU and softplus activation functions, and loss functions including absolute error, squared error, Huber loss, and Gaussian KL divergence. ChainerX is now also supported in all `OptimizerHook`s when used through Chainer. `TabularDataset` has also been improved with new features.
- The `Variable.grad` getter now raises an error when it is called before calling `cleargrad`, `zerograd`, or setting the gradient directly. (#7146)
- `BatchRenormalization` (usage of epsilon) is fixed. This affects inference behavior. (#7202)
- `HierarchicalCommunicator`, `SingleNodeCommunicator`, and `TwoDimensionalCommunicator` are no longer necessary, as NCCL now supports inter-node communication. (#7697)
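The new `Variable.grad` contract can be pictured with a small self-contained sketch. This is plain illustrative Python, not Chainer's implementation; the actual error type and message in Chainer may differ:

```python
# Sketch of the v7 behavior: reading the gradient before
# cleargrad()/zerograd() or an explicit assignment raises instead of
# silently returning None.
class Variable:
    def __init__(self):
        self._grad = None
        self._grad_valid = False  # True once the gradient is initialized

    def cleargrad(self):
        self._grad = None
        self._grad_valid = True

    @property
    def grad(self):
        if not self._grad_valid:
            raise RuntimeError('grad is not set; call cleargrad() first')
        return self._grad

    @grad.setter
    def grad(self, g):
        self._grad = g
        self._grad_valid = True


v = Variable()
try:
    v.grad  # raises under the new contract
except RuntimeError:
    print('unset')
v.cleargrad()
print(v.grad is None)  # True: reading is fine once initialized
```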
- `WeightStandardization` link hook (#6678, thanks @hitsgub!)
- `chainerx.dsplit` (#7031, thanks @ishanrai05!)
- `chainerx.left_shift` and `chainerx.right_shift` (#7339, thanks @sky58!)
- `chainerx.elu` (#7439, thanks @aksub99!)
- `TabularDataset` (#7493)
- `TabularDataset.__iter__` (#7601)
- `Variable.mean` (#7670)
- `chainerx.softplus` (#7679, thanks @aksub99!)
- `top_data` as `-np.inf` and `argmax_data` as `-1` in `F.roi_max_pooling_2d` (#6237, thanks @knorth55!)
- `cleargrad` (#7146)
- `chainerx.grad` from `chainer.grad` (#7464)
- `ImportError` (#7518)
- `device` argument a keyword-only argument (#7537, thanks @kshitij12345!)
- `Array::At` and `__getitem__` (#7561)
- `chainerx.ndarray._is_chained` (#7565)
- `squared_difference` and fix docs (#7582)
- `allreduce_grad()` and functions related with it (#7604)
- `IndexError` if the index `__getitem__` takes is out of bounds (#7614)
- `six.integer_types` for axis check in `F.concat` (#7632, thanks @knorth55!)
- `optimizer_hooks.GradientClipping` for ChainerX (#7641)
- `optimizer_hooks.GradientHardClipping` for ChainerX (#7656, thanks @kshitij12345!)
- `IntervalTrigger.__str__` (#7664, thanks @ktns!)
- `GradientLARS` optimizer hook working with ChainerX (#7669)
- `absl::Span` and related helpers instead of `gsl::span` (#7671)
- `six.integer_types` for axis checks (#7713)
- `CHAINERX_BUILD_CUDA` is set (#7752)
- `None` array in `FunctionNode` NaN check (#6283)
- `CupyMemoryProfiler` (#7003)
- `running_var` of `F.batch_renormalization` (#7202)
- `MultiprocessIterator` (#7486)
- `initializers.Identity` for ideep backend (#7548)
- `chainermn.links.create_mnbn_model` (#7603)
- `PickleDataset` crash when using multiprocessing (#7625, thanks @zaltoprofen!)
- `AMSGrad` with intel64 backend (#7661)
- `chainer.grad` for multiple devices (#7692)
- `chainerx::Flip` (#7727)
- `Parameter.dtype` for uninitialized parameter (#7735)
- `UpdateRule.use_fp32_update` for uninitialized parameter (#7736)
- `backend.get_array_module` not `cuda.get_array_module` (#7514, thanks @crcrpar!)
- `squared_difference` alias of `squared_error` (#7547)
- `Optimizer` and `GradientMethod` (#7585)
- `chainerx.clipped_relu` in `F.clipped_relu` (#7588)
- `CMakeList.txt` (#7647)
- `Link`s (#6512)
- `CHAINERX_CUDNN_USE_CUPY` (#7574)
- `ResNet` prepare method (#7577)
- `BackwardContext` comment (#7595, thanks @crcrpar!)
- `expand_dims.py` (#7602)
- `FunctionNode` docs (#7622)
- `chainer/functions/math/average.py` (#7653, thanks @ktns!)
- `F.squeeze` documentation (#7682)
- `examples/vae/train_vae.py` (#7578, thanks @m4saka!)
- `F.polygamma` test (#6970, thanks @ishanrai05!)
- `F.cast` test (#7034)
- `y_shape` not used in tests (#7610)
- `optimizer_hooks.Lasso` for ChainerX (#7657, thanks @kshitij12345!)
- `GroupNormalization` tests (#7684)
- `optimizer_hooks.GradientNoise` for ChainerX (#7709, thanks @kshitij12345!)
- `protobuf` (#7715)
- `optimizer_hooks.WeightDecay` for ChainerX (#7716, thanks @kshitij12345!)
- `atol`/`rtol` of `chainerx.erf` float16 test (#7721)
- `TestHuberLoss` (#7723)
- `Contrastive.backward` (#7745)
- `TestContrastive` (#7747)
- `third-party.cmake` (#7643)

This is the release note of v6.2.0. See here for the complete list of solved issues and merged PRs.
- `six.integer_types` for axis check in `F.concat` (#7712, thanks @knorth55!)
- `six.integer_types` for axis checks (#7770)
- `chainermn.links.create_mnbn_model` (#7618)
- `CupyMemoryProfiler` (#7639)
- `None` array in `FunctionNode` NaN check (#7642)
- `AMSGrad` with intel64 backend (#7689)
- `PickleDataset` crash when using multiprocessing (#7729, thanks @zaltoprofen!)
- `MultiprocessIterator` (#7742)
- `chainer.grad` for multiple devices (#7746)
- `backend.get_array_module` not `cuda.get_array_module` (#7619, thanks @crcrpar!)
- `Optimizer` and `GradientMethod` (#7644)
- `chainer.get_device` to doc (#6831)
- `shape` in `generate_array` (#7576)
- `expand_dims.py` (#7608)
- `Link`s (#7628)
- `BackwardContext` comment (#7636, thanks @crcrpar!)
- `FunctionNode` docs (#7659)
- `F.squeeze` documentation (#7688)
- `examples/vae/train_vae.py` (#7580, thanks @m4saka!)
- `y_shape` not used in tests (#7612)
- `GroupNormalization` tests (#7700)
- `TestContrastive` (#7765)

This is the release note of v7.0.0b1. See here for the complete list of solved issues and merged PRs.
- `Power` for ChainerX (#6496, thanks @dido1998!)
- `chainerx.hstack`, `chainerx.vstack` and `chainerx.atleast_2d` (#6886, thanks @kshitij12345!)
- `TabularDataset` (#7115)
- `TabularDataset.concat/join` (#7116)
- `chainerx.expm1` and `chainerx.exp2` (#7126, thanks @aksub99!)
- `chainerx.log2` (#7139)
- `TabularDataset.{transform/transform_batch}` (#7150)
- `chainerx.log1p` (#7161, thanks @sky58!)
- `chainerx::AsContiguous` as a public C++ API (#7166)
- `chainerx` import in debug mode (#7178)
- `chainer.as_array` for consistency with `chainer.as_variable` (#7252, thanks @tkerola!)
- `chainerx.moveaxis` (#7265, thanks @kshitij12345!)
- `chainerx.leaky_relu` (#7351, thanks @aksub99!)
- `chainerx.dstack` and `chainerx.atleast_3d` (#7353, thanks @kshitij12345!)
- `__abs__` with `chainerx.ndarray` (#7364)
- `chainerx.erf` (#7404, thanks @aksub99!)
- `align_corners` option to `resize_images` (#7429)
- `resize_images` (#7443)
- `input_device` to `StandardUpdater` (#7472)
- `is_array_supported` method on `backend.Device` (#7487)
- `roi_max_align_2d` and `roi_average_align_2d` (#6405, thanks @knorth55!)
- `MPI_Status` (#6696, thanks @y1r!)
- `F.copy` (#6982)
- `F.batch_renormalization`, and related fixes (#7104)
- `Variable.addgrad` (#7132)
- `cuda.DummyDevice` inheritance (#7147)
- `Device.name` property (#7149)
- `Link.serialize` to support ChainerX (#7175)
- `Variable.backward` (#7196)
- `require_grad()` on ChainerX `Variable.grad` setter (#7198)
- `FunctionNode.unchain` and raise error in ChainerX fallback mode (#7216)
- `Variable.copydata` (#7226)
- `MultiprocessParallelUpdater` to support new devices (#7245)
- `StackVector<int64_t, kMaxNdim>` to `Dims` (#7258)
- `chainerx::{Max,Min}imum` (#7261)
- `chx.backward` not cause error even if backprop is not required (#7287)
- `None` arguments in `chainerx.clip` and `chainerx.ndarray.clip` (#7296)
- `chainerx::Where` (#7325)
- `F.clip` function with `None` parameter to `min`/`max` (#7333)
- `Array::ToNative()` (#7394)
- `Variable` (#7400)
- `get_device` error message when ChainerX is not available (#7401)
- `get_device` to raise more correct error types (#7421)
- `EXPECT_ARRAY_*` macros able to be used outside ChainerX (#7434)
- `F.convolution_2d` (#7448)
- `F.deconvolution_2d` (#7449)
- `F.copy` between non-ChainerX and ChainerX devices only if backprop is not required (#7473)
- `FunctionNode` ChainerX fallback: reuse `ChainerxDevice` taken from inputs to create outputs (#7397)
- `F.where` (#6872)
- `Bernoulli.log_prob` (#7064, thanks @seiyab!)
- `MultiNodeBatchNormalization` (#7106)
- `MultiNodeChainList` should not assume float32 (#7165)
- `L.Linear` when called with `n_batch_axes` (#7167)
- `L.BatchRenormalization` (#7256)
- `F.absolute_error` for ChainerX (#7281, thanks @crcrpar!)
- `_values_to_dicts` so it works with unicode of Python 2 too (#7316)
- `chainerx.square` (#7321)
- `WeightDecay` aware of loss scale (#7491)
- `GradientMethod` ChainerX fallback for uninitialized parameters (#7492)
- `cuda.DummyDevice` and `cuda.get_device_from_array` (#7148)
- `math.cc` (#7171)
- `logic.cc` (#7176)
- `testing.backend.BackendConfig` (#7212)
- `math.cc` (#7222)
- `xp` when possible (#7234)
- `AMax` and `AMin` to statistics routines (#7269)
- `math.cc` (#7270)
- `_` for private classes under `chainer.dataset.tabular` (#7275)
- `math.cc` (#7298)
- `math.cc` (#7317)
- `FindCuDNN.cmake` (#7419)
- `const&` (#7453)
- `cuda_fp16.h` instead of `cuda_fp16.hpp` (#7480)
- `math.h` (#7501)
- `AsTypeKernel` (#7522, thanks @kshitij12345!)
- `F.normalize` documentation (#7062, thanks @crcrpar!)
- `F.copy` view behavior (#7135)
- `backend.get_device_from_array` (#7163)
- `chainerx.md` (#7179)
- `optimizers.MSVAG` to documentation (#7183)
- `F.relu` in doc (#7188)
- `CommunicatorBase.allgather` (#7192)
- `chainer.utils.type_check` (#7249, thanks @ktns!)
- `observe_value` and `observe_lr` trigger interval (#7266)
- `robots.txt` to allow indexing root (#7306)
- `F.normalize` documentation (#7371, thanks @crcrpar!)
- `static_graph.rst` (#7389)
- `test_iter.epoch` manually in the tutorial of training loop (#7405)
- `shape` in `generate_array` (#7450)
- `tabular_dataset.py` (#7495, thanks @nai62!)
- `CUDNN_LIBNAME` to be specified by environment variable (#7243)
- `$MAKEFLAGS` instead if set in Travis CI script (#7331)
- `FindCuDNN.cmake`, prioritize explicit variables over environment variables (#7441)
- `typing == 3.6.6` (#7562)
- `typing` requirements (#7564)
- `predict.py` (#7206)
- `PlotReport.available()` check in glance example (#7209)
- `reset` method in the PTB example (#7533)
- `F.tensordot` test (#6968, thanks @ishanrai05!)
- `F.cumprod` test (#6978, thanks @hikjik!)
- `F.average` test (#6995, thanks @hikjik!)
- `test_cuda.py` to `backends_tests` (#7144)
- `chainerx.swapaxes` test (#7184, thanks @kshitij12345!)
- `Variable.grad` and `Variable.grad_var` tests (#7191)
- `Variable.zerograd` test (#7199)
- `chainerx.conv` and `chainerx.conv_transpose` (#7203)
- `TestTanh` from `test_math.py` to `test_trigonometric_hyperbolic.py` (#7207)
- `Variable.copydata` test (#7224)
- `CUDA_VISIBLE_DEVICES` in ChainerX tests (#7290)
- `chainer.as_array` test (#7318)
- `StandardUpdater` tests with pytest style assertion (#7326)
- `0` to `0.0` for Python 2 (#7373)
- `dstack` to `invalid_shape` test (#7457, thanks @kshitij12345!)
- `pytest.mark.xfail` instead of `unittest.expectedFailure` (#7488)