The `tf.keras.optimizers.Optimizer` base class now points to the new Keras optimizer, while the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace.

If you find your workflow failing due to this change, you may be facing one of the following issues:

- Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which can break checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
- TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, does not support TF1 any more, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
- API changes. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
- Learning-rate access. When using a `tf.keras.optimizers.schedules.LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
- Custom optimizers. If you implemented a custom optimizer based on the old optimizer, please make it subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
- Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.

The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on the new `tf.keras.optimizers.Optimizer` base class.
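The multi-stage update case above can be sketched as follows; the model and shapes are illustrative, not from the release notes:

```python
import tensorflow as tf

# Illustrative two-layer model that might be updated in separate stages.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.build(input_shape=(None, 8))

optimizer = tf.keras.optimizers.Adam()

# Create all optimizer variables up front; without this, updating only a
# subset of the model's variables in a later stage can fail with
# "Cannot recognize variable...".
optimizer.build(model.trainable_variables)
```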
The `tensorflow/python/keras` code is a legacy copy of Keras kept since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any imports of `tensorflow.python.keras` and use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`.
`tf.lite`:
- Added support for `tf.math.unsorted_segment_sum`, `tf.atan2`, and `tf.sign`.
- `tfl.mul` now supports complex32 inputs.

`tf.experimental.StructuredTensor`:
- Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.

`tf.keras`:
- Added a `get_metrics_result()` method to `tf.keras.models.Model`.
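A small sketch of `get_metrics_result()`; the model and the random data here are purely illustrative:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="sgd", loss="mse", metrics=["mae"])

x = np.random.rand(8, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.train_on_batch(x, y)

# Returns the current value of the model's metrics as a dict,
# without re-running evaluation.
metrics = model.get_metrics_result()
print(sorted(metrics.keys()))  # e.g. ['loss', 'mae']
```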
- Added the group normalization layer `tf.keras.layers.GroupNormalization`.
- Keras optimizers now support weight decay via the `weight_decay` argument.
- Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
- Added `warmstart_embedding_matrix` to `tf.keras.utils`.
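A quick sketch of the new group normalization layer; the group count and shapes are illustrative:

```python
import tensorflow as tf

# Normalize groups of channels: 8 channels split into 4 groups of 2.
layer = tf.keras.layers.GroupNormalization(groups=4)

x = tf.random.normal([2, 10, 8])  # (batch, steps, channels)
y = layer(x)
print(y.shape)  # shape is preserved: (2, 10, 8)
```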
`tf.Variable`:
- Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
- Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`. When it is `False`, the variable won't be lifted out of `tf.function`; thus it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `@tf.function(jit_compile=False)`).

TF SavedModel:
- Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.

TF pip:
- Installing `tensorflow` or `tensorflow-cpu` on Windows now installs Intel's `tensorflow-intel` package. These packages are provided on an as-is basis. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow with GPU support on Windows, you will need to install TensorFlow in WSL2.

`tf.image`:
- Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.

TF Core:
- `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
- `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.

`tf.SparseTensor`:
- Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
This release introduces several vulnerability fixes:

- Fixes an issue in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an issue in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes an issue in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes an issue in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an issue in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an issue in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an issue in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes an issue in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes an issue in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an issue in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes an issue in `MirrorPadGrad` (CVE-2022-41895)
- Fixes an issue in `Mfcc` (CVE-2022-41896)
- Fixes an issue in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes an issue in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` fail in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an issue in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes an issue in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an issue in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)

This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
- Causal attention in `keras.layers.Attention` and `keras.layers.AdditiveAttention` is now specified in the `call()` method via the `use_causal_mask` argument (rather than in the constructor), for consistency with other layers.
- Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in Release 2.11.
- `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g., `tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the API doc to see whether any API used in your workflow has changed or been deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
- RNG behavior change for `tf.keras.initializers`: Keras initializers now use stateless random ops to generate random numbers. If no seed is specified (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
- `tensorflow::Code` and `tensorflow::Status` will become aliases of, respectively, `absl::StatusCode` and `absl::Status` in some future release.
  - Use `tensorflow::OkStatus()` instead of `tensorflow::Status::OK()`.
  - Stop constructing `Status` objects from `tensorflow::error::Code`.
  - Avoid depending on `tensorflow::errors::Code` fields (accessing `tensorflow::error::Code` fields is fine):
    - Use the helpers, e.g. `tensorflow::errors::InvalidArgument`, to create a status with a given error code without accessing it.
    - Use the helpers, e.g. `tensorflow::errors::IsInvalidArgument`, if needed.
    - Use `static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)` or `static_cast<int>(code)` for comparisons.
- `tensorflow::StatusOr` will also become an alias of `absl::StatusOr` in the future, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.
`tf.keras`:
- The `EinsumDense` layer has moved from experimental to core. Its import path has moved from `tf.keras.layers.experimental.EinsumDense` to `tf.keras.layers.EinsumDense`.
- Added the `tf.keras.utils.audio_dataset_from_directory` utility to easily generate audio classification datasets from directories of `.wav` files.
- Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`, `tf.keras.utils.text_dataset_from_directory`, and `audio_dataset_from_directory`, to be used with the `validation_split` argument, for returning both dataset splits at once, as a tuple.
- Added the `tf.keras.utils.split_dataset` utility to split a `Dataset` object or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
- Added step granularity to the `BackupAndRestore` callback for handling distributed training failures & restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
- Added `tf.keras.dtensor.experimental.optimizers.AdamW`. This optimizer is similar to the existing `keras.optimizers.experimental.AdamW`, and works in the DTensor training use case.
- Improved masking support for `tf.keras.layers.MultiHeadAttention`:
  - Implicit masks for `query`, `key` and `value` inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any `attention_mask` passed in directly when calling the layer. This can be used with `tf.keras.layers.Embedding` with `mask_zero=True` to automatically infer a correct padding mask.
  - Added a `use_causal_mask` call-time argument to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
- Added an `ignore_class` argument in the loss `SparseCategoricalCrossentropy` and metrics `IoU` and `MeanIoU`, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
- Added `tf.keras.models.experimental.SharpnessAwareMinimization`. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
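The `use_causal_mask` behavior described above can be sketched as follows; the head count and shapes are illustrative:

```python
import tensorflow as tf

mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=4)
x = tf.random.normal([1, 5, 8])  # (batch, seq_len, features)

# Causal self-attention: each position can only attend to itself and
# earlier positions; the upper triangle of the score matrix is masked.
out, scores = mha(x, x, return_attention_scores=True,
                  use_causal_mask=True)

print(out.shape)     # (1, 5, 8)
print(scores.shape)  # (1, 2, 5, 5): (batch, heads, query, key)
```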
:
dataset_id
to tf.data.experimental.service.register_dataset
. If provided, tf.data
service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call register_dataset
with the same dataset_id
.inject_prefetch
- Added a new optimization option, `inject_prefetch`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, `tf.data` will automatically add a `prefetch` transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption, and may cause a small increase in memory usage due to buffering. To enable this behavior, set `inject_prefetch=True` in `tf.data.experimental.OptimizationOptions`.
- Added a new value, `STAGE_BASED`, to `tf.data.Options.autotune.autotune_algorithm`. If the autotune algorithm is set to `STAGE_BASED`, it runs a new algorithm that can achieve the same performance with lower CPU/memory usage.
- Added `tf.data.experimental.from_list`, a new API for creating `Dataset`s from lists of elements.
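What a `prefetch` transformation buys can be illustrated with a minimal version built from the standard library (a sketch of the idea, not tf.data's implementation): a background thread fills a bounded buffer so data generation runs ahead of consumption.

```python
import queue
import threading

def prefetched(gen, buffer_size=2):
    """Sketch of a prefetch transformation: a background producer
    thread fills a bounded queue while the consumer iterates, so
    generation overlaps with consumption at the cost of a small
    buffer of in-flight elements."""
    q = queue.Queue(maxsize=buffer_size)
    _END = object()  # sentinel marking exhaustion of the source

    def producer():
        for item in gen:
            q.put(item)   # blocks when the buffer is full
        q.put(_END)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is _END:
            return
        yield item

items = list(prefetched(iter(range(5))))
print(items)
```

The bounded queue is the source of the "small increase in memory usage due to buffering" noted above.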
`tf.distribute`:
- Added `tf.distribute.experimental.PreemptionCheckpointHandler` to handle worker preemption/maintenance and cluster-wise consistent error reporting for `tf.distribute.MultiWorkerMirroredStrategy`. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
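The save/exit/restore cycle can be sketched as a toy training loop. The file layout and function below are hypothetical; the real handler coordinates checkpoints consistently across all workers.

```python
import json
import os
import tempfile

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")

def train(total_steps, preempt_at=None):
    """Sketch: resume from a checkpoint if one exists; on a simulated
    preemption notice, save state and exit cleanly instead of raising."""
    state = {"step": 0}
    if os.path.exists(ckpt):
        with open(ckpt) as f:          # restore saved progress
            state = json.load(f)
    while state["step"] < total_steps:
        state["step"] += 1             # one training step
        if preempt_at is not None and state["step"] == preempt_at:
            with open(ckpt, "w") as f: # checkpoint, then exit cleanly
                json.dump(state, f)
            return "preempted"
    return "done"

first = train(10, preempt_at=4)   # interrupted at step 4
second = train(10)                # restart resumes from step 4
print(first, second)
```

The second run picks up at the exact step where the first run was interrupted, which is the behavior the handler provides for real training jobs.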
`tf.math`:
- Added `tf.math.approx_max_k` and `tf.math.approx_min_k`, which are optimized alternatives to `tf.math.top_k` on TPU. The performance difference ranges from 8 to 100 times, depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
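For reference, the exact operation being approximated returns the k largest values and their indices; a NumPy sketch of that baseline (illustrative only, not TF's kernel):

```python
import numpy as np

def top_k(x, k):
    """Exact top-k over a 1-D array: the k largest values and their
    indices, in descending order. approx_max_k trades a small amount
    of recall for speed on TPU relative to this exact result."""
    idx = np.argsort(x)[::-1][:k]   # indices sorted by descending value
    return x[idx], idx

vals, idx = top_k(np.array([1.0, 9.0, 3.0, 7.0]), k=2)
print(vals.tolist(), idx.tolist())
```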
`tf.train`:
- Added `tf.train.TrackableView`, which allows users to inspect the TensorFlow Trackable object (e.g. `tf.Module`, Keras Layers and models).
`tf.vectorized_map`:
- Added an optional parameter called `warn`. This parameter controls whether or not warnings will be printed when operations in the provided `fn` fall back to a while loop.

XLA:
CPU performance optimizations:
- Renamed `auto_mixed_precision_mkl` to `auto_mixed_precision_onednn_bfloat16`. See example usage here.
- oneDNN optimizations are available in the official x86 packages (`pip install tensorflow`). Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable the optimizations. Setting the variable to `0` or unsetting it will disable the optimizations.
- New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration` to control the order of logical devices (GPU only).
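Enabling the oneDNN optimizations is just a matter of setting the environment variable before launching the process; `train.py` below is a placeholder for your own training script.

```shell
# Opt in to oneDNN optimizations (they are off when the variable
# is unset or set to 0):
export TF_ENABLE_ONEDNN_OPTS=1
# then launch your training script, e.g.:  python train.py
echo "TF_ENABLE_ONEDNN_OPTS=$TF_ENABLE_ONEDNN_OPTS"
```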
`tf.keras`:

- Updated the `tf.keras.callbacks.TensorBoard` callback so that summaries logged automatically for model weights now include either a `/histogram` or `/image` suffix in their tag names, in order to prevent tag name collisions across summary types.
- When running on GPU (with cuDNN version 7.6.3 or later), `tf.nn.depthwise_conv2d` backprop to `filter` (and therefore also `tf.keras.layers.DepthwiseConv2D`) now operates deterministically (and `tf.errors.UnimplementedError` is no longer thrown) when op-determinism has been enabled via `tf.config.experimental.enable_op_determinism`. This closes issue 47174.
`tf.random`:

- Added `tf.random.experimental.stateless_shuffle`, a stateless version of `tf.random.shuffle`.
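The "stateless" contract — the permutation depends only on the inputs and the seed, never on hidden global RNG state — can be sketched in plain Python. This is illustrative only; TF's kernel uses its own stateless RNG, and the function name here is made up.

```python
import random

def stateless_shuffle(items, seed):
    """Sketch of stateless shuffling: use a local RNG seeded from the
    argument, so the same seed always yields the same permutation and
    the global random state is never touched."""
    out = list(items)
    random.Random(seed).shuffle(out)  # local RNG, no global state
    return out

a = stateless_shuffle(range(5), seed=42)
b = stateless_shuffle(range(5), seed=42)
print(a == b)  # same seed -> identical permutation
```

A stateful shuffle, by contrast, would give a different result on each call because it advances shared RNG state.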
failure in tf.reshape caused by overflows (CVE-2022-35934)CHECK
failure in SobolSample
caused by missing validation (CVE-2022-35935)Gather_nd
op in TF Lite (CVE-2022-35937)CHECK
failure in TensorListReserve
caused by missing validation (CVE-2022-35960)Scatter_nd
op in TF Lite (CVE-2022-35939)RaggedRangeOp
(CVE-2022-35940)CHECK
failure in AvgPoolOp
(CVE-2022-35941)CHECK
failures in UnbatchGradOp
(CVE-2022-35952)CHECK
failures in AvgPool3DGrad
(CVE-2022-35959)CHECK
failures in FractionalAvgPoolGrad
(CVE-2022-35963)BlockLSTMGradV2
(CVE-2022-35964)LowerBound
and UpperBound
(CVE-2022-35965)QuantizedAvgPool
(CVE-2022-35966)QuantizedAdd
(CVE-2022-35967)CHECK
fail in AvgPoolGrad
(CVE-2022-35968)CHECK
fail in Conv2DBackpropInput
(CVE-2022-35969)QuantizedInstanceNorm
(CVE-2022-35970)CHECK
fail in FakeQuantWithMinMaxVars
(CVE-2022-35971)Requantize
(CVE-2022-36017)QuantizedBiasAdd
(CVE-2022-35972)CHECK
fail in FakeQuantWithMinMaxVarsPerChannel
(CVE-2022-36019)QuantizedMatMul
(CVE-2022-35973)QuantizeDownAndShrinkRange
(CVE-2022-35974)QuantizedRelu
and QuantizedRelu6
(CVE-2022-35979)CHECK
fail in FractionalMaxPoolGrad
(CVE-2022-35981)CHECK
fail in RaggedTensorToVariant
(CVE-2022-36018)CHECK
fail in QuantizeAndDequantizeV3
(CVE-2022-36026)SparseBincount
(CVE-2022-35982)CHECK
fail in Save
and SaveSlices
(CVE-2022-35983)CHECK
fail in ParameterizedTruncatedNormal
(CVE-2022-35984)CHECK
fail in LRNGrad
(CVE-2022-35985)RaggedBincount
(CVE-2022-35986)CHECK
fail in DenseBincount
(CVE-2022-35987)CHECK
fail in tf.linalg.matrix_rank
(CVE-2022-35988)CHECK
fail in MaxPool
(CVE-2022-35989)CHECK
fail in Conv2DBackpropInput
(CVE-2022-35999)CHECK
fail in EmptyTensorList
(CVE-2022-35998)CHECK
fail in tf.sparse.cross
(CVE-2022-35997)Conv2D
(CVE-2022-35996)CHECK
fail in AudioSummaryV2
(CVE-2022-35995)CHECK
fail in CollectiveGather
(CVE-2022-35994)CHECK
fail in SetSize
(CVE-2022-35993)CHECK
fail in TensorListFromTensor
(CVE-2022-35992)CHECK
fail in TensorListScatter
and TensorListScatterV2
(CVE-2022-35991)CHECK
fail in FakeQuantWithMinMaxVarsPerChannelGradient
(CVE-2022-35990)CHECK
fail in FakeQuantWithMinMaxVarsGradient
(CVE-2022-36005)CHECK
fail in tf.random.gamma
(CVE-2022-36004)CHECK
fail in RandomPoissonV2
(CVE-2022-36003)CHECK
fail in Unbatch
(CVE-2022-36002)CHECK
fail in DrawBoundingBoxes
(CVE-2022-36001)CHECK
fail in Eig
(CVE-2022-36000)mlir::tfg::GraphDefImporter::ConvertNodeDef
(CVE-2022-36013)mlir::tfg::TFOp::nameAttr
(CVE-2022-36014)CHECK
-fail in tensorflow::full_type::SubstituteFromAttrs
(CVE-2022-36016)Gather_nd
op in TF Lite Micro (CVE-2022-35938)This release contains contributions from many people at Google, as well as:
Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang