An Open Source Machine Learning Framework for Everyone
tf.Tensor
The semantics of tf.Tensor have changed, and there are now explicit EagerTensor and SymbolicTensor classes for eager execution and tf.function respectively. Users who relied on the exact type of a tensor (e.g. type(t) == tf.Tensor) will need to update their code to use isinstance(t, tf.Tensor). The tf.is_symbolic_tensor helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
tf.compat.v1.Session
tf.compat.v1.Session.partial_run and tf.compat.v1.Session.partial_run_setup will be deprecated in the next release.

Enable JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
tf.py_function and tf.numpy_function can now be used as function decorators for clearer code:

@tf.py_function(Tout=tf.float32)
def my_fun(x):
    print("This always executes eagerly.")
    return x + 1
tf.lite
Added support for UINT32.

tf.config.experimental.enable_tensor_float_32_execution
Calling tf.config.experimental.enable_tensor_float_32_execution(False) will now cause TPUs to use float32 precision for affected ops instead of bfloat16.
tf.experimental.dtensor
- Added dtensor.relayout_like, for relayouting a tensor according to the layout of another tensor.
- Added dtensor.get_default_mesh, for retrieving the current default mesh under the dtensor context.

tf.experimental.strict_mode
Added strict_mode, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.

TensorFlow Debugger (tfdbg) CLI: the ncurses-based CLI for tfdbg v1 was removed.
TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag --define=tf_force_rtti=true
to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs since mixing RTTI and non-RTTI code can cause ABI issues.
tf.ones, tf.zeros, tf.fill, tf.ones_like, and tf.zeros_like now take an additional Layout argument that controls the output layout of their results.
tf.nest and tf.data now support user-defined classes implementing __tf_flatten__ and __tf_unflatten__ methods. See the nest_util code examples for an example.
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
tf.keras
Model.compile now supports steps_per_execution='auto' as a parameter, allowing automatic tuning of steps per execution during Model.fit, Model.predict, and Model.evaluate for a significant performance boost.

This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, georgiie, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, Learning-To-Play, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Tensorflow Jenkins, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang
tf.lite
- Extended type support for the built-in op cast.
- Added an experimental_disable_delegate_clustering option to turn off delegate clustering.
- Extended type support for the built-in ops exp, mirror_pad, space_to_batch_nd and batch_to_space_nd, less, greater_than, equal, floor_div and floor_mod, bitcast, bitwise_xor, gather and gather_nd, right_shift, add, mul, and top_k.
- add_op supports broadcasting up to 6 dimensions.

tf.function
Concrete functions (tf.types.experimental.ConcreteFunction), as generated through get_concrete_function, now perform holistic input validation similar to calling tf.function directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
tf.nn
- tf.nn.embedding_lookup_sparse and tf.nn.safe_embedding_lookup_sparse now support ids and weights described by tf.RaggedTensors.
- Added an allow_fast_lookup argument to tf.nn.embedding_lookup_sparse and tf.nn.safe_embedding_lookup_sparse, which enables a simplified and typically faster lookup procedure.
tf.data
- tf.data.Dataset.zip now supports Python-style zipping, i.e. Dataset.zip(a, b, c).
- tf.data.Dataset.shuffle now supports tf.data.UNKNOWN_CARDINALITY when doing a "full shuffle" via dataset = dataset.shuffle(dataset.cardinality()). Keep in mind that a "full shuffle" loads the entire dataset into memory so that it can be shuffled, so only use this with small datasets or datasets of small objects (like filenames).
tf.math
tf.nn.top_k now supports specifying the output index type via the index_type parameter. Supported types are tf.int16, tf.int32 (default), and tf.int64.
tf.SavedModel
- Added tf.saved_model.experimental.Fingerprint.from_proto(proto), which can be used to construct a Fingerprint object directly from a protobuf.
- Added tf.saved_model.experimental.Fingerprint.singleprint(), which provides a convenient way to uniquely identify a SavedModel.
tf.Variable
Variables are now instances of tf.compat.v2.Variable instead of tf.compat.v1.Variable. Some checks for isinstance(v, tf.compat.v1.Variable) that previously returned True may now return False.
tf.distribute
Added tf.distribute.experimental.coordinator.get_current_worker_index, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
tf.experimental.dtensor
- Deprecated dtensor.run_on in favor of dtensor.default_mesh, to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
- The implementations of dtensor.Layout and dtensor.Mesh have slightly changed as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, dtensor.Layout.serialized_string is removed.
tf.experimental.ExtensionType
tf.experimental.ExtensionType now supports Python tuple as the type annotation of its fields.
tf.nest
tf.nest.is_sequence has been deleted. Please use tf.nest.is_nested instead.

Keras is a framework built on top of TensorFlow. See more details on the Keras website.
- Removed the Keras scikit-learn API wrappers (KerasClassifier and KerasRegressor), which had been deprecated in August 2021. We recommend using SciKeras instead.
- model.save("xyz.keras") will no longer create an H5 file; it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a .keras extension. If this breaks you, simply add save_format="h5" to your .save() call to revert to the prior behavior.
- Added the keras.utils.TimedThread utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the keras PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using import keras and you used keras functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from keras.src, but keep in mind the src namespace is not stable and those APIs may change or be removed in the future.
- Added new metrics tf.keras.metrics.FBetaScore, tf.keras.metrics.F1Score, and tf.keras.metrics.R2Score.
- Added the tf.keras.activations.mish activation function.
- Added the keras.metrics.experimental.PyMetric API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the tf.keras.optimizers.Lion optimizer.
- Added the tf.keras.layers.SpectralNormalization layer wrapper to perform spectral normalization on the weights of a target layer.
- The SidecarEvaluatorModelExport callback has been added to Keras as keras.callbacks.SidecarEvaluatorModelExport. This callback allows exporting the best-scoring model as evaluated by a SidecarEvaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the tf.keras.optimizers.schedules.CosineDecay learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
- Added support for exact evaluation with tf.distribute's ParameterServerStrategy, via the exact_evaluation_shards argument in Model.fit and Model.evaluate.
- Exposed the tf.keras.__internal__.KerasTensor, tf.keras.__internal__.SparseKerasTensor, and tf.keras.__internal__.RaggedKerasTensor classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- The tf.keras.dtensor.experimental.optimizers classes have been merged with tf.keras.optimizers. You can migrate your code to use tf.keras.optimizers directly. The API namespace for tf.keras.dtensor.experimental.optimizers will be removed in future releases.
- Added support for class_weight for 3+ dimensional targets (e.g. image segmentation masks) in Model.fit.
- Added the keras.losses.CategoricalFocalCrossentropy loss.
- Deprecated tf.keras.dtensor.experimental.layout_map_scope(). You can use tf.keras.dtensor.experimental.LayoutMap.scope() instead.

This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, venkat2469, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09
tf.lite
cast
.experimental_disable_delegate_clustering
to turn-off delegate clustering.exp
mirror_pad
space_to_batch_nd
and batch_to_space_nd
less
, greater_than
, equal
floor_div
and floor_mod
.bitcast
.bitwise_xor
gather
and gather_nd
.right_shift
add
.mul
.add_op
supports broadcasting up to 6 dimensions.top_k
.tf.function
tf.types.experimental.ConcreteFunction
) as generated through get_concrete_function
now performs holistic input validation similar to calling tf.function
directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).tf.nn
tf.nn.embedding_lookup_sparse
and tf.nn.safe_embedding_lookup_sparse
now support ids and weights described by tf.RaggedTensor
s.allow_fast_lookup
to tf.nn.embedding_lookup_sparse
and tf.nn.safe_embedding_lookup_sparse
, which enables a simplified and typically faster lookup procedure.tf.data
tf.data.Dataset.zip
now supports Python-style zipping, i.e. Dataset.zip(a, b, c)
.tf.data.Dataset.shuffle
now supports full shuffling. To specify that data should be fully shuffled, use dataset = dataset.shuffle(dataset.cardinality())
. This will load the full dataset into memory so that it can be shuffled, so make sure to only use this with datasets of filenames or other small datasets.tf.math
tf.nn.top_k
now supports specifying the output index type via parameter index_type
. Supported types are tf.int16
, tf.int32
(default), and tf.int64
.tf.SavedModel
tf.saved_model.experimental.Fingerprint.from_proto(proto)
, which can be used to construct a Fingerprint
object directly from a protobuf.tf.saved_model.experimental.Fingerprint.singleprint()
, which provides a convenient way to uniquely identify a SavedModel.tf.Variable
tf.compat.v2.Variable
instead of tf.compat.v1.Variable
. Some checks for isinstance(v, tf compat.v1.Variable)
that previously returned True may now return False.tf.distribute
tf.distribute.experimental.coordinator.get_current_worker_index
, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.tf.experimental.dtensor
dtensor.run_on
in favor of dtensor.default_mesh
to correctly indicate that the context does not override the mesh that the ops and functions will run on, it only sets a fallback default mesh.tf.experimental.ExtensionType
tf.experimental.ExtensionType
now supports Python tuple
as the type annotation of its fields.tf.nest
tf.nest.is_sequence
has now been deleted. Please use tf.nest.is_nested
instead.Keras is a framework built on top of the TensorFlow. See more details on the Keras website.
tf.keras
KerasClassifier
and KerasRegressor
), which had been deprecated in August 2021. We recommend using SciKeras instead.model.save("xyz.keras")
will no longer create a H5 file, it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a .keras
extension. If this breaks you, simply add save_format="h5"
to your .save()
call to revert back to the prior behavior.keras.utils.TimedThread
utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.keras
PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using import keras
and you used keras
functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guideline:
keras.src
, but keep in mind the src
namespace is not stable and those APIs may change or be removed in the future.tf.keras
tf.keras.metrics.FBetaScore
, tf.keras.metrics.F1Score
, and tf.keras.metrics.R2Score
.tf.keras.activations.mish
.keras.metrics.experimental.PyMetric
API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.tf.keras.optimizers.Lion
optimizer.tf.keras.layers.SpectralNormalization
layer wrapper to perform spectral normalization on the weights of a target layer.SidecarEvaluatorModelExport
callback has been added to Keras as keras.callbacks.SidecarEvaluatorModelExport
. This callback allows exporting the best-scoring model as evaluated by a SidecarEvaluator
evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.tf.keras.optimizers.schedules.CosineDecay
learning rate scheduler. You can now specify an initial and target learning rate, and our scheduler will perform a linear interpolation between the two after which it will begin a decay phase.tf.distribute ParameterServerStrategy
, via the exact_evaluation_shards
argument in Model.fit
and Model.evaluate
.tf.keras.__internal__.KerasTensor
,tf.keras.__internal__.SparseKerasTensor
, and tf.keras.__internal__.RaggedKerasTensor
classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.tf.keras.dtensor.experimental.optimizers
classes have been merged with tf.keras.optimizers
. You can migrate your code to use tf.keras.optimizers
directly. The API namespace for tf.keras.dtensor.experimental.optimizers
will be removed in future releases.class_weight
for 3+ dimensional targets (e.g. image segmentation masks) in Model.fit
.keras.losses.CategoricalFocalCrossentropy
.tf.keras.dtensor.experimental.layout_map_scope()
. You can use tf.keras.dtensor.experimental.LayoutMap.scope()
instead.
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, venkat2469, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09
Build, Compilation and Packaging
tensorflow-gpu
and tf-nightly-gpu
. These packages were removed and replaced with packages that direct users to switch to tensorflow
or tf-nightly
respectively. Since TensorFlow 2.1, the only difference between these two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.tf.function
:
tf.function
now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on. This change may break code where the function signature is malformed but was previously ignored, such as using functools.wraps
on a function with a different signature, or functools.partial
with an invalid tf.function
input.tf.function
now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.tf.function
s are assumed to have an empty input_signature
instead of an undefined one even if the input_signature
is unspecified.tf.types.experimental.TraceType
now requires an additional placeholder_value
method to be defined.tf.function
now traces with placeholder values generated by TraceType instead of the value itself.Experimental APIs tf.config.experimental.enable_mlir_graph_optimization
and tf.config.experimental.disable_mlir_graph_optimization
were removed.
Support for Python 3.11 has been added.
Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.
tf.lite
:
fill
.tf.experimental.dtensor
:
dtensor.initialize_accelerator_system
, and enabled by default.tf.experimental.dtensor.is_dtensor
to check if a tensor is a DTensor instance.tf.data
:
experimental_symbolic_checkpoint
option of tf.data.Options()
.rerandomize_each_iteration
argument for the tf.data.Dataset.random()
operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default is not to re-randomize). If seed
is set and rerandomize_each_iteration=True
, the random()
operation will produce a different (deterministic) sequence of numbers every epoch.rerandomize_each_iteration
argument for the tf.data.Dataset.sample_from_datasets()
operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If seed
is set and rerandomize_each_iteration=True
, the sample_from_datasets()
operation will use a different (deterministic) sequence of numbers every epoch.tf.test
:
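The rerandomize_each_iteration behavior above can be sketched as follows (a minimal example, where an epoch is simply one pass over the iterator):

```python
import tensorflow as tf

# With a fixed seed, opt in to a fresh (still deterministic) sequence per epoch.
ds = tf.data.Dataset.random(seed=42, rerandomize_each_iteration=True).take(3)
epoch_1 = list(ds.as_numpy_iterator())
epoch_2 = list(ds.as_numpy_iterator())
print(epoch_1, epoch_2)  # the two passes draw different numbers
```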
tf.test.experimental.sync_devices
, which is useful for accurately measuring performance in benchmarks.tf.experimental.dtensor
:
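A sketch of how sync_devices fits into a benchmark loop; the matmul workload here is illustrative:

```python
import time

import tensorflow as tf

x = tf.random.uniform((512, 512))
start = time.perf_counter()
y = tf.linalg.matmul(x, x)           # may be dispatched asynchronously on GPU/TPU
tf.test.experimental.sync_devices()  # block until all pending device work finishes
elapsed = time.perf_counter() - start
print(f"matmul took {elapsed:.6f}s")
```

Without the sync call, the timer can stop before the device has actually finished the work.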
tf.SavedModel
:
tf.saved_model.experimental.Fingerprint
that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.tf.saved_model.experimental.read_fingerprint(export_dir)
for reading the fingerprint of a SavedModel.tf.random
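Reading a fingerprint back, as a minimal sketch (the empty tf.Module is just a placeholder model):

```python
import tempfile

import tensorflow as tf

export_dir = tempfile.mkdtemp()
tf.saved_model.save(tf.Module(), export_dir)   # any SavedModel works
fp = tf.saved_model.experimental.read_fingerprint(export_dir)
print(fp.saved_model_checksum)                 # one of the fingerprint fields
```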
tf.random.split
and tf.random.fold_in
, the experimental endpoints are still available so no code changes are necessary.tf.experimental.ExtensionType
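The now-stable stateless RNG helpers can be combined like this (the step id 7 is arbitrary):

```python
import tensorflow as tf

seed = tf.constant([1, 2], dtype=tf.int32)
s1, s2 = tf.random.split(seed, num=2)     # derive two independent child seeds
step_seed = tf.random.fold_in(seed, 7)    # deterministically mix in data, e.g. a step id
sample = tf.random.stateless_uniform((3,), seed=step_seed)
print(sample.shape)
```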
experimental.extension_type.as_dict()
, which converts an instance of tf.experimental.ExtensionType
to a dict
representation.stream_executor
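Assuming the helper is exposed as tf.experimental.extension_type.as_dict, a minimal sketch:

```python
import tensorflow as tf

class Point(tf.experimental.ExtensionType):
    x: tf.Tensor
    y: tf.Tensor

p = Point(x=tf.constant(1.0), y=tf.constant(2.0))
d = tf.experimental.extension_type.as_dict(p)  # field name -> field value
print(sorted(d))
```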
stream_executor
directory has been deleted; users should use equivalent headers and targets under compiler/xla/stream_executor
.tf.nn
tf.nn.experimental.general_dropout
, which is similar to tf.random.experimental.stateless_dropout
but accepts a custom sampler function.tf.types.experimental.GenericFunction
experimental_get_compiler_ir
method supports tf.TensorSpec compilation arguments.tf.config.experimental.mlir_bridge_rollout
MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED
and MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED
which are no longer used by the tf2xla bridge.
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
tf.keras
:
keras.saving
, for example: keras.saving.load_model
, keras.saving.save_model
, keras.saving.custom_object_scope
, keras.saving.get_custom_objects
, keras.saving.register_keras_serializable
,keras.saving.get_registered_name
and keras.saving.get_registered_object
. The previous API locations (in keras.utils
and keras.models
) will be available indefinitely, but we recommend you update your code to point to the new API locations.tf.RaggedTensor
or using Keras masking, the returned loss values should be identical to each other. In previous versions Keras may have silently ignored the mask.2.12
compared to previous versions.tf.keras
:
.keras
) is available. You can start using it via model.save(f"{fname}.keras", save_format="keras_v3")
. In the future it will become the default for all files with the .keras
extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python lambdas
are disallowed at loading time. If you want to use lambdas
, you can pass safe_mode=False
to the loading method (only do this if you trust the source of the model).model.export(filepath)
API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).keras.export.ExportArchive
class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on tf.function
tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.tf.keras.utils.FeatureSpace
, a one-stop shop for structured data preprocessing and encoding.tf.SparseTensor
input support to tf.keras.layers.Embedding
layer. The layer now accepts a new boolean argument sparse
. If sparse
is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.jit_compile
as a settable property to tf.keras.Model
.synchronized
optional parameter to layers.BatchNormalization
.layers.experimental.SyncBatchNormalization
and suggested to use layers.BatchNormalization
with synchronized=True
instead.tf.keras.layers.BatchNormalization
to support masking of the inputs (mask
argument) when computing the mean and variance.tf.keras.layers.Identity
, a placeholder pass-through layer.show_trainable
option to tf.keras.utils.model_to_dot
to display layer trainable status in model plots.tf.keras.utils.FeatureSpace
object, via feature_space.save("myfeaturespace.keras")
, and reload it via feature_space = tf.keras.models.load_model("myfeaturespace.keras")
.tf.keras.utils.to_ordinal
to convert a class vector to an ordinal regression / classification matrix.tf.raw_ops.Print
CVE-2023-25660
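The to_ordinal helper mentioned above, sketched on a toy label vector:

```python
import numpy as np
import tensorflow as tf

labels = np.array([0, 1, 2, 3])
ordinal = tf.keras.utils.to_ordinal(labels, num_classes=4)
# shape (4, 3): the row for class k has k leading ones
print(ordinal)
```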
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, Vinila S, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09
Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native-Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.