Release TensorFlow 2.12.0 · tensorflow/tensorflow · GitHub

Breaking Changes

  • Build, Compilation and Packaging

    • Removed the redundant packages tensorflow-gpu and tf-nightly-gpu; they have been replaced with packages that direct users to switch to tensorflow or tf-nightly, respectively. Since TensorFlow 2.1, the only difference between the two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
  • tf.function:

    • tf.function now uses the Python inspect library directly to parse the signature of the Python function it decorates. This change may break code whose function signature is malformed but was previously ignored, such as:
      • Using functools.wraps on a function with a different signature
      • Using functools.partial with an invalid tf.function input
    • tf.function now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.
    • Parameterless tf.functions are now assumed to have an empty input_signature rather than an undefined one when no input_signature is specified.
    • tf.types.experimental.TraceType now requires an additional placeholder_value method to be defined.
    • tf.function now traces with placeholder values generated by TraceType instead of the value itself.
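A minimal pure-Python sketch of the first breaking case above: functools.wraps copies the wrapped function's metadata, so inspect.signature (which tf.function now relies on) reports a signature that no longer matches how the wrapper is actually called. The function names here are illustrative, not from TensorFlow.

```python
import functools
import inspect

def original(x, y):
    return x + y

@functools.wraps(original)
def wrapper(*args, scale=1, **kwargs):
    # The real call signature differs from the one wraps() advertises.
    return original(*args, **kwargs) * scale

# inspect.signature follows __wrapped__ (set by functools.wraps) and
# reports original's parameters; the extra 'scale' parameter is invisible.
print(list(inspect.signature(wrapper).parameters))  # ['x', 'y']
```

Code that depended on such mismatches being silently ignored may now raise errors under tf.function.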
  • Experimental APIs tf.config.experimental.enable_mlir_graph_optimization and tf.config.experimental.disable_mlir_graph_optimization were removed.

Major Features and Improvements

  • Support for Python 3.11 has been added.

  • Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.

  • tf.lite:

    • Added 16-bit float type support for the built-in op fill.
    • Transpose now supports 6D tensors.
    • Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
  • tf.experimental.dtensor:

    • The coordination service now works with dtensor.initialize_accelerator_system and is enabled by default.
    • Add tf.experimental.dtensor.is_dtensor to check if a tensor is a DTensor instance.
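A quick sketch of the new check, assuming a TensorFlow 2.12 install: a plain eager tensor carries no DTensor layout, so the function returns False for it.

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# A regular eager tensor is not a DTensor instance.
t = tf.constant([1.0, 2.0])
print(dtensor.is_dtensor(t))  # False
```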
  • tf.data:

    • Added support for an alternative checkpointing protocol that makes it possible to checkpoint the state of the input pipeline without storing the contents of internal buffers. The new functionality can be enabled through the experimental_symbolic_checkpoint option of tf.data.Options().
    • Added a new rerandomize_each_iteration argument to the tf.data.Dataset.random() operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or repeated across epochs (the default behavior). If seed is set and rerandomize_each_iteration=True, the random() operation will produce a different (deterministic) sequence of numbers in every epoch.
    • Added a new rerandomize_each_iteration argument for the tf.data.Dataset.sample_from_datasets() operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If seed is set and rerandomize_each_iteration=True, the sample_from_datasets() operation will use a different (deterministic) sequence of numbers every epoch.
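A short sketch of the rerandomize_each_iteration behavior described above (TensorFlow 2.12+):

```python
import tensorflow as tf

# With a fixed seed and rerandomize_each_iteration=True, each pass over
# the dataset (each epoch) yields a different, but still deterministic,
# sequence of random numbers.
ds = tf.data.Dataset.random(seed=42, rerandomize_each_iteration=True).take(3)

epoch_1 = [int(x) for x in ds]
epoch_2 = [int(x) for x in ds]
# The two epochs see different numbers; re-running the program reproduces
# both sequences exactly, because the seed is fixed.
```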
  • tf.test:

    • Added tf.test.experimental.sync_devices, which is useful for accurately measuring performance in benchmarks.
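A sketch of how sync_devices fits into a benchmark (the matrix size and loop count are arbitrary): device ops run asynchronously, so timing without a synchronization barrier can under-report the real cost on accelerators.

```python
import time
import tensorflow as tf

x = tf.random.uniform((512, 512))

start = time.time()
for _ in range(10):
    x = tf.matmul(x, x)
tf.test.experimental.sync_devices()  # block until all device work finishes
elapsed = time.time() - start
print(f"10 matmuls: {elapsed:.4f}s")
```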
  • tf.experimental.dtensor:

    • Added experimental support for ReduceScatter fusion on GPU (NCCL).

Bug Fixes and Other Changes

  • tf.SavedModel:
    • Introduced new class tf.saved_model.experimental.Fingerprint that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
    • Introduced API tf.saved_model.experimental.read_fingerprint(export_dir) for reading the fingerprint of a SavedModel.
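A hedged sketch of reading a fingerprint back; the module and paths here are illustrative:

```python
import os
import tempfile
import tensorflow as tf

class Adder(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def add(self, x):
        return x + 1.0

export_dir = os.path.join(tempfile.mkdtemp(), "sm")
tf.saved_model.save(Adder(), export_dir)

# The Fingerprint object holds a set of hashes identifying this exact
# SavedModel, e.g. the overall saved_model_checksum.
fp = tf.saved_model.experimental.read_fingerprint(export_dir)
print(fp.saved_model_checksum)
```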
  • tf.random
    • Added non-experimental aliases for tf.random.split and tf.random.fold_in; the experimental endpoints are still available, so no code changes are necessary.
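A brief sketch of the now non-experimental seed-management helpers for stateless RNGs:

```python
import tensorflow as tf

seed = tf.constant([1, 2], dtype=tf.int32)

# Derive two independent seeds from one.
s1, s2 = tf.unstack(tf.random.split(seed, num=2))

# Fold per-step data into a seed to get a distinct stream per step.
step_seed = tf.random.fold_in(seed, 7)

a = tf.random.stateless_uniform([2], seed=s1)
b = tf.random.stateless_uniform([2], seed=s2)
```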
  • tf.experimental.ExtensionType
    • Added function experimental.extension_type.as_dict(), which converts an instance of tf.experimental.ExtensionType to a dict representation.
  • stream_executor
    • The top-level stream_executor directory has been deleted; users should use the equivalent headers and targets under compiler/xla/stream_executor.
  • tf.nn
    • Added tf.nn.experimental.general_dropout, which is similar to tf.random.experimental.stateless_dropout but accepts a custom sampler function.
  • tf.types.experimental.GenericFunction
    • The experimental_get_compiler_ir method supports tf.TensorSpec compilation arguments.
  • tf.config.experimental.mlir_bridge_rollout
    • Removed the enums MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED and MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED, which are no longer used by the tf2xla bridge.

Keras

Keras is a framework built on top of TensorFlow. See more details on the Keras website.

Breaking Changes

tf.keras:

  • Moved all saving-related utilities to a new namespace, keras.saving, for example: keras.saving.load_model, keras.saving.save_model, keras.saving.custom_object_scope, keras.saving.get_custom_objects, keras.saving.register_keras_serializable, keras.saving.get_registered_name, and keras.saving.get_registered_object. The previous API locations (in keras.utils and keras.models) will remain available indefinitely, but we recommend updating your code to point to the new API locations.
  • Improvements and fixes in Keras loss masking:
    • Whether you represent a ragged tensor as a tf.RaggedTensor or use Keras masking, the returned loss values should be identical. In previous versions, Keras may have silently ignored the mask.
  • If you use masked losses with Keras, the loss values in TensorFlow 2.12 may differ from those in previous versions.
  • In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.

Major Features and Improvements

tf.keras:

  • The new Keras model saving format (.keras) is available. You can start using it via model.save(f"{fname}.keras", save_format="keras_v3"). In the future it will become the default for all files with the .keras extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python lambdas are disallowed at loading time. If you want to use lambdas, you can pass safe_mode=False to the loading method (only do this if you trust the source of the model).
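A minimal sketch of opting in to the new format and round-tripping a model; the model itself is arbitrary:

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

path = os.path.join(tempfile.mkdtemp(), "model.keras")
model.save(path, save_format="keras_v3")
reloaded = tf.keras.models.load_model(path)

# The reloaded model reproduces the original's outputs.
x = np.random.rand(2, 8).astype("float32")
assert np.allclose(model(x).numpy(), reloaded(x).numpy())
```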
  • Added a model.export(filepath) API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
  • Added keras.export.ExportArchive class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on tf.function tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
  • Added utility tf.keras.utils.FeatureSpace, a one-stop shop for structured data preprocessing and encoding.
  • Added tf.SparseTensor input support to tf.keras.layers.Embedding layer. The layer now accepts a new boolean argument sparse. If sparse is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
  • Added jit_compile as a settable property to tf.keras.Model.
  • Added synchronized optional parameter to layers.BatchNormalization.
  • Added a deprecation warning to layers.experimental.SyncBatchNormalization, suggesting the use of layers.BatchNormalization with synchronized=True instead.
  • Updated tf.keras.layers.BatchNormalization to support masking of the inputs (mask argument) when computing the mean and variance.
  • Added tf.keras.layers.Identity, a placeholder pass-through layer.
  • Added show_trainable option to tf.keras.utils.model_to_dot to display layer trainable status in model plots.
  • Added the ability to save a tf.keras.utils.FeatureSpace object via feature_space.save("myfeaturespace.keras") and reload it via feature_space = tf.keras.models.load_model("myfeaturespace.keras").
  • Added utility tf.keras.utils.to_ordinal to convert a class vector to an ordinal regression/classification matrix.
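A small sketch of to_ordinal: class k is encoded as k leading ones in a row of length num_classes - 1, the target format used by ordinal-regression losses.

```python
import numpy as np
import tensorflow as tf

y = np.array([0, 1, 3])
mat = tf.keras.utils.to_ordinal(y, num_classes=4)
# class 0 -> [0, 0, 0], class 1 -> [1, 0, 0], class 3 -> [1, 1, 1]
print(mat)
```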

Bug Fixes and Other Changes

Security

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian’S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, Vinila S, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09


