
Keras 3: Deep Learning for Humans

Keras 3 is a multi-backend deep learning framework, with support for JAX, TensorFlow, and PyTorch. Effortlessly build and train models for computer vision, natural language processing, audio processing, timeseries forecasting, recommender systems, etc.

  • Accelerated model development: Ship deep learning solutions faster thanks to the high-level UX of Keras and the availability of easy-to-debug runtimes like PyTorch or JAX eager execution.
  • State-of-the-art performance: By picking the backend that is the fastest for your model architecture (often JAX!), leverage speedups ranging from 20% to 350% compared to other frameworks. Benchmark here.
  • Datacenter-scale training: Scale confidently from your laptop to large clusters of GPUs or TPUs.

Join nearly three million developers, from burgeoning startups to global enterprises, in harnessing the power of Keras 3.


Install with pip

Keras 3 is available on PyPI as keras. Note that Keras 2 remains available as the tf-keras package.

  1. Install keras:

    pip install keras --upgrade
  2. Install backend package(s).

To use keras, you should also install your backend of choice: tensorflow, jax, or torch. Note that tensorflow is required for certain Keras 3 features: certain preprocessing layers as well as tf.data pipelines.

Local installation

Minimal installation

Keras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras. To install a local development version:

  1. Install dependencies:

    pip install -r requirements.txt
  2. Run the installation command from the root directory.

    python pip_build.py --install
  3. Run the API generation script when creating PRs that update keras_export public APIs:

    ./shell/api_gen.sh

Adding GPU support

The requirements.txt file installs CPU-only versions of TensorFlow, JAX, and PyTorch. For GPU support, we also provide a separate requirements-{backend}-cuda.txt file for each of TensorFlow, JAX, and PyTorch. These install all CUDA dependencies via pip and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each backend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with conda:

conda create -y -n keras-jax python=3.10
conda activate keras-jax
pip install -r requirements-jax-cuda.txt
python pip_build.py --install

Configuring your backend

To configure your backend, export the environment variable KERAS_BACKEND, or edit your local config file at ~/.keras/keras.json. Available backend options are: "tensorflow", "jax", "torch". Example:

export KERAS_BACKEND="jax"

In Colab, you can do:

import os
os.environ["KERAS_BACKEND"] = "jax"

import keras

Note: The backend must be configured before importing keras, and the backend cannot be changed after the package has been imported.
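The precedence described above (environment variable first, then the config file, with TensorFlow as the default) can be sketched with a small stdlib-only helper. Note that resolve_backend is a hypothetical illustration of the documented behavior, not actual Keras internal code:

```python
import json


def resolve_backend(env, config_path):
    """Illustrative sketch: KERAS_BACKEND in the environment wins;
    otherwise fall back to the "backend" key in ~/.keras/keras.json;
    otherwise default to "tensorflow"."""
    if env.get("KERAS_BACKEND"):
        return env["KERAS_BACKEND"]
    try:
        with open(config_path) as f:
            return json.load(f).get("backend", "tensorflow")
    except FileNotFoundError:
        return "tensorflow"
```

For example, with KERAS_BACKEND="jax" set, the config file's value would be ignored, which is why the variable must be exported before keras is imported.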

Backwards compatibility

Keras 3 is intended to work as a drop-in replacement for tf.keras (when using the TensorFlow backend). Just take your existing tf.keras code, make sure that your calls to model.save() are using the up-to-date .keras format, and you're done.

If your tf.keras model does not include custom components, you can start running it on top of JAX or PyTorch immediately.

If it does include custom components (e.g. custom layers or a custom train_step()), it is usually possible to convert it to a backend-agnostic implementation in just a few minutes.

In addition, Keras models can consume datasets in any format, regardless of the backend you're using: you can train your models with your existing tf.data.Dataset pipelines or PyTorch DataLoaders.
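A minimal sketch of this backend-agnostic data flow: any plain Python generator that yields (inputs, targets) batches can be passed to model.fit(), whichever backend is active. Plain lists are used here to stay dependency-free; in practice the batches would be NumPy arrays or framework-native tensors:

```python
def batch_generator(samples, targets, batch_size):
    """Yield (inputs, targets) batches from parallel sequences.

    The generator itself knows nothing about JAX, TensorFlow, or
    PyTorch, which is what makes it reusable across backends.
    """
    for start in range(0, len(samples), batch_size):
        yield (samples[start:start + batch_size],
               targets[start:start + batch_size])
```

Assuming x and y hold your data, something like model.fit(batch_generator(x, y, 32)) would then behave the same under any backend.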

Why use Keras 3?

  • Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework, e.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow.
  • Write custom components (e.g. layers, models, metrics) that you can use in low-level workflows in any framework.
    • You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch.
    • You can take a Keras model and use it as part of a PyTorch-native Module or as part of a JAX-native model function.
  • Make your ML code future-proof by avoiding framework lock-in.
  • As a PyTorch user: get access to the power and usability of Keras, at last!
  • As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library.

Read more in the Keras 3 release announcement.