Experience the CogVideoX-5B model online at 🤗 Huggingface Space or 🤖 ModelScope Space
📚 View the paper and user guide
📍 Visit QingYing and API Platform to experience larger-scale commercial video generation models.
## Project Updates

- 🔥🔥 **News**: ```2024/10/13```: A more cost-effective fine-tuning framework for `CogVideoX-5B` that works with a single 4090 GPU, [cogvideox-factory](https://github.com/a-r-r-o-w/cogvideox-factory), has been released. It supports fine-tuning at multiple resolutions. Feel free to use it!
- 🔥 **News**: ```2024/10/10```: We have updated our technical report. Please click [here](https://arxiv.org/pdf/2408.06072) to view it. More training details and a demo have been added. To see the demo, click [here](https://yzy-thu.github.io/CogVideoX-demo/).
- 🔥 **News**: ```2024/10/09```: We have publicly released the [technical documentation](https://zhipu-ai.feishu.cn/wiki/DHCjw1TrJiTyeukfc9RceoSRnCh) for CogVideoX fine-tuning on Feishu, further increasing distribution flexibility. All examples in the public documentation can be fully reproduced.
- 🔥 **News**: ```2024/9/19```: We have open-sourced the CogVideoX series image-to-video model **CogVideoX-5B-I2V**. This model takes an image as a background input and generates a video from it together with a prompt, offering greater controllability. With this, the CogVideoX series now supports three tasks: text-to-video generation, video continuation, and image-to-video generation. Try it online at [Experience](https://huggingface.co/spaces/THUDM/CogVideoX-5B-Space).
- 🔥 ```2024/9/19```: The caption model [CogVLM2-Caption](https://huggingface.co/THUDM/cogvlm2-llama3-caption), used during CogVideoX training to convert video data into text descriptions, has been open-sourced. Welcome to download and use it.
- 🔥 ```2024/8/27```: We have open-sourced a larger model in the CogVideoX series, **CogVideoX-5B**. We have significantly optimized the models' inference performance, greatly lowering the inference threshold. You can run **CogVideoX-2B** on older GPUs like the `GTX 1080TI`, and **CogVideoX-5B** on desktop GPUs like the `RTX 3060`. Please strictly follow the [requirements](requirements.txt) to update and install dependencies, and refer to [cli_demo](inference/cli_demo.py) for inference code. Additionally, the open-source license for the **CogVideoX-2B** model has been changed to the **Apache 2.0 License**.
- 🔥 ```2024/8/6```: We have open-sourced the **3D Causal VAE** used for **CogVideoX-2B**, which can reconstruct videos with almost no loss.
- 🔥 ```2024/8/6```: We have open-sourced **CogVideoX-2B**, the first model in the CogVideoX series of video generation models.
- 🌱 **Source**: ```2022/5/19```: We have open-sourced the CogVideo video generation model (now available in the `CogVideo` branch). This is the first open-source large Transformer-based text-to-video generation model. See the [ICLR'23 paper](https://arxiv.org/abs/2205.15868) for technical details.

## Table of Contents

Jump to a specific section:

- [Quick Start](#Quick-Start)
  - [SAT](#sat)
  - [Diffusers](#Diffusers)
- [CogVideoX-2B Video Works](#cogvideox-2b-gallery)
- [Introduction to the CogVideoX Model](#Model-Introduction)
- [Full Project Structure](#project-structure)
  - [Inference](#inference)
  - [SAT](#sat)
  - [Tools](#tools)
- [Introduction to the CogVideo (ICLR'23) Model](#cogvideoiclr23)
- [Citations](#Citation)
- [Open Source Project Plan](#Open-Source-Project-Plan)
- [Model License](#Model-License)

## Quick Start

### Prompt Optimization

Before running the model, please refer to [this guide](inference/convert_demo.py) to see how we use large models like GLM-4 (or other comparable products, such as GPT-4) to optimize the prompt. This is crucial because the model is trained with long prompts, and a good prompt directly impacts the quality of the generated video.
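The snippet below is a minimal sketch of that prompt-rewriting step, assuming an OpenAI-compatible client; the model name, system instruction, and environment variable are illustrative placeholders, and the exact instructions used live in [convert_demo](inference/convert_demo.py).

```python
import os
from openai import OpenAI  # GLM-4 can also be reached through an OpenAI-compatible endpoint

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

short_prompt = "a cat playing the piano"
system = (
    "Expand the user's short video idea into one detailed, vivid paragraph that "
    "describes the subject, scene, motion, lighting, and camera, suitable for a "
    "text-to-video diffusion model."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; swap in GLM-4 or another capable model
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": short_prompt},
    ],
)
long_prompt = response.choices[0].message.content
print(long_prompt)  # pass this expanded prompt to the CogVideoX pipeline
```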
### SAT

**Please make sure your Python version is between 3.10 and 3.12, inclusive of both 3.10 and 3.12.**

Follow the instructions in [sat_demo](sat/README.md), which contains the inference and fine-tuning code for the SAT weights. This code base is recommended if you want to build on the CogVideoX model structure; researchers can use it for rapid prototyping and development.

### Diffusers

**Please make sure your Python version is between 3.10 and 3.12, inclusive of both 3.10 and 3.12.**

```
pip install -r requirements.txt
```

Then follow [diffusers_demo](inference/cli_demo.py), a more detailed explanation of the inference code that covers the significance of common parameters.

For more details on quantized inference, please refer to [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao/). With Diffusers and TorchAO, quantized inference is possible, yielding memory-efficient inference as well as a speedup in some cases when compiled. A full list of memory and time benchmarks with various settings on A100 and H100 GPUs has been published at [diffusers-torchao](https://github.com/sayakpaul/diffusers-torchao).
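As a quick sanity check after installing the requirements, the snippet below is a minimal text-to-video sketch with the diffusers CogVideoX pipeline; the model id, prompt, and generation settings are illustrative, and [cli_demo](inference/cli_demo.py) remains the reference script.

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load CogVideoX-5B in BF16 (the recommended precision for the 5B model).
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # trades some speed for lower VRAM usage

prompt = "A panda playing a guitar by a quiet mountain stream, soft sunlight, cinematic."
video = pipe(
    prompt=prompt,
    num_inference_steps=50,   # matches the speed numbers quoted in the table below
    guidance_scale=6.0,
    num_frames=49,            # 6 s at 8 fps (plus the initial frame)
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```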
## Gallery

### CogVideoX-5B

*(Demo videos generated by CogVideoX-5B.)*

## Model Introduction

The following table lists the video generation models we currently provide, along with basic information for each:

| Model Name | CogVideoX-2B | CogVideoX-5B | CogVideoX-5B-I2V |
|---|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. | CogVideoX-5B image-to-video version. |
| Inference Precision | FP16* (recommended), BF16, FP32, FP8*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8*, INT8; INT4 not supported |
| Single GPU Memory Usage | SAT FP16: 18 GB <br> diffusers FP16: from 4 GB* <br> diffusers INT8 (torchao): from 3.6 GB* | SAT BF16: 26 GB <br> diffusers BF16: from 5 GB* <br> diffusers INT8 (torchao): from 4.4 GB* | SAT BF16: 26 GB <br> diffusers BF16: from 5 GB* <br> diffusers INT8 (torchao): from 4.4 GB* |
| Multi-GPU Inference Memory Usage | FP16: 10 GB* using diffusers | BF16: 15 GB* using diffusers | BF16: 15 GB* using diffusers |
| Inference Speed (Step = 50, FP/BF16) | Single A100: ~90 seconds <br> Single H100: ~45 seconds | Single A100: ~180 seconds <br> Single H100: ~90 seconds | Single A100: ~180 seconds <br> Single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 | BF16 |
| Fine-tuning Memory Usage | 47 GB (bs=1, LoRA) <br> 61 GB (bs=2, LoRA) <br> 62 GB (bs=1, SFT) | 63 GB (bs=1, LoRA) <br> 80 GB (bs=2, LoRA) <br> 75 GB (bs=1, SFT) | 78 GB (bs=1, LoRA) <br> 75 GB (bs=1, SFT, 16 GPUs) |
| Prompt Language | English* | English* | English* |
| Maximum Prompt Length | 226 tokens | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds | 6 seconds |
| Frame Rate | 8 frames / second | 8 frames / second | 8 frames / second |
| Video Resolution | 720 x 480; other resolutions are not supported (including for fine-tuning) | 720 x 480; other resolutions are not supported (including for fine-tuning) | 720 x 480; other resolutions are not supported (including for fine-tuning) |
| Position Encoding | 3d_sincos_pos_embed | 3d_sincos_pos_embed | 3d_rope_pos_embed + learnable_pos_embed |
| Download Link (Diffusers) | 🤗 HuggingFace <br> 🤖 ModelScope <br> 🟣 WiseModel | 🤗 HuggingFace <br> 🤖 ModelScope <br> 🟣 WiseModel | 🤗 HuggingFace <br> 🤖 ModelScope <br> 🟣 WiseModel |
| Download Link (SAT) | SAT | SAT | SAT |
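The asterisked diffusers figures above (for example, "from 4 GB*" and "from 5 GB*") are reachable only when diffusers' memory-saving features are turned on. The sketch below shows a typical combination; the model id and prompt are placeholders, and exact savings depend on hardware and diffusers version.

```python
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Stream sub-modules to the GPU one at a time instead of keeping the whole
# pipeline resident in VRAM (slower, but the most memory-frugal offload mode).
pipe.enable_sequential_cpu_offload()

# Let the VAE decode the latent video in slices and tiles so it never has to
# materialize the full-resolution tensor in one shot.
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

video = pipe(prompt="A garden of blooming flowers in time-lapse").frames[0]
```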
## Friendly Links

The following community projects are built on or adapted for CogVideoX:

+ A fine-tuned model based on CogVideoX, specifically designed for interior design.
+ [xDiT](https://github.com/xdit-project/xDiT): xDiT is a scalable inference engine for Diffusion Transformers (DiTs)
  on multiple GPU clusters. xDiT supports real-time image and video generation services.
+ [cogvideox-factory](https://github.com/a-r-r-o-w/cogvideox-factory): A cost-effective
  fine-tuning framework for CogVideoX, compatible with the `diffusers`-format models. It supports multiple resolutions, and CogVideoX-5B can be fine-tuned on a single 4090 GPU.
## Project Structure
This open-source repository will guide developers to quickly get started with the basic usage and fine-tuning examples
of the **CogVideoX** open-source model.
### Quick Start with Colab
The following projects can be run directly on free Colab T4 instances:
+ [CogVideoX-5B-T2V-Colab.ipynb](https://colab.research.google.com/drive/1pCe5s0bC_xuXbBlpvIH1z0kfdTLQPzCS?usp=sharing):
CogVideoX-5B Text-to-Video Colab code.
+ [CogVideoX-5B-T2V-Int8-Colab.ipynb](https://colab.research.google.com/drive/1DUffhcjrU-uz7_cpuJO3E_D4BaJT7OPa?usp=sharing):
CogVideoX-5B Quantized Text-to-Video Inference Colab code, which takes about 30 minutes per run.
+ [CogVideoX-5B-I2V-Colab.ipynb](https://colab.research.google.com/drive/17CqYCqSwz39nZAX2YyonDxosVKUZGzcX?usp=sharing):
  CogVideoX-5B Image-to-Video Colab code (a local `diffusers` sketch of the same task follows this list).
+ [CogVideoX-5B-V2V-Colab.ipynb](https://colab.research.google.com/drive/1comfGAUJnChl5NwPuO8Ox5_6WCy4kbNN?usp=sharing):
CogVideoX-5B Video-to-Video Colab code.
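To complement the image-to-video notebook above, here is a hedged local sketch using diffusers' CogVideoXImageToVideoPipeline; the model id, input image path, and generation settings are illustrative assumptions, not the notebook's exact code.

```python
import torch
from diffusers import CogVideoXImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = CogVideoXImageToVideoPipeline.from_pretrained(
    "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Any user-supplied still image works as the conditioning frame.
image = load_image("input.jpg")
prompt = "The waves gently roll onto the beach as the sun sets behind the clouds."

video = pipe(image=image, prompt=prompt, num_inference_steps=50, guidance_scale=6.0).frames[0]
export_to_video(video, "output_i2v.mp4", fps=8)
```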
### Inference
+ [cli_demo](inference/cli_demo.py): A more detailed explanation of the inference code, covering the significance of
  common parameters.
+ [cli_demo_quantization](inference/cli_demo_quantization.py):
  Quantized model inference code that can run on devices with lower memory. You can also modify this code to support
  running CogVideoX models in FP8 precision (a quantization sketch follows at the end of this list).
+ [diffusers_vae_demo](inference/cli_vae_demo.py): Code for running VAE inference separately.
+ [space demo](inference/gradio_composite_demo): The same GUI code as used in the Huggingface Space, with frame
interpolation and super-resolution tools integrated.
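For the quantized path above (see cli_demo_quantization), the outline below is a hedged sketch of INT8 weight-only quantization with TorchAO applied to the transformer; the exact options used by the script may differ, and the model id and prompt are placeholders.

```python
import torch
from diffusers import CogVideoXPipeline
from torchao.quantization import quantize_, int8_weight_only

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Quantize only the transformer's weights to INT8; it dominates the memory
# footprint, while the VAE and text encoder stay in BF16.
quantize_(pipe.transformer, int8_weight_only())

pipe.enable_model_cpu_offload()
video = pipe(prompt="A lighthouse on a cliff at sunset", num_inference_steps=50).frames[0]
```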
### Tools

This folder contains tools for model conversion, caption generation, etc.
## CogVideo (ICLR'23)

The official repo for the paper [CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers](https://arxiv.org/abs/2205.15868) is on the `CogVideo` branch.

CogVideo can generate relatively high-frame-rate videos; a 4-second clip of 32 frames is shown below.

The demo for CogVideo is at [https://models.aminer.cn/cogvideo](https://models.aminer.cn/cogvideo), where you can try text-to-video generation hands-on. The original input is in Chinese.
## Citation

🌟 If you find our work helpful, please leave us a star and cite our paper.
@article{yang2024cogvideox,
title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
journal={arXiv preprint arXiv:2408.06072},
year={2024}
}
@article{hong2022cogvideo,
title={CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers},
author={Hong, Wenyi and Ding, Ming and Zheng, Wendi and Liu, Xinghan and Tang, Jie},
journal={arXiv preprint arXiv:2205.15868},
year={2022}
}
We welcome your contributions! You can click here for more information.
## Model License

The code in this repository is released under the Apache 2.0 License.

The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the Apache 2.0 License.

The CogVideoX-5B models (Transformers module, including both I2V and T2V) are released under the CogVideoX LICENSE.