Stable Diffusion that can generate high-definition videos.

# stable-diffusion-videos

Try it yourself in Colab: Open In Colab

Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"

https://user-images.githubusercontent.com/32437151/188721341-6f28abf9-699b-46b0-a72e-fa2a624ba0bb.mp4

## Installation

```bash
pip install stable_diffusion_videos
```

## Usage

Check out the examples folder for example scripts 👀

### Making Videos

**Note:** For the Apple M1 architecture, use `torch.float32` instead, as `torch.float16` is not available on MPS.
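Before creating the pipeline, you can pick the device and dtype in one place. A minimal sketch, assuming PyTorch is installed (the CUDA → MPS → CPU fallback order here is our own convention, not part of the library):

```python
import torch

# Prefer CUDA with half precision; MPS (Apple Silicon) requires float32.
if torch.cuda.is_available():
    device, dtype = "cuda", torch.float16
elif torch.backends.mps.is_available():
    device, dtype = "mps", torch.float32  # torch.float16 is not available on MPS
else:
    device, dtype = "cpu", torch.float32
```

You can then pass `torch_dtype=dtype` to `from_pretrained` and move the pipeline with `.to(device)`.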

```python
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    name='animals_test',        # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated. 50 is good default
)
```

### Making Music Videos

*New!* Music can be added to the video by providing a path to an audio file. The audio informs the rate of interpolation, so the video moves to the beat 🎶

```python
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# Seconds in the song.
audio_offsets = [146, 148]  # [Start, end]
fps = 30  # Use lower values for testing (5 or 10), higher values for better quality (30 or 60)

# Convert seconds to frames
num_interpolation_steps = [(b - a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=num_interpolation_steps,
    audio_filepath='audio.mp3',
    audio_start_sec=audio_offsets[0],
    fps=fps,
    height=512,  # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,   # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',        # Where images/videos will be saved
    guidance_scale=7.5,         # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,     # Number of diffusion steps per image generated. 50 is good default
)
```
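The seconds-to-frames conversion above generalizes to any number of prompts: each adjacent pair of audio offsets yields one interpolation-step count, so `len(prompts)` offsets produce `len(prompts) - 1` transitions. A quick sketch of the arithmetic with hypothetical offsets:

```python
fps = 30
# Three prompts -> three offsets -> two transitions (hypothetical timestamps)
audio_offsets = [146, 148, 153]

# Each transition spans (end - start) seconds, rendered at `fps` frames per second.
num_interpolation_steps = [(b - a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]
print(num_interpolation_steps)  # [60, 150]
```

Each entry in the resulting list is the number of frames generated between the corresponding pair of prompts.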

### Using the UI

```python
from stable_diffusion_videos import StableDiffusionWalkPipeline, Interface
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

interface = Interface(pipeline)
interface.launch()
```

## Credits

This work was built on a script shared by @karpathy. The script was modified into this gist, which was then updated and extended into this repo.

## Contributing

You can file any issues or feature requests here.

Enjoy 🤗 <!--

Extras

Upsample with Real-ESRGAN

You can also 4x upsample your images with Real-ESRGAN!

It's included when you pip install the latest version of stable-diffusion-videos!

You'll be able to use `upsample=True` in the `walk` function, like this:

```python
pipeline.walk(['a cat', 'a dog'], [234, 345], upsample=True)
```

The above may cause you to run out of VRAM. No problem, you can do upsampling separately.

To upsample an individual image:

```python
from stable_diffusion_videos import RealESRGANModel

model = RealESRGANModel.from_pretrained('nateraw/real-esrgan')
enhanced_image = model('your_file.jpg')
```

Or, to do a whole folder:

```python
from stable_diffusion_videos import RealESRGANModel

model = RealESRGANModel.from_pretrained('nateraw/real-esrgan')
model.upsample_imagefolder('path/to/images/', 'path/to/output_dir')
```

-->