Tags: Text-to-Video · Diffusers · TuneAVideoPipeline · tune-a-video

Tune-A-Video - Modern Disney

Model Description

This is a diffusers-compatible checkpoint. When loaded with DiffusionPipeline, it returns an instance of TuneAVideoPipeline.

The df-cpt prefix indicates that this is the diffusers-compatible equivalent of Tune-A-Video-library/mo-di-bear-guitar.
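As with any diffusers checkpoint, the pipeline class that DiffusionPipeline resolves to is recorded in the checkpoint's model_index.json under the _class_name key. A minimal sketch of that lookup; the JSON snippet here is illustrative, not copied from the repository:

```python
import json

# Illustrative model_index.json contents; the real file lives at the root
# of the checkpoint repository and lists every pipeline component.
model_index = json.loads("""
{
  "_class_name": "TuneAVideoPipeline",
  "unet": ["diffusers", "UNet3DConditionModel"]
}
""")

# DiffusionPipeline.from_pretrained reads _class_name to decide which
# pipeline class to instantiate for this checkpoint.
print(model_index["_class_name"])  # -> TuneAVideoPipeline
```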

Samples

Sample at 500 training steps. Test prompt: "A princess playing a guitar, modern disney style"

Usage

Loading with a pre-existing Text2Image checkpoint

import torch
from diffusers import TuneAVideoPipeline, UNet3DConditionModel
from diffusers.utils import export_to_video
from PIL import Image

# Use any pretrained Text2Image checkpoint based on stable diffusion
pretrained_model_path = "nitrosocke/mo-di-diffusion"
unet = UNet3DConditionModel.from_pretrained(
    "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", subfolder="unet", torch_dtype=torch.float16
).to("cuda")

pipe = TuneAVideoPipeline.from_pretrained(pretrained_model_path, unet=unet, torch_dtype=torch.float16).to("cuda")

prompt = "A princess playing a guitar, modern disney style"
generator = torch.Generator(device="cuda").manual_seed(42)

video_frames = pipe(prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np").frames

# Saving to gif (duration is the per-frame display time in milliseconds).
pil_frames = [Image.fromarray(frame) for frame in video_frames]
duration = 1000 // 8  # 8 frames per second -> 125 ms per frame
pil_frames[0].save(
    "animation.gif",
    save_all=True,
    append_images=pil_frames[1:],  # append the remaining frames
    duration=duration,
    loop=0,  # loop forever
)

# Saving to video
video_path = export_to_video(video_frames)
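The GIF timing above assumes a fixed 8 fps playback; the same save call works on any list of frames. A self-contained sketch with synthetic frames standing in for pipeline output (numpy and Pillow only; the frame size and colors are arbitrary):

```python
import numpy as np
from PIL import Image

# Three synthetic 64x64 RGB frames in place of real pipeline output.
frames = [np.full((64, 64, 3), c, dtype=np.uint8) for c in (0, 128, 255)]
pil_frames = [Image.fromarray(f) for f in frames]

fps = 8
pil_frames[0].save(
    "demo.gif",
    save_all=True,
    append_images=pil_frames[1:],
    duration=1000 // fps,  # per-frame display time in milliseconds
    loop=0,
)

with Image.open("demo.gif") as gif:
    print(gif.n_frames)  # -> 3
```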

Loading a saved Tune-A-Video checkpoint

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video
from PIL import Image

pipe = DiffusionPipeline.from_pretrained(
    "Tune-A-Video-library/df-cpt-mo-di-bear-guitar", torch_dtype=torch.float16
).to("cuda")

prompt = "A princess playing a guitar, modern disney style"
generator = torch.Generator(device="cuda").manual_seed(42)

video_frames = pipe(prompt, video_length=3, generator=generator, num_inference_steps=50, output_type="np").frames

# Saving to gif (duration is the per-frame display time in milliseconds).
pil_frames = [Image.fromarray(frame) for frame in video_frames]
duration = 1000 // 8  # 8 frames per second -> 125 ms per frame
pil_frames[0].save(
    "animation.gif",
    save_all=True,
    append_images=pil_frames[1:],  # append the remaining frames
    duration=duration,
    loop=0,  # loop forever
)

# Saving to video
video_path = export_to_video(video_frames)

Related Papers:

  • Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
  • Stable Diffusion: High-Resolution Image Synthesis with Latent Diffusion Models