Commit 5e1427a

a-r-r-o-w and stevhliu authored
[docs] AnimateDiff FreeNoise (#9414)
* update docs
* apply suggestions from review
* Update docs/source/en/api/pipelines/animatediff.md
  Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/api/pipelines/animatediff.md
  Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/api/pipelines/animatediff.md
  Co-authored-by: Steven Liu <[email protected]>
* apply suggestions from review

---------

Co-authored-by: Steven Liu <[email protected]>
1 parent b9e2f88 commit 5e1427a

File tree

1 file changed (+83 lines, -0 lines)

docs/source/en/api/pipelines/animatediff.md

@@ -914,6 +914,89 @@ export_to_gif(frames, "animatelcm-motion-lora.gif")
</tr>
</table>

## Using FreeNoise

[FreeNoise: Tuning-Free Longer Video Diffusion via Noise Rescheduling](https://arxiv.org/abs/2310.15169) by Haonan Qiu, Menghan Xia, Yong Zhang, Yingqing He, Xintao Wang, Ying Shan, Ziwei Liu.

FreeNoise is a sampling mechanism that can generate longer videos with short-video generation models by employing noise rescheduling, temporal attention over sliding windows, and weighted averaging of latent frames. It can also be used with multiple prompts to allow for interpolated video generations. More details are available in the paper.
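
To build some intuition for how a short-video model can produce longer clips, the sketch below illustrates only the sliding-window idea with uniform weighted averaging of overlapping frames. It is not the FreeNoise implementation in Diffusers (which additionally reschedules the initial noise and uses its own weighting); the function and argument names are invented for the example.

```python
import torch

def sliding_window_average(latents, process_window, context_length=16, context_stride=4):
    # latents: [batch, channels, frames, height, width]
    num_frames = latents.shape[2]
    output = torch.zeros_like(latents)
    counts = torch.zeros(num_frames)

    # Overlapping windows of `context_length` frames, shifted by `context_stride`
    starts = list(range(0, max(num_frames - context_length, 0) + 1, context_stride))
    if starts[-1] + context_length < num_frames:
        starts.append(num_frames - context_length)  # cover the final frames too

    for start in starts:
        end = start + context_length
        output[:, :, start:end] += process_window(latents[:, :, start:end])
        counts[start:end] += 1

    # Frames covered by several windows are averaged (uniform weights in this sketch)
    return output / counts.view(1, 1, -1, 1, 1)
```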

The currently supported AnimateDiff pipelines that can be used with FreeNoise are:
- [`AnimateDiffPipeline`]
- [`AnimateDiffControlNetPipeline`]
- [`AnimateDiffVideoToVideoPipeline`]
- [`AnimateDiffVideoToVideoControlNetPipeline`]

To use FreeNoise, add a single line to your inference code after loading the pipeline.

```diff
+ pipe.enable_free_noise()
```

After this, either a single prompt or multiple prompts can be passed as a dictionary of integer-string pairs. The integer keys correspond to the frame index at which the influence of that prompt is maximal, and each frame index should map to a single string prompt. Prompts for intermediate frame indices that are not in the dictionary are created by interpolating between the provided frame prompts; by default, simple linear interpolation is used. You can customize this behaviour by passing a callback to the `prompt_interpolation_callback` parameter when enabling FreeNoise.
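
For instance, you could swap linear interpolation for a smoother ease-in/ease-out curve by supplying your own callback. The sketch below is only illustrative: the `prompt_interpolation_callback` parameter name comes from the pipeline, but the callback signature assumed here (start and end frame indices plus their encoded prompt embeddings in, stacked per-frame embeddings out) should be verified against the version of Diffusers you have installed.

```python
import math

import torch

# Hypothetical custom interpolation callback: the signature is assumed to mirror the
# default linear interpolation between the embeddings of two neighbouring frame prompts.
def eased_prompt_interpolation(
    start_index: int, end_index: int, start_embedding: torch.Tensor, end_embedding: torch.Tensor
) -> torch.Tensor:
    num_frames = end_index - start_index + 1
    interpolated = []
    for i in range(num_frames):
        t = i / max(num_frames - 1, 1)
        alpha = 0.5 * (1 - math.cos(math.pi * t))  # ease-in/ease-out instead of linear
        interpolated.append((1 - alpha) * start_embedding + alpha * end_embedding)
    return torch.cat(interpolated)

# `pipe` is an AnimateDiff pipeline loaded as in the full example below
pipe.enable_free_noise(prompt_interpolation_callback=eased_prompt_interpolation)
```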

Full example:

```python
import torch
from diffusers import AutoencoderKL, AnimateDiffPipeline, LCMScheduler, MotionAdapter
from diffusers.utils import export_to_video, load_image

# Load pipeline
dtype = torch.float16
motion_adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM", torch_dtype=dtype)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=dtype)

pipe = AnimateDiffPipeline.from_pretrained("emilianJR/epiCRealism", motion_adapter=motion_adapter, vae=vae, torch_dtype=dtype)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config, beta_schedule="linear")

pipe.load_lora_weights(
    "wangfuyun/AnimateLCM", weight_name="AnimateLCM_sd15_t2v_lora.safetensors", adapter_name="lcm_lora"
)
pipe.set_adapters(["lcm_lora"], [0.8])

# Enable FreeNoise for long prompt generation
pipe.enable_free_noise(context_length=16, context_stride=4)
pipe.to("cuda")

# Can be a single prompt, or a dictionary mapping frame indices to prompts
prompt = {
    0: "A caterpillar on a leaf, high quality, photorealistic",
    40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic",
    80: "A cocoon on a leaf, flowers in the background, photorealistic",
    120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic",
    160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic",
    200: "A beautiful butterfly, flying away in a forest, photorealistic",
    240: "A cyberpunk butterfly, neon lights, glowing",
}
negative_prompt = "bad quality, worst quality, jpeg artifacts"

# Run inference
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    num_frames=256,
    guidance_scale=2.5,
    num_inference_steps=10,
    generator=torch.Generator("cpu").manual_seed(0),
)

# Save video
frames = output.frames[0]
export_to_video(frames, "output.mp4", fps=16)
```

### FreeNoise memory savings

Since FreeNoise processes multiple frames together, there are parts of the model where the memory required exceeds what is available on typical consumer GPUs. The main memory bottlenecks that we identified are the spatial and temporal attention blocks, upsampling and downsampling blocks, resnet blocks, and feed-forward layers. Since most of these blocks operate effectively only on the channel/embedding dimension, one can perform chunked inference across the batch dimensions. The batch dimensions in AnimateDiff are either spatial (`[B x F, H x W, C]`) or temporal (`[B x H x W, F, C]`) in nature (it may seem counter-intuitive, but these batch dimensions are correct, because spatial blocks process across the `B x F` dimension while temporal blocks process across the `B x H x W` dimension). We introduce a `SplitInferenceModule` that makes it easier to chunk across any dimension and perform inference. This saves a lot of memory but comes at the cost of requiring more time for inference.
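
Conceptually, split inference boils down to running a block on smaller slices of its input along a batch-like dimension and concatenating the results. The helper below is a generic illustration of that idea rather than the actual `SplitInferenceModule` code; its name and signature are made up for the example.

```python
import torch
import torch.nn as nn

def chunked_forward(module: nn.Module, hidden_states: torch.Tensor, split_size: int, dim: int = 0) -> torch.Tensor:
    # Run `module` on slices of `hidden_states` along `dim` and stitch the outputs together.
    # Peak memory now scales with `split_size` instead of the full batch-like dimension,
    # at the cost of more sequential forward passes.
    outputs = [module(chunk) for chunk in torch.split(hidden_states, split_size, dim=dim)]
    return torch.cat(outputs, dim=dim)
```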

```diff
# Load pipeline and adapters
# ...
+ pipe.enable_free_noise_split_inference()
+ pipe.unet.enable_forward_chunking(16)
```

The `pipe.enable_free_noise_split_inference` method accepts two parameters: `spatial_split_size` (defaults to `256`) and `temporal_split_size` (defaults to `16`). These can be configured based on how much VRAM you have available: a lower split size results in lower memory usage but slower inference, whereas a larger split size results in faster inference at the cost of more memory.
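
For example, on a GPU with little spare VRAM you might lower both split sizes; the values below are illustrative rather than recommendations.

```python
# Smaller splits trade inference speed for lower peak memory usage
pipe.enable_free_noise_split_inference(spatial_split_size=128, temporal_split_size=8)
pipe.unet.enable_forward_chunking(16)
```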

## Using `from_single_file` with the MotionAdapter
