* begin animatediff img2video and video2video
* revert animatediff to original implementation
* add img2video as pipeline
* update
* add vid2vid pipeline
* update imports
* update
* remove copied from line for check_inputs
* update
* update examples
* add multi-batch support
* fix __init__.py files
* move img2vid to community
* update community readme and examples
* fix
* make fix-copies
* add vid2vid batch params
* apply suggestions from review
Co-Authored-By: Dhruv Nair <[email protected]>
* add test for animatediff vid2vid
* torch.stack -> torch.cat
Co-Authored-By: Dhruv Nair <[email protected]>
* make style
* docs for vid2vid
* update
* fix prepare_latents
* fix docs
* remove img2vid
* update README to :main
* remove slow test
* refactor pipeline output
* update docs
* update docs
* merge community readme from :main
* final fix i promise
* add support for url in animatediff example
* update example
* update callbacks to latest implementation
* Update src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py
Co-authored-by: Patrick von Platen <[email protected]>
* Update src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py
Co-authored-by: Patrick von Platen <[email protected]>
* fix merge
* Apply suggestions from code review
* remove callback and callback_steps as suggested in review
* Update tests/pipelines/animatediff/test_animatediff_video2video.py
Co-authored-by: Patrick von Platen <[email protected]>
* fix import error caused by unet refactor in #6630
* fix numpy import error after tensor2vid refactor in #6626
* make fix-copies
* fix numpy error
* fix progress bar test
---------
Co-authored-by: Dhruv Nair <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
docs/source/en/api/pipelines/animatediff.md (+111 lines)
@@ -25,13 +25,16 @@ The abstract of the paper is the following:
| Pipeline | Tasks | Demo
|---|---|:---:|
|[AnimateDiffPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py)|*Text-to-Video Generation with AnimateDiff*|
+|[AnimateDiffVideoToVideoPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff_video2video.py)|*Video-to-Video Generation with AnimateDiff*|

## Available checkpoints

Motion Adapter checkpoints can be found under [guoyww](https://huggingface.co/guoyww/). These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5.

## Usage example

+### AnimateDiffPipeline
+
AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the Resnet and Attention blocks in Stable Diffusion UNet.

The following example demonstrates how to use a *MotionAdapter* checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5.
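The text-to-video snippet that this sentence introduces is unchanged context, so it does not appear in the diff. As a rough, minimal sketch of the usage pattern described above — the base-model ID and the generation settings below are illustrative choices, not values taken from this commit:

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Motion modules trained for Stable Diffusion 1.5-based models.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
# Any SD 1.4/1.5-based checkpoint can serve as the base model.
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)

output = pipe(
    prompt="masterpiece, best quality, a panda surfing a wave, highly detailed",
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "animation.gif")
```

As the docs note, the motion adapter pairs with an SD 1.5-style base model, so any similarly based finetuned checkpoint can be substituted.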
@@ -98,6 +101,114 @@ AnimateDiff tends to work better with finetuned Stable Diffusion models. If you

</Tip>

+### AnimateDiffVideoToVideoPipeline
+
+AnimateDiff can also be used to generate visually similar videos or enable style/character/background or other edits starting from an initial video, allowing you to seamlessly explore creative possibilities.
+
+```python
+import imageio
+import requests
+import torch
+from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter

[... diff truncated in this capture: the remainder of the example and most of the results table are missing ...]

+      alt="closeup of tony stark, robert downey jr, fireworks"
+      style="width: 300px;" />
+    </td>
+  </tr>
+</table>
+

## Using Motion LoRAs

Motion LoRAs are a collection of LoRAs that work with the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint. These LoRAs are responsible for adding specific types of motion to the animations.
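The video-to-video example opened in the hunk above is cut off in this capture. A minimal sketch of how the new AnimateDiffVideoToVideoPipeline is driven, assuming the same loading pattern as the text-to-video pipeline — the source-video URL, base-model ID, helper function, and parameter values are placeholders rather than the ones from the original example:

```python
import imageio
import requests
import torch
from PIL import Image
from diffusers import AnimateDiffVideoToVideoPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif


def load_video(url: str, save_path: str = "input.gif"):
    # Download a source GIF and return its frames as PIL images.
    with open(save_path, "wb") as f:
        f.write(requests.get(url).content)
    return [Image.fromarray(frame).convert("RGB") for frame in imageio.get_reader(save_path)]


adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffVideoToVideoPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # example finetuned SD 1.5 checkpoint
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    beta_schedule="linear",
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)

video = load_video("https://example.com/source.gif")  # placeholder URL
output = pipe(
    video=video,
    prompt="panda playing a guitar, on a boat, in the ocean, high quality",
    negative_prompt="bad quality, worse quality",
    strength=0.6,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
export_to_gif(output.frames[0], "output.gif")
```

The `strength` argument plays the same role as in image-to-image pipelines: lower values stay closer to the input video, higher values give the prompt more influence.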