Commit 2e3541d

[Community Pipeline] Unclip Image Interpolation (#2400)
* unclip img interpolation poc
* Added code sample and refactoring.
1 parent 2b4f849 commit 2e3541d

2 files changed: +538 additions, 0 deletions


Diff for: examples/community/README.md

+45 lines
@@ -28,6 +28,7 @@ Stable Diffusion v1.1-1.4 Comparison | Run all 4 model checkpoints for Stable Di
 MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt | [MagicMix](#magic-mix) | - | [Partho Das](https://github.com/daspartho) |
 | Stable UnCLIP | Diffusion Pipeline for combining prior model (generate clip image embedding from text, UnCLIPPipeline `"kakaobrain/karlo-v1-alpha"`) and decoder pipeline (decode clip image embedding to image, StableDiffusionImageVariationPipeline `"lambdalabs/sd-image-variations-diffusers"` ). | [Stable UnCLIP](#stable-unclip) | - |[Ray Wang](https://wrong.wang) |
 | UnCLIP Text Interpolation Pipeline | Diffusion Pipeline that allows passing two prompts and produces images while interpolating between the text-embeddings of the two prompts | [UnCLIP Text Interpolation Pipeline](#unclip-text-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
+| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
 
 
 
@@ -989,3 +990,47 @@ The resulting images in order:-
 ![result_3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_3.png)
 ![result_4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_4.png)
 ![result_5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPTextInterpolationSamples/resolve/main/lion_to_cub_5.png)
+
+### UnCLIP Image Interpolation Pipeline
+
+This Diffusion Pipeline takes two images, or an image_embeddings tensor of size 2, and interpolates between their embeddings using spherical interpolation (slerp). The input images/image_embeddings are converted to image embeddings by the pipeline's image_encoder, and the interpolation is performed on the resulting embeddings over the number of steps specified (5 by default).
+
+```python
+import torch
+from diffusers import DiffusionPipeline
+from PIL import Image
+
+device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+dtype = torch.float16 if torch.cuda.is_available() else torch.bfloat16
+
+pipe = DiffusionPipeline.from_pretrained(
+    "kakaobrain/karlo-v1-alpha-image-variations",
+    torch_dtype=dtype,
+    custom_pipeline="unclip_image_interpolation",
+)
+pipe.to(device)
+
+images = [Image.open("./starry_night.jpg"), Image.open("./flowers.jpg")]
+# Seed the generator for reproducible interpolations.
+generator = torch.Generator(device=device).manual_seed(42)
+
+output = pipe(image=images, steps=6, generator=generator)
+
+for i, image in enumerate(output.images):
+    image.save(f"starry_to_flowers_{i}.jpg")
+```
+
+The original images:
+
+![starry](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_night.jpg)
+![flowers](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/flowers.jpg)
+
+The resulting images in order:
+
+![result0](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_0.png)
+![result1](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_1.png)
+![result2](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_2.png)
+![result3](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_3.png)
+![result4](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_4.png)
+![result5](https://huggingface.co/datasets/NagaSaiAbhinay/UnCLIPImageInterpolationSamples/resolve/main/starry_to_flowers_5.png)
+
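The spherical interpolation (slerp) the pipeline applies to the two image embeddings can be sketched in plain Python. This `slerp` is an illustrative stand-alone helper operating on lists of floats, not the pipeline's internal implementation; the toy 2-D "embeddings" below stand in for the CLIP image embeddings:

```python
import math

def slerp(t, low, high):
    """Spherical linear interpolation between two vectors at weight t in [0, 1]."""
    norm_l = math.sqrt(sum(x * x for x in low))
    norm_h = math.sqrt(sum(x * x for x in high))
    # Angle between the (normalized) vectors, clamped for numerical safety.
    dot = sum((x / norm_l) * (y / norm_h) for x, y in zip(low, high))
    omega = math.acos(max(-1.0, min(1.0, dot)))
    so = math.sin(omega)
    if abs(so) < 1e-6:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * x + t * y for x, y in zip(low, high)]
    wl = math.sin((1 - t) * omega) / so
    wh = math.sin(t * omega) / so
    return [wl * x + wh * y for x, y in zip(low, high)]

# Five evenly spaced interpolants between two toy 2-D "embeddings",
# mirroring the pipeline's default of 5 steps.
a, b = [1.0, 0.0], [0.0, 1.0]
frames = [slerp(i / 4, a, b) for i in range(5)]
```

Unlike plain linear interpolation, slerp keeps the interpolants on the arc between the endpoints, so intermediate embeddings retain the norm the decoder expects.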

0 commit comments