Commit bdf252e

Joqsan and patrickvonplaten authored and Jimmy committed
[Community Pipelines] EDICT pipeline implementation (huggingface#3153)
* EDICT pipeline initial commit - starting point taken from https://github.com/Joqsan/edict-diffusion
* refactor __init__() method
* minor refactoring
* refactor scheduler code - remove scheduler and move its methods to the EDICTPipeline class
* make CFG optional
  - refactor encode_prompt().
  - include optional generator for sampling with vae.
  - minor variable renaming
* add EDICT pipeline description to README.md
* replace preprocess() with VaeImageProcessor
* run make style and make quality commands

---------

Co-authored-by: Patrick von Platen <[email protected]>
1 parent b27acc1 commit bdf252e

File tree

2 files changed: +350 -0 lines

examples/community/README.md

+86
@@ -32,6 +32,8 @@ MagicMix | Diffusion Pipeline for semantic mixing of an image and a text prompt
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - |[Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
| TensorRT Stable Diffusion Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - |[Asfiya Baig](https://github.com/asfiyab-nvidia) |
| EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |

To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.
@@ -1161,3 +1163,87 @@ prompt = "a beautiful photograph of Mt. Fuji during cherry blossom"
image = pipe(prompt).images[0]
image.save('tensorrt_mt_fuji.png')
```

### EDICT Image Editing Pipeline

This pipeline implements the text-guided image editing approach from the paper [EDICT: Exact Diffusion Inversion via Coupled Transformations](https://arxiv.org/abs/2211.12446): the input image is inverted exactly by tracking two coupled latents, which are then denoised under the target prompt to produce the edit. You have to pass:
- the (`PIL`) `image` you want to edit.
- `base_prompt`: the text prompt describing the current image (before editing).
- `target_prompt`: the text prompt describing the desired edited image.

```python
from diffusers import DiffusionPipeline, DDIMScheduler
from transformers import CLIPTextModel
import torch, PIL, requests
from io import BytesIO
from IPython.display import display


def center_crop_and_resize(im):
    width, height = im.size
    d = min(width, height)
    left = (width - d) / 2
    upper = (height - d) / 2
    right = (width + d) / 2
    lower = (height + d) / 2

    return im.crop((left, upper, right, lower)).resize((512, 512))


torch_dtype = torch.float16
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# scheduler and text_encoder param values as in the paper
scheduler = DDIMScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    set_alpha_to_one=False,
    clip_sample=False,
)

text_encoder = CLIPTextModel.from_pretrained(
    pretrained_model_name_or_path="openai/clip-vit-large-patch14",
    torch_dtype=torch_dtype,
)

# initialize pipeline
pipeline = DiffusionPipeline.from_pretrained(
    pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4",
    custom_pipeline="edict_pipeline",
    revision="fp16",
    scheduler=scheduler,
    text_encoder=text_encoder,
    leapfrog_steps=True,
    torch_dtype=torch_dtype,
).to(device)

# download image
image_url = "https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg"
response = requests.get(image_url)
image = PIL.Image.open(BytesIO(response.content))

# preprocess it
cropped_image = center_crop_and_resize(image)

# define the prompts
base_prompt = "A dog"
target_prompt = "A golden retriever"

# run the pipeline
result_image = pipeline(
    base_prompt=base_prompt,
    target_prompt=target_prompt,
    image=cropped_image,
)

display(result_image)
```
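
The pipeline's `__call__` also exposes a few optional arguments (see `edict_pipeline.py` below): `guidance_scale` (default `3.0`), `num_inference_steps` (default `50`), `strength` (default `0.8`), `negative_prompt`, `generator` and `output_type`. A minimal sketch of passing them explicitly, reusing the objects from the example above (the values shown are simply the defaults):

```python
# sketch: same call as above, with the optional arguments spelled out (default values)
result_image = pipeline(
    base_prompt=base_prompt,
    target_prompt=target_prompt,
    image=cropped_image,
    guidance_scale=3.0,       # > 1.0 enables classifier-free guidance
    num_inference_steps=50,   # number of DDIM steps
    strength=0.8,             # fraction of the schedule used for inversion and editing
    negative_prompt=None,     # optional negative prompt for CFG
    generator=None,           # optional torch.Generator used when sampling from the VAE
    output_type="pil",        # one of "latent", "pt", "np", "pil"
)
```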

Init Image

![img2img_init_edict_text_editing](https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1.jpeg)

Output Image

![img2img_edict_text_editing](https://huggingface.co/datasets/Joqsan/images/resolve/main/imagenet_dog_1_cropped_generated.png)

examples/community/edict_pipeline.py

+264
@@ -0,0 +1,264 @@
from typing import Optional

import torch
from PIL import Image
from tqdm.auto import tqdm
from transformers import CLIPTextModel, CLIPTokenizer

from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, UNet2DConditionModel
from diffusers.image_processor import VaeImageProcessor
from diffusers.utils import (
    deprecate,
)


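# Community pipeline implementing EDICT (https://arxiv.org/abs/2211.12446): text-guided image
# editing that inverts the input image exactly by evolving two coupled latent sequences.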
class EDICTPipeline(DiffusionPipeline):
    def __init__(
        self,
        vae: AutoencoderKL,
        text_encoder: CLIPTextModel,
        tokenizer: CLIPTokenizer,
        unet: UNet2DConditionModel,
        scheduler: DDIMScheduler,
        mixing_coeff: float = 0.93,
        leapfrog_steps: bool = True,
    ):
        self.mixing_coeff = mixing_coeff
        self.leapfrog_steps = leapfrog_steps

        super().__init__()
        self.register_modules(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
        )

        self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
        self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)

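    # Encode a prompt with the CLIP text encoder; when classifier-free guidance is enabled,
    # the unconditional (or negative-prompt) embeddings are concatenated in front of it.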
    def _encode_prompt(
        self, prompt: str, negative_prompt: Optional[str] = None, do_classifier_free_guidance: bool = False
    ):
        text_inputs = self.tokenizer(
            prompt,
            padding="max_length",
            max_length=self.tokenizer.model_max_length,
            truncation=True,
            return_tensors="pt",
        )

        prompt_embeds = self.text_encoder(text_inputs.input_ids.to(self.device)).last_hidden_state

        prompt_embeds = prompt_embeds.to(dtype=self.text_encoder.dtype, device=self.device)

        if do_classifier_free_guidance:
            uncond_tokens = "" if negative_prompt is None else negative_prompt

            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=self.tokenizer.model_max_length,
                truncation=True,
                return_tensors="pt",
            )

            negative_prompt_embeds = self.text_encoder(uncond_input.input_ids.to(self.device)).last_hidden_state

            prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])

        return prompt_embeds

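    # EDICT mixing layer used while denoising: averages the coupled latents with weight
    # `mixing_coeff` (p), i.e. x <- p*x + (1-p)*y followed by y <- p*y + (1-p)*x.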
    def denoise_mixing_layer(self, x: torch.Tensor, y: torch.Tensor):
        x = self.mixing_coeff * x + (1 - self.mixing_coeff) * y
        y = self.mixing_coeff * y + (1 - self.mixing_coeff) * x

        return [x, y]

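    # Exact inverse of `denoise_mixing_layer`, applied while noising (inversion), so the
    # mixing introduces no information loss.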
    def noise_mixing_layer(self, x: torch.Tensor, y: torch.Tensor):
        y = (y - (1 - self.mixing_coeff) * x) / self.mixing_coeff
        x = (x - (1 - self.mixing_coeff) * y) / self.mixing_coeff

        return [x, y]

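    # Cumulative alpha (and 1 - alpha) for a timestep; out-of-range (negative) timesteps
    # fall back to the scheduler's `final_alpha_cumprod`.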
    def _get_alpha_and_beta(self, t: torch.Tensor):
        # self.scheduler.alphas_cumprod is always kept on the CPU
        t = int(t)

        alpha_prod = self.scheduler.alphas_cumprod[t] if t >= 0 else self.scheduler.final_alpha_cumprod

        return alpha_prod, 1 - alpha_prod

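    # Inversion (noising) step: algebraically invert the deterministic DDIM update below,
    # solving for the noisier latent from the current `base` latent and the noise prediction.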
    def noise_step(
        self,
        base: torch.Tensor,
        model_input: torch.Tensor,
        model_output: torch.Tensor,
        timestep: torch.Tensor,
    ):
        prev_timestep = timestep - self.scheduler.config.num_train_timesteps / self.scheduler.num_inference_steps

        alpha_prod_t, beta_prod_t = self._get_alpha_and_beta(timestep)
        alpha_prod_t_prev, beta_prod_t_prev = self._get_alpha_and_beta(prev_timestep)

        a_t = (alpha_prod_t_prev / alpha_prod_t) ** 0.5
        b_t = -a_t * (beta_prod_t**0.5) + beta_prod_t_prev**0.5

        next_model_input = (base - b_t * model_output) / a_t

        return model_input, next_model_input.to(base.dtype)

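    # Denoising step: deterministic DDIM update x_prev = a_t * x_t + b_t * eps, where
    # `base` plays the role of x_t and `model_output` is the predicted noise eps.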
    def denoise_step(
        self,
        base: torch.Tensor,
        model_input: torch.Tensor,
        model_output: torch.Tensor,
        timestep: torch.Tensor,
    ):
        prev_timestep = timestep - self.scheduler.config.num_train_timesteps / self.scheduler.num_inference_steps

        alpha_prod_t, beta_prod_t = self._get_alpha_and_beta(timestep)
        alpha_prod_t_prev, beta_prod_t_prev = self._get_alpha_and_beta(prev_timestep)

        a_t = (alpha_prod_t_prev / alpha_prod_t) ** 0.5
        b_t = -a_t * (beta_prod_t**0.5) + beta_prod_t_prev**0.5
        next_model_input = a_t * base + b_t * model_output

        return model_input, next_model_input.to(base.dtype)

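    # Rescale the latents by 1 / scaling_factor and decode them with the VAE to an image
    # tensor in [0, 1].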
    @torch.no_grad()
    def decode_latents(self, latents: torch.Tensor):
        latents = 1 / self.vae.config.scaling_factor * latents
        image = self.vae.decode(latents).sample
        image = (image / 2 + 0.5).clamp(0, 1)
        return image

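    # EDICT inversion: VAE-encode the image, duplicate the latent into the two coupled
    # sequences, then walk the reversed timestep schedule under the base prompt, un-mixing
    # at each step and (with `leapfrog_steps`) alternating which latent is updated first.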
    @torch.no_grad()
    def prepare_latents(
        self,
        image: Image.Image,
        text_embeds: torch.Tensor,
        timesteps: torch.Tensor,
        guidance_scale: float,
        generator: Optional[torch.Generator] = None,
    ):
        do_classifier_free_guidance = guidance_scale > 1.0

        image = image.to(device=self.device, dtype=text_embeds.dtype)
        latent = self.vae.encode(image).latent_dist.sample(generator)

        latent = self.vae.config.scaling_factor * latent

        coupled_latents = [latent.clone(), latent.clone()]

        for i, t in tqdm(enumerate(timesteps), total=len(timesteps)):
            coupled_latents = self.noise_mixing_layer(x=coupled_latents[0], y=coupled_latents[1])

            # j - model_input index, k - base index
            for j in range(2):
                k = j ^ 1

                if self.leapfrog_steps:
                    if i % 2 == 0:
                        k, j = j, k

                model_input = coupled_latents[j]
                base = coupled_latents[k]

                latent_model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input

                noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeds).sample

                if do_classifier_free_guidance:
                    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
                    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

                base, model_input = self.noise_step(
                    base=base,
                    model_input=model_input,
                    model_output=noise_pred,
                    timestep=t,
                )

                coupled_latents[k] = model_input

        return coupled_latents

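    # Full edit: invert the image under `base_prompt` via `prepare_latents`, then denoise
    # the coupled latents under `target_prompt` and decode the result.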
    @torch.no_grad()
    def __call__(
        self,
        base_prompt: str,
        target_prompt: str,
        image: Image.Image,
        guidance_scale: float = 3.0,
        num_inference_steps: int = 50,
        strength: float = 0.8,
        negative_prompt: Optional[str] = None,
        generator: Optional[torch.Generator] = None,
        output_type: Optional[str] = "pil",
    ):
        do_classifier_free_guidance = guidance_scale > 1.0

        image = self.image_processor.preprocess(image)

        base_embeds = self._encode_prompt(base_prompt, negative_prompt, do_classifier_free_guidance)
        target_embeds = self._encode_prompt(target_prompt, negative_prompt, do_classifier_free_guidance)

        self.scheduler.set_timesteps(num_inference_steps, self.device)

        t_limit = num_inference_steps - int(num_inference_steps * strength)
        fwd_timesteps = self.scheduler.timesteps[t_limit:]
        bwd_timesteps = fwd_timesteps.flip(0)

        coupled_latents = self.prepare_latents(image, base_embeds, bwd_timesteps, guidance_scale, generator)

        for i, t in tqdm(enumerate(fwd_timesteps), total=len(fwd_timesteps)):
            # j - model_input index, k - base index
            for k in range(2):
                j = k ^ 1

                if self.leapfrog_steps:
                    if i % 2 == 1:
                        k, j = j, k

                model_input = coupled_latents[j]
                base = coupled_latents[k]

                latent_model_input = torch.cat([model_input] * 2) if do_classifier_free_guidance else model_input

                noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=target_embeds).sample

                if do_classifier_free_guidance:
                    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
                    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

                base, model_input = self.denoise_step(
                    base=base,
                    model_input=model_input,
                    model_output=noise_pred,
                    timestep=t,
                )

                coupled_latents[k] = model_input

            coupled_latents = self.denoise_mixing_layer(x=coupled_latents[0], y=coupled_latents[1])

        # either one is fine
        final_latent = coupled_latents[0]

        if output_type not in ["latent", "pt", "np", "pil"]:
            deprecation_message = (
                f"the output_type {output_type} is outdated. Please make sure to set it to one of these instead: "
                "`pil`, `np`, `pt`, `latent`"
            )
            deprecate("Unsupported output_type", "1.0.0", deprecation_message, standard_warn=False)
            output_type = "np"

        if output_type == "latent":
            image = final_latent
        else:
            image = self.decode_latents(final_latent)
            image = self.image_processor.postprocess(image, output_type=output_type)

        return image

0 commit comments