
Commit 28e2e4a

asfiyab-nvidia authored and Jimmy committed
add stable diffusion tensorrt img2img pipeline (huggingface#3419)
* add stable diffusion tensorrt img2img pipeline

  Signed-off-by: Asfiya Baig <[email protected]>

* update docstrings

  Signed-off-by: Asfiya Baig <[email protected]>

---------

Signed-off-by: Asfiya Baig <[email protected]>
1 parent b9c4abd commit 28e2e4a

File tree

3 files changed (+1102, -7 lines)


examples/community/README.md

file mode 100644 → 100755 (+41, -3 lines)
@@ -31,11 +31,10 @@ If a community doesn't work as expected, please open an issue and ping the author
| UnCLIP Image Interpolation Pipeline | Diffusion Pipeline that allows passing two images/image_embeddings and produces images while interpolating between their image-embeddings | [UnCLIP Image Interpolation Pipeline](#unclip-image-interpolation-pipeline) | - | [Naga Sai Abhinay Devarinti](https://github.com/Abhinay1997/) |
| DDIM Noise Comparative Analysis Pipeline | Investigating how the diffusion models learn visual concepts from each noise level (which is a contribution of [P2 weighting (CVPR 2022)](https://arxiv.org/abs/2204.00227)) | [DDIM Noise Comparative Analysis Pipeline](#ddim-noise-comparative-analysis-pipeline) | - | [Aengus (Duc-Anh)](https://github.com/aengusng8) |
| CLIP Guided Img2Img Stable Diffusion Pipeline | Doing CLIP guidance for image to image generation with Stable Diffusion | [CLIP Guided Img2Img Stable Diffusion](#clip-guided-img2img-stable-diffusion) | - | [Nipun Jindal](https://github.com/nipunjindal/) |
-| TensorRT Stable Diffusion Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
+| TensorRT Stable Diffusion Text to Image Pipeline | Accelerates the Stable Diffusion Text2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Text to Image Pipeline](#tensorrt-text2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| EDICT Image Editing Pipeline | Diffusion pipeline for text-guided image editing | [EDICT Image Editing Pipeline](#edict-image-editing-pipeline) | - | [Joqsan Azocar](https://github.com/Joqsan) |
| Stable Diffusion RePaint | Stable Diffusion pipeline using [RePaint](https://arxiv.org/abs/2201.0986) for inpainting. | [Stable Diffusion RePaint](#stable-diffusion-repaint ) | - | [Markus Pobitzer](https://github.com/Markus-Pobitzer) |
-
-
+| TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |

To load a custom pipeline you just need to pass the `custom_pipeline` argument to `DiffusionPipeline`, as one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines, we will merge them quickly.

```py
@@ -1282,3 +1281,42 @@ pipe = pipe.to("cuda")
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```
+
+### TensorRT Image2Image Stable Diffusion Pipeline
+
+The TensorRT Pipeline can be used to accelerate the Image2Image Stable Diffusion Inference run.
+
+NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
+
+```python
+import requests
+from io import BytesIO
+from PIL import Image
+import torch
+from diffusers import DDIMScheduler
+from diffusers.pipelines.stable_diffusion import StableDiffusionImg2ImgPipeline
+
+# Use the DDIMScheduler scheduler here instead
+scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
+                                          subfolder="scheduler")
+
+
+pipe = StableDiffusionImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
+                                                      custom_pipeline="stable_diffusion_tensorrt_img2img",
+                                                      revision='fp16',
+                                                      torch_dtype=torch.float16,
+                                                      scheduler=scheduler,)
+
+# re-use cached folder to save ONNX models and TensorRT Engines
+pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16',)
+
+pipe = pipe.to("cuda")
+
+url = "https://pajoca.com/wp-content/uploads/2022/09/tekito-yamakawa-1.png"
+response = requests.get(url)
+input_image = Image.open(BytesIO(response.content)).convert("RGB")
+
+prompt = "photorealistic new zealand hills"
+image = pipe(prompt, image=input_image, strength=0.75,).images[0]
+image.save('tensorrt_img2img_new_zealand_hills.png')
+```
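The table entry renamed above also points to a TensorRT Text to Image variant, which loads through the same `custom_pipeline` mechanism as the Image2Image example added in this commit. Below is a minimal sketch, assuming (by analogy with the Image2Image example, not from this diff) that the text-to-image variant is registered as `stable_diffusion_tensorrt_txt2img` and exposes the same `set_cached_folder` helper:

```python
import torch

from diffusers import DDIMScheduler, StableDiffusionPipeline

# DDIM scheduler, matching the Image2Image example above
scheduler = DDIMScheduler.from_pretrained("stabilityai/stable-diffusion-2-1",
                                          subfolder="scheduler")

# "stable_diffusion_tensorrt_txt2img" is the assumed community pipeline name
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1",
                                               custom_pipeline="stable_diffusion_tensorrt_txt2img",
                                               revision='fp16',
                                               torch_dtype=torch.float16,
                                               scheduler=scheduler)

# assumed to cache ONNX models and TensorRT engines, as in the Image2Image variant
pipe.set_cached_folder("stabilityai/stable-diffusion-2-1", revision='fp16')

pipe = pipe.to("cuda")

prompt = "photorealistic new zealand hills"
image = pipe(prompt).images[0]
image.save("tensorrt_txt2img_new_zealand_hills.png")
```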
