<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-to-Image Generation with Adapter Conditioning

## Overview

[T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.08453) by Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, Xiaohu Qie.

Using the pretrained models, we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth map and fills in the details.

The abstract of the paper is the following:

*The incredible generative ability of large-scale text-to-image (T2I) models has demonstrated strong power of learning complex structures and meaningful semantics. However, relying solely on text prompts cannot fully take advantage of the knowledge learned by the model, especially when flexible and accurate structure control is needed. In this paper, we aim to "dig out" the capabilities that T2I models have implicitly learned, and then explicitly use them to control the generation more granularly. Specifically, we propose to learn simple and small T2I-Adapters to align internal knowledge in T2I models with external control signals, while freezing the original large T2I models. In this way, we can train various adapters according to different conditions, and achieve rich control and editing effects. Further, the proposed T2I-Adapters have attractive properties of practical value, such as composability and generalization ability. Extensive experiments demonstrate that our T2I-Adapter has promising generation quality and a wide range of applications.*

## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [StableDiffusionAdapterPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py) | *Text-to-Image Generation with T2I-Adapter Conditioning* | - |

## Usage example

In the following, we give a simple example of how to use a *T2IAdapter* checkpoint with Diffusers for inference.
The inference process is the same for all checkpoints:

 1. Take an image and run it through a pre-conditioning processor to obtain a *control image*.
 2. Run the pre-processed *control image* and the *prompt* through the [`StableDiffusionAdapterPipeline`].

Let's have a look at a simple example using the [Color Adapter](https://huggingface.co/RzZ/sd-v1-4-adapter-color).

```python
from diffusers.utils import load_image

image = load_image("https://huggingface.co/RzZ/sd-v1-4-adapter-color/resolve/main/color_ref.png")
```

Then we can create our color palette by simply resizing the image to 8 by 8 pixels and then scaling it back to the original size.

```python
from PIL import Image

# downscale to an 8x8 grid of colors, then upscale back with nearest-neighbour
# resampling so that each palette color becomes a solid block
color_palette = image.resize((8, 8))
color_palette = color_palette.resize((512, 512), resample=Image.Resampling.NEAREST)
```
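
As a quick optional sanity check (not part of the original example), you can confirm that the upscaled image really contains at most the 8 × 8 = 64 palette colors:

```python
# getcolors returns None if the image has more than `maxcolors` distinct colors,
# so getting a list back confirms the palette has at most 64 colors
colors = color_palette.getcolors(maxcolors=64)
print(len(colors))
```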

Let's take a look at the processed image.

With `color_palette` in hand, we can create the [`StableDiffusionAdapterPipeline`] with a pretrained checkpoint.

```py
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

# load the adapter in the same dtype as the pipeline to avoid a dtype mismatch
adapter = T2IAdapter.from_pretrained("RzZ/sd-v1-4-adapter-color", torch_dtype=torch.float16)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    adapter=adapter,
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```
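
If GPU memory is tight, the memory helpers listed in the API section at the bottom of this page can optionally be enabled on the same pipeline object before running inference (the exact savings depend on your hardware and installed packages):

```py
# optional: trade a bit of speed for a lower peak memory footprint
pipe.enable_attention_slicing()
# optional: requires the xformers package to be installed
pipe.enable_xformers_memory_efficient_attention()
```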

Finally, we feed the data to the pipeline and wait for the result!

```py
# fix the random seed, so you will get the same result as the example
generator = torch.manual_seed(7)

out_image = pipe(
    ["At night, glowing cubes in front of the beach"],
    image=[color_palette],
    generator=generator,
).images[0]
```

This should take only a few seconds on a GPU (depending on your hardware). The output image then looks as follows:

**Note**: To see how to run all other Adapter checkpoints, please have a look at [T2I-Adapter with Stable Diffusion 1.4](#t2i-adapter-with-stable-diffusion-14).

<!-- TODO: add space -->

## Available checkpoints

T2I-Adapter requires a *control image* in addition to the text-to-image *prompt*.
Each pretrained model is trained with a different conditioning method and therefore requires a different kind of control image. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to learn more.

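As an illustration of such a pre-conditioning step, here is a minimal sketch of producing a Canny control image; the input path and the low/high thresholds are placeholders, and `opencv-python` plus `numpy` are assumed to be installed (this is not code shipped with the checkpoints below):

```py
import cv2
import numpy as np
from PIL import Image

source = Image.open("input.png").convert("RGB")  # placeholder path, use your own image

# detect edges on a grayscale copy; 100/200 are illustrative threshold values
gray = cv2.cvtColor(np.array(source), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)  # white edges on a black background

# replicate the single channel to 3 channels so the result is a regular RGB image
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
```
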
All official checkpoints can be found under the authors' namespace [TencentARC/T2I-Adapter](https://huggingface.co/TencentARC/T2I-Adapter/tree/main/models).

### T2I-Adapter with Stable Diffusion 1.4

| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[RzZ/sd-v1-4-adapter-color](https://huggingface.co/RzZ/sd-v1-4-adapter-color/)<br/> *Trained with spatial color palette* | An image with an 8x8 color palette.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-color/resolve/main/sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/RzZ/sd-v1-4-adapter-color/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-color/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-color/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-canny](https://huggingface.co/RzZ/sd-v1-4-adapter-canny)<br/> *Trained with Canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-canny/resolve/main/sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/RzZ/sd-v1-4-adapter-canny/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-canny/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-canny/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-sketch](https://huggingface.co/RzZ/sd-v1-4-adapter-sketch)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-sketch/resolve/main/sample_input.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/RzZ/sd-v1-4-adapter-sketch/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-sketch/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-sketch/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-depth](https://huggingface.co/RzZ/sd-v1-4-adapter-depth)<br/> *Trained with MiDaS depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-depth/resolve/main/sample_input.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-depth/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-depth/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-depth/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-openpose](https://huggingface.co/RzZ/sd-v1-4-adapter-openpose)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-openpose/resolve/main/sample_input.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-openpose/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-openpose/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-openpose/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-keypose](https://huggingface.co/RzZ/sd-v1-4-adapter-keypose)<br/> *Trained with mmpose skeleton image* | An [mmpose skeleton](https://github.com/open-mmlab/mmpose) image.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-keypose/resolve/main/sample_input.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-keypose/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-keypose/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-keypose/resolve/main/sample_output.png"/></a>|
|[RzZ/sd-v1-4-adapter-seg](https://huggingface.co/RzZ/sd-v1-4-adapter-seg)<br/>*Trained with semantic segmentation* | A [custom](https://github.com/TencentARC/T2I-Adapter/discussions/25) segmentation protocol image.|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-seg/resolve/main/sample_input.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-seg/resolve/main/sample_input.png"/></a>|<a href="https://huggingface.co/RzZ/sd-v1-4-adapter-seg/resolve/main/sample_output.png"><img width="64" src="https://huggingface.co/RzZ/sd-v1-4-adapter-seg/resolve/main/sample_output.png"/></a>|
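
For conditions that are estimated from a photo rather than drawn by hand, the control image can be produced with an off-the-shelf estimator. As a hedged sketch (the model choice, input path, and resizing below are assumptions, not part of the checkpoints above), a MiDaS-style depth map could be obtained with the `transformers` depth-estimation pipeline:

```py
from PIL import Image
from transformers import pipeline

# the default depth-estimation checkpoint is a DPT (MiDaS-style) model
depth_estimator = pipeline("depth-estimation")

source = Image.open("input.png")  # placeholder path, use your own image
depth = depth_estimator(source)["depth"]  # grayscale PIL image: near is bright, far is dark
control_image = depth.convert("RGB").resize((512, 512))
```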

## Mix and match multiple adapters

The [`StableDiffusionAdapterPipeline`] also supports using multiple types of *control image* at once in combination with [`MultiAdapter`].
Here is an example that uses the keypose adapter to control the character's posture and the depth adapter to outline the background.

Just like in the previous example, we first prepare the *control images* for inference. One big difference when using [`MultiAdapter`] is that the conditioning input we send to the pipeline is
combined from multiple images. In this example we stack two 3-channel RGB images (`cond_keypose`, `cond_depth`) together to create a 6-channel image tensor (`cond`).

```py
import torch
from diffusers.utils import load_image

cond_keypose = load_image(
    "https://huggingface.co/RzZ/sd-v1-4-adapter-keypose-depth/resolve/main/sample_input_keypose.png"
)
cond_depth = load_image("https://huggingface.co/RzZ/sd-v1-4-adapter-keypose-depth/resolve/main/sample_input_depth.png")
cond = [[cond_keypose, cond_depth]]

prompt = ["A man walking in an office room with nice view"]
```
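
To build intuition for the 6-channel stacking mentioned above, here is a rough illustration of how the two RGB conditions combine along the channel dimension (a hedged sketch for intuition only, not the pipeline's actual preprocessing code):

```py
import numpy as np
import torch


def to_tensor(img):
    # PIL RGB image -> float tensor of shape (3, H, W) in [0, 1]
    return torch.from_numpy(np.array(img)).permute(2, 0, 1).float() / 255.0


stacked = torch.cat([to_tensor(cond_keypose), to_tensor(cond_depth)], dim=0)
print(stacked.shape)  # (6, H, W) when both conditions share the same spatial size
```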

The two *control images* should look as follows:

Now we can combine the keypose and depth adapters into a single [`MultiAdapter`] and pass it to the
[`StableDiffusionAdapterPipeline`]. You can also play around with the value of `adapter_conditioning_scale` to balance the control between the adapters.

```py
from diffusers import StableDiffusionAdapterPipeline, MultiAdapter, T2IAdapter

adapters = MultiAdapter(
    [
        T2IAdapter.from_pretrained("RzZ/sd-v1-4-adapter-keypose"),
        T2IAdapter.from_pretrained("RzZ/sd-v1-4-adapter-depth"),
    ]
)
adapters = adapters.to(torch.float16)

pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    adapter=adapters,
)
pipe.to("cuda")

images = pipe(prompt, cond, adapter_conditioning_scale=[0.8, 0.8]).images
```

After the prompt and the control images are processed by the pipeline, we should get a result that looks like this:

## T2I-Adapter vs ControlNet

T2I-Adapter is similar to ControlNet. However, T2I-Adapter uses a smaller auxiliary network that is run only once for the entire diffusion process rather than at every denoising step. As a result, T2I-Adapter tends to perform slightly worse than ControlNet, but it is cheaper to run, and running multiple auxiliary networks at once is also cheaper.

## StableDiffusionAdapterPipeline
[[autodoc]] StableDiffusionAdapterPipeline
 - all
 - __call__
 - enable_attention_slicing
 - disable_attention_slicing
 - enable_vae_slicing
 - disable_vae_slicing
 - enable_xformers_memory_efficient_attention
 - disable_xformers_memory_efficient_attention