Commit dbe3316

stevhliu and hari10599 authored and committed
[docs] Adapt a model (huggingface#3326)
* first draft
* apply feedback
* conv_in.weight thrown away
1 parent 45b86c9 commit dbe3316

File tree

2 files changed: +44 lines, -0 lines


docs/source/en/_toctree.yml (+2 lines)

```diff
@@ -62,6 +62,8 @@
     title: Overview
   - local: training/create_dataset
     title: Create a dataset for training
+  - local: training/adapt_a_model
+    title: Adapt a model to a new task
   - local: training/unconditional_training
     title: Unconditional image generation
   - local: training/text_inversion
```
New file: training/adapt_a_model (+42 lines)

# Adapt a model to a new task

Many diffusion systems share the same components, allowing you to adapt a pretrained model for one task to an entirely different task.

This guide will show you how to adapt a pretrained text-to-image model for inpainting by initializing and modifying the architecture of a pretrained [`UNet2DConditionModel`].

## Configure UNet2DConditionModel parameters

A [`UNet2DConditionModel`] by default accepts 4 channels in the [input sample](https://huggingface.co/docs/diffusers/v0.16.0/en/api/models#diffusers.UNet2DConditionModel.in_channels). For example, load a pretrained text-to-image model like [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) and take a look at the number of `in_channels`:
```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipeline.unet.config["in_channels"]
4
```
Inpainting requires 9 channels in the input sample because the noisy latents (4 channels), the mask (1 channel), and the masked image latents (4 channels) are concatenated together. You can check this value in a pretrained inpainting model like [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting):
```py
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-inpainting")
pipeline.unet.config["in_channels"]
9
```
To adapt your text-to-image model for inpainting, you'll need to change the number of `in_channels` from 4 to 9.

Initialize a [`UNet2DConditionModel`] with the pretrained text-to-image model weights, and change `in_channels` to 9. Because the shape of the input layer is different now, you also need to pass `ignore_mismatched_sizes=True` and `low_cpu_mem_usage=False` to avoid a size mismatch error.
```py
from diffusers import UNet2DConditionModel

model_id = "runwayml/stable-diffusion-v1-5"
unet = UNet2DConditionModel.from_pretrained(
    model_id, subfolder="unet", in_channels=9, low_cpu_mem_usage=False, ignore_mismatched_sizes=True
)
```
The other components of the text-to-image model are initialized from their pretrained weights, but the input channel weights (`conv_in.weight`) of the `unet` are randomly initialized. It is important to finetune the model for inpainting, otherwise the model only returns noise.
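
As a quick sanity check, the snippet below is a minimal sketch (it assumes the `unet` created above and the default `stable-diffusion-v1-5` configuration, whose first block has 320 output channels) showing that only the input convolution changed shape:

```py
# Minimal sanity-check sketch: only `conv_in` should have a new shape.
print(unet.config["in_channels"])  # 9
print(unet.conv_in.weight.shape)   # torch.Size([320, 9, 3, 3]) -- randomly initialized
print(unet.conv_out.weight.shape)  # torch.Size([4, 320, 3, 3]) -- loaded from the checkpoint
```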
