Commit fc85496: update docs for masked loss
1 parent: 2870be9

File tree: 3 files changed (+73 −1 lines)

README.md (+8)

@@ -161,6 +161,10 @@ The majority of scripts is licensed under ASL 2.0 (including codes from Diffuser
  - Example: `--network_args "loraplus_unet_lr_ratio=16" "loraplus_text_encoder_lr_ratio=4"` or `--network_args "loraplus_lr_ratio=16" "loraplus_text_encoder_lr_ratio=4"` etc.
  - `network_module` `networks.lora` and `networks.dylora` are available.
+ - The feature to use the transparency (alpha channel) of the image as a mask in the loss calculation has been added. PR [#1223](https://github.com/kohya-ss/sd-scripts/pull/1223) Thanks to u-haru!
+   - Transparent parts are ignored during training. Specify the `--alpha_mask` option in the training script, or set `alpha_mask = true` in the dataset configuration file.
+   - See [About masked loss](./docs/masked_loss_README.md) for details.
  - LoRA training in SDXL now supports block-wise learning rates and block-wise dim (rank). PR [#1331](https://github.com/kohya-ss/sd-scripts/pull/1331)
    - Specify the learning rate and dim (rank) for each block.
    - See [Block-wise learning rates in LoRA](./docs/train_network_README-ja.md#階層別学習率) for details (Japanese only).
@@ -214,6 +218,10 @@ https://github.com/kohya-ss/sd-scripts/pull/1290) Thanks to frodo821!
  - Example: `--network_args "loraplus_unet_lr_ratio=16" "loraplus_text_encoder_lr_ratio=4"` or `--network_args "loraplus_lr_ratio=16" "loraplus_text_encoder_lr_ratio=4"` etc.
  - Available with `network_module` `networks.lora` and `networks.dylora`.
+ - The feature to use the transparency (alpha channel) of the image as a mask in the loss calculation has been added. PR [#1223](https://github.com/kohya-ss/sd-scripts/pull/1223) Thanks to u-haru!
+   - Transparent parts are now ignored during training. Specify the `--alpha_mask` option in the training script, or set `alpha_mask = true` in the dataset configuration file.
+   - See [About masked loss](./docs/masked_loss_README-ja.md) for details.
  - SDXL LoRA now supports block-wise learning rates and block-wise dim (rank). PR [#1331](https://github.com/kohya-ss/sd-scripts/pull/1331)
    - Specify the learning rate and dim (rank) for each block.
    - See [Block-wise learning rates in LoRA](./docs/train_network_README-ja.md#階層別学習率) for details.

docs/masked_loss_README-ja.md (+9 −1)

@@ -19,9 +19,17 @@
  - Mask image
  ![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/53e9b0f8-a4bf-49ed-882d-4026f84e8450)

+```toml
+[[datasets.subsets]]
+image_dir = "/path/to/a_zundamon"
+caption_extension = ".txt"
+conditioning_data_dir = "/path/to/a_zundamon_mask"
+num_repeats = 8
+```

  The mask image is the same size as the training image, with the parts to be trained drawn in white and the parts to be ignored in black. Grayscale is also supported (a value of 127 gives a loss weight of 0.5). To be precise, the R channel of the mask image is used.

-DreamBooth 方式の dataset で、`conditioning_data_dir` で指定したディレクトリにマスク画像を保存するしてください。ControlNet のデータセットと同じですので、詳細は [ControlNet-LLLite](train_lllite_README-ja.md#データセットの準備) を参照してください。
+DreamBooth 方式の dataset で、`conditioning_data_dir` で指定したディレクトリにマスク画像を保存してください。ControlNet のデータセットと同じですので、詳細は [ControlNet-LLLite](train_lllite_README-ja.md#データセットの準備) を参照してください。

  ### Using transparency (alpha channel)

docs/masked_loss_README.md (new file, +56)

@@ -0,0 +1,56 @@

## Masked Loss

Masked loss is a feature that allows you to train only part of an image by calculating the loss only for the region specified by a mask. For example, to train a character, you can mask the character so that only it is trained and the background is ignored.

There are two ways to specify the mask for masked loss:

- Using a mask image
- Using the transparency (alpha channel) of the image

The samples use the "AI image model training data" from [ZunZunPJ Illustration/3D Data](https://zunko.jp/con_illust.html).

### Using a mask image

With this method, you prepare a mask image corresponding to each training image. Give each mask image the same file name as its training image and save it in a separate directory.

- Training image
  ![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/607c5116-5f62-47de-8b66-9c4a597f0441)
- Mask image
  ![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/53e9b0f8-a4bf-49ed-882d-4026f84e8450)

```toml
[[datasets.subsets]]
image_dir = "/path/to/a_zundamon"
caption_extension = ".txt"
conditioning_data_dir = "/path/to/a_zundamon_mask"
num_repeats = 8
```

The mask image is the same size as the training image, with the parts to be trained drawn in white and the parts to be ignored in black. Grayscale is also supported (127 gives a loss weight of 0.5). Currently, the R channel of the mask image is used.

Use a DreamBooth-style dataset, and save the mask images in the directory specified by `conditioning_data_dir`. This is the same as a ControlNet dataset, so see [ControlNet-LLLite](train_lllite_README.md#Preparing-the-dataset) for details.
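The grayscale weighting described above (white trains at 1.0, black is ignored at 0.0, 127 gives about 0.5, and only the R channel is consulted) can be sketched in a few lines. This is a minimal illustration of the mapping, not the actual sd-scripts implementation:

```python
import numpy as np

def mask_to_weight(mask_rgb: np.ndarray) -> np.ndarray:
    """Map the R channel of an RGB mask image to loss weights in [0, 1]."""
    return mask_rgb[..., 0].astype(np.float32) / 255.0

# One row of three mask pixels: white, mid-gray, and a pixel where
# only the G/B channels are set (which therefore counts as black).
mask = np.zeros((1, 3, 3), dtype=np.uint8)
mask[0, 0] = (255, 255, 255)   # white: weight 1.0 (trained)
mask[0, 1] = (127, 127, 127)   # gray: weight ~0.5
mask[0, 2] = (0, 200, 200)     # R is 0, so weight 0.0 (ignored)
print(mask_to_weight(mask))    # approx [[1.0, 0.498, 0.0]]
```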
### Using the transparency (alpha channel) of the image

The transparency (alpha channel) of the training image is used as the mask. Parts with transparency 0 are ignored, and parts with transparency 255 are trained. For semi-transparent parts, the loss weight varies with the transparency (127 gives a weight of about 0.5).

![image](https://github.com/kohya-ss/sd-scripts/assets/52813779/0baa129b-446a-4aac-b98c-7208efb0e75e)

Note: each image is a transparent PNG.

Specify `--alpha_mask` in the training script options, or set `alpha_mask` in a subset of the dataset configuration file, for example:

```toml
[[datasets.subsets]]
image_dir = "/path/to/image/dir"
caption_extension = ".txt"
num_repeats = 8
alpha_mask = true
```
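Under the same caveat (an illustration of the described behavior, not sd-scripts internals), the alpha-channel variant simply reads the weights from the image's fourth channel, so no separate mask files are needed:

```python
import numpy as np

def alpha_to_weight(rgba: np.ndarray) -> np.ndarray:
    """Alpha 0 -> ignored (0.0), alpha 255 -> trained (1.0), 127 -> ~0.5."""
    return rgba[..., 3].astype(np.float32) / 255.0

# One row of three RGBA pixels: transparent, semi-transparent, opaque.
rgba = np.zeros((1, 3, 4), dtype=np.uint8)
rgba[0, :, 3] = (0, 127, 255)
print(alpha_to_weight(rgba))   # approx [[0.0, 0.498, 1.0]]
```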
## Notes on training

- At the moment, only DreamBooth-style datasets are supported.
- The mask is applied after being downscaled to 1/8 size, the size of the latents. Fine details (such as ahoge or earrings) may therefore not be learned well; some dilation of the mask may be necessary.
- When using masked loss, it may not be necessary to include parts that are not to be trained in the caption. (To be verified)
- With `alpha_mask`, the latents cache is automatically regenerated when the mask is enabled or disabled.
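The 1/8-downscale caveat can be made concrete: a thin masked detail contributes only a fraction of the weight at latent resolution, and dilating the mask beforehand preserves it better. A rough NumPy sketch, using average pooling and a hand-rolled binary dilation (my own assumptions for illustration, not sd-scripts code):

```python
import numpy as np

def downscale_8x(weight: np.ndarray) -> np.ndarray:
    """Average-pool a 2-D weight map by 8x, matching the latent size."""
    h, w = weight.shape
    return weight.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))

def dilate(mask: np.ndarray, radius: int) -> np.ndarray:
    """Binary dilation with a square structuring element (pure NumPy)."""
    out = mask.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out = np.maximum(out, np.roll(np.roll(mask, dy, 0), dx, 1))
    return out

# A 2-pixel-wide detail (think ahoge) in a 16x16 mask.
mask = np.zeros((16, 16), dtype=np.float32)
mask[:, 6:8] = 1.0

print(downscale_8x(mask).max())             # only 0.25 at latent size
print(downscale_8x(dilate(mask, 3)).max())  # dilation raises it to 0.625
```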
