Commit e3ddd1f (1 parent: 0640f01)

update README and format code
2 files changed: +11 −2 lines

README.md (+4)

```diff
@@ -177,6 +177,8 @@ https://github.com/kohya-ss/sd-scripts/pull/1290) Thanks to frodo821!
 
 - Fixed a bug that `caption_separator` cannot be specified in the subset in the dataset settings .toml file. PR [#1312](https://github.com/kohya-ss/sd-scripts/pull/1312) and [#1313](https://github.com/kohya-ss/sd-scripts/pull/1313) Thanks to rockerBOO!
 
+- Fixed a potential bug in ControlNet-LLLite training. PR [#1322](https://github.com/kohya-ss/sd-scripts/pull/1322) Thanks to aria1th!
+
 - Fixed some bugs when using DeepSpeed. Related [#1247](https://github.com/kohya-ss/sd-scripts/pull/1247)
 
 - Added a prompt option `--f` to `gen_imgs.py` to specify the file name when saving. Also, Diffusers-based keys for LoRA weights are now supported.
```
```diff
@@ -219,6 +221,8 @@ https://github.com/kohya-ss/sd-scripts/pull/1290) Thanks to frodo821!
 
 - Fixed a bug where `caption_separator` could not be specified per subset in the dataset settings .toml file. PR [#1312](https://github.com/kohya-ss/sd-scripts/pull/1312) and [#1313](https://github.com/kohya-ss/sd-scripts/pull/1313) Thanks to rockerBOO!
 
+- Fixed a potential bug in ControlNet-LLLite training. PR [#1322](https://github.com/kohya-ss/sd-scripts/pull/1322) Thanks to aria1th!
+
 - Fixed some bugs when using DeepSpeed. Related [#1247](https://github.com/kohya-ss/sd-scripts/pull/1247)
 
 - Added a `--f` prompt option to `gen_imgs.py` to specify the file name when saving. The script now also supports LoRA weights with Diffusers-based keys.
```
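The `--f` flag above is a *prompt option*, i.e. an option embedded in the prompt string itself rather than a command-line argument. As a rough illustration of how that style of option can be pulled out of a prompt, here is a toy parser; the splitting rule is an assumption for demonstration, not `gen_imgs.py`'s actual grammar:

```python
import re

def split_prompt_options(prompt):
    """Split 'text --f name --w 768' into (text, {'f': 'name', 'w': '768'})."""
    # re.split with a capturing group keeps each option name in the result list,
    # so the tail alternates: name, value, name, value, ...
    parts = re.split(r"\s--(\w+)\s+", prompt)
    text, rest = parts[0].strip(), parts[1:]
    opts = {rest[i]: rest[i + 1].strip() for i in range(0, len(rest), 2)}
    return text, opts

text, opts = split_prompt_options("a cat on a mat --f my_image --w 768")
```

A prompt with no embedded options simply comes back with an empty option dict.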

sdxl_train_control_net_lllite.py (+7 −2)

```diff
@@ -15,6 +15,7 @@
 
 import torch
 from library.device_utils import init_ipex, clean_memory_on_device
+
 init_ipex()
 
 from torch.nn.parallel import DistributedDataParallel as DDP
```
```diff
@@ -439,7 +440,9 @@ def remove_model(old_ckpt_name):
 
 # Sample noise, sample a random timestep for each image, and add noise to the latents,
 # with noise offset and/or multires noise if specified
-noise, noisy_latents, timesteps, huber_c = train_util.get_noise_noisy_latents_and_timesteps(args, noise_scheduler, latents)
+noise, noisy_latents, timesteps, huber_c = train_util.get_noise_noisy_latents_and_timesteps(
+    args, noise_scheduler, latents
+)
 
 noisy_latents = noisy_latents.to(weight_dtype)  # TODO check why noisy_latents is not weight_dtype
 
```
```diff
@@ -458,7 +461,9 @@ def remove_model(old_ckpt_name):
 else:
     target = noise
 
-loss = train_util.conditional_loss(noise_pred.float(), target.float(), reduction="none", loss_type=args.loss_type, huber_c=huber_c)
+loss = train_util.conditional_loss(
+    noise_pred.float(), target.float(), reduction="none", loss_type=args.loss_type, huber_c=huber_c
+)
 loss = loss.mean([1, 2, 3])
 
 loss_weights = batch["loss_weights"]  # weight for each sample
```
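The rewrapped `conditional_loss` call selects an elementwise loss via `loss_type` (with `huber_c` parameterizing the Huber variant), and the lines after it reduce the per-pixel loss to one scalar per sample before `loss_weights` is applied. A rough NumPy sketch of that reduction pattern; the classic Huber formula here is an assumption about what `train_util.conditional_loss` computes, used only to make the pattern concrete:

```python
import numpy as np

def conditional_loss(pred, target, loss_type="l2", huber_c=0.1):
    """Elementwise loss with reduction='none' semantics (illustrative stand-in)."""
    diff = pred - target
    if loss_type == "l2":
        return diff ** 2
    if loss_type == "huber":  # classic Huber; the real helper may use a different form
        a = np.abs(diff)
        return np.where(a <= huber_c, 0.5 * diff ** 2, huber_c * (a - 0.5 * huber_c))
    raise ValueError(f"unknown loss_type: {loss_type}")

rng = np.random.default_rng(0)
noise_pred = rng.standard_normal((2, 4, 8, 8))
target = rng.standard_normal((2, 4, 8, 8))
loss_weights = np.array([1.0, 0.5])          # hypothetical per-sample weights

loss = conditional_loss(noise_pred, target, loss_type="huber", huber_c=0.1)
loss = loss.mean(axis=(1, 2, 3))             # one scalar per sample, like loss.mean([1, 2, 3])
loss = (loss * loss_weights).mean()          # weighted batch mean
```

Reducing over the non-batch axes first is what lets each sample's weight scale its whole loss contribution.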
