Commit 798e171

Add OmniGen (#10148)

Authored by staoxiao, stevhliu, hlky, and a-r-r-o-w.
Squashed commit message:

* OmniGen model.py
* update OmniGenTransformerModel
* omnigen pipeline
* omnigen pipeline
* update omnigen_pipeline
* test case for omnigen
* update omnigenpipeline
* update docs
* update docs
* offload_transformer
* enable_transformer_block_cpu_offload
* update docs
* reformat
* reformat
* reformat
* update docs
* update docs
* make style
* make style
* Update docs/source/en/api/models/omnigen_transformer.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* update docs
* revert changes to examples/
* update OmniGen2DModel
* make style
* update test cases
* Update docs/source/en/api/pipelines/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/using-diffusers/omnigen.md (Co-authored-by: Steven Liu <[email protected]>)
* update docs
* typo
* Update src/diffusers/models/embeddings.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/models/attention.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/models/transformers/transformer_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update tests/pipelines/omnigen/test_pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py (Co-authored-by: hlky <[email protected]>)
* consistent attention processor
* update
* update
* check_inputs
* make style
* update testpipeline
* update testpipeline

Co-authored-by: Steven Liu <[email protected]>
Co-authored-by: hlky <[email protected]>
Co-authored-by: Aryan <[email protected]>
1 parent ed4b752 commit 798e171

20 files changed: +2543 −4 lines changed

docs/source/en/_toctree.yml (+6)

```diff
@@ -89,6 +89,8 @@
     title: Kandinsky
   - local: using-diffusers/ip_adapter
     title: IP-Adapter
+  - local: using-diffusers/omnigen
+    title: OmniGen
   - local: using-diffusers/pag
     title: PAG
   - local: using-diffusers/controlnet
@@ -292,6 +294,8 @@
     title: LTXVideoTransformer3DModel
   - local: api/models/mochi_transformer3d
     title: MochiTransformer3DModel
+  - local: api/models/omnigen_transformer
+    title: OmniGenTransformer2DModel
   - local: api/models/pixart_transformer2d
     title: PixArtTransformer2DModel
   - local: api/models/prior_transformer
@@ -448,6 +452,8 @@
     title: MultiDiffusion
   - local: api/pipelines/musicldm
     title: MusicLDM
+  - local: api/pipelines/omnigen
+    title: OmniGen
   - local: api/pipelines/pag
     title: PAG
   - local: api/pipelines/paint_by_example
```
docs/source/en/api/models/omnigen_transformer.md (new file, +19)

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# OmniGenTransformer2DModel

A Transformer model that accepts multimodal instructions to generate images for [OmniGen](https://github.com/VectorSpaceLab/OmniGen/).

## OmniGenTransformer2DModel

[[autodoc]] OmniGenTransformer2DModel
docs/source/en/api/pipelines/omnigen.md (new file, +106)

<!--Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->

# OmniGen

[OmniGen: Unified Image Generation](https://arxiv.org/pdf/2409.11340) from BAAI, by Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, Zheng Liu.

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).

## Inference

First, load the pipeline:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
```
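The examples below seed a CPU `torch.Generator` so results are reproducible across runs: two generators created with the same seed produce identical noise, independent of the inference device. A minimal illustration of that determinism:

```python
import torch

# Two CPU generators seeded identically produce identical noise tensors,
# which is why seeding the pipeline this way makes runs reproducible.
g1 = torch.Generator(device="cpu").manual_seed(111)
g2 = torch.Generator(device="cpu").manual_seed(111)

a = torch.randn(4, generator=g1)
b = torch.randn(4, generator=g2)
print(torch.equal(a, b))  # True
```

Passing a fresh, seeded generator to each pipeline call keeps the sampled latents fixed while you vary other parameters such as `guidance_scale`.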
For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image. You can set the `height` and `width` parameters to generate an image of a different size.

```py
prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=3,
    generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image
```
OmniGen supports multimodal inputs. When the input includes an image, add the placeholder `<img><|image_1|></img>` to the text prompt to represent the image. It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original.

```py
from diffusers.utils import load_image

prompt = "<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images = [load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2,
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(222),
).images[0]
image
```
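Prompts that reference input images must embed one placeholder per image. A small helper, hypothetical and not part of diffusers, can build such prompts programmatically, assuming the placeholder index matches the image's 1-based position in `input_images`:

```python
# Hypothetical helper (not part of diffusers): fill a template whose
# positional fields {0}, {1}, ... stand for input images, producing the
# <img><|image_N|></img> placeholders OmniGen expects in the prompt.
def build_omnigen_prompt(template: str, n_images: int) -> str:
    placeholders = [f"<img><|image_{i + 1}|></img>" for i in range(n_images)]
    return template.format(*placeholders)

prompt = build_omnigen_prompt("{0} Remove the woman's earrings.", 1)
print(prompt)  # <img><|image_1|></img> Remove the woman's earrings.
```

Keeping the placeholder order aligned with the `input_images` list avoids editing the wrong image when several are passed.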
## OmniGenPipeline

[[autodoc]] OmniGenPipeline
  - all
  - __call__
