Refactor OmniGen #10771

Merged (62 commits, Feb 12, 2025)

Commits
36eee40
OmniGen model.py
staoxiao Nov 30, 2024
bbe2b98
update OmniGenTransformerModel
staoxiao Nov 30, 2024
b839590
omnigen pipeline
staoxiao Dec 2, 2024
0d04194
omnigen pipeline
staoxiao Dec 2, 2024
85abe5e
update omnigen_pipeline
staoxiao Dec 3, 2024
db92c69
test case for omnigen
staoxiao Dec 3, 2024
308766c
update omnigenpipeline
staoxiao Dec 4, 2024
4c5e8c5
update docs
staoxiao Dec 5, 2024
d9f80fc
update docs
staoxiao Dec 5, 2024
c78d1f4
offload_transformer
staoxiao Dec 6, 2024
236f14b
enable_transformer_block_cpu_offload
staoxiao Dec 8, 2024
6b52547
update docs
staoxiao Dec 8, 2024
4fef9c8
reformat
staoxiao Dec 8, 2024
f2fc182
reformat
staoxiao Dec 8, 2024
5f3148d
reformat
staoxiao Dec 8, 2024
cdd500e
Merge pull request #1 from huggingface/main
staoxiao Dec 8, 2024
178d377
update docs
staoxiao Dec 8, 2024
08c05f9
update docs
staoxiao Dec 8, 2024
286990d
make style
staoxiao Dec 8, 2024
3bb092b
make style
staoxiao Dec 8, 2024
5925cb9
Update docs/source/en/api/models/omnigen_transformer.md
staoxiao Dec 10, 2024
56aa821
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 10, 2024
1e33ca8
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 10, 2024
c81a84d
update docs
staoxiao Dec 10, 2024
3867830
revert changes to examples/
a-r-r-o-w Dec 11, 2024
c8b8173
Merge branch 'main' into main
a-r-r-o-w Dec 11, 2024
af0effa
update OmniGen2DModel
staoxiao Dec 19, 2024
a0cd392
make style
staoxiao Dec 19, 2024
48fd390
update test cases
staoxiao Dec 19, 2024
78431e1
Update docs/source/en/api/pipelines/omnigen.md
staoxiao Dec 20, 2024
d99a9f8
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 20, 2024
61d802a
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 20, 2024
8d6a35e
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 20, 2024
f8e645b
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 20, 2024
85cdeb9
Update docs/source/en/using-diffusers/omnigen.md
staoxiao Dec 20, 2024
3565837
update docs
staoxiao Dec 20, 2024
0ccca15
typo
staoxiao Dec 29, 2024
d014f95
Update src/diffusers/models/embeddings.py
staoxiao Feb 8, 2025
753daec
Update src/diffusers/models/attention.py
staoxiao Feb 8, 2025
3d30a2a
Update src/diffusers/models/transformers/transformer_omnigen.py
staoxiao Feb 8, 2025
9d1580a
Update src/diffusers/models/transformers/transformer_omnigen.py
staoxiao Feb 8, 2025
6a58746
Update src/diffusers/models/transformers/transformer_omnigen.py
staoxiao Feb 8, 2025
2b464c8
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
7888119
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
39148c3
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
52a6f9e
Update tests/pipelines/omnigen/test_pipeline_omnigen.py
staoxiao Feb 8, 2025
aeea57a
Update tests/pipelines/omnigen/test_pipeline_omnigen.py
staoxiao Feb 8, 2025
6b1177b
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
7003a80
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
792c3e6
Update src/diffusers/pipelines/omnigen/pipeline_omnigen.py
staoxiao Feb 8, 2025
b0c6267
Merge pull request #2 from huggingface/main
staoxiao Feb 8, 2025
f5e3f0b
consistent attention processor
staoxiao Feb 8, 2025
3541ab8
updata
staoxiao Feb 8, 2025
4e9850a
update
staoxiao Feb 8, 2025
f91cfcf
check_inputs
staoxiao Feb 9, 2025
711dded
make style
staoxiao Feb 11, 2025
565e51c
update testpipeline
staoxiao Feb 11, 2025
29ad6ae
update testpipeline
staoxiao Feb 11, 2025
970b086
refactor omnigen
a-r-r-o-w Feb 11, 2025
d5d7caa
Merge branch 'main' into refactor/omnigen
a-r-r-o-w Feb 11, 2025
1eb7939
more updates
a-r-r-o-w Feb 11, 2025
aa32b86
apply review suggestion
a-r-r-o-w Feb 12, 2025
Files changed
11 changes: 11 additions & 0 deletions docs/source/en/api/models/omnigen_transformer.md
@@ -14,6 +14,17 @@ specific language governing permissions and limitations under the License.

A Transformer model that accepts multimodal instructions to generate images for [OmniGen](https://github.com/VectorSpaceLab/OmniGen/).

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

```python
import torch
from diffusers import OmniGenTransformer2DModel

transformer = OmniGenTransformer2DModel.from_pretrained("Shitao/OmniGen-v1-diffusers", subfolder="transformer", torch_dtype=torch.bfloat16)
```
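
The standalone transformer can then be handed to [`OmniGenPipeline`]. A minimal sketch, assuming the `Shitao/OmniGen-v1-diffusers` checkpoint layout used above:

```python
import torch
from diffusers import OmniGenPipeline, OmniGenTransformer2DModel

# Load the transformer separately, then pass it to the pipeline so the other
# components (VAE, tokenizer, scheduler) still come from the same checkpoint.
transformer = OmniGenTransformer2DModel.from_pretrained(
    "Shitao/OmniGen-v1-diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers", transformer=transformer, torch_dtype=torch.bfloat16
)
```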

## OmniGenTransformer2DModel

[[autodoc]] OmniGenTransformer2DModel
40 changes: 7 additions & 33 deletions docs/source/en/api/pipelines/omnigen.md
@@ -19,27 +19,7 @@

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

<Tip>

@@ -49,25 +29,22 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.m

This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).


## Inference

First, load the pipeline:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
```

For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can try setting the `height` and `width` parameters to generate images of different sizes.

```python
prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
prompt=prompt,
@@ -76,14 +53,14 @@ image = pipe(
guidance_scale=3,
generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image.save("output.png")
```

OmniGen supports multimodal inputs.
When the input includes an image, add the placeholder `<img><|image_1|></img>` to the text prompt to represent it.
It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original image.

```python
prompt="<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
@@ -93,14 +70,11 @@ image = pipe(
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(222)).images[0]
image.save("output.png")
```


## OmniGenPipeline

[[autodoc]] OmniGenPipeline
- all
- __call__


81 changes: 42 additions & 39 deletions docs/source/en/using-diffusers/omnigen.md
@@ -19,25 +19,22 @@ For more information, please refer to the [paper](https://arxiv.org/pdf/2409.113
This guide will walk you through using OmniGen for various tasks and use cases.

## Load model checkpoints

Model weights may be stored in separate subfolders on the Hub or locally, in which case you should use the [`~DiffusionPipeline.from_pretrained`] method.

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
```
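
[`~DiffusionPipeline.from_pretrained`] also accepts a local directory. A minimal sketch, assuming a hypothetical local copy of the checkpoint in the diffusers folder layout:

```python
import torch
from diffusers import OmniGenPipeline

# "./OmniGen-v1-diffusers" is a hypothetical local download of the checkpoint
pipe = OmniGenPipeline.from_pretrained(
    "./OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16,
)
```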

## Text-to-image

For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can try setting the `height` and `width` parameters to generate images of different sizes.

```python
import torch
from diffusers import OmniGenPipeline

@@ -55,8 +52,9 @@ image = pipe(
guidance_scale=3,
generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image.save("output.png")
```

<div class="flex justify-center">
<img src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png" alt="generated image"/>
</div>
@@ -67,7 +65,7 @@ OmniGen supports multimodal inputs.
When the input includes an image, add the placeholder `<img><|image_1|></img>` to the text prompt to represent it.
It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original image.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image
@@ -86,9 +84,11 @@ image = pipe(
guidance_scale=2,
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(222)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png"/>
@@ -101,7 +101,8 @@ image
</div>

OmniGen has some interesting features, such as visual reasoning, as shown in the example below.

```python
prompt="If the woman is thirsty, what should she take? Find it in the image and highlight it in blue. <img><|image_1|></img>"
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/edit.png")]
image = pipe(
@@ -110,20 +111,20 @@ image = pipe(
guidance_scale=2,
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(0)
).images[0]
image.save("output.png")
```

<div class="flex justify-center">
<img src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/reasoning.png" alt="generated image"/>
</div>


## Controllable generation

OmniGen can handle several classic computer vision tasks. As shown below, OmniGen can detect human skeletons in input images, which can be used as control conditions to generate new images.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image
@@ -142,8 +143,9 @@ image1 = pipe(
guidance_scale=2,
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(333)
).images[0]
image1.save("image1.png")

prompt="Generate a new photo using the following picture and text as conditions: <img><|image_1|></img>\n A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him."
input_images=[load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/skeletal.png")]
@@ -153,8 +155,9 @@ image2 = pipe(
guidance_scale=2,
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(333)
).images[0]
image2.save("image2.png")
```

<div class="flex flex-row gap-4">
@@ -174,7 +177,8 @@


OmniGen can also directly use relevant information from input images to generate new images.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image
@@ -193,23 +197,24 @@ image = pipe(
guidance_scale=2,
img_guidance_scale=1.6,
use_input_image_size_as_output=True,
generator=torch.Generator(device="cpu").manual_seed(0)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/same_pose.png"/>
<figcaption class="mt-2 text-center text-sm text-gray-500">generated image</figcaption>
</div>
</div>


## ID and object preserving

OmniGen can generate new images from the people and objects in an input image, and it supports multiple input images at once.
Additionally, OmniGen can extract the desired object from an image containing multiple objects, based on the given instructions.

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image
@@ -231,9 +236,11 @@ image = pipe(
width=1024,
guidance_scale=2.5,
img_guidance_scale=1.6,
generator=torch.Generator(device="cpu").manual_seed(666)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
<div class="flex-1">
<img class="rounded-xl" src="https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/3.png"/>
@@ -249,7 +256,6 @@ image
</div>
</div>


```py
import torch
from diffusers import OmniGenPipeline
@@ -261,7 +267,6 @@ pipe = OmniGenPipeline.from_pretrained(
)
pipe.to("cuda")


prompt="A woman is walking down the street, wearing a white long-sleeve blouse with lace details on the sleeves, paired with a blue pleated skirt. The woman is <img><|image_1|></img>. The long-sleeve blouse and a pleated skirt are <img><|image_2|></img>."
input_image_1 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/emma.jpeg")
input_image_2 = load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/dress.jpg")
@@ -273,8 +278,9 @@ image = pipe(
width=1024,
guidance_scale=2.5,
img_guidance_scale=1.6,
generator=torch.Generator(device="cpu").manual_seed(666)
).images[0]
image.save("output.png")
```

<div class="flex flex-row gap-4">
@@ -292,13 +298,12 @@ image
</div>
</div>


## Optimization when using multiple images

For the text-to-image task, OmniGen requires minimal memory and time (9GB of memory and 31s for a 1024x1024 image on an A800 GPU).
However, the computational cost increases when input images are used.

Here are some guidelines to help you reduce computational costs when using multiple images. The experiments are conducted on an A800 GPU with two input images.

Like other pipelines, you can reduce memory usage by offloading the model with `pipe.enable_model_cpu_offload()` or `pipe.enable_sequential_cpu_offload()`.
In OmniGen, you can also decrease computational overhead by reducing `max_input_image_size`.
@@ -310,5 +315,3 @@ The memory consumption for different image sizes is shown in the table below:
| Setting | Memory usage |
|:---|:---|
| max_input_image_size=512 | 17GB |
| max_input_image_size=256 | 14GB |
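
A minimal sketch combining both options (offloading plus a smaller `max_input_image_size`, assumed here to be passed as a call argument), reusing the editing example from above:

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
# Offload submodules to the CPU when idle instead of calling pipe.to("cuda")
pipe.enable_model_cpu_offload()

prompt = "<img><|image_1|></img> Remove the woman's earrings."
input_images = [load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2,
    img_guidance_scale=1.6,
    max_input_image_size=512,  # downscale input images to cut memory (see table above)
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(0),
).images[0]
image.save("output.png")
```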



2 changes: 1 addition & 1 deletion src/diffusers/models/embeddings.py
@@ -1199,7 +1199,7 @@ def apply_rotary_emb(
x_real, x_imag = x.reshape(*x.shape[:-1], -1, 2).unbind(-1) # [B, S, H, D//2]
x_rotated = torch.stack([-x_imag, x_real], dim=-1).flatten(3)
elif use_real_unbind_dim == -2:
# Used for Stable Audio and OmniGen
x_real, x_imag = x.reshape(*x.shape[:-1], 2, -1).unbind(-2) # [B, S, H, D//2]
x_rotated = torch.cat([-x_imag, x_real], dim=-1)
else:
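
For context, a standalone sketch (toy tensor, not part of this diff) of the two layouts this branch distinguishes: `use_real_unbind_dim=-1` treats the last dimension as interleaved (real, imag) pairs, while `use_real_unbind_dim=-2` (the Stable Audio and OmniGen convention) splits it into a real half followed by an imaginary half.

```python
import torch

x = torch.arange(8.0).reshape(1, 1, 1, 8)  # toy [B, S, H, D] tensor

# use_real_unbind_dim == -1: interleaved pairs (x0, x1), (x2, x3), ...
real_a, imag_a = x.reshape(*x.shape[:-1], -1, 2).unbind(-1)  # real_a: [0., 2., 4., 6.]

# use_real_unbind_dim == -2: first half real, second half imaginary
real_b, imag_b = x.reshape(*x.shape[:-1], 2, -1).unbind(-2)  # real_b: [0., 1., 2., 3.]
```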