**docs/source/en/api/models/omnigen_transformer.md**
A Transformer model that accepts multimodal instructions to generate images for [OmniGen](https://github.com/VectorSpaceLab/OmniGen/).
The abstract from the paper is:
*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*
**docs/source/en/api/pipelines/omnigen.md**
The abstract from the paper is:
*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*
<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality.

</Tip>
This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).
For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can try setting the `height` and `width` parameters to generate images of different sizes.
```python
prompt ="Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
**docs/source/en/using-diffusers/omnigen.md**
For more information, please refer to the [paper](https://arxiv.org/pdf/2409.11340).
This guide will walk you through using OmniGen for various tasks and use cases.
## Load model checkpoints
Model weights may be stored in separate subfolders on the Hub or locally, in which case you should use the [`~DiffusionPipeline.from_pretrained`] method.
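As a minimal loading sketch, assuming the Diffusers-format checkpoint `Shitao/OmniGen-v1-diffusers` (substitute a local path if the weights are stored on disk):

```python
import torch
from diffusers import OmniGenPipeline

# Assumed checkpoint name; any Hub repo or local folder in the same format should work.
pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
```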
OmniGen can handle several classic computer vision tasks. As shown below, OmniGen can detect human skeletons in input images, which can be used as control conditions to generate new images.
prompt="Generate a new photo using the following picture and text as conditions: <img><|image_1|></img>\n A young boy is sitting on a sofa in the library, holding a book. His hair is neatly combed, and a faint smile plays on his lips, with a few freckles scattered across his cheeks. The library is quiet, with rows of shelves filled with books stretching out behind him."
prompt="A woman is walking down the street, wearing a white long-sleeve blouse with lace details on the sleeves, paired with a blue pleated skirt. The woman is <img><|image_1|></img>. The long-sleeve blouse and a pleated skirt are <img><|image_2|></img>."
For the text-to-image task, OmniGen requires minimal memory and time costs (9GB of memory and 31s for a 1024x1024 image on an A800 GPU). However, when using input images, the computational cost increases.
Here are some guidelines to help you reduce computational costs when using multiple images. The experiments are conducted on an A800 GPU with two input images.
Like other pipelines, you can reduce memory usage by offloading the model: `pipe.enable_model_cpu_offload()` or `pipe.enable_sequential_cpu_offload()`.
In OmniGen, you can also decrease computational overhead by reducing the `max_input_image_size`.
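For instance, a hedged sketch continuing from the multi-image example above; the exact semantics and default of `max_input_image_size` are assumed here, and 768 is an illustrative value:

```python
# Capping input images at a smaller size (assumed semantics: long side in pixels)
# shortens the image-token sequence, reducing memory use and latency.
image = pipe(
    prompt=prompt,
    input_images=[woman, clothes],
    max_input_image_size=768,  # illustrative value; smaller is cheaper
).images[0]
```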
The memory consumption for different image sizes is shown in the table below: