<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# OmniGen

[OmniGen: Unified Image Generation](https://arxiv.org/pdf/2409.11340) from BAAI, by Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, Zheng Liu.

The abstract from the paper is:
| 21 | + |
| 22 | +*The emergence of Large Language Models (LLMs) has unified language |
| 23 | +generation tasks and revolutionized human-machine interaction. |
| 24 | +However, in the realm of image generation, a unified model capable of handling various tasks |
| 25 | +within a single framework remains largely unexplored. In |
| 26 | +this work, we introduce OmniGen, a new diffusion model |
| 27 | +for unified image generation. OmniGen is characterized |
| 28 | +by the following features: 1) Unification: OmniGen not |
| 29 | +only demonstrates text-to-image generation capabilities but |
| 30 | +also inherently supports various downstream tasks, such |
| 31 | +as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of |
| 32 | +OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion |
| 33 | +models, it is more user-friendly and can complete complex |
| 34 | +tasks end-to-end through instructions without the need for |
| 35 | +extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from |
| 36 | +learning in a unified format, OmniGen effectively transfers |
| 37 | +knowledge across different tasks, manages unseen tasks and |
| 38 | +domains, and exhibits novel capabilities. We also explore |
| 39 | +the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. |
| 40 | +This work represents the first attempt at a general-purpose image generation model, |
| 41 | +and we will release our resources at https: |
| 42 | +//github.com/VectorSpaceLab/OmniGen to foster future advancements.* |

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).

## Inference

First, load the pipeline:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")
```

For text-to-image generation, pass a text prompt. By default, OmniGen generates a 1024x1024 image.
You can set the `height` and `width` parameters to generate images of a different size.

```py
prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=3,
    generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image
```

OmniGen supports multimodal inputs.
When the input includes an image, add the placeholder `<img><|image_1|></img>` to the text prompt to represent it; additional images use `<|image_2|>`, `<|image_3|>`, and so on.
For image editing, it is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original image.

```py
from diffusers.utils import load_image

prompt = "<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images = [load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2,
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(222),
).images[0]
image
```
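
The numbered placeholder convention above is just string formatting: the prompt must contain one `<img><|image_N|></img>` tag per entry in `input_images`, in order. As an illustration only — the helpers below are hypothetical and not part of diffusers — the tags can be built programmatically instead of typed by hand:

```python
def image_placeholder(index: int) -> str:
    """Build the OmniGen placeholder tag for the index-th input image (1-based)."""
    return f"<img><|image_{index}|></img>"

def build_prompt(template: str, num_images: int) -> str:
    """Fill {img1}, {img2}, ... slots in a template with OmniGen placeholder tags.

    Hypothetical convenience helper: diffusers itself simply expects the raw
    placeholder strings to appear in the prompt text.
    """
    fills = {f"img{i}": image_placeholder(i) for i in range(1, num_images + 1)}
    return template.format(**fills)

prompt = build_prompt("{img1} Put the woman from this photo into the scene {img2}.", 2)
print(prompt)
# <img><|image_1|></img> Put the woman from this photo into the scene <img><|image_2|></img>.
```

The resulting string can then be passed as `prompt` together with a two-element `input_images` list, exactly as in the editing example above.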


## OmniGenPipeline

[[autodoc]] OmniGenPipeline
  - all
  - __call__