Commit 6d6c10c

update docs

1 parent e4173df commit 6d6c10c

File tree: 4 files changed, +71 −3 lines changed


docs/source/en/_toctree.yml (+4)

```diff
@@ -276,6 +276,8 @@
     title: ConsisIDTransformer3DModel
   - local: api/models/cogview3plus_transformer2d
     title: CogView3PlusTransformer2DModel
+  - local: api/models/cosmos_transformer3d
+    title: CosmosTransformer3DModel
   - local: api/models/dit_transformer2d
     title: DiTTransformer2DModel
   - local: api/models/flux_transformer
@@ -396,6 +398,8 @@
     title: ControlNet-XS with Stable Diffusion XL
   - local: api/pipelines/controlnet_union
     title: ControlNetUnion
+  - local: api/pipelines/cosmos
+    title: Cosmos
   - local: api/pipelines/dance_diffusion
     title: Dance Diffusion
   - local: api/pipelines/ddim
```
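Each `_toctree.yml` entry pairs a `local` doc path with a display `title`. As a quick sanity check of the shape the two added entries take, they parse with PyYAML like so (this snippet only re-parses the entries from the diff above; PyYAML is not part of this commit):

```python
import yaml

# The two list entries added by this commit, as they appear in _toctree.yml.
snippet = """
- local: api/models/cosmos_transformer3d
  title: CosmosTransformer3DModel
- local: api/pipelines/cosmos
  title: Cosmos
"""

entries = yaml.safe_load(snippet)
for entry in entries:
    print(entry["local"], "->", entry["title"])
```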
docs/source/en/api/models/cosmos_transformer3d.md (new file, +30)

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# CosmosTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Cosmos World Foundation Model Platform for Physical AI](https://huggingface.co/papers/2501.03575) by NVIDIA.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained(
    "nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16
)
```

## CosmosTransformer3DModel

[[autodoc]] CosmosTransformer3DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput
docs/source/en/api/pipelines/cosmos.md (new file, +35)

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# Cosmos

[Cosmos World Foundation Model Platform for Physical AI](https://huggingface.co/papers/2501.03575) by NVIDIA.

*Physical AI needs to be trained digitally first. It needs a digital twin of itself, the policy model, and a digital twin of the world, the world model. In this paper, we present the Cosmos World Foundation Model Platform to help developers build customized world models for their Physical AI setups. We position a world foundation model as a general-purpose world model that can be fine-tuned into customized world models for downstream applications. Our platform covers a video curation pipeline, pre-trained world foundation models, examples of post-training of pre-trained world foundation models, and video tokenizers. To help Physical AI builders solve the most critical problems of our society, we make our platform open-source and our models open-weight with permissive licenses available via https://github.com/NVIDIA/Cosmos.*

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

## CosmosPipeline

[[autodoc]] CosmosPipeline
  - all
  - __call__

## CosmosPipelineOutput

[[autodoc]] pipelines.cosmos.pipeline_output.CosmosPipelineOutput

scripts/convert_cosmos_to_diffusers.py (+2 −3)

```diff
@@ -130,9 +130,8 @@ def get_args():
         "--transformer_ckpt_path", type=str, default=None, help="Path to original transformer checkpoint"
     )
     parser.add_argument("--vae_ckpt_path", type=str, default=None, help="Path to original VAE checkpoint")
-    parser.add_argument("--text_encoder_path", type=str, default=None, help="Path to original llama checkpoint")
-    parser.add_argument("--tokenizer_path", type=str, default=None, help="Path to original llama tokenizer")
-    parser.add_argument("--text_encoder_2_path", type=str, default=None, help="Path to original clip checkpoint")
+    parser.add_argument("--text_encoder_path", type=str, default=None, help="Path to original T5 checkpoint")
+    parser.add_argument("--tokenizer_path", type=str, default=None, help="Path to original T5 tokenizer")
     parser.add_argument("--save_pipeline", action="store_true")
     parser.add_argument("--output_path", type=str, required=True, help="Path where converted model should be saved")
     parser.add_argument("--dtype", default="bf16", help="Torch dtype to save the transformer in.")
```
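The `--dtype` flag above selects the torch dtype the converted transformer is saved in (default `bf16`). A minimal sketch of the kind of string-to-dtype mapping such a flag implies — the `DTYPE_MAP` and `resolve_dtype` names and the supported keys are assumptions for illustration, not the script's actual code:

```python
import torch

# Hypothetical mapping from a --dtype CLI string to a torch dtype.
DTYPE_MAP = {
    "fp32": torch.float32,
    "fp16": torch.float16,
    "bf16": torch.bfloat16,
}

def resolve_dtype(name: str) -> torch.dtype:
    """Resolve a --dtype string, failing loudly on unsupported values."""
    if name not in DTYPE_MAP:
        raise ValueError(f"Unsupported dtype: {name!r}; expected one of {sorted(DTYPE_MAP)}")
    return DTYPE_MAP[name]

print(resolve_dtype("bf16"))
```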

0 commit comments
