
Commit 57c620b

chore: update SigLIP2 model card (#37624)
* update siglip2 model card
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* address comments
* separate naflex and fixres variant
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)
* Update docs/source/en/model_doc/siglip2.md (Co-authored-by: Steven Liu <[email protected]>)

---------

Co-authored-by: Steven Liu <[email protected]>
1 parent eb4afdd commit 57c620b

File tree

1 file changed: +110 -175 lines changed


docs/source/en/model_doc/siglip2.md

+110 -175
@@ -14,225 +14,160 @@ rendered properly in your Markdown viewer.
 
 -->
 
-# SigLIP2
-
-<div class="flex flex-wrap space-x-1">
-<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
-<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
-<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+<div class="flex flex-wrap space-x-1">
+<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
+<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
+</div>
 </div>
 
-## Overview
-
-The SigLIP2 model was proposed in [SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features](https://huggingface.co/papers/2502.14786) by Michael Tschannen, Alexey Gritsenko, Xiao Wang, Muhammad Ferjad Naeem, Ibrahim Alabdulmohsin,
-Nikhil Parthasarathy, Talfan Evans, Lucas Beyer, Ye Xia, Basil Mustafa, Olivier Hénaff, Jeremiah Harmsen,
-Andreas Steiner and Xiaohua Zhai.
-
-The model comes in two variants
-
-1) FixRes - model works with fixed resolution images (backward compatible with SigLIP v1)
-2) NaFlex - model works with variable image aspect ratios and resolutions (SigLIP2 in `transformers`)
-
-The abstract from the paper is the following:
-
-*We introduce SigLIP 2, a family of new multilingual vision-language encoders that build on the success
-of the original SigLIP. In this second iteration, we extend the original image-text training objective with
-several prior, independently developed techniques into a unified recipe—this includes decoder-based
-pretraining, self-supervised losses (self-distillation, masked prediction) and online data curation. With
-these changes, SigLIP 2 models outperform their SigLIP counterparts at all model scales in core capabilities,
-including zero-shot classification (best SigLIP 2 ViT-g/16 achieves 85.0% ImageNet zero-shot
-accuracy), image-text retrieval, and transfer performance when extracting visual representations for
-Vision-Language Models (VLMs). Furthermore, the new training recipe leads to significant improvements
-on localization and dense prediction tasks. We also train variants which support multiple resolutions
-and preserve the input’s native aspect ratio. Finally, we train on a more diverse data-mixture that
-includes de-biasing techniques, leading to much better multilingual understanding and improved fair-
-ness. To provide users with the ability to trade-off inference cost with performance, we release model
-checkpoints at four sizes (ViT-B/86M, L/303M, So400m/400M, and g/1B).*
-
-## Usage tips
-
-- Usage of SigLIP2 is similar to [SigLIP](siglip) and [CLIP](clip). The main difference from CLIP is the training loss, which does not require a global view of all the pairwise similarities of images and texts within a batch. One needs to apply the sigmoid activation function to the logits, rather than the softmax.
-- Training is supported but does not use `torch.distributed` utilities which may limit the scalability of batch size. However, DDP and FDSP works on single-node multi-gpu setup.
-- When using the standalone [`GemmaTokenizerFast`] make sure to pass `padding="max_length"` and `max_length=64` as that's how the model was trained.
-- Model was trained with *lowercased* text, make sure you make the same preprocessing for your text labels.
-- To get the same results as the pipeline, a prompt template of "this is a photo of {label}" should be used.
-- The NaFlex variant supports processing images at higher resolutions by adjusting the `max_num_patches` parameter in the `Processor`. The default value is `max_num_patches=256`. Increasing `max_num_patches` to 1024 (4x) will approximately double processed image height and width, while preserving the aspect ratio.
+# SigLIP2
 
-<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/siglip2_metrics_table.png"
-alt="drawing" width="600"/>
+## Overview
 
-This model was contributed by [qubvel](https://huggingface.co/qubvel-hf).
-The original code can be found [here](https://github.com/google-research/big_vision/tree/main).
+[SigLIP2](https://huggingface.co/papers/2502.14786) is a family of multilingual vision-language encoders that builds on the [SigLIP](./siglip) training recipe. It includes decoder-based pretraining, self-distillation, and masked prediction to improve dense prediction tasks (segmentation, depth estimation, etc.). This model is available in two variants:
 
-## Usage example
+- NaFlex supports different resolutions and maintains the native image aspect ratio
+- FixRes supports fixed resolutions and is backwards compatible with [SigLIP](./siglip)
 
-There are 2 main ways to use SigLIP2: either using the pipeline API, which abstracts away all the complexity for you, or by using the `Siglip2Model` class yourself.
 
-### FixRes variant
+You can find all the original SigLIP2 checkpoints under the [SigLIP2](https://huggingface.co/collections/google/siglip2-67b5dcef38c175486e240107) collection.
 
-**Pipeline API**
+> [!TIP]
+> Click on the SigLIP2 models in the right sidebar for more examples of how to apply SigLIP2 to different image and text tasks.
 
-The pipeline allows to use the model in a few lines of code:
+The example below demonstrates zero-shot classification with [`Pipeline`] or the [`AutoModel`] class.
 
-```python
->>> from transformers import pipeline
->>> from PIL import Image
->>> import requests
+<hfoptions id="usage">
+<hfoption id="Pipeline">
 
->>> # load pipe
->>> image_classifier = pipeline(
-... task="zero-shot-image-classification",
-... model="google/siglip2-base-patch16-224",
-... )
+```py
+import torch
+from transformers import pipeline
 
->>> # load image
->>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
->>> image = Image.open(requests.get(url, stream=True).raw)
+image = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
 
->>> # inference
->>> candidate_labels = ["2 cats", "a plane", "a remote"]
->>> outputs = image_classifier(image, candidate_labels=candidate_labels)
->>> outputs = [{"score": round(output["score"], 4), "label": output["label"] } for output in outputs]
->>> print(outputs)
-[{'score': 0.1499, 'label': '2 cats'}, {'score': 0.0008, 'label': 'a remote'}, {'score': 0.0, 'label': 'a plane'}]
+pipeline = pipeline(task="zero-shot-image-classification", model="google/siglip2-base-patch16-224", device=0, torch_dtype=torch.bfloat16)
+pipeline(image, candidate_labels=candidate_labels)
 ```
 
-**Using the model yourself**
-
-If you want to do the pre- and postprocessing yourself, here's how to do that:
+</hfoption>
+<hfoption id="AutoModel (FixRes)">
 
-```python
->>> from PIL import Image
->>> import requests
->>> from transformers import AutoProcessor, AutoModel
->>> import torch
+```py
+import torch
+import requests
+from PIL import Image
+from transformers import AutoProcessor, AutoModel
 
->>> model = AutoModel.from_pretrained("google/siglip2-base-patch16-224")
->>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")
+model = AutoModel.from_pretrained("google/siglip2-base-patch16-224", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
+processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")
 
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
+url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+image = Image.open(requests.get(url, stream=True).raw)
+candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
 
->>> candidate_labels = ["2 cats", "2 dogs"]
 # follows the pipeline prompt template to get same results
->>> texts = [f"This is a photo of {label}." for label in candidate_labels]
+texts = [f'This is a photo of {label}.' for label in candidate_labels]
 
 # IMPORTANT: we pass `padding=max_length` and `max_length=64` since the model was trained with this
->>> inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt")
+inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to("cuda")
 
->>> with torch.no_grad():
-... outputs = model(**inputs)
+with torch.no_grad():
+    outputs = model(**inputs)
 
->>> logits_per_image = outputs.logits_per_image
->>> probs = torch.sigmoid(logits_per_image) # these are the probabilities
->>> print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
-15.0% that image 0 is '2 cats'
+logits_per_image = outputs.logits_per_image
+probs = torch.sigmoid(logits_per_image)
+print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
 ```
 
-### NaFlex variant
-
-NaFlex combines ideas from FlexiViT, i.e. supporting multiple, predefined sequence lengths
-with a single ViT model, and NaViT, namely processing images at their native aspect ratio.
-This enables processing different types of images at appropriate resolution, e.g. using a
-larger resolution to process document images, while at the same time minimizing the impact
-of aspect ratio distortion on certain inference tasks, e.g. on OCR.
-
-Given a patch size and target sequence length, NaFlex preprocesses the data by first resizing
-the input image such that the height and width after resizing are multiples of the patch size,
-while
-
-1. keeping the aspect ratio distortion as small as possible
-2. producing a sequence length of at most the desired target sequence length (`max_num_patches`)
-
-The resulting distortion in width and height is at most `(patch_size - 1) / width` and
-`(patch_size - 1) / height`, respectively, which tends to be small for common resolutions and aspect ratios.
-After resizing, the image is split into a sequence of patches, and a mask with padding information is added.
-
-```python
->>> from PIL import Image
->>> import requests
->>> from transformers import AutoProcessor, AutoModel
->>> import torch
-
->>> model = AutoModel.from_pretrained("google/siglip2-base-patch16-naflex")
->>> processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")
-
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
-
->>> candidate_labels = ["2 cats", "2 dogs"]
-# follows the pipeline prompt template to get same results
->>> texts = [f"This is a photo of {label}." for label in candidate_labels]
-
-# default value for `max_num_patches` is 256, but you can increase resulted image resolution providing
-# higher values e.g. `max_num_patches=512`
->>> inputs = processor(text=texts, images=image, max_num_patches=256, return_tensors="pt")
+</hfoption>
+<hfoption id="AutoModel (NaFlex)">
 
->>> with torch.no_grad():
-... outputs = model(**inputs)
+```py
+import torch
+import requests
+from PIL import Image
+from transformers import AutoProcessor, AutoModel
 
->>> logits_per_image = outputs.logits_per_image
->>> probs = torch.sigmoid(logits_per_image) # these are the probabilities
->>> print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
-21.1% that image 0 is '2 cats'
-```
+model = AutoModel.from_pretrained("google/siglip2-base-patch16-naflex", torch_dtype=torch.float16, device_map="auto", attn_implementation="sdpa")
+processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-naflex")
 
-## Resources
+url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+image = Image.open(requests.get(url, stream=True).raw)
+candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
+texts = [f'This is a photo of {label}.' for label in candidate_labels]
 
-A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with SigLIP2.
+# the default value for `max_num_patches` is 256, but you can increase the resulting image resolution by providing higher values, e.g. `max_num_patches=512`
+inputs = processor(text=texts, images=image, padding="max_length", max_num_patches=256, return_tensors="pt").to("cuda")
 
-- [Zero-shot image classification task guide](../tasks/zero_shot_image_classification)
-- Demo notebook for SigLIP2 can be found [here](https://github.com/qubvel/transformers-notebooks/tree/master/notebooks/SigLIP2_inference.ipynb). 🌎
+with torch.no_grad():
+    outputs = model(**inputs)
 
-If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
+logits_per_image = outputs.logits_per_image
+probs = torch.sigmoid(logits_per_image)
+print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
+```
 
+</hfoption>
+</hfoptions>
 
-## Combining SigLIP2 and Flash Attention 2
+Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
 
-First, make sure to install the latest version of Flash Attention 2.
+The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to int4.
 
-```bash
-pip install -U flash-attn --no-build-isolation
-```
+```py
+import torch
+import requests
+from PIL import Image
+from transformers import AutoProcessor, AutoModel, BitsAndBytesConfig
 
-Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. `torch.float16``)
+bnb_config = BitsAndBytesConfig(load_in_4bit=True)
+model = AutoModel.from_pretrained("google/siglip2-large-patch16-512", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa")
+processor = AutoProcessor.from_pretrained("google/siglip2-large-patch16-512")
 
-To load and run a model using Flash Attention 2, refer to the snippet below:
+url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
+image = Image.open(requests.get(url, stream=True).raw)
+candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
 
-```python
->>> import torch
->>> import requests
->>> from PIL import Image
->>> from transformers import AutoProcessor, AutoModel
->>> device = "cuda" # the device to load the model onto
+# follows the pipeline prompt template to get same results
+texts = [f'This is a photo of {label}.' for label in candidate_labels]
 
->>> model = AutoModel.from_pretrained(
-... "google/siglip2-so400m-patch14-384",
-... attn_implementation="flash_attention_2",
-... torch_dtype=torch.float16,
-... device_map=device,
-... )
->>> processor = AutoProcessor.from_pretrained("google/siglip2-so400m-patch14-384")
+# IMPORTANT: we pass `padding=max_length` and `max_length=64` since the model was trained with this
+inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to("cuda")
 
->>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
->>> image = Image.open(requests.get(url, stream=True).raw)
+with torch.no_grad():
+    outputs = model(**inputs)
 
->>> candidate_labels = ["2 cats", "2 dogs"]
-# follows the pipeline prompt template to get same results
->>> texts = [f'This is a photo of {label}.' for label in candidate_labels]
-# important: we pass `padding=max_length` since the model was trained with this
->>> inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt").to(device)
-
->>> with torch.no_grad():
-... with torch.autocast(device):
-... outputs = model(**inputs)
-
->>> logits_per_image = outputs.logits_per_image
->>> probs = torch.sigmoid(logits_per_image) # these are the probabilities
->>> print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
-19.8% that image 0 is '2 cats'
+logits_per_image = outputs.logits_per_image
+probs = torch.sigmoid(logits_per_image)
+print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
 ```
 
+## Notes
+
+- Training is supported for DDP and FSDP on single-node multi-GPU setups. However, it does not use [torch.distributed](https://pytorch.org/tutorials/beginner/dist_overview.html) utilities, which may limit the scalability of batch size.
+- When using the standalone [`GemmaTokenizerFast`] make sure to pass `padding="max_length"` and `max_length=64` as that's how the model was trained.
+- The model was trained with *lowercased* text, so make sure your text labels are preprocessed the same way.
+- To get the same results as the [`Pipeline`], a prompt template of `"This is a photo of {label}."` should be passed to the processor.
+- The NaFlex variant processes different types of images at the appropriate resolution (using a larger resolution to process document images, for example), while also minimizing the impact of aspect ratio distortion for certain inference tasks like OCR.
+
+  NaFlex resizes the input image so that its height and width are multiples of the patch size after resizing. It keeps the aspect ratio distortion as low as possible and produces a sequence length of at most the desired target sequence length (`max_num_patches`). After resizing, the image is split into a sequence of patches and a mask with padding information is added.
+- Toggle the `attn_implementation` parameter to either `"sdpa"` or `"flash_attention_2"` to use a more memory-efficient attention implementation.
+
+  ```py
+  # pip install -U flash-attn --no-build-isolation
+
+  import torch
+  from transformers import AutoModel
+
+  model = AutoModel.from_pretrained(
+      "google/siglip2-so400m-patch14-384",
+      attn_implementation="flash_attention_2",
+      torch_dtype=torch.float16,
+      device_map="auto",
+  )
+  ```
 ## Siglip2Config
 
 [[autodoc]] Siglip2Config
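Editor's note on the NaFlex behavior described in the new Notes section: the resize rule amounts to picking output dimensions that are multiples of the patch size, roughly preserving the aspect ratio, and keeping the patch count at or below `max_num_patches`. The sketch below is only an approximation for intuition; the helper name, the rounding strategy, and the shrink loop are assumptions of this note, not the actual `Siglip2ImageProcessor` implementation.

```py
import math

def naflex_target_size(height, width, patch_size=16, max_num_patches=256):
    """Approximate the NaFlex resize target (illustrative only)."""
    # Scale so that (h / p) * (w / p) lands close to the patch budget.
    scale = math.sqrt(max_num_patches * patch_size**2 / (height * width))
    new_h = max(patch_size, round(height * scale / patch_size) * patch_size)
    new_w = max(patch_size, round(width * scale / patch_size) * patch_size)
    # Rounding can overshoot the budget, so shrink the longer side if needed.
    while (new_h // patch_size) * (new_w // patch_size) > max_num_patches:
        if new_h >= new_w:
            new_h -= patch_size
        else:
            new_w -= patch_size
    return new_h, new_w

print(naflex_target_size(480, 640, max_num_patches=256))   # (224, 288) -> 14 * 18 = 252 patches
print(naflex_target_size(480, 640, max_num_patches=1024))  # (448, 576) -> 28 * 36 = 1008 patches
```

Consistent with the removed usage tip, quadrupling `max_num_patches` from 256 to 1024 roughly doubles the processed height and width while the aspect ratio stays close to the original 3:4.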

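The tokenizer and prompt-template notes can also be exercised on their own. The snippet below is a minimal sketch of that text-only path; loading the checkpoint's tokenizer files with `GemmaTokenizerFast` and lowercasing the prompts are assumptions based on the notes, not code taken from the model card.

```py
from transformers import GemmaTokenizerFast

# assumption: the SigLIP2 checkpoint ships Gemma tokenizer files that load with GemmaTokenizerFast
tokenizer = GemmaTokenizerFast.from_pretrained("google/siglip2-base-patch16-224")

candidate_labels = ["a Pallas cat", "a lion"]
# prompt template from the model card, lowercased as the notes recommend
texts = [f"This is a photo of {label}.".lower() for label in candidate_labels]

# padding="max_length" and max_length=64 match how the model was trained
text_inputs = tokenizer(texts, padding="max_length", max_length=64, return_tensors="pt")
print(text_inputs["input_ids"].shape)  # torch.Size([2, 64])
```

The resulting `input_ids` and `attention_mask` can then be passed to the model alongside the processor's image inputs.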