Updated the Model docs - for the ALIGN model (#38072)
* Updated the Model docs - for the ALIGN model
* Update docs/source/en/model_doc/align.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/align.md
Co-authored-by: Steven Liu <[email protected]>
* Updated align.md
* Update docs/source/en/model_doc/align.md
Co-authored-by: Steven Liu <[email protected]>
* Update docs/source/en/model_doc/align.md
Co-authored-by: Steven Liu <[email protected]>
* Update align.md
* fix
---------
Co-authored-by: Steven Liu <[email protected]>
[ALIGN](https://huggingface.co/papers/2102.05918) is pretrained on a noisy 1.8 billion alt‑text and image pair dataset to show that scale can make up for the noise. It uses a dual‑encoder architecture, [EfficientNet](./efficientnet) for images and [BERT](./bert) for text, and a contrastive loss to align similar image–text embeddings together while pushing different embeddings apart. Once trained, ALIGN can encode any image and candidate captions into a shared vector space for zero‑shot retrieval or classification without requiring extra labels. This scale‑first approach reduces dataset curation costs and powers state‑of‑the‑art image–text retrieval and zero‑shot ImageNet classification.
You can find all the original ALIGN checkpoints under the [Kakao Brain](https://huggingface.co/kakaobrain?search_models=align) organization.
> [!TIP]
> Click on the ALIGN models in the right sidebar for more examples of how to apply ALIGN to different vision and text related tasks.
The example below demonstrates zero-shot image classification with [`Pipeline`] or the [`AutoModel`] class.
<hfoptions id="usage">
<hfoptionid="Pipeline">
</hfoption>
<hfoptionid="AutoModel">
```py
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification

processor = AutoProcessor.from_pretrained("kakaobrain/align-base")
model = AutoModelForZeroShotImageClassification.from_pretrained("kakaobrain/align-base")

# example image and candidate labels; any image URL and label set work here
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(images=image, text=candidate_labels, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; softmax turns them into probabilities
probs = outputs.logits_per_image.softmax(dim=1)[0]
for label, score in zip(candidate_labels, probs):
    print(f"{label:20s} → {score.item():.4f}")
```
</hfoption>
</hfoptions>
## Notes
- ALIGN projects the text and visual features into a shared latent space, and the dot product between the projected image and text features is used as the similarity score. [`AlignProcessor`] wraps [`EfficientNetImageProcessor`] and [`BertTokenizer`] into a single instance to preprocess the images and encode the text. The example below demonstrates how to calculate the image-text similarity score with [`AlignProcessor`] and [`AlignModel`].
```py
# Example of using ALIGN for image-text similarity
import torch
import requests
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

# candidate captions and an example image; any image URL and captions work here
texts = ["a photo of a cat", "a photo of a dog"]
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text=texts, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the caption with the highest image-text similarity score
most_similar_idx = outputs.logits_per_image.argmax(dim=1).item()
print(f"Most similar text: '{texts[most_similar_idx]}'")
```
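
- You can also work with the projected embeddings directly. The snippet below is a minimal sketch (assuming the `kakaobrain/align-base` checkpoint) that uses [`~AlignModel.get_image_features`] and [`~AlignModel.get_text_features`] to compare image and text embeddings with cosine similarity.

```py
# minimal sketch: cosine similarity in ALIGN's shared embedding space
import torch
import requests
from PIL import Image
from transformers import AlignProcessor, AlignModel

processor = AlignProcessor.from_pretrained("kakaobrain/align-base")
model = AlignModel.from_pretrained("kakaobrain/align-base")

# example inputs; any image URL and captions work here
image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

with torch.no_grad():
    image_embeds = model.get_image_features(**processor(images=image, return_tensors="pt"))
    text_embeds = model.get_text_features(**processor(text=texts, return_tensors="pt"))

# normalize and compute cosine similarity between the image and each caption
image_embeds = image_embeds / image_embeds.norm(dim=-1, keepdim=True)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)
print(image_embeds @ text_embeds.T)
```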
## Resources
- Refer to the [Kakao Brain’s Open Source ViT, ALIGN, and the New COYO Text-Image Dataset](https://huggingface.co/blog/vit-align) blog post for more details.