
Add llama4 #37307


Merged
254 commits merged from add-llama4 into main on Apr 5, 2025

Conversation

ArthurZucker
Collaborator

What does this PR do?

ArthurZucker merged commit 25b7f27 into main on Apr 5, 2025
2 of 7 checks passed
ArthurZucker deleted the add-llama4 branch on April 5, 2025 at 20:02
@yeqcharlotte
Contributor

Thanks!! 🔥🔥🔥🔥🔥

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@kadirnar
Contributor

kadirnar commented Apr 5, 2025

@ArthurZucker

I ran the https://huggingface.co/meta-llama/Llama-4-Scout-17B-16E-Instruct model on a 1xA100 device. However, I'm getting this error.

  File "/ephemeral/.venv/lib/python3.10/site-packages/torch/nn/functional.py", line 5209, in pad
    return torch._C._nn.pad(input, pad, mode, value)
TypeError: pad(): argument 'pad' failed to unpack the object at pos 2 with error "type must be tuple of ints,but got NoneType"

Code:

from transformers import AutoProcessor, Llama4ForConditionalGeneration
import torch

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"

processor = AutoProcessor.from_pretrained(model_id)
model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    attn_implementation="flex_attention",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

url1 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
url2 = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets/cat_style_layout.png"
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": url1},
            {"type": "image", "url": url2},
            {"type": "text", "text": "Can you describe how these two images are similar, and how they differ?"},
        ]
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
)

response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)
print(outputs[0])

env:

  • transformers version: 4.51.0
  • Platform: Linux-6.11.0-13-generic-x86_64-with-glibc2.40
  • Python version: 3.10.16
  • Huggingface_hub version: 0.30.1
  • Safetensors version: 0.5.3
  • Accelerate version: 1.6.0
  • Accelerate config: not found
  • DeepSpeed version: not installed
  • PyTorch version (GPU?): 2.6.0+cu124 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:
  • Using GPU in script?:
  • GPU type: NVIDIA A100 80GB PCIe

@ArthurZucker
Collaborator Author

Having a look asap!

@nivibilla

@ArthurZucker I assume this was a mistake to leave in? 😅

default="/fsx/arthur/Llama-4-17B-Omni-Instruct-Original",

@ArthurZucker
Collaborator Author

Oh yeah, you are using the dynamic cache; we will disable it.

@ArthurZucker
Collaborator Author

The static cache should be used instead.
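
For illustration, a minimal sketch of that suggestion applied to the reproduction above, assuming the standard `cache_implementation` argument of `generate`; the `model`, `processor`, and `inputs` objects come from the earlier snippet, and this is a workaround sketch rather than the final fix:

# Workaround sketch: request the static KV cache instead of the default
# dynamic cache (reuses `model`, `processor`, and `inputs` from the
# reproduction above).
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    cache_implementation="static",
)
response = processor.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])[0]
print(response)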

@ddh0

ddh0 commented Apr 7, 2025

Is llama4 fully supported by Transformers at this time? It would be nice to get some clarification on this, since nearly everyone seems to be getting such bad results with Scout and Maverick.

@ArthurZucker
Collaborator Author

It is, it is! We did not see results that bad, but we are investigating!

@ArthurZucker
Collaborator Author

😭

@radoslav-dimitrov-indeavr

This PR causes Transformers to error out when a model uses TensorFlow and the environment does not provide torch in any way:

transformers/src/transformers/pipelines/base.py:

        if torch.distributed.is_initialized():

Source:

@@ -981,6 +981,8 @@ def __init__(
         else:
             self.device = device if device is not None else -1

+        if torch.distributed.is_initialized():
Contributor

Hi @ArthurZucker, why is this modification needed for llama4?

Collaborator Author

This was mostly because llama4 is too big to run without a distributed setup; sorry that it broke stuff!
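
For illustration, a minimal sketch of the kind of guard that would keep pipeline construction working in torch-free (e.g. TensorFlow-only) environments, assuming the existing `is_torch_available()` helper from `transformers.utils`; the `_distributed_is_initialized` name is hypothetical and this is not the patch that was actually merged:

from transformers.utils import is_torch_available

def _distributed_is_initialized() -> bool:
    # torch may be missing entirely in a TensorFlow-only install, so check for
    # it before touching torch.distributed.
    if not is_torch_available():
        return False
    import torch
    return torch.distributed.is_available() and torch.distributed.is_initialized()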

zucchini-nlp pushed a commit to zucchini-nlp/transformers that referenced this pull request May 14, 2025
* remove one of the last deps

* update fast image processor after refactor

* styling

* more quality of life improvements

* nit

* update

* cleanups

* some cleanups

* vllm updates

* update fake image token

* [convert] Fix typo

* [convert] Strip extraneous bytes from shards

* [convert] Minor fixes

* [convert] Use num_experts

* multi-image fixes in modeling + processor

* fixup size

* 128 experts

* Use default rope

* Unfuse mlp

* simplify a lot inputs embeds merging

* remove .item() 👀

* fix from review

* Address feedback

* Use None "default" for rope_scaling. Add eot.

* set seed

* return aspect ratios and bug fixes

* Moe 128 rebased (huggingface#8)

* 128 experts

* Use default rope

* Unfuse mlp

* Address feedback

* Use None "default" for rope_scaling. Add eot.

* Meta/llama quant compat (huggingface#7)

* add quant compatible model & conversion code for llama4

* fix a few issues

* fix a few issues

* minor type mapping fix

---------

Co-authored-by: Lu Fang <[email protected]>

* use a new config parameter to determine which model definition to use for MoE

---------

Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Lu Fang <[email protected]>

* un-comment write_tokenizer from converting script

* remove un-used imports

* [llama4] Pop aspect_ratios from image processor output in Llama4Processor

Signed-off-by: Jon Swenson <[email protected]>

* Fix parameter_count name

* Update src/transformers/models/llama4/configuration_llama4.py

* nit

* Add changes for no_rope, moe_layers, chunked attention. Just need to test all

* Update src/transformers/models/llama4/image_processing_llama4_fast.py

* nit

* fix post merge with main

* support flex attention

* fixes

* fix

* add layer

* small updates

* rebase and delete llm_compressor

* nit

* [llama4/mm] Add back <|image|> token that delimits global tile

* [llama4/mm] Fix Llama 4 image processing unit tests

* add explicit dtype

Signed-off-by: Jon Swenson <[email protected]>

* sdpa works

* comment todo small

* fix model loading

Signed-off-by: Zijing Liu <[email protected]>

* revert

* nits

* small fix for TP on 1 node

* Read new params from config

* Add <|eom|>

* lol don't know how this got here

* adding fp8

* Save processor, fix chat template

* style

* Add boi/eoi tokens

We don't use them.

* fixes for now flex seems to work :)

* updates

* nits

* updates

* missing keys

* add context parallel

* update

* update

* fix

* nits

* add worldsize and make eager attn work for vision

* Ignore new key present in base models

* add tp_plan

* fix nope

Signed-off-by: Zijing Liu <[email protected]>

* minor fix

Signed-off-by: Zijing Liu <[email protected]>

* Clean up Llama4 vision model

* current updates

* add support for `attn_temperature_tuning`

* add floor scale

* add missing attn scales

* push what works, dirty trick for the device synch

* oups

* Fix pad_token_id

See
https://huggingface.co/ll-re/Llama-4-Scout-17B-16E/discussions/2/files
Confirmed in the original codebase.

* fix causallml loading

* rm

* fix tied-weights

* fix sdpa

* push current version

* should work with both short and long

* add compressed_tensors & fix fbgemm tp

* Fix flex impl

* style

* chunking

* try to revert the potentially breaking change

* fix auto factory

* fix shapes in general

* rm processing

* commit cache utils cleanup

* Fix context length

* fix

* allocate

* update tp_plan

* fix SDPA!

* Add support for sparse `Llama4TextMoe` layer from the kernel hub

* cleanup

* better merge

* update

* still broken fixing now

* nits

* revert print

* Write max_position_embeddings and max_model_length

* Update modeling_llama4.py

* Save attention_chunk_size

* Sync eos terminators

* Read initializer_range

* style

* remove `dict`

* fix

* eager should use `chunked_attention_mask`

* revert

* fixup

* fix config

* Revert "Merge pull request huggingface#36 from huggingface/sparse-llama4-moe"

This reverts commit ccda19f, reversing
changes made to a515579.

* Fix typo and remove warning with compiled flex and chunked prefill

* Fix MoE vs FF (huggingface#41)

* fix

* Use correct no_rope_layers if provided one is empty list

* update tests

* fix

* skipping some tests

* fix fp8 loading

Signed-off-by: Zijing Liu <[email protected]>

* fix text generation pipeline

Signed-off-by: Zijing Liu <[email protected]>

* eager needs 4D mask

* fix

* Some cleanup

* fix

* update

* fix

* replace correctly module

* patch

* modulelist

* update

* update

* clean up

* Don't move to `cuda:0` in distributed mode

* restrict to compressed tensors for now

* rm print

* Docs!

* Fixes

* Update docs/source/en/model_doc/llama4.md

Co-authored-by: Pedro Cuenca <[email protected]>

* Fixes

* cuda graph fix

* revert some stuff

* fixup

* styling

* Update src/transformers/models/llama4/modeling_llama4.py

Co-authored-by: Arthur <[email protected]>

* fixup

* commit licence, cleanup here and there and style

* more styling changes

* fix dummies

* fix and clean docstrings

* remove comment

* remove warning

* Only fast image processor is supported

* nit

* trigger CI

* fix issue with flex encoder

* fix dynamic cache

* Code quality

* Code quality

* fix more tests for now

* Code quality

* Code quality

* Nuke bunch of failing stuff

* Code quality

* Code quality

* cleanup removal of slow image processor

* ruff fix fast image processor

* fix

* fix styling

* Docs

* Repo consistency

* Repo consistency

* fix sliding window issue

* separate llama cache

* styling

* Repo consistency

* Repo consistency

* push what works

* L4 Repo consistency

* Docs

* fix last last alst alst alst alstsaltlsltlaslt

---------

Signed-off-by: Jon Swenson <[email protected]>
Signed-off-by: Zijing Liu <[email protected]>
Co-authored-by: yonigozlan <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Pablo Montalvo <[email protected]>
Co-authored-by: Pablo Montalvo <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: Zijing Liu <[email protected]>
Co-authored-by: Lu Fang <[email protected]>
Co-authored-by: Zijing Liu <[email protected]>
Co-authored-by: Jon Swenson <[email protected]>
Co-authored-by: jmswen <[email protected]>
Co-authored-by: MekkCyber <[email protected]>
Co-authored-by: Mohamed Mekkouri <[email protected]>
Co-authored-by: Mohit Sharma <[email protected]>
Co-authored-by: Yong Hoon Shin <[email protected]>
Co-authored-by: Marc Sun <[email protected]>
Co-authored-by: drisspg <[email protected]>
Co-authored-by: Cyril Vallez <[email protected]>
Co-authored-by: Daniël de Kok <[email protected]>
Co-authored-by: Lysandre <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: ydshieh <[email protected]>