
fix(mm): flux variant probing #7862

Merged 1 commit from psyche/fix/mm/flux-variant-probing into main on Mar 31, 2025

Conversation

@psychedelicious (Collaborator) commented on Mar 31, 2025

Summary

Before FLUX Fill was merged, we didn't do any checks for the model variant. We always returned "normal".

To determine if a model is a FLUX Fill model, we need to check the state dict for a specific key. Initially, this logic was too strict and rejected quantized FLUX models. This issue was resolved, but it turns out there is another failure mode - some fine-tunes use a different key.

This change further reduces the strictness, handling the alternate key and also falling back to "normal" if we don't see either key. This effectively restores the previous probing behaviour for all FLUX models.
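The relaxed probing described above can be sketched as follows. This is a minimal illustration, not InvokeAI's actual implementation: the key names and the 384-channel width are assumptions chosen for the example.

```python
# Hedged sketch of relaxed FLUX variant probing: check for either img_in key,
# inspect its input width, and fall back to "normal" if neither key exists.
from typing import Any, Mapping

# Keys under which the img_in weight may appear; some fine-tunes use a
# different prefix (both names here are hypothetical examples).
IMG_IN_KEYS = ("img_in.weight", "model.diffusion_model.img_in.weight")


def probe_flux_variant(state_dict: Mapping[str, Any]) -> str:
    """Return "inpaint" for FLUX Fill checkpoints, else "normal"."""
    for key in IMG_IN_KEYS:
        weight = state_dict.get(key)
        if weight is not None:
            # FLUX Fill concatenates inpainting conditioning onto the image
            # latents, widening img_in's input dimension (384 is an assumed
            # value for this sketch).
            if getattr(weight, "shape", ())[-1:] == (384,):
                return "inpaint"
            return "normal"
    # Neither key present: fall back to "normal" rather than rejecting the
    # model, restoring the pre-FLUX-Fill probing behaviour.
    return "normal"
```

The key design point is the final fallback: an unrecognized state dict is treated as a standard FLUX model instead of failing the install.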

Related Issues / Discussions

Closes #7856
Closes #7859

QA Instructions

I have a variety of FLUX models already installed. My testing strategy:

  • Set use_memory_db: true in invokeai.yaml
  • Restart the server
  • Scan my FLUX main models folder in MM tab of UI
  • Install all
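The first step above amounts to a one-line change in invokeai.yaml (top-level placement of the key is an assumption about the config schema):

```yaml
# invokeai.yaml — back model records with an in-memory database so the
# test installs are discarded on restart (key placement assumed)
use_memory_db: true
```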

I tested these models, all of which install correctly and have the correct variant (normal vs inpaint for FLUX Fill):

  • FLUX Dev.safetensors
  • FLUX Schnell.safetensors
  • FLUX Fill.safetensors
  • FLUX Dev (Quantized).safetensors
  • FLUX Schnell (Quantized).safetensors
  • flux1-fill-dev-Q8_0.gguf
  • midjourneyReplica_flux1Dev.safetensors

Merge Plan

n/a

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)
  • Updated What's New copy (if doing a release after this PR)

@github-actions bot added the python, backend, and python-tests labels on Mar 31, 2025
@psychedelicious psychedelicious force-pushed the psyche/fix/mm/flux-variant-probing branch from 514a7f9 to f477899 Compare March 31, 2025 00:29
@psychedelicious psychedelicious force-pushed the psyche/fix/mm/flux-variant-probing branch from 600fb2f to be16666 Compare March 31, 2025 00:40
@psychedelicious psychedelicious enabled auto-merge (rebase) March 31, 2025 00:41
@psychedelicious (Collaborator, Author)

There is something wonky with the macOS test runner; I can't reproduce the issue locally on my M1 Pro. I'm going to split the testing changes off into a separate PR so we can get the user-facing issue resolved sooner rather than later.

@psychedelicious psychedelicious force-pushed the psyche/fix/mm/flux-variant-probing branch from be16666 to 20469a8 Compare March 31, 2025 01:26
@psychedelicious psychedelicious merged commit a44bfb4 into main Mar 31, 2025
11 checks passed
@psychedelicious psychedelicious deleted the psyche/fix/mm/flux-variant-probing branch March 31, 2025 01:32
Successfully merging this pull request may close these issues:

  • [bug]: FLUX model loading ERROR
  • [bug]: Model Install Error

3 participants