
RFInversionFluxPipeline, small fix for enable_model_cpu_offload & enable_sequential_cpu_offload compatibility #10480


Merged: 2 commits merged into huggingface:main on Jan 7, 2025

Conversation

@Teriks (Contributor) commented on Jan 7, 2025

Use self._execution_device instead of self.device when selecting a device for the input image tensor in RFInversionFluxPipeline.encode_image.

This allows compatibility with enable_model_cpu_offload and enable_sequential_cpu_offload.

Since this is in a # Copied from method, the same fix may be needed elsewhere.
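As a self-contained illustration of the pattern the fix relies on (the class and method here are a hypothetical toy, not diffusers APIs): with CPU offload hooks installed, a pipeline's nominal self.device reflects where weights rest between calls, while self._execution_device records where compute actually runs, so inputs must target the latter.

```python
import torch

class OffloadAwarePipeline:
    """Toy stand-in for the device-selection pattern in diffusers pipelines.

    With offload hooks installed, weights rest on "cpu" (or "meta") between
    forward calls, so self.device no longer matches the compute device;
    self._execution_device tracks where the forward pass will actually run.
    """

    def __init__(self):
        self.device = torch.device("cpu")             # where weights rest
        self._execution_device = torch.device("cpu")  # where compute runs

    def enable_offload(self, execution_device: str) -> None:
        # Offload hooks keep weights on CPU but run compute elsewhere.
        self._execution_device = torch.device(execution_device)

    def encode_image(self, image: torch.Tensor) -> torch.Tensor:
        # The fix: move inputs to the execution device, not self.device.
        return image.to(device=self._execution_device)
```

With offload enabled, image.to(self.device) would target the resting device and mismatch the compute device; targeting self._execution_device keeps inputs and weights together.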

What does this PR do?

Allows enabling the VRAM optimizations (model or sequential CPU offload) without encountering a meta tensor copy error or a device mismatch.
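The meta-tensor half of that error can be reproduced in isolation: under sequential CPU offload, parameters sit on PyTorch's data-free "meta" device until a hook materializes them, so code that targets the wrong device ends up trying to copy data out of a tensor that has none.

```python
import torch

# Meta tensors have shape and dtype but no storage; copying data out of
# one is impossible, which is the failure a wrong device choice triggers.
weight = torch.empty(2, 2, device="meta")
print(weight.device)  # meta

try:
    weight.to("cpu")
except NotImplementedError as err:
    print("copy failed:", type(err).__name__)
```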

import torch
from diffusers import DiffusionPipeline
from PIL import Image

# RF Inversion is a community pipeline, so it is loaded via custom_pipeline;
# the returned object is an RFInversionFluxPipeline.
pipe = DiffusionPipeline.from_pretrained("black-forest-labs/FLUX.1-dev",
                                         custom_pipeline="pipeline_flux_rf_inversion",
                                         torch_dtype=torch.bfloat16)
# potato hardware mode
# pipe.enable_model_cpu_offload()

# extra potato hardware mode
pipe.enable_sequential_cpu_offload()

image = Image.open("./example/cat.png").resize((1024, 1024))

inverted_latents, image_latents, latent_image_ids = pipe.invert(image=image,
                                                                num_inversion_steps=28,
                                                                gamma=0.5)

image = pipe("portrait of a tiger",
             inverted_latents=inverted_latents,
             image_latents=image_latents,
             latent_image_ids=latent_image_ids,
             start_timestep=0,
             stop_timestep=0.38,
             num_inference_steps=28,
             eta=0.9,
             ).images[0]

image.save("tiger.png")


Who can review?

@linoytsaban

@linoytsaban linoytsaban self-assigned this Jan 7, 2025

@linoytsaban (Collaborator) left a comment:


good catch thanks @Teriks!


@linoytsaban linoytsaban merged commit 03bcf5a into huggingface:main Jan 7, 2025
9 checks passed
@Teriks Teriks deleted the rf_inversion_fix branch January 13, 2025 04:00