
Guiders support for Wan #11211

Open · a-r-r-o-w wants to merge 5 commits into base: feature/guiders
Conversation

@a-r-r-o-w (Member) commented on Apr 4, 2025

This PR shows the changes required to make existing pipelines compatible with guiders.

Example results with Wan 1.3B:

Guidance methods:

| APG | CFG | CFG-Zero* | PAG | SLG |
| --- | --- | --- | --- | --- |
| AdaptiveProjectedGuidance.mp4 | ClassifierFreeGuidance.mp4 | ClassifierFreeZeroStarGuidance.mp4 | PerturbedAttentionGuidance.mp4 | SkipLayerGuidance.mp4 |
Prompt: "A frog doing his taxes while he sits on a swing!"

Code:
import argparse
from pathlib import Path

import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.guiders import (
    AdaptiveProjectedGuidance,
    ClassifierFreeGuidance,
    ClassifierFreeZeroStarGuidance,
    PerturbedAttentionGuidance,
    SkipLayerGuidance,
)
from diffusers.hooks import FirstBlockCacheConfig, LayerSkipConfig
from diffusers.utils import export_to_video


def main(args):
    output_dir = Path(args.output_dir)
    output_dir.mkdir(parents=True, exist_ok=True)
    
    model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
    # Keep the VAE in float32 for numerical stability; the rest of the pipeline runs in bfloat16.
    vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
    pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
    pipe.to("cuda")

    if args.cache_threshold is not None:
        # Optionally speed up inference by caching first-block outputs across denoising steps.
        pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=args.cache_threshold))

    # Vanilla classifier-free guidance (CFG).
    cfg = ClassifierFreeGuidance(guidance_scale=5.0)

    # Skip-layer guidance (SLG): runs an extra branch with the listed transformer blocks skipped.
    slg = SkipLayerGuidance(
        guidance_scale=4.0,
        skip_layer_guidance_scale=2.0,
        skip_layer_guidance_start=0.01,
        skip_layer_guidance_stop=0.2,
        skip_layer_config=LayerSkipConfig(indices=[11, 16]),
    )

    # CFG-Zero*: zeroes the guided prediction for the first `zero_init_steps` steps.
    cfgz = ClassifierFreeZeroStarGuidance(
        guidance_scale=5.0,
        zero_init_steps=1,
    )

    # Adaptive projected guidance (APG): projects and rescales the guidance update to curb oversaturation.
    apg = AdaptiveProjectedGuidance(
        guidance_scale=12.0,
        adaptive_projected_guidance_momentum=0.2,
        adaptive_projected_guidance_rescale=15.0,
        eta=1.0,
        guidance_rescale=0.0,
        use_original_formulation=False,
    )

    # Perturbed-attention guidance (PAG): perturbs attention in the listed layers for the extra branch.
    pag = PerturbedAttentionGuidance(
        pag_applied_layers="blocks.11",
        guidance_scale=4.0,
        pag_scale=1.5,
    )

    prompt = "A frog doing his taxes while he sits on a swing!"
    negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"

    # Generate one video per guidance method with a fixed seed so results are comparable.
    guiders = [cfg, slg, cfgz, apg, pag]

    for guidance in guiders:
        video = pipe(
            prompt=prompt,
            negative_prompt=negative_prompt,
            guidance=guidance,
            generator=torch.Generator().manual_seed(42),
        ).frames[0]
        if args.cache_threshold is not None:
            output_path = output_dir / f"{guidance.__class__.__name__}_cache_threshold_{args.cache_threshold:.3f}.mp4"
        else:
            output_path = output_dir / f"{guidance.__class__.__name__}.mp4"
        export_to_video(video, str(output_path), fps=16)


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--output_dir", type=str, default="basic_comparison_wan")
    parser.add_argument("--cache_threshold", type=float, default=None)
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()
    main(args)
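For reference, assuming the script above is saved as compare_guiders.py (filename illustrative), it can be run with:

python compare_guiders.py --output_dir basic_comparison_wan

Optionally pass --cache_threshold to enable first-block caching (e.g. --cache_threshold 0.05; the value is just an example, not a recommendation).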

a-r-r-o-w requested a review from yiyixuxu on Apr 4, 2025
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Inline review comment on the following hunk from the pipeline's denoising loop:

attention_kwargs=attention_kwargs,
return_dict=False,
)[0]
noise_pred = noise_uncond + guidance_scale * (noise_pred - noise_uncond)
guidance.prepare_outputs(noise_pred)
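For context, the noise_pred line above implements standard classifier-free guidance. A minimal standalone sketch of that combination (illustrative, not this PR's API):

def cfg_combine(noise_cond, noise_uncond, guidance_scale):
    # Move from the unconditional prediction toward the conditional one;
    # guidance_scale == 1.0 recovers the conditional prediction exactly, and
    # larger scales extrapolate past it for stronger prompt adherence.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)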
@yiyixuxu (Collaborator) commented on Apr 8, 2025

So first, let's test very, very thoroughly for potential performance differences from this change (only need SDXL for now; vary num_images_per_prompt, machine type, etc.).

Second, code-wise I think it's less confusing with something like this, i.e. explicitly pass the model as input (otherwise it's unclear that there is a model call there), and a function should always return an output if it modifies its input:

noise_pred = guider.prepare_cond(self.transformer, ...)
outputs = guider.prepare_guider_output(self.transformer, ...)
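A minimal sketch of the shape being suggested (class and argument names are illustrative, not a final API): the model is passed in explicitly so the model invocation is visible at the call site, and each method returns what it produces rather than mutating its inputs.

class ExplicitGuider:
    def __init__(self, guidance_scale):
        self.guidance_scale = guidance_scale

    def prepare_cond(self, model, latents, timestep, cond_embeds, uncond_embeds):
        # One explicit model call per guidance branch; both predictions are returned.
        noise_cond = model(latents, timestep, encoder_hidden_states=cond_embeds, return_dict=False)[0]
        noise_uncond = model(latents, timestep, encoder_hidden_states=uncond_embeds, return_dict=False)[0]
        return noise_cond, noise_uncond

    def prepare_guider_output(self, noise_cond, noise_uncond):
        # Standard CFG combination; other guiders would override this step.
        return noise_uncond + self.guidance_scale * (noise_cond - noise_uncond)

Returning the combined prediction keeps the data flow explicit at the call site, which addresses both concerns above.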
