
Thinking about the guided filter #73


Closed
charlesknipp opened this issue Mar 13, 2025 · 2 comments

@charlesknipp (Member)

Guided Filter Construction

Given the changes proposed in #37, the interface is now much more oriented towards in-place operations. This breaks the original implementation of the guided filter, but as Tim suggested, there are plenty of ways around it. I propose a handful of options below.

Method 1 (the overhauled predict)

We can reimplement the guided filter calculations such that the log weights include both the transition log-density and the proposal log-density in the predict method. Ideally, the update method remains unchanged apart from a different type signature. Unfortunately, this implies that the log-likelihood cannot be marginalized between iterations.

```julia
function predict(...)
    # forward simulation from a proposal
    proposed_particles = map(
        x -> SSMProblems.simulate(rng, model, filter.proposal, step, x, observation; kwargs...),
        collect(state),
    )

    # importance weight correction: transition log-density minus proposal log-density
    log_increments = map(zip(proposed_particles, state.particles)) do (new_state, prev_state)
        log_f = SSMProblems.logdensity(model.dyn, step, prev_state, new_state; kwargs...)
        log_q = SSMProblems.logdensity(
            model, filter.proposal, step, prev_state, new_state, observation; kwargs...
        )

        (log_f - log_q)
    end

    proposed_state = ParticleDistribution(
        proposed_particles, deepcopy(state.log_weights) + log_increments
    )

    return update_ref!(proposed_state, ref_state, step)
end
```
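
To see where that drawback bites, here is a minimal sketch of the companion update under this design; the signature and the bare logsumexp return value are illustrative assumptions, not the actual interface:

```julia
# Minimal sketch, not the actual interface: assumes predict has already folded
# (log_f - log_q) into the weights, so update only adds the observation log-density.
function update(model, filter, step, state, observation; kwargs...)
    log_increments = map(state.particles) do particle
        SSMProblems.logdensity(model.obs, step, particle, observation; kwargs...)
    end
    state.log_weights += log_increments

    # this total is not yet a marginal log-likelihood increment: the post-resample
    # normalization term is only known inside step
    return state, logsumexp(state.log_weights)
end
```

The returned logsumexp cannot be turned into a marginal log-likelihood increment on its own, since the post-resample normalization term lives in step; that is exactly the marginalization issue mentioned above.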

Method 2 (the overloaded step method)

Rather than keeping with the format above, we can overload step to perform all the computations at once. While this is quite inelegant, it may well be the most efficient version of the filter. It also solves the issue of the marginal log-likelihood term from the first method. Regardless, this is quite messy since we no longer have separate methods for predict and update.

```julia
function step(...)
    prev_state = resample(rng, alg.resampler, state)
    marginalization_term = logsumexp(prev_state.log_weights)

    isnothing(callback) || callback(model, alg, iter, prev_state, observation, PostResample; kwargs...)

    # forward simulation from the proposal, starting from the resampled particles
    state.particles = map(
        x -> SSMProblems.simulate(rng, model, alg.proposal, iter, x, observation; kwargs...),
        collect(prev_state),
    )

    state = update_ref!(state, ref_state, iter)

    isnothing(callback) || callback(model, alg, iter, state, observation, PostPredict; kwargs...)

    # combined weight update: transition and observation log-densities minus the proposal log-density
    particle_collection = zip(state.particles, prev_state.particles)
    state.log_weights += map(particle_collection) do (prop_particle, prev_particle)
        log_f = SSMProblems.logdensity(model.dyn, iter, prev_particle, prop_particle; kwargs...)
        log_g = SSMProblems.logdensity(model.obs, iter, prop_particle, observation; kwargs...)
        log_q = SSMProblems.logdensity(
            model, alg.proposal, iter, prev_particle, prop_particle, observation; kwargs...
        )

        (log_f + log_g - log_q)
    end

    ll_increment = logsumexp(state.log_weights) - marginalization_term

    isnothing(callback) || callback(model, alg, iter, state, observation, PostUpdate; kwargs...)

    return state, ll_increment
end
```

Motivation

An operational guided filter, in conjunction with automatic differentiation (see #26), allows variational algorithms to be used to tune a proposal. Since I already have those algorithms written against a primitive filtering interface, supporting them would be another low-hanging fruit.

@THargreaves (Collaborator)

Thank you for kicking off this discussion!

I definitely lean more towards the first method, since it preserves the decomposition of step and means that the distribution you get out of predict is actually p(x_{t+1} | y_{1:t}).

I can then see two ways of computing the log-likelihood:

  1. Pass the post-resample log-likelihood into update as a keyword argument
  2. Have update return some LazyLLIncrement type which you then combine with marginalization_term at the end of step to get the actual ll_increment
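
To make these concrete, a rough sketch of how each option might look; all names here, including LazyLLIncrement's field and the finalize helper, are illustrative assumptions rather than an agreed design:

```julia
# Option 1: thread the post-resample term through update as a keyword argument
function update(model, alg, iter, state, observation; marginalization_term, kwargs...)
    # ... usual weight update on state.log_weights ...
    ll_increment = logsumexp(state.log_weights) - marginalization_term
    return state, ll_increment
end

# Option 2: defer the subtraction with a lazy wrapper; update would return
# LazyLLIncrement(logsumexp(state.log_weights)), and step finalizes it
struct LazyLLIncrement{T}
    post_update_logsumexp::T
end

finalize(ll::LazyLLIncrement, marginalization_term) =
    ll.post_update_logsumexp - marginalization_term
```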

@charlesknipp (Member, Author)

See #74, which is due to be merged into main.
