Forward sampling functions should warn that they ignore Potential in a model #3865
Comments
By design, Potentials only affect the model's logp.
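A minimal sketch of what that means in practice (PyMC3-era API; the variable and Potential names are made up): the Potential shows up in the joint logp but has no effect on forward draws.

```python
import numpy as np
import pymc3 as pm

with pm.Model() as model:
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    # Hard constraint expressed as a Potential: it only enters the logp.
    pm.Potential("positivity", pm.math.switch(x > 0, 0.0, -np.inf))
    prior = pm.sample_prior_predictive(1000)

print(model.logp({"x": np.array(-1.0)}))  # -inf: the Potential is in the logp
print((prior["x"] < 0).mean())            # ~0.5: forward sampling ignores it
```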
Similarly, transforms are (mostly) ignored. Except when they aren't (e.g., …)
As we've discussed elsewhere, what we call a distribution's …
Well, arguably, if you specify a transform and we can't support generating transformed samples, then IMO forward sampling should just raise an error on transformed RVs and give up. Unfortunately, the docs here are quite opaque. Here's the definition of the … and the …
Can we have a check of the logp when forward sampling that rejects the sample if the constraint is not satisfied?
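One way that check could look (a hypothetical sketch, not an existing API; assumes the model has no transformed free variables): evaluate the summed Potential terms on each forward draw and keep only the draws where they are finite.

```python
import numpy as np

def reject_constraint_violations(model, samples, n_draws):
    """Hypothetical helper: drop forward draws whose Potential logp is -inf."""
    pot_fn = model.fn(sum(model.potentials))    # summed Potential terms
    input_names = [v.name for v in model.vars]  # free variables
    keep = np.array([
        i for i in range(n_draws)
        if np.isfinite(pot_fn({name: samples[name][i] for name in input_names}))
    ], dtype=int)
    return {name: values[keep] for name, values in samples.items()}
```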
I thought it was just me, but then we got a bug report where someone passed …
@lucianopaz @fonnesbeck -- could we address this issue by having the forward sampling functions use importance sampling? I.e., could we evaluate the potential's logp on each sample and use that as the weight for the sample? (I'm kind of hazy on the math here -- we might have to take the antilog of the potential value, right?) I'm not sure how feasible this would be to implement, since all the code having to do with traces has the assumption deeply baked in that all trace elements have the same weight. I suppose one could instead sub-sample from the original trace, using weights derived from the potential value at each sample, to build a new, weighted trace. I'm not sure how else one could incorporate a potential in forward sampling, but this is just the product of a few minutes' musing while I was on a walk.
Yeah, sampling-importance-resampling would be more general than just rejecting constraint violations, since it would also handle the case where the Potential is not just returning -inf values as a constraint.
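A rough sketch of that idea (hypothetical helper, PyMC3-era API; assumes no transformed free variables): weight each forward draw by the exponential of its summed Potential logp, then resample the draws in proportion to those weights.

```python
import numpy as np

def sir_resample(model, samples, n_draws, random_seed=None):
    """Hypothetical sampling-importance-resampling over forward draws."""
    rng = np.random.default_rng(random_seed)
    pot_fn = model.fn(sum(model.potentials))    # summed Potential terms
    input_names = [v.name for v in model.vars]
    log_w = np.array([
        pot_fn({name: samples[name][i] for name in input_names})
        for i in range(n_draws)
    ])
    # Normalize in log space for numerical stability, then exponentiate;
    # draws where the Potential is -inf get weight zero.
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_draws, size=n_draws, replace=True, p=w)
    return {name: values[idx] for name, values in samples.items()}
```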
Yes, this could be done. However, I'm not in favor of doing it by default. Drawing forward samples from a model with Potentials could be done with particle-filter methods like importance sampling, SIR, or SMC, but it could also be done using MCMC methods like NUTS or Gibbs. The difference between … I think that we should simply warn the user that forward sampling ignores Potentials and direct them to use some other Monte Carlo method if they are interested in always satisfying the Potentials.
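The warning itself could be as simple as something like this inside the forward sampling functions (a sketch of the idea, not the actual implementation):

```python
import warnings

# Inside sample_prior_predictive / sample_posterior_predictive:
if model.potentials:
    warnings.warn(
        "The model has Potential terms; forward sampling ignores them. "
        "If the Potentials must be satisfied, use a Monte Carlo method "
        "such as pm.sample() instead.",
        UserWarning,
    )
```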
@lucianopaz So it seems that what is needed is:
1. A warning that the forward sampling functions ignore Potentials.
2. An example (e.g., a notebook) showing how to satisfy Potentials with another sampling method.
Working on 1 is on my to-do list -- hopefully I'll get to it by the end of May 😊
I think step 2 was just about having a notebook where you use a normal pm.sample() call on a model with Potentials but no observed variables, to obtain the same prior predictive samples that one would get if sample_prior_predictive were able to work with models with Potentials. What one would desire:
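Presumably something like this (a sketch with a made-up model and constraint):

```python
import numpy as np
import pymc3 as pm

with pm.Model():
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    pm.Potential("constraint", pm.math.switch(x < 1.0, 0.0, -np.inf))
    prior = pm.sample_prior_predictive(1000)

# Desired: no draw violates the constraint. Today this prints True,
# because sample_prior_predictive ignores the Potential.
print((prior["x"] >= 1.0).any())
```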
How it can be obtained:
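Along these lines (same made-up model): with no observed variables, pm.sample() targets the Potential-constrained prior, so its draws are exactly the prior predictive samples one wanted.

```python
import numpy as np
import pymc3 as pm

with pm.Model():
    x = pm.Normal("x", mu=0.0, sigma=1.0)
    pm.Potential("constraint", pm.math.switch(x < 1.0, 0.0, -np.inf))
    # MCMC honors the Potential (expect some divergences at the hard boundary).
    trace = pm.sample(1000)

print((trace["x"] < 1.0).all())  # True: every draw satisfies the constraint
```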
It does not solve the problem of posterior predictive samples, though.
Thanks @ricardoV94! Having thought a bit more about this now, I think that's exactly it 👌
Description of your problem
As discussed here with @junpenglao, it seems like `sample_posterior_predictive` cannot account for a `Potential` in a model with a `Geometric` likelihood.
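The original code block did not survive the export; a hypothetical model of the same shape (Geometric likelihood plus a hard Potential constraint; all names and numbers are made up) would look like:

```python
import numpy as np
import pymc3 as pm

counts = np.array([13, 40, 32, 88, 5])  # placeholder for the Kaggle data

with pm.Model() as model:
    p = pm.Beta("p", alpha=1.0, beta=1.0)
    pm.Geometric("rentals", p=p, observed=counts)
    # Constrain the Geometric's implied mean, 1/p (made-up constraint):
    pm.Potential("mean_cap", pm.math.switch(1.0 / p <= 100.0, 0.0, -np.inf))
    trace = pm.sample()
    ppc = pm.sample_posterior_predictive(trace)  # the Potential is ignored here
```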
This samples perfectly, but the constraint was not applied to the PP samples:

The data come from Kaggle’s bike-sharing demand contest.
Versions and main components