Update Posterior Predictive Checks Notebook #3857
Conversation
This reverts commit 5c08d92, undoing the changes made in the sampling file.
Check out this pull request on ReviewNB. You'll be able to see the Jupyter notebook diff and discuss changes. Powered by ReviewNB.
I'll have a look. As a PyMC3 user, one thing I find quite difficult is that some of the distributions do not support predictive sampling and, worse, just generate garbage instead of erroring out. This might be worth flagging with a caution.
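For example, a quick spot-check (just a sketch, assuming the standard `.dist().random()` pattern in PyMC3 3.x) is to draw from the distribution in isolation before trusting any predictive output:

```python
import numpy as np
import pymc3 as pm

# Sketch: sample the distribution on its own and eyeball the draws.
# A distribution with missing or broken predictive sampling will either
# raise here or produce obviously wrong output (constants, NaNs, values
# outside the support) instead of plausible draws.
draws = pm.Normal.dist(mu=0.0, sigma=1.0).random(size=1000)
assert np.isfinite(draws).all() and draws.std() > 0
```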
Codecov Report

```
@@            Coverage Diff             @@
##           master    #3857      +/-   ##
==========================================
- Coverage   90.75%   90.45%   -0.31%
==========================================
  Files         135      135
  Lines       21184    21184
==========================================
- Hits        19226    19161      -65
- Misses       1958     2023      +65
```
Thanks @rpgoldman!
I'm afraid not. To be honest, those codecov outputs are useless to me because I don't know how to interpret them, and they are very "twitchy" -- touching arbitrary bits of the repo seems to make them swing in unpredictable ways.
Yeah, I feel the same... The Travis tests passed, so I guess that's what matters most 🤷‍♂️
I don't believe they do in the sense that there won't be conflicts. I have no idea what they do to the test results of master. I think @ahartikainen has been wrestling with these tests.
@twiecki @AlexAndorra I'll check this out and try to do a quick pass over it in my repo and give you a PR back. Not sure I will get this done today, though. |
Thanks @rpgoldman! If it's easier for you though, you can just make your comments in ReviewNB -- we do that on the "Ressources" repo and it works well 👌
I was looking for a way to do that, and I couldn't figure out how. I could comment on it, but if I just wanted to tweak a paragraph, I couldn't. Sometimes it's just easier to edit the text than to describe the edits!
I understand, but that runs the risk of letting users (like me!) think that one can follow the full model design → prior predictive evaluation → find the posterior → evaluate the posterior by sampling workflow, when PyMC3 only fully supports finding the posterior.
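To spell out the workflow I mean (a minimal sketch with a made-up coin-flip model, not code from the notebook):

```python
import numpy as np
import pymc3 as pm

# Toy data: 100 coin flips, made up purely for illustration.
flips = np.random.RandomState(0).binomial(1, 0.7, size=100)

with pm.Model():
    p = pm.Beta("p", alpha=1.0, beta=1.0)
    pm.Bernoulli("obs", p=p, observed=flips)

    prior_pred = pm.sample_prior_predictive(samples=500)  # prior predictive evaluation
    trace = pm.sample(1000, tune=1000, cores=1)           # find the posterior
    post_pred = pm.sample_posterior_predictive(trace)     # evaluate it by simulation
```

The first and last steps are the ones that depend on every distribution in the model implementing `random()`, which is where the silent-garbage problem above can bite.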
Yeah, I think you can't do that.
Well, I'd say you can for a substantial number of distributions. When you can't do it out of the box, then you have to push the parameters through the model yourself -- which can get quite complicated, indeed.
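For a simple likelihood, "pushing the parameters through yourself" can be as small as this (a sketch with made-up arrays standing in for the posterior draws you would pull out of the trace):

```python
import numpy as np

rng = np.random.RandomState(123)

# Stand-ins for posterior draws of mu and sigma; in practice these would be
# something like trace["mu"] and trace["sigma"].
mu_draws = rng.normal(1.0, 0.1, size=2000)
sigma_draws = np.abs(rng.normal(2.0, 0.1, size=2000))

# One simulated data set per posterior draw, pushed through a Normal
# likelihood by hand.
n_obs = 100
manual_ppc = rng.normal(mu_draws[:, None], sigma_draws[:, None], size=(2000, n_obs))
print(manual_ppc.shape)  # (2000, 100)
```

For hierarchical models or transformed parameters this bookkeeping grows quickly, which is where the complication comes from.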
What tests fail? Codecov? Ignore it for now; I think there is some confusion about what is compared to what (in the ArviZ repo too). (Does it compare the latest commits?)
Just took a look through ReviewNB and this is much improved. Many thanks @AlexAndorra!
The Posterior Predictive Checks notebook on the website uses PyMC 3.6 and doesn't use ArviZ. As this is a favorite topic of questions on Discourse and elsewhere, I thought it'd be useful to update and extend it.
I made the following changes:
Thanks in advance for the review, and let me know if anything needs edits ✌️
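For reviewers who only want the gist of the ArviZ side of the update, the shape of the posterior predictive check looks roughly like this (a sketch with a toy Normal model; the variable names are made up and the notebook has the full, annotated versions):

```python
import arviz as az
import numpy as np
import pymc3 as pm

# Toy data, made up for illustration.
y_obs = np.random.RandomState(1).normal(1.0, 2.0, size=100)

with pm.Model():
    mu = pm.Normal("mu", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 5.0)
    pm.Normal("y", mu=mu, sigma=sigma, observed=y_obs)

    trace = pm.sample(1000, tune=1000, cores=1)
    post_pred = pm.sample_posterior_predictive(trace)
    # Bundle results into an InferenceData object while still inside the
    # model context, then plot observed data against posterior predictive draws.
    idata = az.from_pymc3(trace=trace, posterior_predictive=post_pred)

az.plot_ppc(idata)
```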