Fix pm.DensityDist bug and incorporate latest upstream changes #42

Merged
merged 23 commits into from Jul 22, 2020

Commits
f93b5e7
Update GP NBs to use standard notebook style (#3978)
bwengals Jun 26, 2020
facbdf1
rewrite radon notebook using ArviZ and xarray (#3963)
OriolAbril Jun 29, 2020
747db63
SMC: refactor, speed-up and run multiple chains in parallel for diagn…
aloctavodia Jun 29, 2020
8560f1e
Honor discard_tuned_samples during KeyboardInterrupt (#3785)
aseyboldt Jul 1, 2020
7842072
Add time values as sampler stats for NUTS (#3986)
aseyboldt Jul 1, 2020
1bf867e
Drop support for py3.6 (#3992)
aseyboldt Jul 3, 2020
8770259
Fix Mixture distribution mode computation and logp dimensions
brandonwillard Jul 3, 2020
a34f63a
Add more info to divergence warnings (#3990)
aseyboldt Jul 5, 2020
1af9976
follow-up of py36 drop (#3998)
OriolAbril Jul 5, 2020
d465c3c
Revert "Drop support for py3.6 (#3992)"
junpenglao Jul 6, 2020
e6c1e66
Update README.rst
junpenglao Jul 6, 2020
24193d6
Update setup.py
junpenglao Jul 6, 2020
bd90b2d
Update requirements.txt
junpenglao Jul 6, 2020
77873e9
Update requirements.txt
junpenglao Jul 6, 2020
90f48ed
Show pickling issues in notebook on windows (#3991)
aseyboldt Jul 7, 2020
692a09f
Fix keep_size for arviz structures. (#4006)
rpgoldman Jul 12, 2020
9e8975f
SMC-ABC add distance, refactor and update notebook (#3996)
aloctavodia Jul 16, 2020
28a4621
add docs for interpretation of length scales in periodic kernel (#3989)
tirthasheshpatel Jul 17, 2020
b2c682e
Fix Matplotlib type error for tests (#4023)
rpgoldman Jul 20, 2020
27f8b3c
Switch from pm.DensityDist to pm.Potential to describe the likelihood…
gmingas Jul 21, 2020
4c81646
Merge branch 'mlda' into mlda_develop
gmingas Jul 21, 2020
f07c273
Remove Dirichlet distribution type restrictions (#4000)
brandonwillard Jul 21, 2020
981df60
Merge remote-tracking branch 'upstream/master' into mlda_develop
gmingas Jul 22, 2020
1 change: 1 addition & 0 deletions .gitignore
@@ -41,3 +41,4 @@ benchmarks/results/
pytestdebug.log
.dir-locals.el
.pycheckers

15 changes: 13 additions & 2 deletions RELEASE-NOTES.md
@@ -1,13 +1,24 @@
# Release Notes

## PyMC3 3.9.x (on deck)
*waiting for contributions*

### Maintenance
- Fix an error on Windows and Mac where the error message from unpickling models did not show up in the notebook, or where sampling froze when a worker process crashed (see [#3991](https://github.com/pymc-devs/pymc3/pull/3991)).
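The fix above matters on Windows and macOS because parallel sampling there uses the `spawn` start method, which launches fresh interpreter processes and must pickle the model to ship it to workers. A minimal stdlib sketch of that start-method machinery (illustrative only, not PyMC3 internals):

```python
import multiprocessing

# A "spawn" context starts brand-new interpreter processes, so anything sent
# to a worker (including a model) must be picklable. "fork", by contrast,
# inherits the parent's memory and needs no pickling. This is the mechanism
# behind pickling failures that only appear on Windows/macOS.
ctx = multiprocessing.get_context("spawn")

# Objects created from the context use its start method.
queue = ctx.Queue()
```

When a model fails to pickle under `spawn`, the error occurs in the parent before the worker even starts, which is why surfacing it in the notebook (rather than freezing) required an explicit fix.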

### Documentation
- Notebook on [multilevel modeling](https://docs.pymc.io/notebooks/multilevel_modeling.html) has been rewritten to showcase ArviZ and xarray usage for inference result analysis (see [#3963](https://github.com/pymc-devs/pymc3/pull/3963))

### New features
- Introduce optional arguments to `pm.sample`: `mp_ctx` to control how the processes for parallel sampling are started, and `pickle_backend` to specify which library is used to pickle models in parallel sampling when the multiprocessing context is not of type `fork` (see [#3991](https://github.com/pymc-devs/pymc3/pull/3991)).
- Add sampler stats `process_time_diff`, `perf_counter_diff` and `perf_counter_start`, which record wall and CPU times for each NUTS and HMC sample (see [#3986](https://github.com/pymc-devs/pymc3/pull/3986)).
- Extend `keep_size` argument handling for `sample_posterior_predictive` and `fast_sample_posterior_predictive` to work on ArviZ `InferenceData` and xarray `Dataset` input values (see [PR #4006](https://github.com/pymc-devs/pymc3/pull/4006) and [Issue #4004](https://github.com/pymc-devs/pymc3/issues/4004)).
- SMC-ABC: add the Wasserstein and energy distance functions. Refactor the API: the `distance`, `sum_stats` and `epsilon` arguments are now passed to `pm.Simulator` instead of `pm.sample_smc`. Add a `random` method to `pm.Simulator` and an option to save the simulated data. Improve the LaTeX representation (see [#3996](https://github.com/pymc-devs/pymc3/pull/3996)).
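The timing stats above distinguish CPU time from wall-clock time per draw. A rough sketch of what each stat measures, using stdlib timers around a stand-in for one sampler step (illustrative only, not PyMC3's implementation):

```python
import time

# perf_counter measures wall-clock time; process_time measures CPU time
# consumed by this process only (it excludes time spent sleeping or blocked).
perf_counter_start = time.perf_counter()
cpu_start = time.process_time()

# Stand-in workload for a single NUTS/HMC draw.
total = sum(i * i for i in range(100_000))

process_time_diff = time.process_time() - cpu_start       # CPU time of the draw
perf_counter_diff = time.perf_counter() - perf_counter_start  # wall time of the draw
```

Comparing the two per draw can reveal, for example, draws that were slow because the process was descheduled rather than because the gradient was expensive.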

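For intuition about the Wasserstein distance added to SMC-ABC: in one dimension, for equal-sized samples, the Wasserstein-1 distance reduces to the mean absolute difference of the sorted values. A small sketch under that assumption (not PyMC3's internal implementation, which may differ in normalization and generality):

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-D Wasserstein-1 distance for equal-sized samples:
    mean absolute difference of the sorted observations."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a - b)))

# Every point of the second sample is shifted by 1, so the distance is 1.
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```

In ABC, such a distance compares simulated data against observed data to decide which parameter proposals to keep.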
## PyMC3 3.9.2 (24 June 2020)
### Maintenance
- Warning added in GP module when `input_dim` is lower than the number of columns in `X` to compute the covariance function (see [#3974](https://github.com/pymc-devs/pymc3/pull/3974)).
- Pass the `tune` argument from `sample` when using `advi+adapt_diag_grad` (see issue [#3965](https://github.com/pymc-devs/pymc3/issues/3965), fixed by [#3979](https://github.com/pymc-devs/pymc3/pull/3979)).
- Add simple test case for new coords and dims feature in `pm.Model` (see [#3977](https://github.com/pymc-devs/pymc3/pull/3977)).
- Require ArviZ >= 0.9.0 (see [#3977](https://github.com/pymc-devs/pymc3/pull/3977)).

_NB: The `docs/*` folder is still removed from the tarball due to an upload size limit on PyPi._
173 changes: 112 additions & 61 deletions docs/source/notebooks/Bayes_factor.ipynb

Large diffs are not rendered by default.

535 changes: 357 additions & 178 deletions docs/source/notebooks/GP-Latent.ipynb

Large diffs are not rendered by default.

10 changes: 5 additions & 5 deletions docs/source/notebooks/MLDA_benchmarks_tuning.ipynb
@@ -231,9 +231,9 @@
" # convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" # use a Potential for the likelihood\n",
" ll = logl[j]\n",
" pm.DensityDist('likelihood', lambda v: ll(v), observed={'v': theta})\n",
" pm.Potential('likelihood', ll(theta))\n",
"\n",
" coarse_models.append(cmodel)\n",
" \n",
@@ -248,8 +248,8 @@
" # Convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" pm.DensityDist('likelihood', lambda v: logl[-1](v), observed={'v': theta})\n",
" # use a Potential for the likelihood\n",
" pm.Potential('likelihood', logl[-1](theta))\n",
" \n",
" return model, coarse_models, true_parameters"
]
@@ -2716,7 +2716,7 @@
"source": [
"Generally, the optimal subsampling rate depends on the complexity of the fine posterior. The more complex the posterior, the more samples are needed to generate a decent proposal. The reason is that the MLDA sampler is based on the assumption that the coarse proposal samples (i.e. the samples sent from the coarse chain to the fine one) are independent from each other. In order to generate independent samples, it is necessary to run the coarse chain for an adequate number of iterations to get rid of autocorrelation. The more complex the posterior the more iterations are needed and thus a larger subsampling rate.\n",
"\n",
"Note that in cases where you have more than one coarse model/level, MLDA allows you to choose a different subsampling rate for each coarse level (as a list of integers when you instantiate the stepper)."
]
}
],
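The diffs above replace `pm.DensityDist(..., observed={'v': theta})` with `pm.Potential('likelihood', ll(theta))`. The switch works because a Potential simply adds an arbitrary term to the model's joint log-probability, with no observed-variable bookkeeping. A toy, pure-Python sketch of that accumulation mechanism (hypothetical `ToyModel`, not PyMC3's API):

```python
# ToyModel is a hypothetical illustration: a Potential is just an extra
# term summed into the joint log-probability that the sampler evaluates.
class ToyModel:
    def __init__(self):
        self.potentials = []

    def potential(self, logp_term):
        """Register an arbitrary log-probability contribution."""
        self.potentials.append(logp_term)

    def logp(self):
        """Joint log-probability: the sum of all registered terms."""
        return sum(self.potentials)

model = ToyModel()
model.potential(-0.5)   # e.g. a prior term
model.potential(-2.25)  # e.g. the custom likelihood added via pm.Potential
```

Because the likelihood enters as a plain additive term, no lambda wrapper or `observed` dict is needed, which is what simplifies the MLDA notebook code.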
8 changes: 4 additions & 4 deletions docs/source/notebooks/MLDA_multilevel_groundwater_flow.ipynb
@@ -352,9 +352,9 @@
" # convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" # use a Potential for the likelihood\n",
" ll = logl[j]\n",
" pm.DensityDist('likelihood', lambda v: ll(v), observed={'v': theta})\n",
" pm.Potential('likelihood', ll(theta))\n",
"\n",
" coarse_models.append(model)\n"
]
@@ -599,8 +599,8 @@
" # Convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" pm.DensityDist('likelihood', lambda v: logl[-1](v), observed={'v': theta})\n",
" # use a Potential for the likelihood\n",
" pm.Potential('likelihood', logl[-1](theta))\n",
"\n",
" # Initialise an MLDA step method object, passing the subsampling rate and\n",
" # coarse models list\n",
@@ -352,9 +352,9 @@
" # convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" # use a Potential for the likelihood\n",
" ll = logl[j]\n",
" pm.DensityDist('likelihood', lambda v: ll(v), observed={'v': theta})\n",
" pm.Potential('likelihood', ll(theta))\n",
"\n",
" coarse_models.append(model)\n"
]
@@ -557,8 +557,8 @@
" # Convert m and c to a tensor vector\n",
" theta = tt.as_tensor_variable(parameters)\n",
"\n",
" # use a DensityDist (use a lamdba function to \"call\" the Op)\n",
" pm.DensityDist('likelihood', lambda v: logl[-1](v), observed={'v': theta})\n",
" # use a Potential for the likelihood\n",
" pm.Potential('likelihood', logl[-1](theta))\n",
"\n",
" # Initialise an MLDA step method object, passing the subsampling rate and\n",
" # coarse models list\n",
339 changes: 138 additions & 201 deletions docs/source/notebooks/SMC-ABC_Lotka-Volterra_example.ipynb

Large diffs are not rendered by default.
