* Fix Stucchio URL (backslashes were appearing in the actual URL)
* Update bibtex-tidy
* Fix Padonou URL
* Increase Node version 15→18
* Run pre-commit autoupdate
* Run pre-commit on all files
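The two URL fixes address backslash escapes leaking into the rendered link targets. A minimal sketch of that failure mode and its repair, using a hypothetical URL (the actual Stucchio and Padonou URLs are not shown in this diff):

```python
# Hypothetical URL whose TeX-style escape backslash leaked into the link target
broken = r"https://example.com/some\_page"

# Stripping the escape backslash restores the real URL
fixed = broken.replace("\\_", "_")
print(fixed)  # https://example.com/some_page
```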
examples/case_studies/GEV.myst.md (+1 −5)

@@ -10,8 +10,6 @@ kernelspec:
   name: pymc4-dev
 ---
 
-+++ {"tags": []}
-
 # Generalized Extreme Value Distribution
 
 :::{post} Sept 27, 2022
@@ -20,7 +18,7 @@ kernelspec:
 :author: Colin Caprani
 :::
 
-+++ {"tags": []}
++++
 
 ## Introduction
@@ -94,8 +92,6 @@ And now set up the model using priors estimated from a quick review of the histo
 - $\xi$: we are agnostic to the tail behaviour so centre this at zero, but limit to physically reasonable bounds of $\pm 0.6$, and keep it somewhat tight near zero.
 
 ```{code-cell} ipython3
-:tags: []
-
 # Optionally centre the data, depending on fitting and divergences
examples/case_studies/bart_heteroscedasticity.myst.md (−6)

@@ -24,8 +24,6 @@ kernelspec:
 In this notebook we show how to use BART to model heteroscedasticity as described in Section 4.1 of [`pymc-bart`](https://github.com/pymc-devs/pymc-bart)'s paper {cite:p}`quiroga2022bart`. We use the `marketing` data set provided by the R package `datarium` {cite:p}`kassambara2019datarium`. The idea is to model a marketing channel contribution to sales as a function of budget.
 
 ```{code-cell} ipython3
-:tags: []
-
 import os
 
 import arviz as az
@@ -37,8 +35,6 @@ import pymc_bart as pmb
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 %config InlineBackend.figure_format = "retina"
 az.style.use("arviz-darkgrid")
 plt.rcParams["figure.figsize"] = [10, 6]
@@ -157,8 +153,6 @@ The fit looks good! In fact, we see that the mean and variance increase as a fun
examples/case_studies/binning.myst.md (−40)

@@ -106,8 +106,6 @@ Hypothetically we could have used base python, or numpy, to describe the generat
 The approach was illustrated with a Gaussian distribution, and below we show a number of worked examples using Gaussian distributions. However, the approach is general, and at the end of the notebook we provide a demonstration that the approach does indeed extend to non-Gaussian distributions.
 
 ```{code-cell} ipython3
-:tags: []
-
 import warnings
 
 import arviz as az
@@ -219,8 +217,6 @@ We will start by investigating what happens when we use only one set of bins to
 ### Model specification
 
 ```{code-cell} ipython3
-:tags: []
-
 with pm.Model() as model1:
     sigma = pm.HalfNormal("sigma")
     mu = pm.Normal("mu")
@@ -235,8 +231,6 @@ pm.model_to_graphviz(model1)
 ```
 
 ```{code-cell} ipython3
-:tags: []
-
 with model1:
     trace1 = pm.sample()
 ```
@@ -248,8 +242,6 @@ Given the posterior values,
 we should be able to generate observations that look close to what we observed.
 
 ```{code-cell} ipython3
-:tags: []
-
 with model1:
     ppc = pm.sample_posterior_predictive(trace1)
 ```
@@ -294,22 +286,16 @@ The more important question is whether we have recovered the parameters of the d
 Recall that we used `mu = -2` and `sigma = 2` to generate the data.
 Pretty good! And we can access the posterior mean estimates (stored as [xarray](http://xarray.pydata.org/en/stable/index.html) types) as below. The MCMC samples arrive back in a 2D matrix with one dimension for the MCMC chain (`chain`), and one for the sample number (`draw`). We can calculate the overall posterior average with `.mean(dim=["draw", "chain"])`.
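The posterior-averaging step described in that last hunk (a 2D chain × draw matrix collapsed over both MCMC dimensions) can be sketched with plain NumPy standing in for xarray's `.mean(dim=["draw", "chain"])`; the sample values here are fabricated for illustration, drawn around the notebook's true value `mu = -2`:

```python
import numpy as np

# Emulate MCMC output: 4 chains x 1000 draws of a scalar parameter,
# drawn around the true value mu = -2 used to generate the data
rng = np.random.default_rng(42)
samples = rng.normal(loc=-2.0, scale=0.1, size=(4, 1000))  # shape (chain, draw)

# Overall posterior mean: average over both the chain and draw dimensions,
# the NumPy analogue of xarray's .mean(dim=["draw", "chain"])
posterior_mean = samples.mean(axis=(0, 1))
print(posterior_mean)  # close to -2.0
```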