Commit 07ab6ca

nb updated and added to examples
1 parent 08e0617

File tree: 6 files changed, +43 -29 lines


RELEASE-NOTES.md (+2 -2)

@@ -8,7 +8,7 @@
 - Improve NUTS initialization `advi+adapt_diag_grad` and add `jitter+adapt_diag_grad` (#2643)
 - Update loo, new improved algorithm (#2730)
 - New CSG (Constant Stochastic Gradient) approximate posterior sampling
-  algorithm added
+  algorithm (#2544)
 ### Fixes
 - Fixed `compareplot` to use `loo` output.
 - Add test for `model.logp_array` and `model.bijection` (#2724)
@@ -202,6 +202,7 @@ Taku Yoshioka <[email protected]>
 Peadar Coyle (springcoil) <[email protected]>
 Austin Rochford <[email protected]>
 Osvaldo Martin <[email protected]>
+Shashank Shekhar <[email protected]>

 In addition, the following community members contributed to this release:

@@ -250,7 +251,6 @@ Patricio Benavente <[email protected]>
 Raymond Roberts
 Rodrigo Benenson <[email protected]>
 Sergei Lebedev <[email protected]>
-Shashank Shekhar <[email protected]>
 Skipper Seabold <[email protected]>
 Thomas Kluyver <[email protected]>
 Tobias Knuth <[email protected]>

docs/source/examples.rst (+9)

@@ -78,3 +78,12 @@ Variational Inference
    notebooks/convolutional_vae_keras_advi.ipynb
    notebooks/empirical-approx-overview.ipynb
    notebooks/normalizing_flows_overview.ipynb
+
+
+Stochastic Gradient
+===================
+
+.. toctree::
+   notebooks/constant_stochastic_gradient.ipynb
+   notebooks/sgfs_simple_optimization.ipynb
+   notebooks/bayesian_neural_network_with_sgfs.ipynb

docs/source/notebooks/constant_stochastic_gradient.ipynb (+9 -15)

@@ -4,7 +4,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Introduction"
+   "# Constant Stochastic Gradient"
   ]
  },
  {
@@ -59,14 +59,8 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Sampling: Constant Stochastic Gradient"
-  ]
- },
- {
-  "cell_type": "markdown",
-  "metadata": {},
-  "source": [
-   "# Problem: A multivariate regression problem on the Protein Structure Properties dataset available at the [uci repo](https://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure)."
+   "We run the regression used to produce the results in Figure 1 of the paper. \n",
+   "It is a multivariate regression problem on the Protein Structure Properties dataset available at the [uci repo](https://archive.ics.uci.edu/ml/datasets/Physicochemical+Properties+of+Protein+Tertiary+Structure)."
   ]
  },
  {
@@ -378,7 +372,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## NUTS Trace Plot"
+   "### NUTS Trace Plot"
   ]
  },
  {
@@ -420,7 +414,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Preconditioned CSG Trace Plot"
+   "### Preconditioned CSG Trace Plot"
   ]
  },
  {
@@ -462,7 +456,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## SGFS Trace Plot"
+   "### SGFS Trace Plot"
   ]
  },
  {
@@ -504,7 +498,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Mean Absolute Error on Test Dataset"
+   "### Mean Absolute Error on Test Dataset"
   ]
  },
  {
@@ -622,7 +616,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "## Sample covariance projections on the smallest and largest components"
+   "### Sample covariance projections on the smallest and largest components"
   ]
  },
  {
@@ -760,7 +754,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-   "# Result"
+   "### Result"
   ]
  },
  {
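For orientation, the problem this notebook now opens with is an ordinary Bayesian linear regression. Below is a minimal sketch of such a model; the CASP.csv filename and the RMSD target with nine feature columns come from the UCI dataset page, while the priors and sampler settings are illustrative rather than the notebook's exact choices.

import pandas as pd
import pymc3 as pm

# CASP.csv from the UCI page: target RMSD plus nine physicochemical features F1..F9
data = pd.read_csv('CASP.csv')
y = data['RMSD'].values
X = data.drop('RMSD', axis=1).values
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize the features

with pm.Model() as model:
    w = pm.Normal('w', mu=0., sd=1., shape=X.shape[1])  # regression weights
    b = pm.Normal('b', mu=0., sd=1.)                    # intercept
    sigma = pm.HalfNormal('sigma', sd=1.)               # observation noise
    pm.Normal('y_obs', mu=pm.math.dot(X, w) + b, sd=sigma, observed=y)
    trace = pm.sample(1000)  # NUTS baseline, cf. the "NUTS Trace Plot" section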

docs/source/notebooks/sgfs_simple_optimization.ipynb (+12 -5)

@@ -1,5 +1,12 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Stochastic Gradient Fisher Scoring"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -185,21 +192,21 @@
  ],
  "metadata": {
   "kernelspec": {
-   "display_name": "Python 3",
+   "display_name": "Python 2",
    "language": "python",
-   "name": "python3"
+   "name": "python2"
   },
   "language_info": {
    "codemirror_mode": {
     "name": "ipython",
-    "version": 3
+    "version": 2
    },
    "file_extension": ".py",
    "mimetype": "text/x-python",
    "name": "python",
    "nbconvert_exporter": "python",
-   "pygments_lexer": "ipython3",
-   "version": "3.6.1"
+   "pygments_lexer": "ipython2",
+   "version": "2.7.13"
   },
   "toc": {
    "colors": {

pymc3/sampling.py (+2 -2)

@@ -10,7 +10,7 @@
 from .backends.base import BaseTrace, MultiTrace
 from .backends.ndarray import NDArray
 from .model import modelcontext, Point
-from .step_methods import (NUTS, HamiltonianMC, SGFS, CSG, Metropolis, BinaryMetropolis,
+from .step_methods import (NUTS, HamiltonianMC, Metropolis, BinaryMetropolis,
                            BinaryGibbsMetropolis, CategoricalGibbsMetropolis,
                            Slice, CompoundStep)
 from .util import update_start_vals
@@ -23,7 +23,7 @@

 __all__ = ['sample', 'iter_sample', 'sample_ppc', 'sample_ppc_w', 'init_nuts']

-STEP_METHODS = (NUTS, HamiltonianMC, SGFS, CSG, Metropolis, BinaryMetropolis,
+STEP_METHODS = (NUTS, HamiltonianMC, Metropolis, BinaryMetropolis,
                 BinaryGibbsMetropolis, Slice, CategoricalGibbsMetropolis)
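With SGFS and CSG dropped from STEP_METHODS, pm.sample() no longer auto-assigns them when step is None; they must be requested explicitly. A minimal sketch of that usage, assuming the batch_size/total_size/step_size arguments described in the BaseStochasticGradient docstring and a pm.generator-fed observed variable (the model and numbers are illustrative):

import numpy as np
import pymc3 as pm
from pymc3.step_methods import SGFS  # still importable, just never auto-selected

total_size, batch_size = 10000, 100
data = np.random.randn(total_size)  # toy observations

def minibatches():
    while True:  # endless stream of random minibatches for the GeneratorOp
        yield data[np.random.randint(0, total_size, batch_size)]

with pm.Model():
    mu = pm.Normal('mu', mu=0., sd=10.)
    pm.Normal('obs', mu=mu, sd=1., observed=pm.generator(minibatches()))
    step = SGFS(batch_size=batch_size, total_size=total_size, step_size=1.0)
    trace = pm.sample(5000, step=step)  # explicit step; auto-assignment would pick NUTS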

pymc3/step_methods/sgmcmc.py (+9 -5)

@@ -70,7 +70,7 @@ class BaseStochasticGradient(ArrayStepShared):
     variables of type `GeneratorOp`

     Parameters
-    -------
+    ----------
     vars : list
         List of variables for sampler
     batch_size`: int
@@ -206,7 +206,7 @@ class SGFS(BaseStochasticGradient):
     StochasticGradientFisherScoring

     Parameters
-    -----
+    ----------
     vars : list
         model variables
     B : np.array
@@ -216,10 +216,11 @@
         to the half of the previous step size

     References
-    -----
+    ----------
     - Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring
       Implements Algorithm 1 from the publication http://people.ee.duke.edu/%7Elcarin/782.pdf
     """
+    name = 'stochastic_gradient_fisher_scoring'

     def __init__(self, vars=None, B=None, step_size_decay=100, **kwargs):
         """
@@ -323,17 +324,20 @@ class CSG(BaseStochasticGradient):
     discusses a proof for the optimal preconditioning matrix
     based on variational inference, so there is no parameter tuning required
     like in the case of 'B' matrix used for preconditioning in SGFS.
+    Take a look at this example notebook:
+    https://github.com/pymc-devs/pymc3/tree/master/docs/source/notebooks/constant_stochastic_gradient.ipynb

     Parameters
-    -----
+    ----------
     vars : list
         model variables

     References
-    -----
+    ----------
     - Stochastic Gradient Descent as Approximate Bayesian Inference
       https://arxiv.org/pdf/1704.04289v1.pdf
     """
+    name = 'constant_stochastic_gradient'

     def __init__(self, vars=None, **kwargs):
         """
