@@ -4,8 +4,8 @@ This directory contains proposals and design documents for turnkey inference.
Goal: user specifies how many MCMC samples (or effective samples) they want, and
the sampling method takes care of the rest. This includes the definition of
-`target_log_prob_fn`, inital states, and choosing the optimal
-(paramterization of) the `TransitionKernel`.
+`target_log_prob_fn`, initial states, and choosing the optimal
+(parameterization of) the `TransitionKernel`.
### An expanding window tuning for HMC/NUTS
@@ -24,14 +24,14 @@ posterior.
Currently, the TFP NUTS implementation has a speed bottleneck of waiting for the
slowest chain/batch (due to the SIMD nature), and it could seriously hinder
performance, especially when the (initial) step size is poorly chosen. Thus,
-our strategy here is to run very few chains in the inital warm up (1 or 2).
+our strategy here is to run very few chains in the initial warm up (1 or 2).
Moreover, by analogy to Stan's expanding memoryless windows (stage II of Stan's
-automatic parameter tuning), we implmented an expanding batch, fixed step count
+automatic parameter tuning), we implemented an expanding batch, fixed step count
method.
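The expanding memoryless windows mentioned above can be sketched as a simple schedule: each warm-up window doubles in size, and the final window absorbs any remainder rather than leaving a stub. The `expanding_windows` helper and its doubling/absorb rule are an illustrative assumption modeled on Stan's stage-II schedule, not the actual TFP implementation:

```python
def expanding_windows(total_warmup, first_window=25):
    """Split a warm-up budget into memoryless windows that double in
    size, by analogy to stage II of Stan's adaptation schedule.
    Illustrative sketch only, not the TFP implementation."""
    windows, size = [], first_window
    remaining = total_warmup
    while remaining > 0:
        if 3 * size > remaining:
            # The remainder cannot hold this window plus a doubled
            # successor, so the last window absorbs everything left.
            windows.append(remaining)
            remaining = 0
        else:
            windows.append(size)
            remaining -= size
            size *= 2
    return windows
```

For a 200-step warm-up this yields windows of 25, 50, and 125 steps; the mass-matrix estimate is reset at each window boundary so late windows, computed from more draws, dominate the adaptation.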
It is worth noting that, in TFP HMC step sizes are defined per dimension of the
target_log_prob_fn. To separate the tuning of the step size (a scalar) and the
-mass matrix (a vector for diagnoal mass matrix), we apply an inner transform
+mass matrix (a vector for diagonal mass matrix), we apply an inner transform
transition kernel (recall that the covariance matrix Σ acts as a Euclidean
metric to rotate and scale the target_log_prob_fn) using a shift and scale
bijector.
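The shift-and-scale trick above can be sketched in plain NumPy: sampling `z` under the pullback target with a scalar step size and identity mass matrix is equivalent to sampling `x = shift + scale * z` with diagonal mass matrix `diag(scale**2)`, which is what lets the scalar step size and the per-dimension mass matrix be tuned separately. The toy Gaussian target and function names here are illustrative assumptions, not the TFP kernel:

```python
import numpy as np

def target_log_prob(x, mean, stddev):
    # Toy target: independent Gaussians with per-dimension scales
    # (unnormalized, which is all HMC needs).
    return -0.5 * np.sum(((x - mean) / stddev) ** 2)

def transformed_log_prob(z, shift, scale, mean, stddev):
    # Pull the target back through the affine bijector x = shift + scale * z.
    # The log-det Jacobian of this bijector is the constant sum(log(scale)).
    x = shift + scale * z
    return target_log_prob(x, mean, stddev) + np.sum(np.log(scale))

mean = np.array([1.0, -2.0])
stddev = np.array([0.5, 3.0])

# With shift = mean and scale = stddev, the transformed target is a
# standard normal in z (up to the constant Jacobian term), so one
# scalar step size works equally well in every dimension.
lp = transformed_log_prob(np.zeros(2), mean, stddev, mean, stddev)
```

In TFP terms this corresponds to wrapping the inner kernel in a `TransformedTransitionKernel` with a `Shift`/`Scale` bijector whose parameters come from the running posterior moment estimates.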