
Commit 1166fe9

Improve Readability

Signed-off-by: Ryan Russell <[email protected]>
1 parent: bc6c411

11 files changed: +19 −19 lines changed

CONTRIBUTING.md

+1 −1

@@ -87,7 +87,7 @@ example invocation (presumed to run from the root of the TFP repo:
 
 To run the unit tests, you'll need several packages installed (again, we
 strongly recommend you work in a virtualenv). We include a script to do this for
-you, which also does some sanity checks on the environtment:
+you, which also does some sanity checks on the environment:
 
 ```shell
 ./testing/install_test_dependencies.sh

discussion/technical_note_on_unrolled_nuts.md

+2 −2

@@ -13,7 +13,7 @@ two reasons:
 
 To accomodate these concerns our implementation makes the following
 novel observations:
-- We *offline enumerate* the recursion possibilties and note all read/write
+- We *offline enumerate* the recursion possibilities and note all read/write
   operations.
 - We pre-allocate requisite memory (for what would otherwise be the recursion
   stack).

@@ -258,7 +258,7 @@ step 1(0): x0 ==> U([x0], [1]) ==> x1 --> MH([x',x1], 1/1) --> x''
 ## Performance Optimization
 
 Using a memory slot of the size 2^max_tree_depth like above is quite
-convenient for both sampling and u turn check, as we have the whole history avaiable
+convenient for both sampling and u turn check, as we have the whole history available
 and can easily index to it. In practice, while it works well for small
 problem, users could quickly ran into memory problem with large batch size (i.e.,
 number of chains), large latent size (i.e., dimension of the free parameters),
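To make the memory concern in that hunk concrete, here is a rough back-of-the-envelope sketch (the helper name and example sizes are illustrative, not from the repo): the pre-allocated history grows as batch size × latent size × 2^max_tree_depth.

```python
# Illustrative estimate of the pre-allocated NUTS history buffer described
# above; the function name and example numbers are hypothetical.
def nuts_history_bytes(batch_size, latent_size, max_tree_depth,
                       bytes_per_float=4):
  # One float per (chain, latent dimension, tree slot), with
  # 2**max_tree_depth slots kept for sampling and the U-turn check.
  return batch_size * latent_size * (2**max_tree_depth) * bytes_per_float

# 4096 chains of a 1000-dimensional target at the default max_tree_depth=10:
print(nuts_history_bytes(4096, 1000, 10) / 2**30)  # ~15.6 GiB in float32
```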

discussion/turnkey_inference_candidate/README.md

+5 −5

@@ -4,8 +4,8 @@ This directory contains proposals and design documents for turnkey inference.
 
 Goal: user specifies how many MCMC samples (or effective samples) they want, and
 the sampling method takes care of the rest. This includes the definition of
-`target_log_prob_fn`, inital states, and choosing the optimal
-(paramterization of) the `TransitionKernel`.
+`target_log_prob_fn`, initial states, and choosing the optimal
+(parameterization of) the `TransitionKernel`.
 
 ### An expanding window tuning for HMC/NUTS
 

@@ -24,14 +24,14 @@ posterior.
 Currently, the TFP NUTS implementation has a speed bottleneck of waiting for the
 slowest chain/batch (due to the SIMD nature), and it could seriously hinder
 performance, especially when the (initial) step size is poorly chosen. Thus,
-our strategy here is to run very few chains in the inital warm up (1 or 2).
+our strategy here is to run very few chains in the initial warm up (1 or 2).
 Moreover, by analogy to Stan's expanding memoryless windows (stage II of Stan's
-automatic parameter tuning), we implmented an expanding batch, fixed step count
+automatic parameter tuning), we implemented an expanding batch, fixed step count
 method.
 
 It is worth noting that, in TFP HMC step sizes are defined per dimension of the
 target_log_prob_fn. To separate the tuning of the step size (a scalar) and the
-mass matrix (a vector for diagnoal mass matrix), we apply an inner transform
+mass matrix (a vector for diagonal mass matrix), we apply an inner transform
 transition kernel (recall that the covariance matrix Σ acts as a Euclidean
 metric to rotate and scale the target_log_prob_fn) using a shift and scale
 bijector.
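The shift-and-scale preconditioning in that last paragraph can be sketched with standard TFP pieces. The mean and stddev estimates below are hypothetical placeholders; in the actual procedure they would come from earlier tuning windows.

```python
# Minimal sketch of preconditioning NUTS with a shift-and-scale bijector.
# `estimated_mean` and `estimated_stddev` stand in for statistics gathered
# during warm-up; they are not part of the diffed code.
import tensorflow_probability as tfp

tfb = tfp.bijectors

def precondition_nuts(target_log_prob_fn, estimated_mean, estimated_stddev):
  # forward: z -> estimated_stddev * z + estimated_mean, so the inner kernel
  # sees a roughly isotropic target and a single scalar step size suffices.
  shift_and_scale = tfb.Chain(
      [tfb.Shift(estimated_mean), tfb.Scale(estimated_stddev)])
  return tfp.mcmc.TransformedTransitionKernel(
      inner_kernel=tfp.mcmc.NoUTurnSampler(
          target_log_prob_fn=target_log_prob_fn, step_size=0.1),
      bijector=shift_and_scale)
```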

discussion/turnkey_inference_candidate/window_tune_nuts_sampling.py

+3 −3

@@ -365,13 +365,13 @@ def window_tune_nuts_sampling(target_log_prob,
       (possibly unnormalized) log-density under the target distribution.
     prior_samples: Nested structure of `Tensor`s, each of shape `[batches,
       latent_part_event_shape]` and should be sample from the prior. They are
-      used to generate an inital chain position if `init_state` is not supplied.
+      used to generate an initial chain position if `init_state` is not supplied.
     constraining_bijectors: `tfp.distributions.Bijector` or list of
       `tfp.distributions.Bijector`s. These bijectors use `forward` to map the
       state on the real space to the constrained state expected by
       `target_log_prob`.
     init_state: (Optional) `Tensor` or Python `list` of `Tensor`s representing
-      the inital state(s) of the Markov chain(s).
+      the initial state(s) of the Markov chain(s).
     num_samples: Integer number of the Markov chain draws after tuning.
     nchains: Integer number of the Markov chains after tuning.
     init_nchains: Integer number of the Markov chains in the first phase of

@@ -380,7 +380,7 @@ def window_tune_nuts_sampling(target_log_prob,
       probability for step size adaptation.
     max_tree_depth: Maximum depth of the tree implicitly built by NUTS. See
       `tfp.mcmc.NoUTurnSampler` for more details
-    use_scaled_init: Boolean. If `True`, generate inital state within [-1, 1]
+    use_scaled_init: Boolean. If `True`, generate initial state within [-1, 1]
       scaled by prior sample standard deviation in the unconstrained real space.
       This kwarg is ignored if `init_state` is not None
     tuning_window_schedule: List-like sequence of integers that specify the
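Based only on the docstring arguments visible in these hunks, a call might look like the following sketch. The target density and constraining bijector are illustrative stand-ins, and the return structure is not shown in the diff.

```python
# Hypothetical invocation assembled from the documented arguments above.
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

target = tfd.LogNormal(loc=0., scale=1.)  # stand-in target density
prior_samples = target.sample(100)        # seeds the initial chain positions

results = window_tune_nuts_sampling(
    target_log_prob=target.log_prob,
    prior_samples=prior_samples,
    constraining_bijectors=tfb.Exp(),  # real line -> positive support
    num_samples=500,
    nchains=4)
```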

spinoffs/inference_gym/README.md

+1 −1

@@ -36,7 +36,7 @@ Check out the [tutorial].
 
 ```bash
 pip install tfp-nightly inference_gym
-# Install at least one the folowing
+# Install at least one the following
 pip install tf-nightly # For the TensorFlow backend.
 pip install jax jaxlib # For the JAX backend.
 # Install to support external datasets

spinoffs/inference_gym/model_contract.md

+2 −2

@@ -14,7 +14,7 @@ others for future refinement.
 The primary use case of a model is to be able to run an inference algorithm on
 it. The secondary goal is to be able to verify the accuracy of the algorithm.
 There are other finer points of usability which also matter, but the overarching
-princple of the contract for models is that it's better to have a model usable
+principle of the contract for models is that it's better to have a model usable
 for its primary use case without all the nice-to-haves, rather than not have
 the model available at all.
 

@@ -82,7 +82,7 @@ argument for inclusion of the model.
    example, regression models should support computing held-out negative
    log-likelihood. Rationale: This is similar to having a standard
    parameterization. In this case, there are certain transformations which are
-   natural to look at when analyizing a model.
+   natural to look at when analyzing a model.
 
 4. If the model has analytic ground truth values, they should be filled in.
    Rationale: Ground truth values enable one way of measuring the bias of an

tensorflow_probability/examples/jupyter_notebooks/Undocumented_Infection_and_the_Dissemination_of_SARS-CoV2.ipynb

+1 −1

@@ -7623,7 +7623,7 @@
 " exposed[..., WUHAN_IDX] = wuhan_exposed\n",
 " undocumented_infectious[..., WUHAN_IDX] = wuhan_undocumented_infectious\n",
 "\n",
-" # Following Li et al, we do not remove the inital exposed and infectious\n",
+" # Following Li et al, we do not remove the initial exposed and infectious\n",
 " # persons from the susceptible population.\n",
 " return SEIRComponents(\n",
 " susceptible=tf.constant(susceptible),\n",

tensorflow_probability/python/experimental/distributions/marginal_fns.py

+1 −1

@@ -167,7 +167,7 @@ def retrying_cholesky(
 
   Args:
     matrix: A batch of symmetric square matrices, with shape `[..., n, n]`.
-    jitter: Initial jitter to add to the diagnoal. Default: 1e-6, unless
+    jitter: Initial jitter to add to the diagonal. Default: 1e-6, unless
       `matrix.dtype` is float64, in which case the default is 1e-10.
     max_iters: Maximum number of times to retry the Cholesky decomposition
       with larger diagonal jitter. Default: 5.
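The retry pattern this docstring describes can be sketched in a few lines of NumPy. The actual TFP implementation is batched TensorFlow code; the tenfold jitter growth below is an illustrative assumption, not taken from the source.

```python
# Minimal single-matrix sketch of retrying Cholesky with growing jitter.
import numpy as np

def retrying_cholesky_sketch(matrix, jitter=1e-6, max_iters=5):
  eye = np.eye(matrix.shape[-1], dtype=matrix.dtype)
  for _ in range(max_iters):
    try:
      # Succeeds once `matrix + jitter * I` is numerically positive definite.
      return np.linalg.cholesky(matrix + jitter * eye)
    except np.linalg.LinAlgError:
      jitter *= 10.0  # assumed growth factor; retry with larger jitter
  raise np.linalg.LinAlgError('Cholesky failed after %d retries.' % max_iters)
```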

tensorflow_probability/python/experimental/mcmc/preconditioned_nuts.py

+1 −1

@@ -731,7 +731,7 @@ def _build_sub_tree(self,
                      name=None):
     with tf.name_scope('build_sub_tree'):
       batch_shape = ps.shape(current_step_meta_info.init_energy)
-      # We never want to select the inital state
+      # We never want to select the initial state
       if MULTINOMIAL_SAMPLE:
         init_weight = tf.fill(
             batch_shape,

tensorflow_probability/python/mcmc/internal/slice_sampler_utils.py

+1 −1

@@ -84,7 +84,7 @@ def _left_doubling_increments(batch_shape, max_doublings, step_size, seed=None,
   widths = width_multipliers * step_size
 
   # Take the cumulative sum of the left side increments in slice width to give
-  # the resulting distance from the inital lower bound.
+  # the resulting distance from the initial lower bound.
   left_increments = tf.cumsum(widths * expand_left, exclusive=True, axis=0)
   return left_increments, widths

tensorflow_probability/python/mcmc/nuts.py

+1 −1

@@ -722,7 +722,7 @@ def _build_sub_tree(self,
                      name=None):
     with tf.name_scope('build_sub_tree'):
       batch_shape = ps.shape(current_step_meta_info.init_energy)
-      # We never want to select the inital state
+      # We never want to select the initial state
       if MULTINOMIAL_SAMPLE:
         init_weight = tf.fill(
             batch_shape,
