
Commit 081b3a5 (parent: e94dbb7)

add the AverageLearner1D/2D to the docs

7 files changed: +264 −59 lines

docs/source/docs.rst (+2 −1)

@@ -16,8 +16,9 @@ The following learners are implemented:
 - `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
 - `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
-- `~adaptive.AverageLearner`, For stochastic functions where you want to
+- `~adaptive.AverageLearner`, for stochastic functions where you want to
   average the result over many evaluations,
+- `~adaptive.AverageLearner1D` and `~adaptive.AverageLearner2D`, like the ``Learner1D/2D`` but where every point is averaged over many evaluations,
 - `~adaptive.IntegratorLearner`, for
   when you want to integrate a 1D function ``f: ℝ → ℝ``.

Two new files (each +7 lines) add the API reference pages:

@@ -0,0 +1,7 @@
+adaptive.AverageLearner1D
+=========================
+
+.. autoclass:: adaptive.AverageLearner1D
+   :members:
+   :undoc-members:
+   :show-inheritance:

@@ -0,0 +1,7 @@
+adaptive.AverageLearner2D
+=========================
+
+.. autoclass:: adaptive.AverageLearner2D
+   :members:
+   :undoc-members:
+   :show-inheritance:

docs/source/reference/adaptive.rst (+2)

@@ -14,6 +14,8 @@ Learners
    adaptive.learner.learner1D
    adaptive.learner.learner2D
    adaptive.learner.learnerND
+   adaptive.learner.average1D
+   adaptive.learner.average2D
    adaptive.learner.skopt_learner

 Runners

docs/source/tutorial/tutorial.AverageLearner.rst (−57)

This file was deleted.
New file (+245 lines; content shown without diff markers):

Tutorial AverageLearners (0D, 1D, and 2D)
-----------------------------------------

.. note::
    Because this documentation consists of static html, the ``live_plot``
    and ``live_info`` widgets are not live. Download the notebook
    in order to see the real behaviour.

.. seealso::
    The complete source code of this tutorial can be found in
    :jupyter-download:notebook:`tutorial.AverageLearners`

.. jupyter-execute::
    :hide-code:

    import adaptive
    adaptive.notebook_extension(_inline_js=False)

`~adaptive.AverageLearner` (0D)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This learner averages a function until the uncertainty in
the average meets some condition.

This is useful for sampling a random variable. The function passed to
the learner must formally take a single parameter, which should be used
like a “seed” for the (pseudo-) random variable (although in the current
implementation the seed parameter can be ignored by the function).
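Conceptually, the stopping rule can be pictured with a small standalone sketch. This is hypothetical illustration code, not adaptive's implementation (the real `~adaptive.AverageLearner` also supports an absolute tolerance ``atol``); ``average_until_converged`` and ``g`` are illustrative names:

```python
import random
import statistics

def average_until_converged(f, rtol, min_samples=10, max_samples=100_000):
    """Average f(seed) over seeds 0, 1, 2, ... until the standard error
    of the mean drops below rtol * |mean| (or max_samples is reached)."""
    values = []
    for seed in range(max_samples):
        values.append(f(seed))
        n = len(values)
        if n >= min_samples:
            mean = statistics.fmean(values)
            std_err = statistics.stdev(values) / n**0.5
            if std_err < rtol * abs(mean):
                break
    return mean, std_err, n

def g(seed):
    # a seeded Gaussian random variable with mean 0.5
    return random.Random(seed).gauss(0.5, 1)

mean, std_err, n = average_until_converged(g, rtol=0.05)
```

Since the sample standard deviation here is about 1, reaching ``rtol=0.05`` around a mean of 0.5 takes on the order of a thousand samples.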
.. jupyter-execute::

    def g(n):
        import random
        from time import sleep
        sleep(random.random() / 1000)
        # Properly save and restore the RNG state
        state = random.getstate()
        random.seed(n)
        val = random.gauss(0.5, 1)
        random.setstate(state)
        return val

.. jupyter-execute::

    learner = adaptive.AverageLearner(g, atol=None, rtol=0.05)
    # `loss < 1` means that we reached the `rtol` or `atol`
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1)

.. jupyter-execute::
    :hide-code:

    await runner.task  # This is not needed in a notebook environment!

.. jupyter-execute::

    runner.live_info()

.. jupyter-execute::

    runner.live_plot(update_interval=0.1)
`~adaptive.AverageLearner1D` and `~adaptive.AverageLearner2D`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This learner combines the `~adaptive.Learner1D` (or `~adaptive.Learner2D`)
with the `~adaptive.AverageLearner`, so that it can handle
stochastic functions of one (or two) variables.

Here, when choosing points the learner can either:

* add more values/seeds to existing points
* add more intervals (or triangles)

To decide, the ``learner`` compares **the loss of intervals (or triangles)** with the **standard error** of an existing point.

The relative importance of both can be adjusted by the hyperparameter ``learner.average_priority``; see the doc-string for more information.

See the following plot for a visual explanation.
.. jupyter-execute::
    :hide-code:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib import rcParams
    %matplotlib inline
    rcParams['figure.dpi'] = 300
    rcParams['text.usetex'] = True

    np.random.seed(1)
    xs = np.sort(np.random.uniform(-1, 1, 3))
    errs = np.abs(np.random.randn(3))
    ys = xs**3
    means = lambda x: np.convolve(x, np.ones(2) / 2, mode='valid')
    xs_means = means(xs)
    ys_means = means(ys)

    fig, ax = plt.subplots()
    plt.scatter(xs, ys, c='k')
    ax.errorbar(xs, ys, errs, capsize=5, c='k')
    ax.annotate(
        s=r'$L_{1,2} = \sqrt{\Delta x^2 + \Delta \bar{y}^2}$',
        xy=(np.mean([xs[0], xs[1], xs[1]]),
            np.mean([ys[0], ys[1], ys[1]])),
        xytext=(xs_means[0], ys_means[0] + 1),
        arrowprops=dict(arrowstyle='->'),
        ha='center',
    )

    for i, (x, y, err) in enumerate(zip(xs, ys, errs)):
        # label the standard error of each mean
        err_str = fr'${{\sigma}}_{{\bar {{y}}_{i+1}}}$'
        ax.annotate(
            s=err_str,
            xy=(x, y + err/2),
            xytext=(x + 0.1, y + err + 0.5),
            arrowprops=dict(arrowstyle='->'),
            ha='center',
        )

        # label each point (x_i, ȳ_i)
        ax.annotate(
            s=fr'$x_{i+1}, \bar{{y}}_{i+1}$',
            xy=(x, y),
            xytext=(x + 0.1, y - 0.5),
            arrowprops=dict(arrowstyle='->'),
            ha='center',
        )

    ax.scatter(xs, ys, c='green', s=5, zorder=5, label='more seeds')
    ax.scatter(xs_means, ys_means, c='red', s=5, zorder=5, label='new point')
    ax.legend()

    ax.text(
        x=0.5,
        y=0.0,
        s=(r'$\textrm{if}\; \max{(L_{i,i+1})} > \textrm{average\_priority} \cdot \max{\sigma_{\bar{y}_{i}}} \rightarrow\; \textrm{add new point}$'
           '\n'
           r'$\textrm{if}\; \max{(L_{i,i+1})} < \textrm{average\_priority} \cdot \max{\sigma_{\bar{y}_{i}}} \rightarrow\; \textrm{add new seeds}$'),
        horizontalalignment='center',
        verticalalignment='center',
        transform=ax.transAxes,
    )
    ax.set_title('AverageLearner1D')
    ax.axis('off')
    plt.show()
In this plot :math:`L_{i,i+1}` is the default ``learner.loss_per_interval`` and :math:`\sigma_{\bar{y}_i}` is the standard error of the mean.
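For reference, the standard error of the mean is the sample standard deviation divided by :math:`\sqrt{n}`. A minimal sketch (plain Python, not adaptive's own code):

```python
import statistics

def standard_error(values):
    # sigma_mean = s / sqrt(n), with s the sample standard deviation
    return statistics.stdev(values) / len(values) ** 0.5

se = standard_error([1.0, 2.0, 3.0, 4.0])
```

For ``[1.0, 2.0, 3.0, 4.0]`` the sample standard deviation is about 1.291, giving a standard error of about 0.645.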
Basically, we put all losses per interval and all standard errors
(scaled by ``average_priority``) into one list; the point or interval
with the maximal value is chosen.

Note that all :math:`x`, :math:`y` (and :math:`z` in 2D) values are rescaled to lie inside
the unit square (or cube) when computing both the ``loss_per_interval`` and the standard error.
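This maximum-based choice can be sketched as follows. ``choose_candidate`` is a hypothetical helper, not the actual `~adaptive.AverageLearner1D` internals, and its inputs are assumed to already be computed on the rescaled data:

```python
def choose_candidate(interval_losses, point_errors, average_priority):
    """Decide between subdividing an interval and re-sampling a point.

    interval_losses: dict mapping (x_left, x_right) -> loss per interval
    point_errors: dict mapping x -> standard error of the mean at x
    Returns ('new_point', interval) or ('new_seed', x), whichever has
    the largest (scaled) value.
    """
    # Interval losses compete directly ...
    candidates = [(loss, ('new_point', interval))
                  for interval, loss in interval_losses.items()]
    # ... against standard errors scaled by the hyperparameter.
    candidates += [(average_priority * err, ('new_seed', x))
                   for x, err in point_errors.items()]
    return max(candidates, key=lambda c: c[0])[1]
```

With a large ``average_priority`` the scaled standard errors dominate and the learner mostly adds seeds to existing points; with a small one it mostly subdivides intervals.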
.. warning::
    If you choose ``average_priority`` too low, the standard errors :math:`\sigma_{\bar{y}_i}` will be high.
    This leads to poorly estimated averages :math:`\bar{y}_i`, so points that are close together can appear to be far apart.
    This in turn results in new points being added unnecessarily and an unstable sampling algorithm!


Let's again try to learn some functions, but now with uniform noise (and `heteroscedastic <https://en.wikipedia.org/wiki/Heteroscedasticity>`_ noise in 2D). We start with 1D and then move on to 2D.
`~adaptive.AverageLearner1D`
............................

.. jupyter-execute::

    def noisy_peak(x_seed):
        import random
        x, seed = x_seed
        # seed with the whole (x, seed) pair to make the noise deterministic;
        # str() because random.seed no longer accepts tuples
        random.seed(str(x_seed))
        a = 0.01
        peak = x + a**2 / (a**2 + x**2)
        noise = random.uniform(-0.5, 0.5)
        return peak + noise

    learner = adaptive.AverageLearner1D(noisy_peak, bounds=(-1, 1), average_priority=40)
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.05)
    runner.live_info()

.. jupyter-execute::
    :hide-code:

    await runner.task  # This is not needed in a notebook environment!

.. jupyter-execute::

    %%opts Image {+axiswise} [colorbar=True]
    # We plot the average

    def plotter(learner):
        plot = learner.plot()
        number_of_points = learner.mean_values_per_point()
        title = f'loss={learner.loss():.3f}, mean_npoints={number_of_points}'
        return plot.opts(plot=dict(title_format=title))

    runner.live_plot(update_interval=0.1, plotter=plotter)
`~adaptive.AverageLearner2D`
............................

.. jupyter-execute::

    def noisy_ring(xy_seed):
        import numpy as np
        import random
        (x, y), seed = xy_seed
        # seed with the whole ((x, y), seed) pair to make the noise deterministic;
        # str() because random.seed no longer accepts tuples
        random.seed(str(xy_seed))
        a = 0.2
        z = (x**2 + y**2 - 0.75**2) / a**2
        plateau = np.arctan(z)
        noise = random.uniform(-2, 2) * np.exp(-z**2)
        return plateau + noise

    learner = adaptive.AverageLearner2D(noisy_ring, bounds=[(-1, 1), (-1, 1)])
    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01)
    runner.live_info()

.. jupyter-execute::
    :hide-code:

    await runner.task  # This is not needed in a notebook environment!

See the average number of values per point with:

.. jupyter-execute::

    learner.mean_values_per_point()

Let's plot the average and the number of values per point.
Because the noise lies on a circle, we expect the number of values per
point to be higher on the circle.

.. jupyter-execute::

    %%opts Image {+axiswise} [colorbar=True]
    # We plot the average and the standard deviation
    def plotter(learner):
        return (learner.plot_std_or_n('mean')
                + learner.plot_std_or_n('std')
                + learner.plot_std_or_n('n')).cols(2)

    runner.live_plot(update_interval=0.1, plotter=plotter)

docs/source/tutorial/tutorial.rst (+1 −1)

@@ -35,7 +35,7 @@
    tutorial.Learner1D
    tutorial.Learner2D
    tutorial.custom_loss
-   tutorial.AverageLearner
+   tutorial.AverageLearners
    tutorial.BalancingLearner
    tutorial.DataSaver
    tutorial.IntegratorLearner
