diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index bee37e90e..0042031bc 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -1,27 +1,27 @@ repos: -- repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.2.0 - hooks: - - id: trailing-whitespace - - id: end-of-file-fixer - - id: check-docstring-first - - id: check-yaml - - id: debug-statements - - id: check-ast -- repo: https://github.com/ambv/black - rev: 22.3.0 - hooks: - - id: black -- repo: https://github.com/asottile/pyupgrade - rev: v2.32.1 - hooks: - - id: pyupgrade - args: ['--py37-plus'] -- repo: https://github.com/timothycrosley/isort - rev: 5.10.1 - hooks: - - id: isort -- repo: https://gitlab.com/pycqa/flake8 - rev: 3.9.2 - hooks: - - id: flake8 + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v4.2.0 + hooks: + - id: trailing-whitespace + - id: end-of-file-fixer + - id: check-docstring-first + - id: check-yaml + - id: debug-statements + - id: check-ast + - repo: https://github.com/ambv/black + rev: 22.3.0 + hooks: + - id: black + - repo: https://github.com/asottile/pyupgrade + rev: v2.32.1 + hooks: + - id: pyupgrade + args: ["--py37-plus"] + - repo: https://github.com/timothycrosley/isort + rev: 5.10.1 + hooks: + - id: isort + - repo: https://gitlab.com/pycqa/flake8 + rev: 3.9.2 + hooks: + - id: flake8 diff --git a/AUTHORS.md b/AUTHORS.md index 415875dc8..69a2ba67e 100644 --- a/AUTHORS.md +++ b/AUTHORS.md @@ -2,18 +2,22 @@ The current maintainers of Adaptive are: -- [Bas Nijholt]() -- [Joseph Weston]() -- [Anton Akhmerov]() +- [Bas Nijholt]() ([@basnijholt](https://github.com/basnijholt)) +- [Joseph Weston]() ([@jbweston](https://github.com/jbweston)) +- [Anton Akhmerov]() ([@akhmerov](https://github.com/akhmerov)) Other contributors to Adaptive include: -- Andrey E. Antipov +- Andrey E. 
+- Andrey E. Antipov ([@aeantipov](https://github.com/aeantipov))
 - [Christoph Groth]()
-- Jorn Hoofwijk
-- Philippe Solodov (@philippeitis)
-- Victor Negîrneac (@caenrigen)
-- Thomas A Caswell (@tacaswell)
-- Álvaro Gómez Iñesta (@AlvaroGI)
-- Sultan Orazbayev (@SultanOrazbayev)
-- Thomas Aarholt (@thomasaarholt)
+- Jorn Hoofwijk ([@jhoofwijk](https://github.com/jhoofwijk))
+- Philippe Solodov ([@philippeitis](https://github.com/philippeitis))
+- Victor Negîrneac ([@caenrigen](https://github.com/caenrigen))
+- Thomas A Caswell ([@tacaswell](https://github.com/tacaswell))
+- Álvaro Gómez Iñesta ([@AlvaroGI](https://github.com/AlvaroGI))
+- Sultan Orazbayev ([@SultanOrazbayev](https://github.com/SultanOrazbayev))
+- Thomas Aarholt ([@thomasaarholt](https://github.com/thomasaarholt))
+- Andrea Maiani ([@maiani](https://github.com/maiani))
+- Juan Daniel Torres ([@juandaanieel](https://github.com/juandaanieel))
+- Davide Sandonà ([@Davide-sd](https://github.com/Davide-sd))
+- Pieter Eendebak ([@eendebakpt](https://github.com/eendebakpt))
diff --git a/README.md b/README.md
new file mode 100644
index 000000000..5032b0cbc
--- /dev/null
+++ b/README.md
@@ -0,0 +1,167 @@
+
+
+# ![logo](https://adaptive.readthedocs.io/en/latest/_static/logo.png) adaptive
+
+[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=example-notebook.ipynb)
+[![Conda](https://img.shields.io/badge/install%20with-conda-green.svg)](https://anaconda.org/conda-forge/adaptive)
+[![Coverage](https://img.shields.io/codecov/c/github/python-adaptive/adaptive)](https://codecov.io/gh/python-adaptive/adaptive)
+[![DOI](https://img.shields.io/badge/doi-10.5281%2Fzenodo.1182437-blue.svg)](https://doi.org/10.5281/zenodo.1182437)
+[![Documentation](https://readthedocs.org/projects/adaptive/badge/?version=latest)](https://adaptive.readthedocs.io/en/latest/?badge=latest)
+[![Downloads](https://img.shields.io/conda/dn/conda-forge/adaptive.svg)](https://anaconda.org/conda-forge/adaptive)
+[![GitHub](https://img.shields.io/github/stars/python-adaptive/adaptive.svg?style=social)](https://github.com/python-adaptive/adaptive/stargazers)
+[![Gitter](https://img.shields.io/gitter/room/nwjs/nw.js.svg)](https://gitter.im/python-adaptive/adaptive)
+[![Pipeline-status](https://dev.azure.com/python-adaptive/adaptive/_apis/build/status/python-adaptive.adaptive?branchName=master)](https://dev.azure.com/python-adaptive/adaptive/_build/latest?definitionId=6?branchName=master)
+[![PyPI](https://img.shields.io/pypi/v/adaptive.svg)](https://pypi.python.org/pypi/adaptive)
+
+> *Adaptive*: parallel active learning of mathematical functions.
+
+
+
+
+
+`adaptive` is an open-source Python library designed to make adaptive parallel function evaluation simple. With `adaptive` you just supply a function with its bounds, and it will be evaluated at the “best” points in parameter space, rather than unnecessarily computing *all* points on a dense grid.
+With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.
+
+`adaptive` excels on computations where each function evaluation takes *at least* ≈50ms due to the overhead of picking potentially interesting points.
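+
+As a rough sketch of that cluster-style workflow (the toy `slow_function` and the executor choice below are illustrative placeholders, not part of the original README), a learner can be driven through any `concurrent.futures`-compatible executor:
+
+```python
+from concurrent.futures import ProcessPoolExecutor
+
+from adaptive import BlockingRunner, Learner1D
+
+
+def slow_function(x):
+    # stand-in for an expensive model evaluation (>= ~50 ms per call)
+    return x**3
+
+
+if __name__ == "__main__":
+    learner = Learner1D(slow_function, bounds=(-1, 1))
+    # swap ProcessPoolExecutor for an ipyparallel/dask/mpi4py executor to use a cluster
+    with ProcessPoolExecutor(max_workers=4) as executor:
+        BlockingRunner(learner, goal=lambda l: l.loss() < 0.01, executor=executor)
+```
+
+Inside a Jupyter notebook you would typically use `Runner` instead, as in the example further down, so that live-plotting stays available.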
+
+Run the `adaptive` example notebook [live on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=example-notebook.ipynb) to see examples of how to use `adaptive` or visit the [tutorial on Read the Docs](https://adaptive.readthedocs.io/en/latest/tutorial/tutorial.html).
+
+
+
+## Implemented algorithms
+
+The core concept in `adaptive` is that of a *learner*.
+A *learner* samples a function at the best places in its parameter space to get maximum “information” about the function.
+As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next.
+
+Of course, what qualifies as the “best places” will depend on your application domain! `adaptive` makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.
+
+The following learners are implemented:
+
+
+
+- `Learner1D`, for 1D functions `f: ℝ → ℝ^N`,
+- `Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N`,
+- `LearnerND`, for ND functions `f: ℝ^N → ℝ^M`,
+- `AverageLearner`, for random variables where you want to average the result over many evaluations,
+- `AverageLearner1D`, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
+- `IntegratorLearner`, for when you want to integrate a 1D function `f: ℝ → ℝ`.
+- `BalancingLearner`, for when you want to run several learners at once, selecting the “best” one each time you get more points.
+
+Meta-learners (to be used with other learners):
+
+- `BalancingLearner`, for when you want to run several learners at once, selecting the “best” one each time you get more points,
+- `DataSaver`, for when your function doesn't just return a scalar or a vector.
+
+In addition to the learners, `adaptive` also provides primitives for running the sampling across several cores and even several machines, with built-in support for
+[concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html),
+[mpi4py](https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html),
+[loky](https://loky.readthedocs.io/en/stable/),
+[ipyparallel](https://ipyparallel.readthedocs.io/en/latest/), and
+[distributed](https://distributed.readthedocs.io/en/latest/).
+
+## Examples
+
+Adaptively learning a 1D function (the `gif` below) and live-plotting the process in a Jupyter notebook is as easy as
+
+```python
+from adaptive import notebook_extension, Runner, Learner1D
+
+notebook_extension()
+
+
+def peak(x, a=0.01):
+    return x + a**2 / (a**2 + x**2)
+
+
+learner = Learner1D(peak, bounds=(-1, 1))
+runner = Runner(learner, goal=lambda l: l.loss() < 0.01)
+runner.live_info()
+runner.live_plot()
+```
+
+
+
+
+
+## Installation
+
+`adaptive` works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.
+
+The recommended way to install adaptive is using `conda`:
+
+```bash
+conda install -c conda-forge adaptive
+```
+
+`adaptive` is also available on PyPI:
+
+```bash
+pip install "adaptive[notebook]"
+```
+
+The `[notebook]` above will also install the optional dependencies for running `adaptive` inside a Jupyter notebook.
+
+To use Adaptive in Jupyterlab, you need to install the following labextensions.
+ +```bash +jupyter labextension install @jupyter-widgets/jupyterlab-manager +jupyter labextension install @pyviz/jupyterlab_pyviz +``` + +## Development + +Clone the repository and run `pip install -e ".[notebook,testing,other]"` to add a link to the cloned repo into your Python path: + +```bash +git clone git@github.com:python-adaptive/adaptive.git +cd adaptive +pip install -e ".[notebook,testing,other]" +``` + +We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed packages while working on `adaptive`. + +In order to not pollute the history with the output of the notebooks, please setup the git filter by executing + +```bash +python ipynb_filter.py +``` + +in the repository. + +We implement several other checks in order to maintain a consistent code style. We do this using [pre-commit](https://pre-commit.com), execute + +```bash +pre-commit install +``` + +in the repository. + +## Citing + +If you used Adaptive in a scientific work, please cite it as follows. + +```bib +@misc{Nijholt2019, + doi = {10.5281/zenodo.1182437}, + author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov}, + title = {\textit{Adaptive}: parallel active learning of mathematical functions}, + publisher = {Zenodo}, + year = {2019} +} +``` + +## Credits + +We would like to give credits to the following people: + +- Pedro Gonnet for his implementation of [CQUAD](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010. +- Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the `adaptive.Learner2D`. + + + +For general discussion, we have a [Gitter chat channel](https://gitter.im/python-adaptive/adaptive). If you find any bugs or have any feature suggestions please file a GitHub [issue](https://github.com/python-adaptive/adaptive/issues/new) or submit a [pull request](https://github.com/python-adaptive/adaptive/pulls). + + + + diff --git a/README.rst b/README.rst deleted file mode 100644 index 7c910a59c..000000000 --- a/README.rst +++ /dev/null @@ -1,219 +0,0 @@ -.. summary-start - -|logo| adaptive -=============== - -|PyPI| |Conda| |Downloads| |Pipeline status| |DOI| |Binder| |Gitter| -|Documentation| |Coverage| |GitHub| - - *Adaptive*: parallel active learning of mathematical functions. - -.. include:: logo.rst - -``adaptive`` is an open-source Python library designed to -make adaptive parallel function evaluation simple. With ``adaptive`` you -just supply a function with its bounds, and it will be evaluated at the -“best” points in parameter space, rather than unnecessarily computing *all* points on a dense grid. -With just a few lines of code you can evaluate functions on a computing cluster, -live-plot the data as it returns, and fine-tune the adaptive sampling algorithm. - -``adaptive`` shines on computations where each evaluation of the function -takes *at least* ≈100ms due to the overhead of picking potentially interesting points. - -Run the ``adaptive`` example notebook `live on -Binder `_ -to see examples of how to use ``adaptive`` or visit the -`tutorial on Read the Docs `__. - -.. summary-end - -Implemented algorithms ----------------------- - -The core concept in ``adaptive`` is that of a *learner*. 
A *learner* -samples a function at the best places in its parameter space to get -maximum “information” about the function. As it evaluates the function -at more and more points in the parameter space, it gets a better idea of -where the best places are to sample next. - -Of course, what qualifies as the “best places” will depend on your -application domain! ``adaptive`` makes some reasonable default choices, -but the details of the adaptive sampling are completely customizable. - -The following learners are implemented: - -.. not-in-documentation-start - -- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``, -- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``, -- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``, -- ``AverageLearner``, for random variables where you want to - average the result over many evaluations, -- ``AverageLearner1D``, for stochastic 1D functions where you want to - estimate the mean value of the function at each point, -- ``IntegratorLearner``, for - when you want to intergrate a 1D function ``f: ℝ → ℝ``. -- ``BalancingLearner``, for when you want to run several learners at once, - selecting the “best” one each time you get more points. - -Meta-learners (to be used with other learners): - -- ``BalancingLearner``, for when you want to run several learners at once, - selecting the “best” one each time you get more points, -- ``DataSaver``, for when your function doesn't just return a scalar or a vector. - -In addition to the learners, ``adaptive`` also provides primitives for -running the sampling across several cores and even several machines, -with built-in support for -`concurrent.futures `_, -`mpi4py `_, -`loky `_, -`ipyparallel `_ and -`distributed `_. - -Examples --------- - -Adaptively learning a 1D function (the `gif` below) and live-plotting the process in a Jupyter notebook is as easy as - -.. code:: python - - from adaptive import notebook_extension, Runner, Learner1D - notebook_extension() - - def peak(x, a=0.01): - return x + a**2 / (a**2 + x**2) - - learner = Learner1D(peak, bounds=(-1, 1)) - runner = Runner(learner, goal=lambda l: l.loss() < 0.01) - runner.live_info() - runner.live_plot() - - -.. raw:: html - - - -.. not-in-documentation-end - -Installation ------------- - -``adaptive`` works with Python 3.7 and higher on Linux, Windows, or Mac, -and provides optional extensions for working with the Jupyter/IPython -Notebook. - -The recommended way to install adaptive is using ``conda``: - -.. code:: bash - - conda install -c conda-forge adaptive - -``adaptive`` is also available on PyPI: - -.. code:: bash - - pip install adaptive[notebook] - -The ``[notebook]`` above will also install the optional dependencies for -running ``adaptive`` inside a Jupyter notebook. - -To use Adaptive in Jupyterlab, you need to install the following labextensions. - -.. code:: bash - - jupyter labextension install @jupyter-widgets/jupyterlab-manager - jupyter labextension install @pyviz/jupyterlab_pyviz - -Development ------------ - -Clone the repository and run ``setup.py develop`` to add a link to the -cloned repo into your Python path: - -.. code:: bash - - git clone git@github.com:python-adaptive/adaptive.git - cd adaptive - python3 setup.py develop - -We highly recommend using a Conda environment or a virtualenv to manage -the versions of your installed packages while working on ``adaptive``. - -In order to not pollute the history with the output of the notebooks, -please setup the git filter by executing - -.. code:: bash - - python ipynb_filter.py - -in the repository. 
- -We implement several other checks in order to maintain a consistent code style. We do this using `pre-commit `_, execute - -.. code:: bash - - pre-commit install - -in the repository. - -Citing ------- - -If you used Adaptive in a scientific work, please cite it as follows. - -.. code:: bib - - @misc{Nijholt2019, - doi = {10.5281/zenodo.1182437}, - author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov}, - title = {\textit{Adaptive}: parallel active learning of mathematical functions}, - publisher = {Zenodo}, - year = {2019} - } - -Credits -------- - -We would like to give credits to the following people: - -- Pedro Gonnet for his implementation of `CQUAD `_, - “Algorithm 4” as described in “Increasing the Reliability of Adaptive - Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on - Mathematical Software, 37 (3), art. no. 26, 2010. -- Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer - available online since SciPy Central went down) which served as - inspiration for the `~adaptive.Learner2D`. - -.. credits-end - -For general discussion, we have a `Gitter chat -channel `_. If you find any -bugs or have any feature suggestions please file a GitHub -`issue `_ -or submit a `pull -request `_. - -.. references-start -.. |logo| image:: https://adaptive.readthedocs.io/en/latest/_static/logo.png -.. |PyPI| image:: https://img.shields.io/pypi/v/adaptive.svg - :target: https://pypi.python.org/pypi/adaptive -.. |Conda| image:: https://img.shields.io/badge/install%20with-conda-green.svg - :target: https://anaconda.org/conda-forge/adaptive -.. |Downloads| image:: https://img.shields.io/conda/dn/conda-forge/adaptive.svg - :target: https://anaconda.org/conda-forge/adaptive -.. |Pipeline status| image:: https://dev.azure.com/python-adaptive/adaptive/_apis/build/status/python-adaptive.adaptive?branchName=master - :target: https://dev.azure.com/python-adaptive/adaptive/_build/latest?definitionId=6?branchName=master -.. |DOI| image:: https://img.shields.io/badge/doi-10.5281%2Fzenodo.1182437-blue.svg - :target: https://doi.org/10.5281/zenodo.1182437 -.. |Binder| image:: https://mybinder.org/badge.svg - :target: https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=example-notebook.ipynb -.. |Gitter| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg - :target: https://gitter.im/python-adaptive/adaptive -.. |Documentation| image:: https://readthedocs.org/projects/adaptive/badge/?version=latest - :target: https://adaptive.readthedocs.io/en/latest/?badge=latest -.. |GitHub| image:: https://img.shields.io/github/stars/python-adaptive/adaptive.svg?style=social - :target: https://github.com/python-adaptive/adaptive/stargazers -.. |Coverage| image:: https://img.shields.io/codecov/c/github/python-adaptive/adaptive - :target: https://codecov.io/gh/python-adaptive/adaptive -.. references-end diff --git a/adaptive/learner/average_learner1D.py b/adaptive/learner/average_learner1D.py index 23ce0fbe0..fb4b4d5c5 100644 --- a/adaptive/learner/average_learner1D.py +++ b/adaptive/learner/average_learner1D.py @@ -454,8 +454,9 @@ def _set_data(self, data: dict[Real, Real]) -> None: self.tell_many_at_point(x, samples) def plot(self): - """Returns a plot of the evaluated data with error bars (not implemented - for vector functions, i.e., it requires vdim=1). + """Returns a plot of the evaluated data with error bars. + + This is only implemented for scalar functions, i.e., it requires ``vdim=1``. 
Returns ------- diff --git a/docs/Makefile b/docs/Makefile index 6e2fdeea8..2c7e55394 100644 --- a/docs/Makefile +++ b/docs/Makefile @@ -2,7 +2,7 @@ # # You can set these variables from the command line. -SPHINXOPTS = +SPHINXOPTS = -j auto SPHINXBUILD = sphinx-build SPHINXPROJ = adaptive SOURCEDIR = source diff --git a/docs/environment.yml b/docs/environment.yml index d7e8aa940..43163ee6b 100644 --- a/docs/environment.yml +++ b/docs/environment.yml @@ -8,17 +8,17 @@ dependencies: - sortedcollections=2.1.0 - scikit-optimize=0.8.1 - scikit-learn=0.24.2 - - scipy=1.7.1 + - scipy=1.9.1 - holoviews=1.14.6 - bokeh=2.4.0 - panel=0.12.7 - plotly=5.3.1 - ipywidgets=7.6.5 - - jupyter-sphinx=0.3.2 + - myst-nb=0.16.0 - sphinx_fontawesome=0.0.6 - sphinx=4.2.0 - - m2r2=0.3.1 - - sphinx_rtd_theme=1.0.0 - ffmpeg=4.3.2 - cloudpickle - loky + - furo + - myst-parser diff --git a/docs/source/_templates/layout.html b/docs/source/_templates/layout.html deleted file mode 100644 index 918958d7b..000000000 --- a/docs/source/_templates/layout.html +++ /dev/null @@ -1,6 +0,0 @@ -{% extends "!layout.html" %} -{% block extrahead %} - {%- for scriptfile in holoviews_js_files %} - {{ js_tag(scriptfile) }} - {%- endfor %} -{% endblock %} diff --git a/docs/source/algorithms_and_examples.md b/docs/source/algorithms_and_examples.md new file mode 100644 index 000000000..c15dea0c0 --- /dev/null +++ b/docs/source/algorithms_and_examples.md @@ -0,0 +1,212 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +execution: + timeout: 300 +--- + +```{include} ../../README.md +--- +start-after: +end-before: +--- +``` + +- {class}`~adaptive.Learner1D`, for 1D functions `f: ℝ → ℝ^N`, +- {class}`~adaptive.Learner2D`, for 2D functions `f: ℝ^2 → ℝ^N`, +- {class}`~adaptive.LearnerND`, for ND functions `f: ℝ^N → ℝ^M`, +- {class}`~adaptive.AverageLearner`, for random variables where you want to average the result over many evaluations, +- {class}`~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to estimate the mean value of the function at each point, +- {class}`~adaptive.IntegratorLearner`, for when you want to intergrate a 1D function `f: ℝ → ℝ`. +- {class}`~adaptive.BalancingLearner`, for when you want to run several learners at once, selecting the “best” one each time you get more points. + +Meta-learners (to be used with other learners): + +- {class}`~adaptive.BalancingLearner`, for when you want to run several learners at once, selecting the “best” one each time you get more points, +- {class}`~adaptive.DataSaver`, for when your function doesn't just return a scalar or a vector. + +In addition to the learners, `adaptive` also provides primitives for running the sampling across several cores and even several machines, with built-in support for +[concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html), +[mpi4py](https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html), +[loky](https://loky.readthedocs.io/en/stable/), +[ipyparallel](https://ipyparallel.readthedocs.io/en/latest/), and +[distributed](https://distributed.readthedocs.io/en/latest/). + +# Examples + +Here are some examples of how Adaptive samples vs. homogeneous sampling. +Click on the *Play* {fa}`play` button or move the sliders. 
+ +```{code-cell} ipython3 +:tags: [hide-cell] + +import itertools +import adaptive +from adaptive.learner.learner1D import uniform_loss, default_loss +import holoviews as hv +import numpy as np + +adaptive.notebook_extension() +hv.output(holomap="scrubber") +``` + +## {class}`adaptive.Learner1D` + +Adaptively learning a 1D function (the plot below) and live-plotting the process in a Jupyter notebook is as easy as + +```python +from adaptive import notebook_extension, Runner, Learner1D + +notebook_extension() # enables notebook integration + + +def peak(x, a=0.01): # function to "learn" + return x + a**2 / (a**2 + x**2) + + +learner = Learner1D(peak, bounds=(-1, 1)) + + +def goal(learner): + return learner.loss() < 0.01 # continue until loss is small enough + + +runner = Runner(learner, goal) # start calculation on all CPU cores +runner.live_info() # shows a widget with status information +runner.live_plot() +``` + +```{code-cell} ipython3 +:tags: [hide-input] + +def f(x, offset=0.07357338543088588): + a = 0.01 + return x + a**2 / (a**2 + (x - offset) ** 2) + + +def plot_loss_interval(learner): + if learner.npoints >= 2: + x_0, x_1 = max(learner.losses, key=learner.losses.get) + y_0, y_1 = learner.data[x_0], learner.data[x_1] + x, y = [x_0, x_1], [y_0, y_1] + else: + x, y = [], [] + return hv.Scatter((x, y)).opts(style=dict(size=6, color="r")) + + +def plot(learner, npoints): + adaptive.runner.simple(learner, lambda l: l.npoints == npoints) + return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1] + + +def get_hm(loss_per_interval, N=101): + learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval) + plots = {n: plot(learner, n) for n in range(N)} + return hv.HoloMap(plots, kdims=["npoints"]) + +plot_homo = get_hm(uniform_loss).relabel("homogeneous samping") +plot_adaptive = get_hm(default_loss).relabel("with adaptive") +layout = plot_homo + plot_adaptive +layout.opts(plot=dict(toolbar=None)) +``` + +## {class}`adaptive.Learner2D` + +```{code-cell} ipython3 +:tags: [hide-input] + + +def ring(xy): + import numpy as np + + x, y = xy + a = 0.2 + return x + np.exp(-((x**2 + y**2 - 0.75**2) ** 2) / a**4) + + +def plot(learner, npoints): + adaptive.runner.simple(learner, lambda l: l.npoints == npoints) + learner2 = adaptive.Learner2D(ring, bounds=learner.bounds) + xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5)) + xys = list(itertools.product(xs, ys)) + learner2.tell_many(xys, map(ring, xys)) + return ( + learner2.plot().relabel("homogeneous grid") + + learner.plot().relabel("with adaptive") + + learner2.plot(tri_alpha=0.5).relabel("homogeneous sampling") + + learner.plot(tri_alpha=0.5).relabel("with adaptive") + ).cols(2) + + +learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)]) +plots = {n: plot(learner, n) for n in range(4, 1010, 20)} +hv.HoloMap(plots, kdims=["npoints"]).collate() +``` + +## {class}`adaptive.AverageLearner` + +```{code-cell} ipython3 +:tags: [hide-input] + + +def g(n): + import random + + random.seed(n) + val = random.gauss(0.5, 0.5) + return val + + +learner = adaptive.AverageLearner(g, atol=None, rtol=0.01) + + +def plot(learner, npoints): + adaptive.runner.simple(learner, lambda l: l.npoints == npoints) + return learner.plot().relabel(f"loss={learner.loss():.2f}") + + +plots = {n: plot(learner, n) for n in range(10, 10000, 200)} +hv.HoloMap(plots, kdims=["npoints"]) +``` + +## {class}`adaptive.LearnerND` + +```{code-cell} ipython3 +:tags: [hide-input] + + +def sphere(xyz): + import numpy as np + + x, y, z 
= xyz + a = 0.4 + return np.exp(-((x**2 + y**2 + z**2 - 0.75**2) ** 2) / a**4) + + +learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)]) +adaptive.runner.simple(learner, lambda l: l.npoints == 5000) + +fig = learner.plot_3D(return_fig=True) + +# Remove a slice from the plot to show the inside of the sphere +scatter = fig.data[0] +coords_col = [ + (x, y, z, color) + for x, y, z, color in zip( + scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] + ) + if not (x > 0 and y > 0) +] +scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] = zip(*coords_col) + +fig +``` + +see more in the {ref}`Tutorial Adaptive`. diff --git a/docs/source/algorithms_and_examples.rst b/docs/source/algorithms_and_examples.rst deleted file mode 100644 index a47673243..000000000 --- a/docs/source/algorithms_and_examples.rst +++ /dev/null @@ -1,183 +0,0 @@ -.. include:: ../../README.rst - :start-after: summary-end - :end-before: not-in-documentation-start - -- `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``, -- `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``, -- `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``, -- `~adaptive.AverageLearner`, for random variables where you want to - average the result over many evaluations, -- `~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to - estimate the mean value of the function at each point, -- `~adaptive.IntegratorLearner`, for - when you want to intergrate a 1D function ``f: ℝ → ℝ``. -- `~adaptive.BalancingLearner`, for when you want to run several learners at once, - selecting the “best” one each time you get more points. - -Meta-learners (to be used with other learners): - -- `~adaptive.BalancingLearner`, for when you want to run several learners at once, - selecting the “best” one each time you get more points, -- `~adaptive.DataSaver`, for when your function doesn't just return a scalar or a vector. - -In addition to the learners, ``adaptive`` also provides primitives for -running the sampling across several cores and even several machines, -with built-in support for -`concurrent.futures `_, -`mpi4py `_, -`loky `_, -`ipyparallel `_ and -`distributed `_. - -Examples --------- - -Here are some examples of how Adaptive samples vs. homogeneous sampling. Click -on the *Play* :fa:`play` button or move the sliders. - -.. jupyter-execute:: - :hide-code: - - import itertools - import adaptive - from adaptive.learner.learner1D import uniform_loss, default_loss - import holoviews as hv - import numpy as np - - adaptive.notebook_extension() - hv.output(holomap="scrubber") - -`adaptive.Learner1D` -~~~~~~~~~~~~~~~~~~~~ - -Adaptively learning a 1D function (the plot below) and live-plotting the process in a Jupyter notebook is as easy as - -.. code:: python - - from adaptive import notebook_extension, Runner, Learner1D - notebook_extension() # enables notebook integration - - def peak(x, a=0.01): # function to "learn" - return x + a**2 / (a**2 + x**2) - - learner = Learner1D(peak, bounds=(-1, 1)) - - def goal(learner): - return learner.loss() < 0.01 # continue until loss is small enough - - runner = Runner(learner, goal) # start calculation on all CPU cores - runner.live_info() # shows a widget with status information - runner.live_plot() - - -.. 
jupyter-execute:: - :hide-code: - - def f(x, offset=0.07357338543088588): - a = 0.01 - return x + a**2 / (a**2 + (x - offset)**2) - - def plot_loss_interval(learner): - if learner.npoints >= 2: - x_0, x_1 = max(learner.losses, key=learner.losses.get) - y_0, y_1 = learner.data[x_0], learner.data[x_1] - x, y = [x_0, x_1], [y_0, y_1] - else: - x, y = [], [] - return hv.Scatter((x, y)).opts(style=dict(size=6, color="r")) - - def plot(learner, npoints): - adaptive.runner.simple(learner, lambda l: l.npoints == npoints) - return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1] - - def get_hm(loss_per_interval, N=101): - learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval) - plots = {n: plot(learner, n) for n in range(N)} - return hv.HoloMap(plots, kdims=["npoints"]) - - layout = ( - get_hm(uniform_loss).relabel("homogeneous samping") - + get_hm(default_loss).relabel("with adaptive") - ) - - layout.opts(plot=dict(toolbar=None)) - -`adaptive.Learner2D` -~~~~~~~~~~~~~~~~~~~~ - -.. jupyter-execute:: - :hide-code: - - def ring(xy): - import numpy as np - x, y = xy - a = 0.2 - return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4) - - def plot(learner, npoints): - adaptive.runner.simple(learner, lambda l: l.npoints == npoints) - learner2 = adaptive.Learner2D(ring, bounds=learner.bounds) - xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5)) - xys = list(itertools.product(xs, ys)) - learner2.tell_many(xys, map(ring, xys)) - return (learner2.plot().relabel('homogeneous grid') - + learner.plot().relabel('with adaptive') - + learner2.plot(tri_alpha=0.5).relabel('homogeneous sampling') - + learner.plot(tri_alpha=0.5).relabel('with adaptive')).cols(2) - - learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)]) - plots = {n: plot(learner, n) for n in range(4, 1010, 20)} - hv.HoloMap(plots, kdims=['npoints']).collate() - -`adaptive.AverageLearner` -~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. jupyter-execute:: - :hide-code: - - def g(n): - import random - random.seed(n) - val = random.gauss(0.5, 0.5) - return val - - learner = adaptive.AverageLearner(g, atol=None, rtol=0.01) - - def plot(learner, npoints): - adaptive.runner.simple(learner, lambda l: l.npoints == npoints) - return learner.plot().relabel(f'loss={learner.loss():.2f}') - - plots = {n: plot(learner, n) for n in range(10, 10000, 200)} - hv.HoloMap(plots, kdims=['npoints']) - -`adaptive.LearnerND` -~~~~~~~~~~~~~~~~~~~~ - -.. jupyter-execute:: - :hide-code: - - def sphere(xyz): - import numpy as np - x, y, z = xyz - a = 0.4 - return np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4) - - learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)]) - adaptive.runner.simple(learner, lambda l: l.npoints == 5000) - - fig = learner.plot_3D(return_fig=True) - - # Remove a slice from the plot to show the inside of the sphere - scatter = fig.data[0] - coords_col = [ - (x, y, z, color) - for x, y, z, color in zip( - scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] - ) - if not (x > 0 and y > 0) - ] - scatter["x"], scatter["y"], scatter["z"], scatter.marker["color"] = zip(*coords_col) - - fig - -see more in the :ref:`Tutorial Adaptive`. diff --git a/docs/source/conf.py b/docs/source/conf.py index ff6b77437..ac51a3886 100644 --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -1,16 +1,5 @@ -# -# Configuration file for the Sphinx documentation builder. -# -# This file does only contain a selection of the most common options. 
For a -# full list see the documentation: -# http://www.sphinx-doc.org/en/master/config - # -- Path setup -------------------------------------------------------------- -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# import os import sys @@ -32,20 +21,11 @@ author = "Adaptive Authors" # The short X.Y version -version = adaptive.__version__ +version = ".".join(adaptive.__version__.split(".")[:3]) +version = version # The full version, including alpha/beta/rc tags -release = adaptive.__version__ - - -# -- General configuration --------------------------------------------------- - -# If your documentation needs a minimal Sphinx version, state it here. -# -# needs_sphinx = '1.0' +release = version -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. extensions = [ "sphinx.ext.autodoc", "sphinx.ext.autosummary", @@ -54,81 +34,19 @@ "sphinx.ext.mathjax", "sphinx.ext.viewcode", "sphinx.ext.napoleon", - "jupyter_sphinx", + "myst_nb", "sphinx_fontawesome", - "m2r2", ] - source_parsers = {} - -# Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] - -# The suffix(es) of source filenames. -# You can specify multiple suffix as a list of string: -# source_suffix = [".rst", ".md"] -# source_suffix = '.rst' - -# The master toctree document. master_doc = "index" - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path . exclude_patterns = [] - -# The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" - - -jupyter_sphinx_thebelab_config = { - "requestKernel": True, - "binderOptions": {"repo": "python-adaptive/adaptive"}, -} - -jupyter_execute_disable_stderr = True - -# -- Options for HTML output ------------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -html_theme = "sphinx_rtd_theme" - - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". +# TODO: change to "furo" when https://github.com/executablebooks/MyST-NB/issues/54 is fixed (again) +html_theme = "furo" html_static_path = ["_static"] - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# The default sidebars (for documents that don't match any pattern) are -# defined by theme itself. Builtin themes are using these templates by -# default: ``['localtoc.html', 'relations.html', 'sourcelink.html', -# 'searchbox.html']``. 
-#
-# html_sidebars = {}
-
-
-# -- Options for HTMLHelp output ---------------------------------------------
-
-# Output file base name for HTML help builder.
 htmlhelp_basename = "adaptivedoc"
@@ -144,17 +62,21 @@
     "scipy": ("https://docs.scipy.org/doc/scipy/reference", None),
     "loky": ("https://loky.readthedocs.io/en/stable/", None),
 }
-
 html_js_files = [
     "https://cdn.bokeh.org/bokeh/release/bokeh-2.4.0.min.js",
     "https://cdn.bokeh.org/bokeh/release/bokeh-widgets-2.4.0.min.js",
     "https://cdn.bokeh.org/bokeh/release/bokeh-tables-2.4.0.min.js",
     "https://cdn.bokeh.org/bokeh/release/bokeh-gl-2.4.0.min.js",
     "https://cdn.bokeh.org/bokeh/release/bokeh-mathjax-2.4.0.min.js",
+    "https://cdnjs.cloudflare.com/ajax/libs/require.js/2.3.4/require.min.js",
 ]
+html_logo = "_static/logo_docs.png"

-html_logo = "_static/logo_docs.png"
+# myst-nb configuration
+nb_execution_mode = "cache"
+nb_execution_timeout = 180
+nb_execution_fail_on_error = True


 def setup(app):
diff --git a/docs/source/docs.md b/docs/source/docs.md
new file mode 100644
index 000000000..b07a97e95
--- /dev/null
+++ b/docs/source/docs.md
@@ -0,0 +1,16 @@
+```{include} ../../README.md
+---
+start-after:
+end-before:
+---
+```
+
+```{include} ../../AUTHORS.md
+```
+
+```{include} ../../README.md
+---
+start-after:
+end-before:
+---
+```
diff --git a/docs/source/docs.rst b/docs/source/docs.rst
deleted file mode 100644
index 13d26178f..000000000
--- a/docs/source/docs.rst
+++ /dev/null
@@ -1,10 +0,0 @@
-
-.. include:: ../../README.rst
-   :start-after: not-in-documentation-end
-   :end-before: credits-end
-
-.. mdinclude:: ../../AUTHORS.md
-
-.. include:: ../../README.rst
-   :start-after: credits-end
-   :end-before: references-start
diff --git a/docs/source/faq.md b/docs/source/faq.md
new file mode 100644
index 000000000..6c47e7906
--- /dev/null
+++ b/docs/source/faq.md
@@ -0,0 +1,87 @@
+# FAQ: frequently asked questions
+
+## Where can I learn more about the algorithm used?
+
+Read our [draft paper](https://gitlab.kwant-project.org/qt/adaptive-paper/builds/artifacts/master/file/paper.pdf?job=make) or the source code on [GitHub](https://github.com/python-adaptive/adaptive/).
+
+## How do I get the data?
+
+Check `learner.data`.
+
+## How do I learn more than one value per point?
+
+Use the {class}`adaptive.DataSaver`.
+
+## My runner failed, how do I get the error message?
+
+Check `runner.task.print_stack()`.
+
+## How do I get a {class}`~adaptive.Learner2D`'s data on a grid?
+
+Use `learner.interpolated_on_grid()`, optionally with an argument `n` to specify the number of points in `x` and `y`.
+
+## Why can I not use a `lambda` with a learner?
+
+When using the {class}`~adaptive.Runner` the learner's function is evaluated in different Python processes.
+Therefore, the `function` needs to be serialized (pickled) and sent to the other Python processes; `lambda`s cannot be pickled.
+Instead you can probably use `functools.partial` to accomplish what you want to do.
+
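+As a small illustration of that workaround (the `model` function and its parameter are hypothetical, chosen only for this sketch):
+
+```python
+from functools import partial
+
+from adaptive import Learner1D
+
+
+def model(x, a):
+    return x + a**2 / (a**2 + x**2)
+
+
+# A lambda such as `lambda x: model(x, a=0.2)` cannot be pickled,
+# but a partial of a module-level function can.
+learner = Learner1D(partial(model, a=0.2), bounds=(-1, 1))
+```
+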
+## How do I run multiple runners?
+
+Check out [Adaptive scheduler](http://adaptive-scheduler.readthedocs.io), which solves the problem of needing to run more learners than you can run with a single runner.
+It easily runs on tens of thousands of cores.
+
+## What is the difference with FEM?
+
+The main difference with FEM (Finite Element Method) is that one needs to globally update the mesh at every time step.
+
+For Adaptive, we want to be able to parallelize the function evaluation, and that requires an algorithm that can quickly return a new suggested point.
+This means that, to minimize the time that Adaptive spends on adding newly calculated points to the data structure, we only want to update the data of the points that are close to the new point.
+
+## What is the difference with Bayesian optimization?
+
+Indeed there are similarities between what Adaptive does and Bayesian optimization.
+
+The choice of new points is based on the previous ones.
+
+There is a tuneable algorithm for performing this selection, and the easiest way to formulate this algorithm is by defining a loss function.
+
+Bayesian optimization is a perfectly fine algorithm for choosing new points within adaptive. As an experiment we have interfaced `scikit-optimize` and implemented a learner that just wraps it.
+
+However, there are important differences why Bayesian optimization doesn't cover all the needs.
+Often our aim is to explore the function and not minimize it.
+Further, Bayesian optimization is most often combined with Gaussian processes because it is then possible to compute the posterior exactly and formulate a rigorous optimization strategy.
+Unfortunately, Gaussian processes are computationally expensive and won't be useful with tens of thousands of points.
+Adaptive is much more simple-minded and relies only on the local properties of the data, rather than fitting it globally.
+
+We'd say that Bayesian modeling is good for really computationally expensive data, regular grids for really cheap data, and local adaptive algorithms are somewhere in the middle.
+
+% I get "``concurrent.futures.process.BrokenProcessPool``: A process in the process pool was terminated abruptly while the future was running or pending." what does it mean?
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%
+% XXX: add answer!
+%
+% What is the difference with Kriging?
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%
+% XXX: add answer!
+%
+%
+% What is the difference with adaptive meshing in CFD or computer graphics?
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%
+% XXX: add answer!
+%
+%
+% Can I use this to tune my hyper parameters for machine learning models?
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%
+% XXX: add answer!
+%
+%
+% How to use Adaptive with MATLAB?
+% ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+%
+% XXX: add answer!
+
+Missing a question that you think belongs here? Let us [know](https://github.com/python-adaptive/adaptive/issues/new).
diff --git a/docs/source/faq.rst b/docs/source/faq.rst
deleted file mode 100644
index 6e9a37842..000000000
--- a/docs/source/faq.rst
+++ /dev/null
@@ -1,107 +0,0 @@
-FAQ: frequently asked questions
--------------------------------
-
-
-Where can I learn more about the algorithm used?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Read our `draft paper <https://gitlab.kwant-project.org/qt/adaptive-paper/builds/artifacts/master/file/paper.pdf?job=make>`_ or the source code on `GitHub <https://github.com/python-adaptive/adaptive/>`_.
-
-
-How do I get the data?
-~~~~~~~~~~~~~~~~~~~~~~
-
-Check ``learner.data``.
-
-
-How do I learn more than one value per point?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Use the `adaptive.DataSaver`.
-
-
-My runner failed, how do I get the error message?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Check ``runner.task.print_stack()``.
-
-
-How do I get a `~adaptive.Learner2D`\'s data on a grid?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Use ``learner.interpolated_on_grid()`` optionally with a argument ``n`` to specify the the amount of points in ``x`` and ``y``. - - -Why can I not use a ``lambda`` with a learner? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When using the `~adaptive.Runner` the learner's function is evaluated in different Python processes. -Therefore, the ``function`` needs to be serialized (pickled) and send to the other Python processes; ``lambda``\s cannot be pickled. -Instead you can probably use ``functools.partial`` to accomplish what you want to do. - - -How do I run multiple runners? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Check out `Adaptive scheduler `_, which solves the following problem of needing to run more learners than you can run with a single runner. -It easily runs on tens of thousands of cores. - - -What is the difference with FEM? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The main difference with FEM (Finite Element Method) is that one needs to globally update the mesh at every time step. - -For Adaptive, we want to be able to parallelize the function evaluation and that requires an algorithm that can quickly return a new suggested point. -This means that, to minimize the time that Adaptive spends on adding newly calculated points to the data strucute, we only want to update the data of the points that are close to the new point. - - -What is the difference with Bayesian optimization? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Indeed there are similarities between what Adaptive does and Bayesian optimization. - -The choice of new points is based on the previous ones. - -There is a tuneable algorithm for performing this selection, and the easiest way to formulate this algorithm is by defining a loss function. - -Bayesian optimization is a perfectly fine algorithm for choosing new points within adaptive. As an experiment we have interfaced ``scikit-optimize`` and implemented a learner that just wraps it. - -However there are important differences why Bayesian optimization doesn't cover all the needs. -Often our aim is to explore the function and not minimize it. -Further, Bayesian optimization is most often combined with Gaussian processes because it is then possible to compute the posteriour exactly and formulate a rigorous optimization strategy. -Unfortunately Gaussian processes are computationally expensive and won't be useful with tens of thousands of points. -Adaptive is much more simple-minded and it relies only on the local properties of the data, rather than fitting it globally. - -We'd say that Bayesian modeling is good for really computationally expensive data, regular grids for really cheap data, and local adaptive algorithms are somewhere in the middle. - -.. I get "``concurrent.futures.process.BrokenProcessPool``: A process in the process pool was terminated abruptly while the future was running or pending." what does it mean? - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - XXX: add answer! - - What is the difference with Kriging? - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - XXX: add answer! - - - What is the difference with adaptive meshing in CFD or computer graphics? - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - XXX: add answer! - - - Can I use this to tune my hyper parameters for machine learning models? 
- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - XXX: add answer! - - - How to use Adaptive with MATLAB? - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - XXX: add answer! - - -Missing a question that you think belongs here? Let us `know `_. diff --git a/docs/source/gallery.md b/docs/source/gallery.md new file mode 100644 index 000000000..0f1757ce7 --- /dev/null +++ b/docs/source/gallery.md @@ -0,0 +1,49 @@ +# Gallery + +Adaptive has been used in the following scientific publications: + +**Reproducing topological properties with quasi-Majorana states** + +by A. Vuik, B. Nijholt, A. R. Akhmerov, M. Wimmer [arXiv:1806.02801](https://arxiv.org/abs/1806.02801) and the [source-code](https://zenodo.org/record/1285177) + +```{image} _static/example_uses/quasi_majorana_paper.jpeg +:alt: Reproducing topological properties with quasi-Majorana states +:width: 500 +``` + +**Enhanced proximity effect in zigzag-shaped Majorana Josephson junctions** + +by Tom Laeven, Bas Nijholt, Anton R. Akhmerov, Michael Wimmer [arXiv:1903.06168](https://arxiv.org/abs/1903.06168) and the [source-code](https://zenodo.org/record/2578027) + +```{image} _static/example_uses/zigzag_paper.jpeg +:alt: Enhanced proximity effect in zigzag-shaped Majorana Josephson junctions +:width: 500 +``` + +**Spin-Orbit Protection of Induced Superconductivity in Majorana Nanowires** + +by Jouri D.S. Bommer, Hao Zhang, Önder Gül, Bas Nijholt, Michael Wimmer, Filipp N. Rybakov, Julien Garaud, Donjan Rodic, Egor Babaev, Matthias Troyer, Diana Car, Sébastien R. | Plissard, Erik P.A.M. Bakkers, Kenji Watanabe, Takashi Taniguchi, Leo P. Kouwenhoven + +[arXiv:1807.01940](https://arxiv.org/abs/1807.01940) + +```{image} _static/example_uses/spin_orbit_paper.jpeg +:alt: Spin-Orbit Protection of Induced Superconductivity in Majorana Nanowires +:width: 500 +``` + +Other examples: + +**Battle for Majoranas** + +by + +[Bas Nijholt](https://github.com/basnijholt) + +Accidentally leaving the cluster running over the weekend without a `runner.goal` and with a bug in the simulation parameters led to this beautiful mess. + +```{image} _static/example_uses/battle_for_majoranas.jpeg +:alt: Battle for Majoranas +:width: 500 +``` + +Did you use Adaptive for something cool? Let us [know](https://github.com/python-adaptive/adaptive/issues/new) and we will add it to this gallery. diff --git a/docs/source/gallery.rst b/docs/source/gallery.rst deleted file mode 100644 index d8b498330..000000000 --- a/docs/source/gallery.rst +++ /dev/null @@ -1,44 +0,0 @@ -Gallery -------- - -Adaptive has been used in the following scientific publications: - -| **Reproducing topological properties with quasi-Majorana states** -| by A. Vuik, B. Nijholt, A. R. Akhmerov, M. Wimmer -| `arXiv:1806.02801 `_ and the `source-code `_ - -.. image:: _static/example_uses/quasi_majorana_paper.jpeg - :width: 500 - :alt: Reproducing topological properties with quasi-Majorana states - - -| **Enhanced proximity effect in zigzag-shaped Majorana Josephson junctions** -| by Tom Laeven, Bas Nijholt, Anton R. Akhmerov, Michael Wimmer -| `arXiv:1903.06168 `_ and the `source-code `_ - -.. image:: _static/example_uses/zigzag_paper.jpeg - :width: 500 - :alt: Enhanced proximity effect in zigzag-shaped Majorana Josephson junctions - - -| **Spin-Orbit Protection of Induced Superconductivity in Majorana Nanowires** -| by Jouri D.S. Bommer, Hao Zhang, Önder Gül, Bas Nijholt, Michael Wimmer, Filipp N. Rybakov, Julien Garaud, Donjan Rodic, Egor Babaev, Matthias Troyer, Diana Car, Sébastien R. 
| Plissard, Erik P.A.M. Bakkers, Kenji Watanabe, Takashi Taniguchi, Leo P. Kouwenhoven -`arXiv:1807.01940 `_ - -.. image:: _static/example_uses/spin_orbit_paper.jpeg - :width: 500 - :alt: Spin-Orbit Protection of Induced Superconductivity in Majorana Nanowires - - -Other examples: - -| **Battle for Majoranas** -| by `Bas Nijholt `_ -| Accidentally leaving the cluster running over the weekend without a ``runner.goal`` and with a bug in the simulation parameters led to this beautiful mess. - -.. image:: _static/example_uses/battle_for_majoranas.jpeg - :width: 500 - :alt: Battle for Majoranas - - -Did you use Adaptive for something cool? Let us `know `_ and we will add it to this gallery. diff --git a/docs/source/index.md b/docs/source/index.md new file mode 100644 index 000000000..a5c043adb --- /dev/null +++ b/docs/source/index.md @@ -0,0 +1,46 @@ +```{include} ../../README.md +--- +start-after: +end-before: +--- +``` + +```{include} logo.md +``` + +```{include} ../../README.md +--- +start-after: +end-before: +--- +``` + +```{tip} +Start with the {ref}`1D function learning tutorial`. +``` + +```{include} ../../README.md +--- +start-after: +end-before: +--- +``` + +```{toctree} +:hidden: true + +self +``` + +```{toctree} +:hidden: true +:maxdepth: 2 + +algorithms_and_examples +docs +tutorial/tutorial +gallery +reference/adaptive +CHANGELOG +faq +``` diff --git a/docs/source/index.rst b/docs/source/index.rst deleted file mode 100644 index 148dbfdca..000000000 --- a/docs/source/index.rst +++ /dev/null @@ -1,25 +0,0 @@ -.. include:: ../../README.rst - :start-after: summary-start - :end-before: summary-end - -.. include:: ../../README.rst - :start-after: references-start - :end-before: references-end - - -.. toctree:: - :hidden: - - self - -.. toctree:: - :maxdepth: 2 - :hidden: - - algorithms_and_examples - docs - tutorial/tutorial - gallery - reference/adaptive - CHANGELOG - faq diff --git a/docs/logo_animated.py b/docs/source/logo.md similarity index 86% rename from docs/logo_animated.py rename to docs/source/logo.md index ca3bf4d71..012c03c51 100644 --- a/docs/logo_animated.py +++ b/docs/source/logo.md @@ -1,3 +1,18 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- + +```{code-cell} ipython3 +:tags: [remove-input] + import os import matplotlib.tri as mtri @@ -101,3 +116,13 @@ def main(fname="source/_static/logo_docs.mp4"): fname = "_static/logo_docs.mp4" if not os.path.exists(fname): main(fname) +``` + +```{eval-rst} +.. raw:: html + +
+``` diff --git a/docs/source/logo.rst b/docs/source/logo.rst deleted file mode 100644 index edf64d62c..000000000 --- a/docs/source/logo.rst +++ /dev/null @@ -1,10 +0,0 @@ -.. jupyter-execute:: ../logo_animated.py - :hide-code: - :hide-output: - -.. raw:: html - -
diff --git a/docs/source/reference/adaptive.learner.average_learner.rst b/docs/source/reference/adaptive.learner.average_learner.md similarity index 66% rename from docs/source/reference/adaptive.learner.average_learner.rst rename to docs/source/reference/adaptive.learner.average_learner.md index 341df8260..a299797b6 100644 --- a/docs/source/reference/adaptive.learner.average_learner.rst +++ b/docs/source/reference/adaptive.learner.average_learner.md @@ -1,7 +1,8 @@ -adaptive.AverageLearner -======================= +# adaptive.AverageLearner +```{eval-rst} .. autoclass:: adaptive.AverageLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.average_learner1D.rst b/docs/source/reference/adaptive.learner.average_learner1D.md similarity index 65% rename from docs/source/reference/adaptive.learner.average_learner1D.rst rename to docs/source/reference/adaptive.learner.average_learner1D.md index f1e7cb75c..baf2780e8 100644 --- a/docs/source/reference/adaptive.learner.average_learner1D.rst +++ b/docs/source/reference/adaptive.learner.average_learner1D.md @@ -1,7 +1,8 @@ -adaptive.AverageLearner1D -========================= +# adaptive.AverageLearner1D +```{eval-rst} .. autoclass:: adaptive.AverageLearner1D :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.balancing_learner.rst b/docs/source/reference/adaptive.learner.balancing_learner.md similarity index 65% rename from docs/source/reference/adaptive.learner.balancing_learner.rst rename to docs/source/reference/adaptive.learner.balancing_learner.md index 0cc7611b0..f977a0aa8 100644 --- a/docs/source/reference/adaptive.learner.balancing_learner.rst +++ b/docs/source/reference/adaptive.learner.balancing_learner.md @@ -1,7 +1,8 @@ -adaptive.BalancingLearner -========================= +# adaptive.BalancingLearner +```{eval-rst} .. autoclass:: adaptive.BalancingLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.base_learner.rst b/docs/source/reference/adaptive.learner.base_learner.md similarity index 67% rename from docs/source/reference/adaptive.learner.base_learner.rst rename to docs/source/reference/adaptive.learner.base_learner.md index 7a908ab57..1b3845ebd 100644 --- a/docs/source/reference/adaptive.learner.base_learner.rst +++ b/docs/source/reference/adaptive.learner.base_learner.md @@ -1,7 +1,8 @@ -adaptive.BaseLearner -============================ +# adaptive.BaseLearner +```{eval-rst} .. autoclass:: adaptive.learner.BaseLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.data_saver.md b/docs/source/reference/adaptive.learner.data_saver.md new file mode 100644 index 000000000..38f573601 --- /dev/null +++ b/docs/source/reference/adaptive.learner.data_saver.md @@ -0,0 +1,17 @@ +# adaptive.DataSaver + +## The `DataSaver` class + +```{eval-rst} +.. autoclass:: adaptive.DataSaver + :members: + :undoc-members: + :show-inheritance: + +``` + +## The `make_datasaver` function + +```{eval-rst} +.. autofunction:: adaptive.make_datasaver +``` diff --git a/docs/source/reference/adaptive.learner.data_saver.rst b/docs/source/reference/adaptive.learner.data_saver.rst deleted file mode 100644 index 81fd9a54a..000000000 --- a/docs/source/reference/adaptive.learner.data_saver.rst +++ /dev/null @@ -1,16 +0,0 @@ -adaptive.DataSaver -================== - -The ``DataSaver`` class ------------------------ - -.. 
autoclass:: adaptive.DataSaver - :members: - :undoc-members: - :show-inheritance: - - -The ``make_datasaver`` function -------------------------------- - -.. autofunction:: adaptive.make_datasaver diff --git a/docs/source/reference/adaptive.learner.integrator_learner.rst b/docs/source/reference/adaptive.learner.integrator_learner.md similarity index 64% rename from docs/source/reference/adaptive.learner.integrator_learner.rst rename to docs/source/reference/adaptive.learner.integrator_learner.md index 3d05a212e..f659f75a6 100644 --- a/docs/source/reference/adaptive.learner.integrator_learner.rst +++ b/docs/source/reference/adaptive.learner.integrator_learner.md @@ -1,7 +1,8 @@ -adaptive.IntegratorLearner -========================== +# adaptive.IntegratorLearner +```{eval-rst} .. autoclass:: adaptive.IntegratorLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.learner1D.rst b/docs/source/reference/adaptive.learner.learner1D.md similarity index 73% rename from docs/source/reference/adaptive.learner.learner1D.rst rename to docs/source/reference/adaptive.learner.learner1D.md index b4308a44d..3d7b78c66 100644 --- a/docs/source/reference/adaptive.learner.learner1D.rst +++ b/docs/source/reference/adaptive.learner.learner1D.md @@ -1,24 +1,39 @@ -adaptive.Learner1D -================== +# adaptive.Learner1D +```{eval-rst} .. autoclass:: adaptive.Learner1D :members: :undoc-members: :show-inheritance: +``` -Custom loss functions ---------------------- +## Custom loss functions + +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.default_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.uniform_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.uses_nth_neighbors +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.triangle_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.curvature_loss_function +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.abs_min_log_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner1D.resolution_loss_function +``` diff --git a/docs/source/reference/adaptive.learner.learner2D.rst b/docs/source/reference/adaptive.learner.learner2D.md similarity index 70% rename from docs/source/reference/adaptive.learner.learner2D.rst rename to docs/source/reference/adaptive.learner.learner2D.md index 11d14e3c2..270e0ac06 100644 --- a/docs/source/reference/adaptive.learner.learner2D.rst +++ b/docs/source/reference/adaptive.learner.learner2D.md @@ -1,25 +1,38 @@ -adaptive.Learner2D -================== +# adaptive.Learner2D +```{eval-rst} .. autoclass:: adaptive.Learner2D :members: :undoc-members: :show-inheritance: +``` -Custom loss functions ---------------------- +## Custom loss functions + +```{eval-rst} .. autofunction:: adaptive.learner.learner2D.default_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner2D.minimize_triangle_surface_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner2D.uniform_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learner2D.resolution_loss_function +``` + +## Helper functions -Helper functions ----------------- +```{eval-rst} .. autofunction:: adaptive.learner.learner2D.areas +``` +```{eval-rst} .. 
autofunction:: adaptive.learner.learner2D.deviations +``` diff --git a/docs/source/reference/adaptive.learner.learnerND.rst b/docs/source/reference/adaptive.learner.learnerND.md similarity index 69% rename from docs/source/reference/adaptive.learner.learnerND.rst rename to docs/source/reference/adaptive.learner.learnerND.md index 7af223b05..f2e36e1cf 100644 --- a/docs/source/reference/adaptive.learner.learnerND.rst +++ b/docs/source/reference/adaptive.learner.learnerND.md @@ -1,15 +1,22 @@ -adaptive.LearnerND -================== +# adaptive.LearnerND +```{eval-rst} .. autoclass:: adaptive.LearnerND :members: :undoc-members: :show-inheritance: +``` -Custom loss functions ---------------------- +## Custom loss functions + +```{eval-rst} .. autofunction:: adaptive.learner.learnerND.default_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learnerND.uniform_loss +``` +```{eval-rst} .. autofunction:: adaptive.learner.learnerND.std_loss +``` diff --git a/docs/source/reference/adaptive.learner.sequence_learner.rst b/docs/source/reference/adaptive.learner.sequence_learner.md similarity index 66% rename from docs/source/reference/adaptive.learner.sequence_learner.rst rename to docs/source/reference/adaptive.learner.sequence_learner.md index a48addfee..91be8372d 100644 --- a/docs/source/reference/adaptive.learner.sequence_learner.rst +++ b/docs/source/reference/adaptive.learner.sequence_learner.md @@ -1,7 +1,8 @@ -adaptive.SequenceLearner -======================== +# adaptive.SequenceLearner +```{eval-rst} .. autoclass:: adaptive.SequenceLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.skopt_learner.rst b/docs/source/reference/adaptive.learner.skopt_learner.md similarity index 68% rename from docs/source/reference/adaptive.learner.skopt_learner.rst rename to docs/source/reference/adaptive.learner.skopt_learner.md index d05f2099d..d02da3dbe 100644 --- a/docs/source/reference/adaptive.learner.skopt_learner.rst +++ b/docs/source/reference/adaptive.learner.skopt_learner.md @@ -1,7 +1,8 @@ -adaptive.SKOptLearner -===================== +# adaptive.SKOptLearner +```{eval-rst} .. autoclass:: adaptive.SKOptLearner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.learner.triangulation.rst b/docs/source/reference/adaptive.learner.triangulation.md similarity index 58% rename from docs/source/reference/adaptive.learner.triangulation.rst rename to docs/source/reference/adaptive.learner.triangulation.md index 8e4e4dfc2..e5f8012f3 100644 --- a/docs/source/reference/adaptive.learner.triangulation.rst +++ b/docs/source/reference/adaptive.learner.triangulation.md @@ -1,7 +1,8 @@ -adaptive.learner.triangulation module -===================================== +# adaptive.learner.triangulation module +```{eval-rst} .. 
automodule:: adaptive.learner.triangulation :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.md b/docs/source/reference/adaptive.md new file mode 100644 index 000000000..740ecf325 --- /dev/null +++ b/docs/source/reference/adaptive.md @@ -0,0 +1,34 @@ +# API documentation + +## Learners + +```{toctree} +adaptive.learner.average_learner +adaptive.learner.average_learner1D +adaptive.learner.base_learner +adaptive.learner.balancing_learner +adaptive.learner.data_saver +adaptive.learner.integrator_learner +adaptive.learner.learner1D +adaptive.learner.learner2D +adaptive.learner.learnerND +adaptive.learner.sequence_learner +adaptive.learner.skopt_learner +``` + +## Runners + +```{toctree} +adaptive.runner.Runner +adaptive.runner.AsyncRunner +adaptive.runner.BlockingRunner +adaptive.runner.BaseRunner +adaptive.runner.extras +``` + +## Other + +```{toctree} +adaptive.utils +adaptive.notebook_integration +``` diff --git a/docs/source/reference/adaptive.notebook_integration.rst b/docs/source/reference/adaptive.notebook_integration.md similarity index 57% rename from docs/source/reference/adaptive.notebook_integration.rst rename to docs/source/reference/adaptive.notebook_integration.md index 3836a8c36..0a8114b67 100644 --- a/docs/source/reference/adaptive.notebook_integration.rst +++ b/docs/source/reference/adaptive.notebook_integration.md @@ -1,7 +1,8 @@ -adaptive.notebook\_integration module -===================================== +# adaptive.notebook_integration module +```{eval-rst} .. automodule:: adaptive.notebook_integration :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.rst b/docs/source/reference/adaptive.rst deleted file mode 100644 index 3d020240f..000000000 --- a/docs/source/reference/adaptive.rst +++ /dev/null @@ -1,35 +0,0 @@ -API documentation -================= - -Learners --------- - -.. toctree:: - - adaptive.learner.average_learner - adaptive.learner.average_learner1D - adaptive.learner.base_learner - adaptive.learner.balancing_learner - adaptive.learner.data_saver - adaptive.learner.integrator_learner - adaptive.learner.learner1D - adaptive.learner.learner2D - adaptive.learner.learnerND - adaptive.learner.sequence_learner - adaptive.learner.skopt_learner - -Runners -------- - -.. toctree:: - adaptive.runner.Runner - adaptive.runner.AsyncRunner - adaptive.runner.BlockingRunner - adaptive.runner.BaseRunner - adaptive.runner.extras - -Other ------ -.. toctree:: - adaptive.utils - adaptive.notebook_integration diff --git a/docs/source/reference/adaptive.runner.AsyncRunner.rst b/docs/source/reference/adaptive.runner.AsyncRunner.md similarity index 70% rename from docs/source/reference/adaptive.runner.AsyncRunner.rst rename to docs/source/reference/adaptive.runner.AsyncRunner.md index c5b5f25a7..1cc1a6f77 100644 --- a/docs/source/reference/adaptive.runner.AsyncRunner.rst +++ b/docs/source/reference/adaptive.runner.AsyncRunner.md @@ -1,7 +1,8 @@ -adaptive.AsyncRunner -==================== +# adaptive.AsyncRunner +```{eval-rst} .. 
autoclass:: adaptive.runner.AsyncRunner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.runner.BaseRunner.rst b/docs/source/reference/adaptive.runner.BaseRunner.md similarity index 64% rename from docs/source/reference/adaptive.runner.BaseRunner.rst rename to docs/source/reference/adaptive.runner.BaseRunner.md index 9ba894fe9..d92054af3 100644 --- a/docs/source/reference/adaptive.runner.BaseRunner.rst +++ b/docs/source/reference/adaptive.runner.BaseRunner.md @@ -1,7 +1,8 @@ -adaptive.runner.BaseRunner -========================== +# adaptive.runner.BaseRunner +```{eval-rst} .. autoclass:: adaptive.runner.BaseRunner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.runner.BlockingRunner.rst b/docs/source/reference/adaptive.runner.BlockingRunner.md similarity index 66% rename from docs/source/reference/adaptive.runner.BlockingRunner.rst rename to docs/source/reference/adaptive.runner.BlockingRunner.md index 3ea138053..0777f7b79 100644 --- a/docs/source/reference/adaptive.runner.BlockingRunner.rst +++ b/docs/source/reference/adaptive.runner.BlockingRunner.md @@ -1,7 +1,8 @@ -adaptive.BlockingRunner -======================= +# adaptive.BlockingRunner +```{eval-rst} .. autoclass:: adaptive.BlockingRunner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.runner.Runner.rst b/docs/source/reference/adaptive.runner.Runner.md similarity index 71% rename from docs/source/reference/adaptive.runner.Runner.rst rename to docs/source/reference/adaptive.runner.Runner.md index 2a4cdc586..132705429 100644 --- a/docs/source/reference/adaptive.runner.Runner.rst +++ b/docs/source/reference/adaptive.runner.Runner.md @@ -1,7 +1,8 @@ -adaptive.Runner -=============== +# adaptive.Runner +```{eval-rst} .. autoclass:: adaptive.Runner :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/reference/adaptive.runner.extras.rst b/docs/source/reference/adaptive.runner.extras.md similarity index 53% rename from docs/source/reference/adaptive.runner.extras.rst rename to docs/source/reference/adaptive.runner.extras.md index 5b1680e22..510c17ec5 100644 --- a/docs/source/reference/adaptive.runner.extras.rst +++ b/docs/source/reference/adaptive.runner.extras.md @@ -1,29 +1,32 @@ -Runner extras -============= +# Runner extras -Stopping Criteria ------------------ +## Stopping Criteria Runners allow you to specify the stopping criterion by providing -a ``goal`` as a function that takes the learner and returns a boolean: ``False`` -for "continue running" and ``True`` for "stop". This gives you a lot of flexibility +a `goal` as a function that takes the learner and returns a boolean: `False` +for "continue running" and `True` for "stop". This gives you a lot of flexibility for defining your own stopping conditions, however we also provide some common stopping conditions as a convenience. +```{eval-rst} .. autofunction:: adaptive.runner.stop_after +``` -Simple executor ---------------- +## Simple executor +```{eval-rst} .. autofunction:: adaptive.runner.simple +``` -Sequential excecutor --------------------- +## Sequential excecutor +```{eval-rst} .. autoclass:: adaptive.runner.SequentialExecutor +``` -Replay log ----------- +## Replay log +```{eval-rst} .. 
autofunction:: adaptive.runner.replay_log +``` diff --git a/docs/source/reference/adaptive.utils.rst b/docs/source/reference/adaptive.utils.md similarity index 66% rename from docs/source/reference/adaptive.utils.rst rename to docs/source/reference/adaptive.utils.md index aa6f539ee..937298a2f 100644 --- a/docs/source/reference/adaptive.utils.rst +++ b/docs/source/reference/adaptive.utils.md @@ -1,7 +1,8 @@ -adaptive.utils module -===================== +# adaptive.utils module +```{eval-rst} .. automodule:: adaptive.utils :members: :undoc-members: :show-inheritance: +``` diff --git a/docs/source/tutorial/tutorial.AverageLearner.md b/docs/source/tutorial/tutorial.AverageLearner.md new file mode 100644 index 000000000..5bdd5ae27 --- /dev/null +++ b/docs/source/tutorial/tutorial.AverageLearner.md @@ -0,0 +1,66 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.AverageLearner` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() +``` + +The next type of learner averages a function until the uncertainty in the average meets some condition. + +This is useful for sampling a random variable. +The function passed to the learner must formally take a single parameter, which should be used like a “seed” for the (pseudo-) random variable (although in the current implementation the seed parameter can be ignored by the function). + +```{code-cell} ipython3 +def g(n): + import random + from time import sleep + + sleep(random.random() / 1000) + # Properly save and restore the RNG state + state = random.getstate() + random.seed(n) + val = random.gauss(0.5, 1) + random.setstate(state) + return val +``` + +```{code-cell} ipython3 +learner = adaptive.AverageLearner(g, atol=None, rtol=0.01) +# `loss < 1` means that we reached the `rtol` or `atol` +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +runner.live_plot(update_interval=0.1) +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.AverageLearner.ipynb`** and {download}`tutorial.AverageLearner.md`. diff --git a/docs/source/tutorial/tutorial.AverageLearner.rst b/docs/source/tutorial/tutorial.AverageLearner.rst deleted file mode 100644 index e71040c49..000000000 --- a/docs/source/tutorial/tutorial.AverageLearner.rst +++ /dev/null @@ -1,57 +0,0 @@ -Tutorial `~adaptive.AverageLearner` ------------------------------------ - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.AverageLearner` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - -The next type of learner averages a function until the uncertainty in -the average meets some condition. - -This is useful for sampling a random variable. 
The function passed to -the learner must formally take a single parameter, which should be used -like a “seed” for the (pseudo-) random variable (although in the current -implementation the seed parameter can be ignored by the function). - -.. jupyter-execute:: - - def g(n): - import random - from time import sleep - sleep(random.random() / 1000) - # Properly save and restore the RNG state - state = random.getstate() - random.seed(n) - val = random.gauss(0.5, 1) - random.setstate(state) - return val - -.. jupyter-execute:: - - learner = adaptive.AverageLearner(g, atol=None, rtol=0.01) - # `loss < 1` means that we reached the `rtol` or `atol` - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) diff --git a/docs/source/tutorial/tutorial.AverageLearner1D.md b/docs/source/tutorial/tutorial.AverageLearner1D.md new file mode 100644 index 000000000..799338f82 --- /dev/null +++ b/docs/source/tutorial/tutorial.AverageLearner1D.md @@ -0,0 +1,146 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.AverageLearner1D` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np +from functools import partial +``` + +## General use + +First, we define the (noisy) function to be sampled. Note that the parameter `sigma` corresponds to the standard deviation of the Gaussian noise. + +```{code-cell} ipython3 +def noisy_peak(seed_x, sigma=0, peak_width=0.05, offset=-0.5): + seed, x = seed_x # tuple with seed and `x` value + y = x**3 - x + 3 * peak_width**2 / (peak_width**2 + (x - offset) ** 2) + rng = np.random.RandomState(seed) + noise = rng.normal(scale=sigma) + return y + noise +``` + +This is how the function looks in the absence of noise: + +```{code-cell} ipython3 +xs = np.linspace(-2, 2, 500) +ys = [noisy_peak((seed, x), sigma=0) for seed, x in enumerate(xs)] +hv.Path((xs, ys)) +``` + +And an example of a single realization of the noisy function: + +```{code-cell} ipython3 +ys = [noisy_peak((seed, x), sigma=1) for seed, x in enumerate(xs)] +hv.Path((xs, ys)) +``` + +To obtain an estimate of the mean value of the function at each point `x`, we take many samples at `x` and calculate the sample mean. +The learner will autonomously determine whether the next samples should be taken at an old point (to improve the estimate of the mean at that point) or at a new one. + +We start by initializing a 1D average learner: + +```{code-cell} ipython3 +learner = adaptive.AverageLearner1D(partial(noisy_peak, sigma=1), bounds=(-2, 2)) +``` + +As with other types of learners, we need to initialize a runner with a certain goal to run our learner. 
+In this case, we set 10000 samples as the goal (the second condition ensures that we have at least 20 samples at each point): + +```{code-cell} ipython3 +def goal(nsamples): + def _goal(learner): + return learner.nsamples >= nsamples and learner.min_samples_per_point >= 20 + + return _goal + + +runner = adaptive.Runner(learner, goal=goal(10_000)) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +runner.live_plot(update_interval=0.1) +``` + +## Fine tuning + +In some cases, the default configuration of the 1D average learner can be sub-optimal. +One can then tune the internal parameters of the learner. +The most relevant are: + +- `loss_per_interval`: loss function (see {class}`~adaptive.Learner1D`). +- `delta`: this parameter is the most relevant and controls the balance between resampling existing points (exploitation) and sampling new ones (exploration). Its value should remain between 0 and 1 (the default value is 0.2). Large values favor the "exploration" behavior, although this can make the learner to sample noise. Small values favor the "exploitation" behavior, leading the learner to thoroughly resample existing points. In general, the optimal value of `delta` is between 0.1 and 0.4. +- `neighbor_sampling`: each new point is initially sampled a fraction `neighbor_sampling` of the number of samples of its nearest neighbor. We recommend to keep the value of `neighbor_sampling` below 1 to prevent oversampling. +- `min_samples`: minimum number of samples that are initially taken at a new point. This parameter can prevent the learner from sampling noise in case we accidentally set a too large value of `delta`. +- `max_samples`: maximum number of samples at each point. If a point has been sampled `max_samples` times, it will not be sampled again. This prevents the "exploitation" to drastically dominate over the "exploration" behavior in case we set a too small `delta`. +- `min_error`: minimum uncertainty at each point (this uncertainty corresponds to the standard deviation in the estimate of the mean). As `max_samples`, this parameter can prevent the "exploitation" to drastically dominate over the "exploration" behavior. + +As an example, assume that we wanted to resample the points from the previous learner. +We can decrease `delta` to 0.1 and set `min_error` to 0.05 if we do not require accuracy beyond this value: + +```{code-cell} ipython3 +learner.delta = 0.1 +learner.min_error = 0.05 +runner = adaptive.Runner(learner, goal=goal(20_000)) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +runner.live_plot(update_interval=0.1) +``` + +On the contrary, if we want to push forward the "exploration", we can set a larger `delta` and limit the maximum number of samples taken at each point: + +```{code-cell} ipython3 +learner.delta = 0.3 +learner.max_samples = 1000 + +runner = adaptive.Runner(learner, goal=goal(25_000)) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +runner.live_plot(update_interval=0.1) +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.AverageLearner1D.ipynb`** and {download}`tutorial.AverageLearner1D.md`. 
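The tuning parameters discussed above can, in principle, also be fixed when the learner is constructed rather than mutated afterwards. The snippet below is only a sketch: the keyword names (`delta`, `min_samples`, `max_samples`, `min_error`, `neighbor_sampling`) are assumed to mirror the attributes described in the tutorial and should be checked against the installed version of `adaptive`.

```python
from functools import partial

import adaptive
import numpy as np


def noisy_peak(seed_x, sigma=1, peak_width=0.05, offset=-0.5):
    # Same noisy test function as used in the tutorial above.
    seed, x = seed_x
    y = x**3 - x + 3 * peak_width**2 / (peak_width**2 + (x - offset) ** 2)
    return y + np.random.RandomState(seed).normal(scale=sigma)


# Assumed constructor keywords, mirroring the tunable attributes listed above.
learner = adaptive.AverageLearner1D(
    partial(noisy_peak, sigma=1),
    bounds=(-2, 2),
    delta=0.1,              # lean towards resampling existing points
    min_samples=50,         # samples taken when a point is first added
    max_samples=2000,       # never take more than this many samples per point
    min_error=0.05,         # stop resampling a point below this uncertainty
    neighbor_sampling=0.3,  # initial sampling relative to the nearest neighbor
)
runner = adaptive.Runner(learner, goal=lambda l: l.nsamples >= 10_000)
```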
diff --git a/docs/source/tutorial/tutorial.AverageLearner1D.rst b/docs/source/tutorial/tutorial.AverageLearner1D.rst deleted file mode 100644 index b4c18b021..000000000 --- a/docs/source/tutorial/tutorial.AverageLearner1D.rst +++ /dev/null @@ -1,139 +0,0 @@ -Tutorial `~adaptive.AverageLearner1D` -------------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.AverageLearner1D` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import holoviews as hv - import numpy as np - from functools import partial - -General use -.......................... - -First, we define the (noisy) function to be sampled. Note that the parameter -``sigma`` corresponds to the standard deviation of the Gaussian noise. - -.. jupyter-execute:: - - def noisy_peak(seed_x, sigma=0, peak_width=0.05, offset=-0.5): - seed, x = seed_x # tuple with seed and `x` value - y = x ** 3 - x + 3 * peak_width ** 2 / (peak_width ** 2 + (x - offset) ** 2) - rng = np.random.RandomState(seed) - noise = rng.normal(scale=sigma) - return y + noise - -This is how the function looks in the absence of noise: - -.. jupyter-execute:: - - xs = np.linspace(-2, 2, 500) - ys = [noisy_peak((seed, x), sigma=0) for seed, x in enumerate(xs)] - hv.Path((xs, ys)) - -And an example of a single realization of the noisy function: - -.. jupyter-execute:: - - ys = [noisy_peak((seed, x), sigma=1) for seed, x in enumerate(xs)] - hv.Path((xs, ys)) - -To obtain an estimate of the mean value of the function at each point ``x``, we -take many samples at ``x`` and calculate the sample mean. The learner will -autonomously determine whether the next samples should be taken at an old -point (to improve the estimate of the mean at that point) or at a new one. - -We start by initializing a 1D average learner: - -.. jupyter-execute:: - - learner = adaptive.AverageLearner1D(partial(noisy_peak, sigma=1), bounds=(-2, 2)) - -As with other types of learners, we need to initialize a runner with a certain -goal to run our learner. In this case, we set 10000 samples as the goal (the -second condition ensures that we have at least 20 samples at each point): - -.. jupyter-execute:: - - def goal(nsamples): - def _goal(learner): - return learner.nsamples >= nsamples and learner.min_samples_per_point >= 20 - return _goal - - runner = adaptive.Runner(learner, goal=goal(10_000)) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - runner.live_plot(update_interval=0.1) - -Fine tuning -........... - -In some cases, the default configuration of the 1D average learner can be -sub-optimal. One can then tune the internal parameters of the learner. The most -relevant are: - -- ``loss_per_interval``: loss function (see Learner1D). -- ``delta``: this parameter is the most relevant and controls the balance between resampling existing points (exploitation) and sampling new ones (exploration). Its value should remain between 0 and 1 (the default value is 0.2). Large values favor the "exploration" behavior, although this can make the learner to sample noise. Small values favor the "exploitation" behavior, leading the learner to thoroughly resample existing points. 
In general, the optimal value of ``delta`` is between 0.1 and 0.4. -- ``neighbor_sampling``: each new point is initially sampled a fraction ``neighbor_sampling`` of the number of samples of its nearest neighbor. We recommend to keep the value of ``neighbor_sampling`` below 1 to prevent oversampling. -- ``min_samples``: minimum number of samples that are initially taken at a new point. This parameter can prevent the learner from sampling noise in case we accidentally set a too large value of ``delta``. -- ``max_samples``: maximum number of samples at each point. If a point has been sampled ``max_samples`` times, it will not be sampled again. This prevents the "exploitation" to drastically dominate over the "exploration" behavior in case we set a too small ``delta``. -- ``min_error``: minimum uncertainty at each point (this uncertainty corresponds to the standard deviation in the estimate of the mean). As ``max_samples``, this parameter can prevent the "exploitation" to drastically dominate over the "exploration" behavior. - -As an example, assume that we wanted to resample the points from the previous -learner. We can decrease ``delta`` to 0.1 and set ``min_error`` to 0.05 if we do -not require accuracy beyond this value: - -.. jupyter-execute:: - - learner.delta = 0.1 - learner.min_error = 0.05 - runner = adaptive.Runner(learner, goal=goal(20_000)) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - runner.live_plot(update_interval=0.1) - -On the contrary, if we want to push forward the "exploration", we can set a larger -``delta`` and limit the maximum number of samples taken at each point: - -.. jupyter-execute:: - - learner.delta = 0.3 - learner.max_samples = 1000 - - runner = adaptive.Runner(learner, goal=goal(25_000)) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - runner.live_plot(update_interval=0.1) diff --git a/docs/source/tutorial/tutorial.BalancingLearner.md b/docs/source/tutorial/tutorial.BalancingLearner.md new file mode 100644 index 000000000..4276dd0d4 --- /dev/null +++ b/docs/source/tutorial/tutorial.BalancingLearner.md @@ -0,0 +1,96 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.BalancingLearner` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np +from functools import partial +import random +``` + +The balancing learner is a “meta-learner” that takes a list of learners. +When you request a point from the balancing learner, it will query all of its “children” to figure out which one will give the most improvement. + +The balancing learner can for example be used to implement a poor-man’s 2D learner by using the {class}`~adaptive.Learner1D`. 
+ +```{code-cell} ipython3 +def h(x, offset=0): + a = 0.01 + return x + a**2 / (a**2 + (x - offset) ** 2) + + +learners = [ + adaptive.Learner1D(partial(h, offset=random.uniform(-1, 1)), bounds=(-1, 1)) + for i in range(10) +] + +bal_learner = adaptive.BalancingLearner(learners) +runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners]) +runner.live_plot(plotter=plotter, update_interval=0.1) +``` + +Often one wants to create a set of `learner`s for a cartesian product of parameters. +For that particular case we’ve added a `classmethod` called {class}`~adaptive.BalancingLearner.from_product`. +See how it works below + +```{code-cell} ipython3 +from scipy.special import eval_jacobi + + +def jacobi(x, n, alpha, beta): + return eval_jacobi(n, alpha, beta, x) + + +combos = { + "n": [1, 2, 4, 8], + "alpha": np.linspace(0, 2, 3), + "beta": np.linspace(0, 1, 5), +} + +learner = adaptive.BalancingLearner.from_product( + jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos +) + +runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01) + +# The `cdims` will automatically be set when using `from_product`, so +# `plot()` will return a HoloMap with correctly labeled sliders. +learner.plot().overlay("beta").grid().select(y=(-1, 3)) +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.BalancingLearner.ipynb`** and {download}`tutorial.BalancingLearner.md`. diff --git a/docs/source/tutorial/tutorial.BalancingLearner.rst b/docs/source/tutorial/tutorial.BalancingLearner.rst deleted file mode 100644 index ab139e9f5..000000000 --- a/docs/source/tutorial/tutorial.BalancingLearner.rst +++ /dev/null @@ -1,82 +0,0 @@ -Tutorial `~adaptive.BalancingLearner` -------------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.BalancingLearner` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import holoviews as hv - import numpy as np - from functools import partial - import random - -The balancing learner is a “meta-learner” that takes a list of learners. -When you request a point from the balancing learner, it will query all -of its “children” to figure out which one will give the most -improvement. - -The balancing learner can for example be used to implement a poor-man’s -2D learner by using the `~adaptive.Learner1D`. - -.. jupyter-execute:: - - def h(x, offset=0): - a = 0.01 - return x + a**2 / (a**2 + (x - offset)**2) - - learners = [adaptive.Learner1D(partial(h, offset=random.uniform(-1, 1)), - bounds=(-1, 1)) for i in range(10)] - - bal_learner = adaptive.BalancingLearner(learners) - runner = adaptive.Runner(bal_learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. 
jupyter-execute:: - - plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners]) - runner.live_plot(plotter=plotter, update_interval=0.1) - -Often one wants to create a set of ``learner``\ s for a cartesian -product of parameters. For that particular case we’ve added a -``classmethod`` called `~adaptive.BalancingLearner.from_product`. -See how it works below - -.. jupyter-execute:: - - from scipy.special import eval_jacobi - - def jacobi(x, n, alpha, beta): return eval_jacobi(n, alpha, beta, x) - - combos = { - 'n': [1, 2, 4, 8], - 'alpha': np.linspace(0, 2, 3), - 'beta': np.linspace(0, 1, 5), - } - - learner = adaptive.BalancingLearner.from_product( - jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos) - - runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01) - - # The `cdims` will automatically be set when using `from_product`, so - # `plot()` will return a HoloMap with correctly labeled sliders. - learner.plot().overlay('beta').grid().select(y=(-1, 3)) diff --git a/docs/source/tutorial/tutorial.DataSaver.md b/docs/source/tutorial/tutorial.DataSaver.md new file mode 100644 index 000000000..13a6666ce --- /dev/null +++ b/docs/source/tutorial/tutorial.DataSaver.md @@ -0,0 +1,81 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.DataSaver` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() +``` + +If the function that you want to learn returns a value along with some metadata, you can wrap your learner in an {class}`adaptive.DataSaver`. + +In the following example the function to be learned returns its result and the execution time in a dictionary: + +```{code-cell} ipython3 +from operator import itemgetter + + +def f_dict(x): + """The function evaluation takes roughly the time we `sleep`.""" + import random + from time import sleep + + waiting_time = random.random() + sleep(waiting_time) + a = 0.01 + y = x + a**2 / (a**2 + x**2) + return {"y": y, "waiting_time": waiting_time} + + +# Create the learner with the function that returns a 'dict' +# This learner cannot be run directly, as Learner1D does not know what to do with the 'dict' +_learner = adaptive.Learner1D(f_dict, bounds=(-1, 1)) + +# Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn +learner = adaptive.DataSaver(_learner, arg_picker=itemgetter("y")) +``` + +`learner.learner` is the original learner, so `learner.learner.loss()` will call the correct loss method. + +```{code-cell} ipython3 +runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.1) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1) +``` + +Now the `DataSavingLearner` will have an dictionary attribute `extra_data` that has `x` as key and the data that was returned by `learner.function` as values. 
+ +```{code-cell} ipython3 +learner.extra_data +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.DataSaver.ipynb`** and {download}`tutorial.DataSaver.md`. diff --git a/docs/source/tutorial/tutorial.DataSaver.rst b/docs/source/tutorial/tutorial.DataSaver.rst deleted file mode 100644 index 0ba5dbb72..000000000 --- a/docs/source/tutorial/tutorial.DataSaver.rst +++ /dev/null @@ -1,73 +0,0 @@ -Tutorial `~adaptive.DataSaver` ------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.DataSaver` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - -If the function that you want to learn returns a value along with some -metadata, you can wrap your learner in an `adaptive.DataSaver`. - -In the following example the function to be learned returns its result -and the execution time in a dictionary: - -.. jupyter-execute:: - - from operator import itemgetter - - def f_dict(x): - """The function evaluation takes roughly the time we `sleep`.""" - import random - from time import sleep - - waiting_time = random.random() - sleep(waiting_time) - a = 0.01 - y = x + a**2 / (a**2 + x**2) - return {'y': y, 'waiting_time': waiting_time} - - # Create the learner with the function that returns a 'dict' - # This learner cannot be run directly, as Learner1D does not know what to do with the 'dict' - _learner = adaptive.Learner1D(f_dict, bounds=(-1, 1)) - - # Wrapping the learner with 'adaptive.DataSaver' and tell it which key it needs to learn - learner = adaptive.DataSaver(_learner, arg_picker=itemgetter('y')) - -``learner.learner`` is the original learner, so -``learner.learner.loss()`` will call the correct loss method. - -.. jupyter-execute:: - - runner = adaptive.Runner(learner, goal=lambda l: l.learner.loss() < 0.1) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1) - -Now the ``DataSavingLearner`` will have an dictionary attribute -``extra_data`` that has ``x`` as key and the data that was returned by -``learner.function`` as values. - -.. jupyter-execute:: - - learner.extra_data diff --git a/docs/source/tutorial/tutorial.IntegratorLearner.md b/docs/source/tutorial/tutorial.IntegratorLearner.md new file mode 100644 index 000000000..0686344c9 --- /dev/null +++ b/docs/source/tutorial/tutorial.IntegratorLearner.md @@ -0,0 +1,97 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.IntegratorLearner` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np +``` + +This learner learns a 1D function and calculates the integral and error of the integral with it. 
+It is based on Pedro Gonnet’s [implementation](https://www.academia.edu/1976055/Adaptive_quadrature_re-revisited). + +Let’s try the following function with cusps (that is difficult to integrate): + +```{code-cell} ipython3 +def f24(x): + return np.floor(np.exp(x)) + + +xs = np.linspace(0, 3, 200) +hv.Scatter((xs, [f24(x) for x in xs])) +``` + +Just to prove that this really is a difficult to integrate function, let’s try a familiar function integrator `scipy.integrate.quad`, which will give us warnings that it encounters difficulties (if we run it in a notebook.) + +```{code-cell} ipython3 +import scipy.integrate + +scipy.integrate.quad(f24, 0, 3) +``` + +We initialize a learner again and pass the bounds and relative tolerance we want to reach. +Then in the {class}`~adaptive.Runner` we pass `goal=lambda l: l.done()` where `learner.done()` is `True` when the relative tolerance has been reached. + +```{code-cell} ipython3 +from adaptive.runner import SequentialExecutor + +learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8) + +# We use a SequentialExecutor, which runs the function to be learned in +# *this* process only. This means we don't pay +# the overhead of evaluating the function in another process. +runner = adaptive.Runner( + learner, executor=SequentialExecutor(), goal=lambda l: l.done() +) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +Now we could do the live plotting again, but lets just wait untill the +runner is done. + +```{code-cell} ipython3 +if not runner.task.done(): + raise RuntimeError( + "Wait for the runner to finish before executing the cells below!" + ) +``` + +```{code-cell} ipython3 +print( + "The integral value is {} with the corresponding error of {}".format( + learner.igral, learner.err + ) +) +learner.plot() +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.IntegratorLearner.ipynb`** and {download}`tutorial.IntegratorLearner.md`. diff --git a/docs/source/tutorial/tutorial.IntegratorLearner.rst b/docs/source/tutorial/tutorial.IntegratorLearner.rst deleted file mode 100644 index 5e287ddaf..000000000 --- a/docs/source/tutorial/tutorial.IntegratorLearner.rst +++ /dev/null @@ -1,83 +0,0 @@ -Tutorial `~adaptive.IntegratorLearner` --------------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.IntegratorLearner` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import holoviews as hv - import numpy as np - -This learner learns a 1D function and calculates the integral and error -of the integral with it. It is based on Pedro Gonnet’s -`implementation `__. - -Let’s try the following function with cusps (that is difficult to -integrate): - -.. jupyter-execute:: - - def f24(x): - return np.floor(np.exp(x)) - - xs = np.linspace(0, 3, 200) - hv.Scatter((xs, [f24(x) for x in xs])) - -Just to prove that this really is a difficult to integrate function, -let’s try a familiar function integrator `scipy.integrate.quad`, which -will give us warnings that it encounters difficulties (if we run it -in a notebook.) - -.. 
jupyter-execute:: - - import scipy.integrate - scipy.integrate.quad(f24, 0, 3) - -We initialize a learner again and pass the bounds and relative tolerance -we want to reach. Then in the `~adaptive.Runner` we pass -``goal=lambda l: l.done()`` where ``learner.done()`` is ``True`` when -the relative tolerance has been reached. - -.. jupyter-execute:: - - from adaptive.runner import SequentialExecutor - - learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8) - - # We use a SequentialExecutor, which runs the function to be learned in - # *this* process only. This means we don't pay - # the overhead of evaluating the function in another process. - runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.done()) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -Now we could do the live plotting again, but lets just wait untill the -runner is done. - -.. jupyter-execute:: - - if not runner.task.done(): - raise RuntimeError('Wait for the runner to finish before executing the cells below!') - -.. jupyter-execute:: - - print('The integral value is {} with the corresponding error of {}'.format(learner.igral, learner.err)) - learner.plot() diff --git a/docs/source/tutorial/tutorial.Learner1D.md b/docs/source/tutorial/tutorial.Learner1D.md new file mode 100644 index 000000000..3b4c20363 --- /dev/null +++ b/docs/source/tutorial/tutorial.Learner1D.md @@ -0,0 +1,205 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +(TutorialLearner1D)= +# Tutorial {class}`~adaptive.Learner1D` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import numpy as np +from functools import partial +import random +``` + +## scalar output: `f:ℝ → ℝ` + +We start with the most common use-case: sampling a 1D function `f: ℝ → ℝ`. + +We will use the following function, which is a smooth (linear) background with a sharp peak at a random location: + +```{code-cell} ipython3 +offset = random.uniform(-0.5, 0.5) + + +def f(x, offset=offset, wait=True): + from time import sleep + from random import random + + a = 0.01 + if wait: + sleep(random() / 10) + return x + a**2 / (a**2 + (x - offset) ** 2) +``` + +We start by initializing a 1D “learner”, which will suggest points to evaluate, and adapt its suggestions as more and more points are evaluated. + +```{code-cell} ipython3 +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +``` + +Next we create a “runner” that will request points from the learner and evaluate ‘f’ on them. + +By default on Unix-like systems the runner will evaluate the points in parallel using local processes {class}`concurrent.futures.ProcessPoolExecutor`. + +On Windows systems the runner will use a {class}`loky.get_reusable_executor`. +A {class}`~concurrent.futures.ProcessPoolExecutor` cannot be used on Windows for reasons. + +```{code-cell} ipython3 +# The end condition is when the "loss" is less than 0.1. In the context of the +# 1D learner this means that we will resolve features in 'func' with width 0.1 or wider. 
+runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +When instantiated in a Jupyter notebook the runner does its job in the background and does not block the IPython kernel. +We can use this to create a plot that updates as new data arrives: + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +runner.live_plot(update_interval=0.1) +``` + +We can now compare the adaptive sampling to a homogeneous sampling with the same number of points: + +```{code-cell} ipython3 +if not runner.task.done(): + raise RuntimeError( + "Wait for the runner to finish before executing the cells below!" + ) +``` + +```{code-cell} ipython3 +learner2 = adaptive.Learner1D(f, bounds=learner.bounds) + +xs = np.linspace(*learner.bounds, len(learner.data)) +learner2.tell_many(xs, map(partial(f, wait=False), xs)) + +learner.plot() + learner2.plot() +``` + +## vector output: `f:ℝ → ℝ^N` + +Sometimes you may want to learn a function with vector output: + +```{code-cell} ipython3 +random.seed(0) +offsets = [random.uniform(-0.8, 0.8) for _ in range(3)] + +# sharp peaks at random locations in the domain +def f_levels(x, offsets=offsets): + a = 0.01 + return np.array( + [offset + x + a**2 / (a**2 + (x - offset) ** 2) for offset in offsets] + ) +``` + +`adaptive` has you covered! +The `Learner1D` can be used for such functions: + +```{code-cell} ipython3 +learner = adaptive.Learner1D(f_levels, bounds=(-1, 1)) +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +runner.live_plot(update_interval=0.1) +``` + +## Looking at curvature + +By default `adaptive` will sample more points where the (normalized) euclidean distance between the neighboring points is large. +You may achieve better results sampling more points in regions with high curvature. +To do this, you need to tell the learner to look at the curvature by specifying `loss_per_interval`. + +```{code-cell} ipython3 +from adaptive.learner.learner1D import ( + curvature_loss_function, + uniform_loss, + default_loss, +) + +curvature_loss = curvature_loss_function() +learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss) +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +runner.live_plot(update_interval=0.1) +``` + +We may see the difference of homogeneous sampling vs only one interval vs including the nearest neighboring intervals in this plot. +We will look at 100 points. + +```{code-cell} ipython3 +def sin_exp(x): + from math import exp, sin + + return sin(15 * x) * exp(-(x**2) * 2) + + +learner_h = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=uniform_loss) +learner_1 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=default_loss) +learner_2 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=curvature_loss) + +npoints_goal = lambda l: l.npoints >= 100 +# adaptive.runner.simple is a non parallel blocking runner. 
+adaptive.runner.simple(learner_h, goal=npoints_goal) +adaptive.runner.simple(learner_1, goal=npoints_goal) +adaptive.runner.simple(learner_2, goal=npoints_goal) + +( + learner_h.plot().relabel("homogeneous") + + learner_1.plot().relabel("euclidean loss") + + learner_2.plot().relabel("curvature loss") +).cols(2) +``` + +More info about using custom loss functions can be found in {ref}`Custom adaptive logic for 1D and 2D`. + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.Learner1D.ipynb`** and {download}`tutorial.Learner1D.md`. diff --git a/docs/source/tutorial/tutorial.Learner1D.rst b/docs/source/tutorial/tutorial.Learner1D.rst deleted file mode 100644 index 490ed6067..000000000 --- a/docs/source/tutorial/tutorial.Learner1D.rst +++ /dev/null @@ -1,196 +0,0 @@ -Tutorial `~adaptive.Learner1D` ------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.Learner1D` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import numpy as np - from functools import partial - import random - -scalar output: ``f:ℝ → ℝ`` -.......................... - -We start with the most common use-case: sampling a 1D function -:math:`\ f: ℝ → ℝ`. - -We will use the following function, which is a smooth (linear) -background with a sharp peak at a random location: - -.. jupyter-execute:: - - offset = random.uniform(-0.5, 0.5) - - def f(x, offset=offset, wait=True): - from time import sleep - from random import random - - a = 0.01 - if wait: - sleep(random() / 10) - return x + a**2 / (a**2 + (x - offset)**2) - -We start by initializing a 1D “learner”, which will suggest points to -evaluate, and adapt its suggestions as more and more points are -evaluated. - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - -Next we create a “runner” that will request points from the learner and -evaluate ‘f’ on them. - -By default on Unix-like systems the runner will evaluate the points in -parallel using local processes `concurrent.futures.ProcessPoolExecutor`. - -On Windows systems the runner will use a `loky.get_reusable_executor`. -A `~concurrent.futures.ProcessPoolExecutor` cannot be used on Windows for reasons. - -.. jupyter-execute:: - - # The end condition is when the "loss" is less than 0.1. In the context of the - # 1D learner this means that we will resolve features in 'func' with width 0.1 or wider. - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -When instantiated in a Jupyter notebook the runner does its job in the -background and does not block the IPython kernel. We can use this to -create a plot that updates as new data arrives: - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) - -We can now compare the adaptive sampling to a homogeneous sampling with -the same number of points: - -.. jupyter-execute:: - - if not runner.task.done(): - raise RuntimeError('Wait for the runner to finish before executing the cells below!') - -.. 
jupyter-execute:: - - learner2 = adaptive.Learner1D(f, bounds=learner.bounds) - - xs = np.linspace(*learner.bounds, len(learner.data)) - learner2.tell_many(xs, map(partial(f, wait=False), xs)) - - learner.plot() + learner2.plot() - - -vector output: ``f:ℝ → ℝ^N`` -............................ - -Sometimes you may want to learn a function with vector output: - -.. jupyter-execute:: - - random.seed(0) - offsets = [random.uniform(-0.8, 0.8) for _ in range(3)] - - # sharp peaks at random locations in the domain - def f_levels(x, offsets=offsets): - a = 0.01 - return np.array([offset + x + a**2 / (a**2 + (x - offset)**2) - for offset in offsets]) - -``adaptive`` has you covered! The ``Learner1D`` can be used for such -functions: - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f_levels, bounds=(-1, 1)) - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) - - -Looking at curvature -.................... - -By default ``adaptive`` will sample more points where the (normalized) -euclidean distance between the neighboring points is large. -You may achieve better results sampling more points in regions with high -curvature. To do this, you need to tell the learner to look at the curvature -by specifying ``loss_per_interval``. - -.. jupyter-execute:: - - from adaptive.learner.learner1D import (curvature_loss_function, - uniform_loss, - default_loss) - curvature_loss = curvature_loss_function() - learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=curvature_loss) - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) - -We may see the difference of homogeneous sampling vs only one interval vs -including nearest neighboring intervals in this plot: We will look at 100 points. - -.. jupyter-execute:: - - def sin_exp(x): - from math import exp, sin - return sin(15 * x) * exp(-x**2*2) - - learner_h = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=uniform_loss) - learner_1 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=default_loss) - learner_2 = adaptive.Learner1D(sin_exp, (-1, 1), loss_per_interval=curvature_loss) - - npoints_goal = lambda l: l.npoints >= 100 - # adaptive.runner.simple is a non parallel blocking runner. - adaptive.runner.simple(learner_h, goal=npoints_goal) - adaptive.runner.simple(learner_1, goal=npoints_goal) - adaptive.runner.simple(learner_2, goal=npoints_goal) - - (learner_h.plot().relabel('homogeneous') - + learner_1.plot().relabel('euclidean loss') - + learner_2.plot().relabel('curvature loss')).cols(2) - -More info about using custom loss functions can be found -in :ref:`Custom adaptive logic for 1D and 2D`. 
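Beyond the built-in losses imported in the tutorial, `loss_per_interval` accepts any callable of the same form. Below is a minimal sketch of a custom loss, assuming the 1D loss signature takes an interval's two x-values and the corresponding y-values, as the built-in losses in `adaptive.learner.learner1D` do; the weighting chosen here is purely illustrative.

```python
import adaptive


def width_and_slope_loss(xs, ys):
    # Assumed Learner1D loss signature: `xs` are the two x-values bounding an
    # interval and `ys` the corresponding function values (the learner
    # typically rescales both before calling the loss).
    dx = xs[1] - xs[0]
    dy = ys[1] - ys[0]
    # Weigh wide intervals and steep intervals roughly equally.
    return dx * (1 + abs(dy))


learner = adaptive.Learner1D(
    lambda x: x**3 - x, bounds=(-1, 1), loss_per_interval=width_and_slope_loss
)
adaptive.runner.simple(learner, goal=lambda l: l.npoints >= 100)
```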
diff --git a/docs/source/tutorial/tutorial.Learner2D.md b/docs/source/tutorial/tutorial.Learner2D.md new file mode 100644 index 000000000..c2f6ddba5 --- /dev/null +++ b/docs/source/tutorial/tutorial.Learner2D.md @@ -0,0 +1,89 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.Learner2D` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive +import holoviews as hv +import numpy as np + +from functools import partial +adaptive.notebook_extension() +``` + +Besides 1D functions, we can also learn 2D functions: $f: ℝ^2 → ℝ$. + +```{code-cell} ipython3 +def ring(xy, wait=True): + import numpy as np + from time import sleep + from random import random + + if wait: + sleep(random() / 10) + x, y = xy + a = 0.2 + return x + np.exp(-((x**2 + y**2 - 0.75**2) ** 2) / a**4) + + +learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)]) +``` + +```{code-cell} ipython3 +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +def plot(learner): + plot = learner.plot(tri_alpha=0.2) + return (plot.Image + plot.EdgePaths.I + plot).cols(2) + + +runner.live_plot(plotter=plot, update_interval=0.1) +``` + +```{code-cell} ipython3 +import itertools + +# Create a learner and add data on homogeneous grid, so that we can plot it +learner2 = adaptive.Learner2D(ring, bounds=learner.bounds) +n = int(learner.npoints**0.5) +xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds] +xys = list(itertools.product(xs, ys)) +learner2.tell_many(xys, map(partial(ring, wait=False), xys)) + +( + learner2.plot(n).relabel("Homogeneous grid") + + learner.plot().relabel("With adaptive") + + learner2.plot(n, tri_alpha=0.4) + + learner.plot(tri_alpha=0.4) +).cols(2).opts(hv.opts.EdgePaths(color="w")) +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.Learner2D.ipynb`** and {download}`tutorial.Learner2D.md`. diff --git a/docs/source/tutorial/tutorial.Learner2D.rst b/docs/source/tutorial/tutorial.Learner2D.rst deleted file mode 100644 index a5c9f1831..000000000 --- a/docs/source/tutorial/tutorial.Learner2D.rst +++ /dev/null @@ -1,74 +0,0 @@ -Tutorial `~adaptive.Learner2D` ------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.Learner2D` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import numpy as np - from functools import partial - -Besides 1D functions, we can also learn 2D functions: -:math:`\ f: ℝ^2 → ℝ`. - -.. 
jupyter-execute:: - - def ring(xy, wait=True): - import numpy as np - from time import sleep - from random import random - if wait: - sleep(random()/10) - x, y = xy - a = 0.2 - return x + np.exp(-(x**2 + y**2 - 0.75**2)**2/a**4) - - learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)]) - -.. jupyter-execute:: - - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - def plot(learner): - plot = learner.plot(tri_alpha=0.2) - return (plot.Image + plot.EdgePaths.I + plot).cols(2) - - runner.live_plot(plotter=plot, update_interval=0.1) - -.. jupyter-execute:: - - %%opts EdgePaths (color='w') - - import itertools - - # Create a learner and add data on homogeneous grid, so that we can plot it - learner2 = adaptive.Learner2D(ring, bounds=learner.bounds) - n = int(learner.npoints**0.5) - xs, ys = [np.linspace(*bounds, n) for bounds in learner.bounds] - xys = list(itertools.product(xs, ys)) - learner2.tell_many(xys, map(partial(ring, wait=False), xys)) - - (learner2.plot(n).relabel('Homogeneous grid') + learner.plot().relabel('With adaptive') + - learner2.plot(n, tri_alpha=0.4) + learner.plot(tri_alpha=0.4)).cols(2) diff --git a/docs/source/tutorial/tutorial.LearnerND.md b/docs/source/tutorial/tutorial.LearnerND.md new file mode 100644 index 000000000..aca8f187e --- /dev/null +++ b/docs/source/tutorial/tutorial.LearnerND.md @@ -0,0 +1,129 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +execution: + timeout: 300 +--- +# Tutorial {class}`~adaptive.LearnerND` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.LearnerND.ipynb`** and {download}`tutorial.LearnerND.md`. + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np + + +def dynamicmap_to_holomap(dm): + # XXX: change when https://github.com/ioam/holoviews/issues/3085 + # is fixed. + vals = {d.name: d.values for d in dm.dimensions() if d.values} + return hv.HoloMap(dm.select(**vals)) +``` + +Besides 1 and 2 dimensional functions, we can also learn N-D functions: $f: ℝ^N → ℝ^M, N \ge 2, M \ge 1$. + +Do keep in mind the speed and [effectiveness](https://en.wikipedia.org/wiki/Curse_of_dimensionality) of the learner drops quickly with increasing number of dimensions. + +```{code-cell} ipython3 +def sphere(xyz): + x, y, z = xyz + a = 0.4 + return x + z**2 + np.exp(-((x**2 + y**2 + z**2 - 0.75**2) ** 2) / a**4) + + +learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)]) +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1e-3) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! 
+```
+
+```{code-cell} ipython3
+runner.live_info()
+```
+
+Let’s plot 2D slices of the 3D function
+
+```{code-cell} ipython3
+def plot_cut(x, direction, learner=learner):
+    cut_mapping = {"XYZ".index(direction): x}
+    return learner.plot_slice(cut_mapping, n=100)
+
+
+dm = hv.DynamicMap(plot_cut, kdims=["val", "direction"])
+dm = dm.redim.values(val=np.linspace(-1, 1, 11), direction=list("XYZ"))
+
+# In a notebook one would run `dm` however we want a statically generated
+# html, so we use a HoloMap to display it here
+dynamicmap_to_holomap(dm)
+```
+
+Or we can plot 1D slices
+
+```{code-cell} ipython3
+def plot_cut(x1, x2, directions, learner=learner):
+    cut_mapping = {"xyz".index(d): x for d, x in zip(directions, [x1, x2])}
+    return learner.plot_slice(cut_mapping)
+
+
+dm = hv.DynamicMap(plot_cut, kdims=["v1", "v2", "directions"])
+dm = dm.redim.values(
+    v1=np.linspace(-1, 1, 6), v2=np.linspace(-1, 1, 6), directions=["xy", "xz", "yz"]
+)
+
+# In a notebook one would run `dm` however we want a statically generated
+# html, so we use a HoloMap to display it here
+dynamicmap_to_holomap(dm).options(hv.opts.Path(framewise=True))
+```
+
+The plots show some wobbles even though the original function is smooth; this is because the learner chooses points in 3 dimensions, so the simplices are not in the same face as the lines we try to interpolate.
+However, as always, when you sample more points the graph will become gradually smoother.
+
+## Using any convex shape as domain
+
+Suppose you do not simply want to sample your function on a square (in 2D) or in a cube (in 3D). The LearnerND supports using a `scipy.spatial.ConvexHull` as your domain.
+This is best illustrated in the following example.
+
+Suppose you would like to sample your function in a cube split in half diagonally.
+You could use the following code as an example:
+
+```{code-cell} ipython3
+import scipy
+
+def f(xyz):
+    x, y, z = xyz
+    return x**4 + y**4 + z**4 - (x**2 + y**2 + z**2) ** 2
+
+
+# set the bound points, you can change this to be any shape
+b = [(-1, -1, -1), (-1, 1, -1), (-1, -1, 1), (-1, 1, 1), (1, 1, -1), (1, -1, -1)]
+
+# you have to convert the points into a scipy.spatial.ConvexHull
+hull = scipy.spatial.ConvexHull(b)
+
+learner = adaptive.LearnerND(f, hull)
+adaptive.BlockingRunner(learner, goal=lambda l: l.npoints > 2000)
+
+learner.plot_isosurface(-0.5)
+```
diff --git a/docs/source/tutorial/tutorial.LearnerND.rst b/docs/source/tutorial/tutorial.LearnerND.rst
deleted file mode 100644
index e4915e86c..000000000
--- a/docs/source/tutorial/tutorial.LearnerND.rst
+++ /dev/null
@@ -1,125 +0,0 @@
-Tutorial `~adaptive.LearnerND`
-------------------------------
-
-.. note::
-    Because this documentation consists of static html, the ``live_plot``
-    and ``live_info`` widget is not live. Download the notebook
-    in order to see the real behaviour.
-
-.. seealso::
-    The complete source code of this tutorial can be found in
-    :jupyter-download:notebook:`tutorial.LearnerND`
-
-.. jupyter-execute::
-    :hide-code:
-
-    import adaptive
-    adaptive.notebook_extension()
-
-    import holoviews as hv
-    import numpy as np
-
-    def dynamicmap_to_holomap(dm):
-        # XXX: change when https://github.com/ioam/holoviews/issues/3085
-        # is fixed.
-        vals = {d.name: d.values for d in dm.dimensions() if d.values}
-        return hv.HoloMap(dm.select(**vals))
-
-Besides 1 and 2 dimensional functions, we can also learn N-D functions:
-:math:`\ f: ℝ^N → ℝ^M, N \ge 2, M \ge 1`.
- -Do keep in mind the speed and -`effectiveness `__ -of the learner drops quickly with increasing number of dimensions. - -.. jupyter-execute:: - - def sphere(xyz): - x, y, z = xyz - a = 0.4 - return x + z**2 + np.exp(-(x**2 + y**2 + z**2 - 0.75**2)**2/a**4) - - learner = adaptive.LearnerND(sphere, bounds=[(-1, 1), (-1, 1), (-1, 1)]) - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 1e-3) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -Let’s plot 2D slices of the 3D function - -.. jupyter-execute:: - - def plot_cut(x, direction, learner=learner): - cut_mapping = {'XYZ'.index(direction): x} - return learner.plot_slice(cut_mapping, n=100) - - dm = hv.DynamicMap(plot_cut, kdims=['val', 'direction']) - dm = dm.redim.values(val=np.linspace(-1, 1, 11), direction=list('XYZ')) - - # In a notebook one would run `dm` however we want a statically generated - # html, so we use a HoloMap to display it here - dynamicmap_to_holomap(dm) - -Or we can plot 1D slices - -.. jupyter-execute:: - - %%opts Path {+framewise} - def plot_cut(x1, x2, directions, learner=learner): - cut_mapping = {'xyz'.index(d): x for d, x in zip(directions, [x1, x2])} - return learner.plot_slice(cut_mapping) - - dm = hv.DynamicMap(plot_cut, kdims=['v1', 'v2', 'directions']) - dm = dm.redim.values(v1=np.linspace(-1, 1, 6), - v2=np.linspace(-1, 1, 6), - directions=['xy', 'xz', 'yz']) - - # In a notebook one would run `dm` however we want a statically generated - # html, so we use a HoloMap to display it here - dynamicmap_to_holomap(dm) - -The plots show some wobbles while the original function was smooth, this -is a result of the fact that the learner chooses points in 3 dimensions -and the simplices are not in the same face as we try to interpolate our -lines. However, as always, when you sample more points the graph will -become gradually smoother. - -Using any convex shape as domain -................................ - -Suppose you do not simply want to sample your function on a square (in 2D) or in -a cube (in 3D). The LearnerND supports using a `scipy.spatial.ConvexHull` as -your domain. This is best illustrated in the following example. - -Suppose you would like to sample you function in a cube split in half diagonally. -You could use the following code as an example: - -.. jupyter-execute:: - - import scipy - - def f(xyz): - x, y, z = xyz - return x**4 + y**4 + z**4 - (x**2+y**2+z**2)**2 - - # set the bound points, you can change this to be any shape - b = [(-1, -1, -1), - (-1, 1, -1), - (-1, -1, 1), - (-1, 1, 1), - ( 1, 1, -1), - ( 1, -1, -1)] - - # you have to convert the points into a scipy.spatial.ConvexHull - hull = scipy.spatial.ConvexHull(b) - - learner = adaptive.LearnerND(f, hull) - adaptive.BlockingRunner(learner, goal=lambda l: l.npoints > 2000) - - learner.plot_isosurface(-0.5) diff --git a/docs/source/tutorial/tutorial.SKOptLearner.md b/docs/source/tutorial/tutorial.SKOptLearner.md new file mode 100644 index 000000000..fb82bca17 --- /dev/null +++ b/docs/source/tutorial/tutorial.SKOptLearner.md @@ -0,0 +1,71 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.SKOptLearner` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. 
+Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np +``` + +We have wrapped the `Optimizer` class from [scikit-optimize](https://github.com/scikit-optimize/scikit-optimize), to show how existing libraries can be integrated with `adaptive`. + +The {class}`~adaptive.SKOptLearner` attempts to “optimize” the given function `g` (i.e. find the global minimum of `g` in the window of interest). + +Here we use the same example as in the `scikit-optimize` [tutorial](https://github.com/scikit-optimize/scikit-optimize/blob/master/examples/ask-and-tell.ipynb). +Although `SKOptLearner` can optimize functions of arbitrary dimensionality, we can only plot the learner if a 1D function is being learned. + +```{code-cell} ipython3 +def F(x, noise_level=0.1): + return np.sin(5 * x) * (1 - np.tanh(x**2)) + np.random.randn() * noise_level +``` + +```{code-cell} ipython3 +learner = adaptive.SKOptLearner( + F, + dimensions=[(-2.0, 2.0)], + base_estimator="GP", + acq_func="gp_hedge", + acq_optimizer="lbfgs", +) +runner = adaptive.Runner(learner, ntasks=1, goal=lambda l: l.npoints > 40) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +xs = np.linspace(*learner.space.bounds[0]) +to_learn = hv.Curve((xs, [F(x, 0) for x in xs]), label="to learn") + +plot = runner.live_plot().relabel("prediction", depth=2) * to_learn +plot.opts(legend_position="top") +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.SKOptLearner.ipynb`** and {download}`tutorial.SKOptLearner.md`. diff --git a/docs/source/tutorial/tutorial.SKOptLearner.rst b/docs/source/tutorial/tutorial.SKOptLearner.rst deleted file mode 100644 index fb35025c0..000000000 --- a/docs/source/tutorial/tutorial.SKOptLearner.rst +++ /dev/null @@ -1,65 +0,0 @@ -Tutorial `~adaptive.SKOptLearner` ---------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.SKOptLearner` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import holoviews as hv - import numpy as np - -We have wrapped the ``Optimizer`` class from -`scikit-optimize `__, -to show how existing libraries can be integrated with ``adaptive``. - -The ``SKOptLearner`` attempts to “optimize” the given function ``g`` -(i.e. find the global minimum of ``g`` in the window of interest). - -Here we use the same example as in the ``scikit-optimize`` -`tutorial `__. -Although ``SKOptLearner`` can optimize functions of arbitrary -dimensionality, we can only plot the learner if a 1D function is being -learned. - -.. jupyter-execute:: - - def F(x, noise_level=0.1): - return (np.sin(5 * x) * (1 - np.tanh(x ** 2)) - + np.random.randn() * noise_level) - -.. jupyter-execute:: - - learner = adaptive.SKOptLearner(F, dimensions=[(-2., 2.)], - base_estimator="GP", - acq_func="gp_hedge", - acq_optimizer="lbfgs", - ) - runner = adaptive.Runner(learner, ntasks=1, goal=lambda l: l.npoints > 40) - -.. 
jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - %%opts Overlay [legend_position='top'] - xs = np.linspace(*learner.space.bounds[0]) - to_learn = hv.Curve((xs, [F(x, 0) for x in xs]), label='to learn') - - runner.live_plot().relabel('prediction', depth=2) * to_learn diff --git a/docs/source/tutorial/tutorial.SequenceLearner.md b/docs/source/tutorial/tutorial.SequenceLearner.md new file mode 100644 index 000000000..0d6bb71cc --- /dev/null +++ b/docs/source/tutorial/tutorial.SequenceLearner.md @@ -0,0 +1,77 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial {class}`~adaptive.SequenceLearner` + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import holoviews as hv +import numpy as np +``` + +This learner will learn a sequence. It simply returns the points in the provided sequence when asked. + +This is useful when your problem cannot be formulated in terms of another adaptive learner, but you still want to use Adaptive's routines to run, (periodically) save, and plot. + +```{code-cell} ipython3 +from adaptive import SequenceLearner + + +def f(x): + return int(x) ** 2 + + +seq = np.linspace(-15, 15, 1000) +learner = SequenceLearner(f, seq) + +runner = adaptive.Runner(learner, SequenceLearner.done) +# that goal is same as `lambda learner: learner.done()` +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +def plotter(learner): + data = learner.data if learner.data else [] + return hv.Scatter(data) + + +runner.live_plot(plotter=plotter) +``` + +`learner.data` contains a dictionary that maps the index of the point of `learner.sequence` to the value at that point. + +To get the values in the same order as the input sequence (`learner.sequence`) use + +```{code-cell} ipython3 +result = learner.result() +print(result[:10]) # print the 10 first values +``` + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.SequenceLearner.ipynb`** and {download}`tutorial.SequenceLearner.md`. diff --git a/docs/source/tutorial/tutorial.SequenceLearner.rst b/docs/source/tutorial/tutorial.SequenceLearner.rst deleted file mode 100644 index b5c81ac73..000000000 --- a/docs/source/tutorial/tutorial.SequenceLearner.rst +++ /dev/null @@ -1,66 +0,0 @@ -Tutorial `~adaptive.SequenceLearner` ---------------------------------- - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.SequenceLearner` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import holoviews as hv - import numpy as np - -This learner will learn a sequence. It simply returns -the points in the provided sequence when asked. 
- -This is useful when your problem cannot be formulated in terms of -another adaptive learner, but you still want to use Adaptive's -routines to run, (periodically) save, and plot. - -.. jupyter-execute:: - - from adaptive import SequenceLearner - - def f(x): - return int(x) ** 2 - - seq = np.linspace(-15, 15, 1000) - learner = SequenceLearner(f, seq) - - runner = adaptive.Runner(learner, SequenceLearner.done) - # that goal is same as `lambda learner: learner.done()` - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - def plotter(learner): - data = learner.data if learner.data else [] - return hv.Scatter(data) - - runner.live_plot(plotter=plotter) - -``learner.data`` contains a dictionary that maps the index of the point of ``learner.sequence`` to the value at that point. - -To get the values in the same order as the input sequence (``learner.sequence``) use - -.. jupyter-execute:: - - result = learner.result() - print(result[:10]) # print the 10 first values diff --git a/docs/source/tutorial/tutorial.advanced-topics.md b/docs/source/tutorial/tutorial.advanced-topics.md new file mode 100644 index 000000000..de3848855 --- /dev/null +++ b/docs/source/tutorial/tutorial.advanced-topics.md @@ -0,0 +1,398 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Advanced Topics + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +import asyncio +from functools import partial +import random + +offset = random.uniform(-0.5, 0.5) + + +def f(x, offset=offset): + a = 0.01 + return x + a**2 / (a**2 + (x - offset) ** 2) +``` + +## Saving and loading learners + +Every learner has a {class}`~adaptive.BaseLearner.save` and {class}`~adaptive.BaseLearner.load` method that can be used to save and load **only** the data of a learner. + +Use the `fname` argument in `learner.save(fname=...)`. + +Or, when using a {class}`~adaptive.BalancingLearner` one can use either a callable that takes the child learner and returns a filename **or** a list of filenames. + +By default the resulting pickle files are compressed, to turn this off use `learner.save(fname=..., compress=False)` + +```{code-cell} ipython3 +# Let's create two learners and run only one. +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +control = adaptive.Learner1D(f, bounds=(-1, 1)) + +# Let's only run the learner +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +fname = "data/example_file.p" +learner.save(fname) +control.load(fname) + +(learner.plot().relabel("saved learner") + control.plot().relabel("loaded learner")) +``` + +Or just (without saving): + +```{code-cell} ipython3 +control = adaptive.Learner1D(f, bounds=(-1, 1)) +control.copy_from(learner) +``` + +One can also periodically save the learner while running in a {class}`~adaptive.Runner`. 
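The callable-filename option for a {class}`~adaptive.BalancingLearner` mentioned above could look roughly like the sketch below. It is only an illustration: the offsets and file names are made up, and we reuse the `f(x, offset)` defined at the top of this page.

```python
from functools import partial

import adaptive

# A BalancingLearner over a few hypothetical offsets of `f`.
learners = [
    adaptive.Learner1D(partial(f, offset=o), bounds=(-1, 1)) for o in (-0.2, 0.0, 0.2)
]
bal_learner = adaptive.BalancingLearner(learners)


def fname(child):
    # Derive a file name from the child learner itself.
    return f"data/offset_{child.function.keywords['offset']}.p"


bal_learner.save(fname)  # one file per child learner
bal_learner.load(fname)  # loads them back in the same way
```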
Periodic saving is done with `runner.start_periodic_saving`:
+
+```{code-cell} ipython3
+def slow_f(x):
+    from time import sleep
+
+    sleep(5)
+    return x
+
+
+learner = adaptive.Learner1D(slow_f, bounds=[0, 1])
+runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 100)
+runner.start_periodic_saving(
+    save_kwargs=dict(fname="data/periodic_example.p"), interval=6
+)
+```
+
+```{code-cell} ipython3
+:tags: [hide-cell]
+
+await asyncio.sleep(6)  # This is not needed in a notebook environment!
+runner.cancel()
+```
+
+```{code-cell} ipython3
+runner.live_info()  # we cancelled it after 6 seconds
+```
+
+```{code-cell} ipython3
+# See the data 6 seconds later with
+#!ls -lah data  # only works on macOS and Linux systems
+```
+
+## A watched pot never boils!
+
+The {class}`adaptive.Runner` does its work in an `asyncio` task that runs concurrently with the IPython kernel when using `adaptive` from a Jupyter notebook.
+This is advantageous because it allows us to do things like live-updating plots; however, it can trip you up if you’re not careful.
+
+Notably: **if you block the IPython kernel, the runner will not do any work**.
+
+For example, if you wanted to wait for a runner to complete, **do not wait in a busy loop**:
+
+```python
+while not runner.task.done():
+    pass
+```
+
+If you do this then **the runner will never finish**.
+
+What should you do if you don’t care about live plotting and just want to run something until it’s done?
+
+The simplest way to accomplish this is to use {class}`adaptive.BlockingRunner`:
+
+```{code-cell} ipython3
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)
+# This will only get run after the runner has finished
+learner.plot()
+```
+
+## Reproducibility
+
+By default `adaptive` runners evaluate the learned function in parallel across several cores.
+The runners are also opportunistic, in that as soon as a result is available they will feed it to the learner and request another point to replace the one that just finished.
+
+Because the order in which computations complete is non-deterministic, the runner behaves in a non-deterministic way.
+Adaptive makes this choice because in many cases the speedup from parallel execution is worth sacrificing the “purity” of exactly reproducible computations.
+
+Nevertheless it is still possible to run a learner in a deterministic way with adaptive.
+
+The simplest way is to use {class}`adaptive.runner.simple` to run your learner:
+
+```{code-cell} ipython3
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+# blocks until completion
+adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01)
+
+learner.plot()
+```
+
+Note that unlike {class}`adaptive.Runner`, {class}`adaptive.runner.simple` *blocks* until it is finished.
+
+If you want determinism but still want to use the non-blocking {class}`adaptive.Runner`, you can use the {class}`adaptive.runner.SequentialExecutor`:
+
+```{code-cell} ipython3
+from adaptive.runner import SequentialExecutor
+
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+runner = adaptive.Runner(
+    learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.01
+)
+```
+
+```{code-cell} ipython3
+:tags: [hide-cell]
+
+await runner.task  # This is not needed in a notebook environment!
+```
+
+```{code-cell} ipython3
+runner.live_info()
+```
+
+```{code-cell} ipython3
+runner.live_plot(update_interval=0.1)
+```
+
+## Cancelling a runner
+
+Sometimes you want to interactively explore a parameter space, have the function evaluated at finer and finer resolution, and manually control when the calculation stops.
+
+If no `goal` is provided to a runner then the runner will run until cancelled.
+
+`runner.live_info()` will provide a button that can be clicked to stop the runner.
+You can also stop the runner programmatically using `runner.cancel()`.
+
+```{code-cell} ipython3
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+runner = adaptive.Runner(learner)
+```
+
+```{code-cell} ipython3
+:tags: [hide-cell]
+
+await asyncio.sleep(0.1)  # This is not needed in the notebook!
+```
+
+```{code-cell} ipython3
+runner.cancel()  # Let's execute this after 0.1 seconds
+```
+
+```{code-cell} ipython3
+runner.live_info()
+```
+
+```{code-cell} ipython3
+runner.live_plot(update_interval=0.1)
+```
+
+```{code-cell} ipython3
+print(runner.status())
+```
+
+## Debugging Problems
+
+Runners work in the background with respect to the IPython kernel, which makes them convenient, but also means that inspecting errors is more difficult because exceptions will not be raised directly in the notebook.
+Often the only indication you will have that something has gone wrong is that nothing will be happening.
+
+Let’s look at the following example, where the function to be learned will raise an exception 10% of the time.
+
+```{code-cell} ipython3
+def will_raise(x):
+    from random import random
+    from time import sleep
+
+    sleep(random())
+    if random() < 0.1:
+        raise RuntimeError("something went wrong!")
+    return x**2
+
+
+learner = adaptive.Learner1D(will_raise, (-1, 1))
+runner = adaptive.Runner(
+    learner
+)  # without 'goal' the runner will run forever unless cancelled
+```
+
+```{code-cell} ipython3
+:tags: [hide-cell]
+
+await asyncio.sleep(4)  # in 4 seconds it will surely have failed
+```
+
+```{code-cell} ipython3
+runner.live_info()
+```
+
+```{code-cell} ipython3
+runner.live_plot()
+```
+
+The above runner should continue forever, but we notice that it stops after a few points are evaluated.
+
+First we should check that the runner has really finished:
+
+```{code-cell} ipython3
+runner.task.done()
+```
+
+If it has indeed finished then we should check the `result` of the runner.
+This should be `None` if the runner stopped successfully.
+If the runner stopped due to an exception then asking for the result will raise the exception with the stack trace:
+
+```{code-cell} ipython3
+:tags: [raises-exception]
+
+runner.task.result()
+```
+
+You can also check `runner.tracebacks`, which is a list of `(point, traceback)` tuples.
+
+```{code-cell} ipython3
+for point, tb in runner.tracebacks:
+    print(f"point: {point}:\n {tb}")
+```
+
+### Logging runners
+
+Runners do their job in the background, which makes introspection quite cumbersome.
+One way to inspect runners is to instantiate one with `log=True`:
+
+```{code-cell} ipython3
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, log=True)
+```
+
+```{code-cell} ipython3
+:tags: [hide-cell]
+
+await runner.task  # This is not needed in a notebook environment!
+```
+
+```{code-cell} ipython3
+runner.live_info()
+```
+
+This gives the runner a `log` attribute, which is a list of the `learner` methods that were called, as well as their arguments.
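For instance, a quick way to see what was recorded is sketched below; it only assumes what is stated above, namely that each entry holds a method name followed by its arguments.

```python
# Peek at the first few recorded learner calls.
for entry in runner.log[:5]:
    method, *args = entry
    print(method, args)
```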
+This is useful because executors typically execute their tasks in a non-deterministic order. + +This can be used with {class}`adaptive.runner.replay_log` to perfom the same set of operations on another runner: + +```{code-cell} ipython3 +reconstructed_learner = adaptive.Learner1D(f, bounds=learner.bounds) +adaptive.runner.replay_log(reconstructed_learner, runner.log) +``` + +```{code-cell} ipython3 +learner.plot().Scatter.I.opts(style=dict(size=6)) * reconstructed_learner.plot() +``` + +## Adding coroutines + +In the following example we'll add a {class}`~asyncio.Task` that times the runner. +This is *only* for demonstration purposes because one can simply check `runner.elapsed_time()` or use the `runner.live_info()` widget to see the time since the runner has started. + +So let's get on with the example. To time the runner you **cannot** simply use + +```python +now = datetime.now() +runner = adaptive.Runner(...) +print(datetime.now() - now) +``` + +because this will be done immediately. Also blocking the kernel with `while not runner.task.done()` will not work because the runner will not do anything when the kernel is blocked. + +Therefore you need to create an `async` function and hook it into the `ioloop` like so: + +```{code-cell} ipython3 +import asyncio + + +async def time(runner): + from datetime import datetime + + now = datetime.now() + await runner.task + return datetime.now() - now + + +ioloop = asyncio.get_event_loop() + +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) + +timer = ioloop.create_task(time(runner)) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +# The result will only be set when the runner is done. +timer.result() +``` + +## Using Runners from a script + +Runners can also be used from a Python script independently of the notebook. + +The simplest way to accomplish this is simply to use the {class}`~adaptive.BlockingRunner`: + +```python +import adaptive + + +def f(x): + return x + + +learner = adaptive.Learner1D(f, (-1, 1)) + +adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.1) +``` + +If you use `asyncio` already in your script and want to integrate `adaptive` into it, then you can use the default {class}`~adaptive.Runner` as you would from a notebook. +If you want to wait for the runner to finish, then you can simply + +```python +await runner.task +``` + +from within a coroutine. + +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.advanced-topics.ipynb`** and {download}`tutorial.advanced-topics.md`. diff --git a/docs/source/tutorial/tutorial.advanced-topics.rst b/docs/source/tutorial/tutorial.advanced-topics.rst deleted file mode 100644 index e6a8e2f1c..000000000 --- a/docs/source/tutorial/tutorial.advanced-topics.rst +++ /dev/null @@ -1,427 +0,0 @@ -Advanced Topics -=============== - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.advanced-topics` - -.. 
jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - import asyncio - from functools import partial - import random - - offset = random.uniform(-0.5, 0.5) - - def f(x, offset=offset): - a = 0.01 - return x + a**2 / (a**2 + (x - offset)**2) - - -Saving and loading learners ---------------------------- - -Every learner has a `~adaptive.BaseLearner.save` and `~adaptive.BaseLearner.load` -method that can be used to save and load **only** the data of a learner. - -Use the ``fname`` argument in ``learner.save(fname=...)``. - -Or, when using a `~adaptive.BalancingLearner` one can use either a callable -that takes the child learner and returns a filename **or** a list of filenames. - -By default the resulting pickle files are compressed, to turn this off -use ``learner.save(fname=..., compress=False)`` - -.. jupyter-execute:: - - # Let's create two learners and run only one. - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - control = adaptive.Learner1D(f, bounds=(-1, 1)) - - # Let's only run the learner - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - fname = 'data/example_file.p' - learner.save(fname) - control.load(fname) - - (learner.plot().relabel('saved learner') - + control.plot().relabel('loaded learner')) - -Or just (without saving): - -.. jupyter-execute:: - - control = adaptive.Learner1D(f, bounds=(-1, 1)) - control.copy_from(learner) - -One can also periodically save the learner while running in a -`~adaptive.Runner`. Use it like: - -.. jupyter-execute:: - - def slow_f(x): - from time import sleep - sleep(5) - return x - - learner = adaptive.Learner1D(slow_f, bounds=[0, 1]) - runner = adaptive.Runner(learner, goal=lambda l: l.npoints > 100) - runner.start_periodic_saving(save_kwargs=dict(fname='data/periodic_example.p'), interval=6) - -.. jupyter-execute:: - :hide-code: - - await asyncio.sleep(6) # This is not needed in a notebook environment! - runner.cancel() - -.. jupyter-execute:: - - runner.live_info() # we cancelled it after 6 seconds - -.. jupyter-execute:: - - # See the data 6 later seconds with - !ls -lah data # only works on macOS and Linux systems - - -A watched pot never boils! --------------------------- - -`adaptive.Runner` does its work in an `asyncio` task that runs -concurrently with the IPython kernel, when using ``adaptive`` from a -Jupyter notebook. This is advantageous because it allows us to do things -like live-updating plots, however it can trip you up if you’re not -careful. - -Notably: **if you block the IPython kernel, the runner will not do any -work**. - -For example if you wanted to wait for a runner to complete, **do not -wait in a busy loop**: - -.. code:: python - - while not runner.task.done(): - pass - -If you do this then **the runner will never finish**. - -What to do if you don’t care about live plotting, and just want to run -something until its done? - -The simplest way to accomplish this is to use -`adaptive.BlockingRunner`: - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01) - # This will only get run after the runner has finished - learner.plot() - -Reproducibility ---------------- - -By default ``adaptive`` runners evaluate the learned function in -parallel across several cores. 
The runners are also opportunistic, in -that as soon as a result is available they will feed it to the learner -and request another point to replace the one that just finished. - -Because the order in which computations complete is non-deterministic, -this means that the runner behaves in a non-deterministic way. Adaptive -makes this choice because in many cases the speedup from parallel -execution is worth sacrificing the “purity” of exactly reproducible -computations. - -Nevertheless it is still possible to run a learner in a deterministic -way with adaptive. - -The simplest way is to use `adaptive.runner.simple` to run your -learner: - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - - # blocks until completion - adaptive.runner.simple(learner, goal=lambda l: l.loss() < 0.01) - - learner.plot() - -Note that unlike `adaptive.Runner`, `adaptive.runner.simple` -*blocks* until it is finished. - -If you want to enable determinism, want to continue using the -non-blocking `adaptive.Runner`, you can use the -`adaptive.runner.SequentialExecutor`: - -.. jupyter-execute:: - - from adaptive.runner import SequentialExecutor - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - - runner = adaptive.Runner(learner, executor=SequentialExecutor(), goal=lambda l: l.loss() < 0.01) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) - -Cancelling a runner -------------------- - -Sometimes you want to interactively explore a parameter space, and want -the function to be evaluated at finer and finer resolution and manually -control when the calculation stops. - -If no ``goal`` is provided to a runner then the runner will run until -cancelled. - -``runner.live_info()`` will provide a button that can be clicked to stop -the runner. You can also stop the runner programatically using -``runner.cancel()``. - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - runner = adaptive.Runner(learner) - -.. jupyter-execute:: - :hide-code: - - await asyncio.sleep(0.1) # This is not needed in the notebook! - -.. jupyter-execute:: - - runner.cancel() # Let's execute this after 0.1 seconds - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - runner.live_plot(update_interval=0.1) - -.. jupyter-execute:: - - print(runner.status()) - -Debugging Problems ------------------- - -Runners work in the background with respect to the IPython kernel, which -makes it convenient, but also means that inspecting errors is more -difficult because exceptions will not be raised directly in the -notebook. Often the only indication you will have that something has -gone wrong is that nothing will be happening. - -Let’s look at the following example, where the function to be learned -will raise an exception 10% of the time. - -.. jupyter-execute:: - - def will_raise(x): - from random import random - from time import sleep - - sleep(random()) - if random() < 0.1: - raise RuntimeError('something went wrong!') - return x**2 - - learner = adaptive.Learner1D(will_raise, (-1, 1)) - runner = adaptive.Runner(learner) # without 'goal' the runner will run forever unless cancelled - - -.. jupyter-execute:: - :hide-code: - - await asyncio.sleep(4) # in 4 seconds it will surely have failed - -.. jupyter-execute:: - - runner.live_info() - -.. 
jupyter-execute:: - - runner.live_plot() - -The above runner should continue forever, but we notice that it stops -after a few points are evaluated. - -First we should check that the runner has really finished: - -.. jupyter-execute:: - - runner.task.done() - -If it has indeed finished then we should check the ``result`` of the -runner. This should be ``None`` if the runner stopped successfully. If -the runner stopped due to an exception then asking for the result will -raise the exception with the stack trace: - -.. jupyter-execute:: - :raises: - - runner.task.result() - - -You can also check ``runner.tracebacks`` which is a list of tuples with -(point, traceback). - -.. jupyter-execute:: - - for point, tb in runner.tracebacks: - print(f'point: {point}:\n {tb}') - -Logging runners -~~~~~~~~~~~~~~~ - -Runners do their job in the background, which makes introspection quite -cumbersome. One way to inspect runners is to instantiate one with -``log=True``: - -.. jupyter-execute:: - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, - log=True) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -This gives a the runner a ``log`` attribute, which is a list of the -``learner`` methods that were called, as well as their arguments. This -is useful because executors typically execute their tasks in a -non-deterministic order. - -This can be used with `adaptive.runner.replay_log` to perfom the same -set of operations on another runner: - -.. jupyter-execute:: - - reconstructed_learner = adaptive.Learner1D(f, bounds=learner.bounds) - adaptive.runner.replay_log(reconstructed_learner, runner.log) - -.. jupyter-execute:: - - learner.plot().Scatter.I.opts(style=dict(size=6)) * reconstructed_learner.plot() - -Adding coroutines ------------------ - -In the following example we'll add a `~asyncio.Task` that times the runner. -This is *only* for demonstration purposes because one can simply -check ``runner.elapsed_time()`` or use the ``runner.live_info()`` -widget to see the time since the runner has started. - -So let's get on with the example. To time the runner -you **cannot** simply use - -.. code:: python - - now = datetime.now() - runner = adaptive.Runner(...) - print(datetime.now() - now) - -because this will be done immediately. Also blocking the kernel with -``while not runner.task.done()`` will not work because the runner will -not do anything when the kernel is blocked. - -Therefore you need to create an ``async`` function and hook it into the -``ioloop`` like so: - -.. jupyter-execute:: - - import asyncio - - async def time(runner): - from datetime import datetime - now = datetime.now() - await runner.task - return datetime.now() - now - - ioloop = asyncio.get_event_loop() - - learner = adaptive.Learner1D(f, bounds=(-1, 1)) - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01) - - timer = ioloop.create_task(time(runner)) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - # The result will only be set when the runner is done. - timer.result() - -Using Runners from a script ---------------------------- - -Runners can also be used from a Python script independently of the -notebook. - -The simplest way to accomplish this is simply to use the -`~adaptive.BlockingRunner`: - -.. 
code:: python - - import adaptive - - def f(x): - return x - - learner = adaptive.Learner1D(f, (-1, 1)) - - adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.1) - -If you use `asyncio` already in your script and want to integrate -``adaptive`` into it, then you can use the default `~adaptive.Runner` as you -would from a notebook. If you want to wait for the runner to finish, -then you can simply - -.. code:: python - - await runner.task - -from within a coroutine. diff --git a/docs/source/tutorial/tutorial.custom_loss.md b/docs/source/tutorial/tutorial.custom_loss.md new file mode 100644 index 000000000..be1f78669 --- /dev/null +++ b/docs/source/tutorial/tutorial.custom_loss.md @@ -0,0 +1,167 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Custom adaptive logic for 1D and 2D + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebook in order to see the real behaviour. [^download] +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +import adaptive + +adaptive.notebook_extension() + +# Import modules that are used in multiple cells +import numpy as np +from functools import partial +import holoviews as hv +``` + +{class}`~adaptive.Learner1D` and {class}`~adaptive.Learner2D` both work on the principle of subdividing their domain into subdomains, and assigning a property to each subdomain, which we call the *loss*. +The algorithm for choosing the best place to evaluate our function is then simply *take the subdomain with the largest loss and add a point in the center, creating new subdomains around this point*. + +The *loss function* that defines the loss per subdomain is the canonical place to define what regions of the domain are “interesting”. +The default loss function for {class}`~adaptive.Learner1D` and {class}`~adaptive.Learner2D` is sufficient for a wide range of common cases, but it is by no means a panacea. +For example, the default loss function will tend to get stuck on divergences. + +Both the {class}`~adaptive.Learner1D` and {class}`~adaptive.Learner2D` allow you to specify a *custom loss function*. +Below we illustrate how you would go about writing your own loss function. +The documentation for {class}`~adaptive.Learner1D` and {class}`~adaptive.Learner2D` specifies the signature that your loss function needs to have in order for it to work with `adaptive`. + +tl;dr, one can use the following *loss functions* that **we** already implemented: + +- {class}`adaptive.learner.learner1D.default_loss` +- {class}`adaptive.learner.learner1D.uniform_loss` +- {class}`adaptive.learner.learner1D.curvature_loss_function` +- {class}`adaptive.learner.learner1D.resolution_loss_function` +- {class}`adaptive.learner.learner1D.abs_min_log_loss` +- {class}`adaptive.learner.learner2D.default_loss` +- {class}`adaptive.learner.learner2D.uniform_loss` +- {class}`adaptive.learner.learner2D.minimize_triangle_surface_loss` +- {class}`adaptive.learner.learner2D.resolution_loss_function` + +Whenever a loss function has `_function` appended to its name, it is a factory function that returns the loss function with certain settings. + +## Uniform sampling + +Say we want to properly sample a function that contains divergences. 
+A simple (but naive) strategy is to *uniformly* sample the domain: + +```{code-cell} ipython3 +def uniform_sampling_1d(xs, ys): + dx = xs[1] - xs[0] + return dx + + +def f_divergent_1d(x): + if x == 0: + return np.inf + return 1 / x**2 + + +learner = adaptive.Learner1D( + f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d +) +runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01) +learner.plot().select(y=(0, 10000)) +``` + +```{code-cell} ipython3 +from adaptive.runner import SequentialExecutor + + +def uniform_sampling_2d(ip): + from adaptive.learner.learner2D import areas + + A = areas(ip) + return np.sqrt(A) + + +def f_divergent_2d(xy): + x, y = xy + return 1 / (x**2 + y**2) + + +learner = adaptive.Learner2D( + f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d +) + +# this takes a while, so use the async Runner so we know *something* is happening +runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.03 or l.npoints > 1000) +``` + +```{code-cell} ipython3 +:tags: [hide-cell] + +await runner.task # This is not needed in a notebook environment! +``` + +```{code-cell} ipython3 +runner.live_info() +``` + +```{code-cell} ipython3 +plotter = lambda l: l.plot(tri_alpha=0.3).relabel("1 / (x^2 + y^2) in log scale") +runner.live_plot(update_interval=0.2, plotter=plotter) +``` + +The uniform sampling strategy is a common case to benchmark against, so the 1D and 2D versions are included in `adaptive` as {class}`adaptive.learner.learner1D.uniform_loss` and {class}`adaptive.learner.learner2D.uniform_loss`. + +## Doing better + +Of course, using `adaptive` for uniform sampling is a bit of a waste! + +Let’s see if we can do a bit better. +Below we define a loss per subdomain that scales with the degree of nonlinearity of the function (this is very similar to the default loss function for {class}`~adaptive.Learner2D`), but which is 0 for subdomains smaller than a certain area, and infinite for subdomains larger than a certain area. + +A loss defined in this way means that the adaptive algorithm will first prioritise subdomains that are too large (infinite loss). +After all subdomains are appropriately small it will prioritise places where the function is very nonlinear, but will ignore subdomains that are too small (0 loss). + +```{code-cell} ipython3 +def resolution_loss_function(min_distance=0, max_distance=1): + """min_distance and max_distance should be in between 0 and 1 + because the total area is normalized to 1.""" + + def resolution_loss(ip): + from adaptive.learner.learner2D import default_loss, areas + + loss = default_loss(ip) + + A = areas(ip) + # Setting areas with a small area to zero such that they won't be chosen again + loss[A < min_distance**2] = 0 + + # Setting triangles that have a size larger than max_distance to infinite loss + loss[A > max_distance**2] = np.inf + + return loss + + return resolution_loss + + +loss = resolution_loss_function(min_distance=0.01) + +learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss) +runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02) +learner.plot(tri_alpha=0.3).relabel("1 / (x^2 + y^2) in log scale").opts( + hv.opts.EdgePaths(color="w"), hv.opts.Image(logz=True, colorbar=True) +) +``` + +Awesome! We zoom in on the singularity, but not at the expense of sampling the rest of the domain a reasonable amount. + +The above strategy is available as {class}`adaptive.learner.learner2D.resolution_loss_function`. 
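A similar trick works in 1D. The sketch below is not part of `adaptive`; it reuses the `(xs, ys)` interval signature from the `uniform_sampling_1d` example above, and the cutoff widths are arbitrary choices: large intervals are always refined, very small ones are left alone.

```python
def resolution_loss_1d(xs, ys):
    # xs holds the two endpoints of an interval (same signature as
    # uniform_sampling_1d above); the cutoffs below are arbitrary choices.
    dx = xs[1] - xs[0]
    if dx > 0.2:  # always refine intervals that are still large
        return np.inf
    if dx < 0.01:  # never refine intervals that are already tiny
        return 0
    return dx  # otherwise fall back to uniform refinement


learner = adaptive.Learner1D(
    f_divergent_1d, (-1, 1), loss_per_interval=resolution_loss_1d
)
adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01)
learner.plot().select(y=(0, 10000))
```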
+ +[^download]: This notebook can be downloaded as **{nb-download}`tutorial.custom_loss.ipynb`** and {download}`tutorial.custom_loss.md`. diff --git a/docs/source/tutorial/tutorial.custom_loss.rst b/docs/source/tutorial/tutorial.custom_loss.rst deleted file mode 100644 index 528d50cdb..000000000 --- a/docs/source/tutorial/tutorial.custom_loss.rst +++ /dev/null @@ -1,168 +0,0 @@ -Custom adaptive logic for 1D and 2D ------------------------------------ - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebook - in order to see the real behaviour. - -.. seealso:: - The complete source code of this tutorial can be found in - :jupyter-download:notebook:`tutorial.custom-loss` - -.. jupyter-execute:: - :hide-code: - - import adaptive - adaptive.notebook_extension() - - # Import modules that are used in multiple cells - import numpy as np - from functools import partial - - -`~adaptive.Learner1D` and `~adaptive.Learner2D` both work on the principle of -subdividing their domain into subdomains, and assigning a property to -each subdomain, which we call the *loss*. The algorithm for choosing the -best place to evaluate our function is then simply *take the subdomain -with the largest loss and add a point in the center, creating new -subdomains around this point*. - -The *loss function* that defines the loss per subdomain is the canonical -place to define what regions of the domain are “interesting”. The -default loss function for `~adaptive.Learner1D` and `~adaptive.Learner2D` is sufficient -for a wide range of common cases, but it is by no means a panacea. For -example, the default loss function will tend to get stuck on -divergences. - -Both the `~adaptive.Learner1D` and `~adaptive.Learner2D` allow you to specify a *custom -loss function*. Below we illustrate how you would go about writing your -own loss function. The documentation for `~adaptive.Learner1D` and `~adaptive.Learner2D` -specifies the signature that your loss function needs to have in order -for it to work with ``adaptive``. - -tl;dr, one can use the following *loss functions* that -**we** already implemented: - -+ `adaptive.learner.learner1D.default_loss` -+ `adaptive.learner.learner1D.uniform_loss` -+ `adaptive.learner.learner1D.curvature_loss_function` -+ `adaptive.learner.learner1D.resolution_loss_function` -+ `adaptive.learner.learner1D.abs_min_log_loss` -+ `adaptive.learner.learner2D.default_loss` -+ `adaptive.learner.learner2D.uniform_loss` -+ `adaptive.learner.learner2D.minimize_triangle_surface_loss` -+ `adaptive.learner.learner2D.resolution_loss_function` - -Whenever a loss function has `_function` appended to its name, it is a factory function -that returns the loss function with certain settings. - -Uniform sampling -~~~~~~~~~~~~~~~~ - -Say we want to properly sample a function that contains divergences. A -simple (but naive) strategy is to *uniformly* sample the domain: - -.. jupyter-execute:: - - def uniform_sampling_1d(xs, ys): - dx = xs[1] - xs[0] - return dx - - def f_divergent_1d(x): - if x == 0: - return np.inf - return 1 / x**2 - - learner = adaptive.Learner1D(f_divergent_1d, (-1, 1), loss_per_interval=uniform_sampling_1d) - runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.01) - learner.plot().select(y=(0, 10000)) - -.. 
jupyter-execute:: - - %%opts EdgePaths (color='w') Image [logz=True colorbar=True] - - from adaptive.runner import SequentialExecutor - - def uniform_sampling_2d(ip): - from adaptive.learner.learner2D import areas - A = areas(ip) - return np.sqrt(A) - - def f_divergent_2d(xy): - x, y = xy - return 1 / (x**2 + y**2) - - learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=uniform_sampling_2d) - - # this takes a while, so use the async Runner so we know *something* is happening - runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.02) - -.. jupyter-execute:: - :hide-code: - - await runner.task # This is not needed in a notebook environment! - -.. jupyter-execute:: - - runner.live_info() - -.. jupyter-execute:: - - plotter = lambda l: l.plot(tri_alpha=0.3).relabel( - '1 / (x^2 + y^2) in log scale') - runner.live_plot(update_interval=0.2, plotter=plotter) - -The uniform sampling strategy is a common case to benchmark against, so -the 1D and 2D versions are included in ``adaptive`` as -`adaptive.learner.learner1D.uniform_loss` and -`adaptive.learner.learner2D.uniform_loss`. - -Doing better -~~~~~~~~~~~~ - -Of course, using ``adaptive`` for uniform sampling is a bit of a waste! - -Let’s see if we can do a bit better. Below we define a loss per -subdomain that scales with the degree of nonlinearity of the function -(this is very similar to the default loss function for `~adaptive.Learner2D`), -but which is 0 for subdomains smaller than a certain area, and infinite -for subdomains larger than a certain area. - -A loss defined in this way means that the adaptive algorithm will first -prioritise subdomains that are too large (infinite loss). After all -subdomains are appropriately small it will prioritise places where the -function is very nonlinear, but will ignore subdomains that are too -small (0 loss). - -.. jupyter-execute:: - - %%opts EdgePaths (color='w') Image [logz=True colorbar=True] - - def resolution_loss_function(min_distance=0, max_distance=1): - """min_distance and max_distance should be in between 0 and 1 - because the total area is normalized to 1.""" - def resolution_loss(ip): - from adaptive.learner.learner2D import default_loss, areas - loss = default_loss(ip) - - A = areas(ip) - # Setting areas with a small area to zero such that they won't be chosen again - loss[A < min_distance**2] = 0 - - # Setting triangles that have a size larger than max_distance to infinite loss - loss[A > max_distance**2] = np.inf - - return loss - return resolution_loss - loss = resolution_loss_function(min_distance=0.01) - - learner = adaptive.Learner2D(f_divergent_2d, [(-1, 1), (-1, 1)], loss_per_triangle=loss) - runner = adaptive.BlockingRunner(learner, goal=lambda l: l.loss() < 0.02) - learner.plot(tri_alpha=0.3).relabel('1 / (x^2 + y^2) in log scale') - -Awesome! We zoom in on the singularity, but not at the expense of -sampling the rest of the domain a reasonable amount. - -The above strategy is available as -`adaptive.learner.learner2D.resolution_loss_function`. 
diff --git a/docs/source/tutorial/tutorial.md b/docs/source/tutorial/tutorial.md new file mode 100644 index 000000000..7ad2e81af --- /dev/null +++ b/docs/source/tutorial/tutorial.md @@ -0,0 +1,43 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Tutorial Adaptive + +[Adaptive](https://github.com/python-adaptive/adaptive) +is a package for adaptively sampling functions with support for parallel +evaluation. + +This is an introductory notebook that shows some basic use cases. + +We recommend to start with the {ref}`adaptive.Learner1D tutorial`. + +```{note} +Because this documentation consists of static html, the `live_plot` and `live_info` widget is not live. +Download the notebooks in order to see the real behaviour. +``` + +```{toctree} +:hidden: true + +tutorial.Learner1D +tutorial.Learner2D +tutorial.custom_loss +tutorial.AverageLearner +tutorial.BalancingLearner +tutorial.DataSaver +tutorial.IntegratorLearner +tutorial.LearnerND +tutorial.AverageLearner1D +tutorial.SequenceLearner +tutorial.SKOptLearner +tutorial.parallelism +tutorial.advanced-topics +``` diff --git a/docs/source/tutorial/tutorial.parallelism.md b/docs/source/tutorial/tutorial.parallelism.md new file mode 100644 index 000000000..f3c1985f6 --- /dev/null +++ b/docs/source/tutorial/tutorial.parallelism.md @@ -0,0 +1,137 @@ +--- +kernelspec: + name: python3 + display_name: python3 +jupytext: + text_representation: + extension: .md + format_name: myst + format_version: '0.13' + jupytext_version: 1.13.8 +--- +# Parallelism - using multiple cores + +Often you will want to evaluate the function on some remote computing resources. +`adaptive` works out of the box with any framework that implements a [PEP 3148](https://www.python.org/dev/peps/pep-3148/) compliant executor that returns `concurrent.futures.Future` objects. + +## `concurrent.futures` + +On Unix-like systems by default {class}`adaptive.Runner` creates a {class}`~concurrent.futures.ProcessPoolExecutor`, but you can also pass one explicitly e.g. to limit the number of workers: + +```python +from concurrent.futures import ProcessPoolExecutor + +executor = ProcessPoolExecutor(max_workers=4) + +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05) +runner.live_info() +runner.live_plot(update_interval=0.1) +``` + +## `ipyparallel.Client` + +```python +import ipyparallel + +client = ipyparallel.Client() # You will need to start an `ipcluster` to make this work + +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01) +runner.live_info() +runner.live_plot() +``` + +## `distributed.Client` + +On Windows by default {class}`adaptive.Runner` uses a `distributed.Client`. + +```python +import distributed + +client = distributed.Client() + +learner = adaptive.Learner1D(f, bounds=(-1, 1)) +runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01) +runner.live_info() +runner.live_plot(update_interval=0.1) +``` + +## `mpi4py.futures.MPIPoolExecutor` + +This makes sense if you want to run a `Learner` on a cluster non-interactively using a job script. 
+
+For example, you create the following file called `run_learner.py`:
+
+```python
+from mpi4py.futures import MPIPoolExecutor
+
+import adaptive
+
+# define the function `f` and the file name `fname` here (see the sketch above)
+
+# use the idiom below, see the warning at
+# https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor
+if __name__ == "__main__":
+
+    learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+    # load the data
+    learner.load(fname)
+
+    # run until `goal` is reached with an `MPIPoolExecutor`
+    runner = adaptive.Runner(
+        learner,
+        executor=MPIPoolExecutor(),
+        shutdown_executor=True,
+        goal=lambda l: l.loss() < 0.01,
+    )
+
+    # periodically save the data (in case the job dies)
+    runner.start_periodic_saving(dict(fname=fname), interval=600)
+
+    # block until runner goal reached
+    runner.ioloop.run_until_complete(runner.task)
+
+    # save one final time before exiting
+    learner.save(fname)
+```
+
+On your laptop/desktop you can run this script like:
+
+```bash
+export MPI4PY_MAX_WORKERS=15
+mpiexec -n 1 python run_learner.py
+```
+
+Alternatively, you can pass `max_workers=15` programmatically when creating the `MPIPoolExecutor` instance.
+
+Inside a job script submitted to a job queuing system, use:
+
+```bash
+mpiexec -n 16 python -m mpi4py.futures run_learner.py
+```
+
+How you call MPI might depend on your specific queuing system; with SLURM, for example, it's:
+
+```bash
+#!/bin/bash
+#SBATCH --job-name adaptive-example
+#SBATCH --ntasks 100
+
+srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
+```
+
+## `loky.get_reusable_executor`
+
+This executor is basically a powered-up version of {class}`~concurrent.futures.ProcessPoolExecutor`; see its [documentation](https://loky.readthedocs.io/).
+Among other things, it allows you to *reuse* the executor and uses `cloudpickle` for serialization.
+This means you can even learn closures, lambdas, or other functions that are not picklable with `pickle`.
+
+```python
+from loky import get_reusable_executor
+
+ex = get_reusable_executor()
+
+f = lambda x: x
+learner = adaptive.Learner1D(f, bounds=(-1, 1))
+
+runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, executor=ex)
+runner.live_info()
+```
diff --git a/docs/source/tutorial/tutorial.parallelism.rst b/docs/source/tutorial/tutorial.parallelism.rst
deleted file mode 100644
index 4ed6132b0..000000000
--- a/docs/source/tutorial/tutorial.parallelism.rst
+++ /dev/null
@@ -1,136 +0,0 @@
-Parallelism - using multiple cores
-----------------------------------
-
-Often you will want to evaluate the function on some remote computing
-resources. ``adaptive`` works out of the box with any framework that
-implements a `PEP 3148 <https://www.python.org/dev/peps/pep-3148/>`__
-compliant executor that returns `concurrent.futures.Future` objects.
-
-`concurrent.futures`
-~~~~~~~~~~~~~~~~~~~~
-
-On Unix-like systems by default `adaptive.Runner` creates a
-`~concurrent.futures.ProcessPoolExecutor`, but you can also pass
-one explicitly e.g. to limit the number of workers:
-
-.. code:: python
-
-    from concurrent.futures import ProcessPoolExecutor
-
-    executor = ProcessPoolExecutor(max_workers=4)
-
-    learner = adaptive.Learner1D(f, bounds=(-1, 1))
-    runner = adaptive.Runner(learner, executor=executor, goal=lambda l: l.loss() < 0.05)
-    runner.live_info()
-    runner.live_plot(update_interval=0.1)
-
-`ipyparallel.Client`
-~~~~~~~~~~~~~~~~~~~~
-
-.. code:: python
-
-    import ipyparallel
-
-    client = ipyparallel.Client()  # You will need to start an `ipcluster` to make this work
-
-    learner = adaptive.Learner1D(f, bounds=(-1, 1))
-    runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
-    runner.live_info()
-    runner.live_plot()
-
-`distributed.Client`
-~~~~~~~~~~~~~~~~~~~~
-
-On Windows by default `adaptive.Runner` uses a `distributed.Client`.
-
-.. code:: python
-
-    import distributed
-
-    client = distributed.Client()
-
-    learner = adaptive.Learner1D(f, bounds=(-1, 1))
-    runner = adaptive.Runner(learner, executor=client, goal=lambda l: l.loss() < 0.01)
-    runner.live_info()
-    runner.live_plot(update_interval=0.1)
-
-`mpi4py.futures.MPIPoolExecutor`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This makes sense if you want to run a ``Learner`` on a cluster non-interactively using a job script.
-
-For example, you create the following file called ``run_learner.py``:
-
-.. code:: python
-
-    from mpi4py.futures import MPIPoolExecutor
-
-    # use the idiom below, see the warning at
-    # https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html#mpipoolexecutor
-    if __name__ == "__main__":
-
-        learner = adaptive.Learner1D(f, bounds=(-1, 1))
-
-        # load the data
-        learner.load(fname)
-
-        # run until `goal` is reached with an `MPIPoolExecutor`
-        runner = adaptive.Runner(
-            learner,
-            executor=MPIPoolExecutor(),
-            shutdown_executor=True,
-            goal=lambda l: l.loss() < 0.01,
-        )
-
-        # periodically save the data (in case the job dies)
-        runner.start_periodic_saving(dict(fname=fname), interval=600)
-
-        # block until runner goal reached
-        runner.ioloop.run_until_complete(runner.task)
-
-        # save one final time before exiting
-        learner.save(fname)
-
-
-On your laptop/desktop you can run this script like:
-
-.. code:: bash
-
-    export MPI4PY_MAX_WORKERS=15
-    mpiexec -n 1 python run_learner.py
-
-Or you can pass ``max_workers=15`` programmatically when creating the `MPIPoolExecutor` instance.
-
-Inside the job script using a job queuing system use:
-
-.. code:: bash
-
-    mpiexec -n 16 python -m mpi4py.futures run_learner.py
-
-How you call MPI might depend on your specific queuing system, with SLURM for example it's:
-
-.. code:: bash
-
-    #!/bin/bash
-    #SBATCH --job-name adaptive-example
-    #SBATCH --ntasks 100
-
-    srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py
-
-`loky.get_reusable_executor`
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This executor is basically a powered-up version of `~concurrent.futures.ProcessPoolExecutor`, check its `documentation <https://loky.readthedocs.io/>`_.
-Among other things, it allows to *reuse* the executor and uses ``cloudpickle`` for serialization.
-This means you can even learn closures, lambdas, or other functions that are not picklable with `pickle`.
-
-.. code:: python
-
-    from loky import get_reusable_executor
-    ex = get_reusable_executor()
-
-    f = lambda x: x
-    learner = adaptive.Learner1D(f, bounds=(-1, 1))
-
-    runner = adaptive.Runner(learner, goal=lambda l: l.loss() < 0.01, executor=ex)
-    runner.live_info()
diff --git a/docs/source/tutorial/tutorial.rst b/docs/source/tutorial/tutorial.rst
deleted file mode 100644
index a9a2c71b3..000000000
--- a/docs/source/tutorial/tutorial.rst
+++ /dev/null
@@ -1,32 +0,0 @@
-Tutorial Adaptive
-=================
-
-`Adaptive <https://github.com/python-adaptive/adaptive>`__
-is a package for adaptively sampling functions with support for parallel
-evaluation.
-
-This is an introductory notebook that shows some basic use cases.
- -We recommend to start with the :ref:`Tutorial `~adaptive.Learner1D``. - -.. note:: - Because this documentation consists of static html, the ``live_plot`` - and ``live_info`` widget is not live. Download the notebooks - in order to see the real behaviour. - -.. toctree:: - :hidden: - - tutorial.Learner1D - tutorial.Learner2D - tutorial.custom_loss - tutorial.AverageLearner - tutorial.BalancingLearner - tutorial.DataSaver - tutorial.IntegratorLearner - tutorial.LearnerND - tutorial.AverageLearner1D - tutorial.SequenceLearner - tutorial.SKOptLearner - tutorial.parallelism - tutorial.advanced-topics diff --git a/example-notebook.ipynb b/example-notebook.ipynb index ab3963892..3bf858635 100644 --- a/example-notebook.ipynb +++ b/example-notebook.ipynb @@ -1015,11 +1015,11 @@ "metadata": {}, "outputs": [], "source": [ - "%%opts Overlay [legend_position='top']\n", "xs = np.linspace(*learner.space.bounds[0])\n", "to_learn = hv.Curve((xs, [F(x, 0) for x in xs]), label=\"to learn\")\n", "\n", - "runner.live_plot().relabel(\"prediction\", depth=2) * to_learn" + "plot = runner.live_plot().relabel(\"prediction\", depth=2) * to_learn\n", + "plot.opts(legend_position=\"top\")" ] }, {