**docs/source/tutorial/tutorial.advanced-topics.md** (+57 −11)
@@ -9,7 +9,7 @@ kernelspec:
  display_name: python3
  name: python3
---
(TutorialAdvancedTopics)=
# Advanced Topics

```{note}
@@ -365,22 +365,19 @@ await runner.task # This is not needed in a notebook environment!
# The result will only be set when the runner is done.
timer.result()
```

(CustomParallelization)=
## Custom parallelization using coroutines

Adaptive by itself does not implement a way of sharing partial results between function executions.
Instead, its implementation of parallel computation using executors is minimal by design.
The appropriate way to implement custom parallelization is by using coroutines (asynchronous functions).

We illustrate this approach using `dask.distributed` for parallel computations, in part because it supports asynchronous operation out of the box.
We will focus on a function `f(x)` that consists of two distinct components: a slow part `g` that can be reused across multiple inputs and shared among function evaluations, and a fast part `h` that is computed for each `x` value.

```{code-cell} ipython3
def f(x):  # example function without caching
    """
    Integer part of `x` repeats and should be reused
    Decimal part requires a new computation
@@ -390,7 +387,9 @@ def f(x):
def g(x):
    """Slow but reusable function"""
    from time import sleep

    sleep(random.randrange(5))
    return x**2

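Within a single process, the reuse of the slow part can already be sketched with standard-library caching alone, before any dask machinery is involved. The following is an illustrative stand-in, not Adaptive's API: the fixed `sleep` and the way `f` combines `g` and `h` are assumptions made for the sketch.

```python
import functools
from time import sleep


@functools.lru_cache(maxsize=None)
def g(x):
    """Slow but reusable: computed once per distinct integer part."""
    sleep(0.1)  # stand-in for expensive work
    return x**2


def h(x):
    """Fast part, recomputed for every x."""
    return x**3


def f(x):
    # g(int(x)) is served from the cache whenever the integer part repeats.
    return g(int(x)) + h(x % 1)
```

Calling `f(2.5)` and then `f(2.7)` pays the `sleep` cost only once, since both calls hit the cached `g(2)`.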
@@ -399,12 +398,59 @@ def h(x):
    return x**3
```

### Using `adaptive.utils.daskify`

To simplify using coroutines and caching with dask and Adaptive, we provide the {func}`adaptive.utils.daskify` decorator. It can parallelize functions both with and without caching, making it a powerful tool for custom parallelization in Adaptive.
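The actual `daskify` implementation is not shown in this excerpt. As a rough mental model only, here is a toy, single-process analogue of a caching coroutine decorator: it memoizes an async function behind tasks, so concurrent evaluations of the same argument share one computation. All names are illustrative, and unlike the real decorator this sketch ignores dask and distribution entirely.

```python
import asyncio
import functools


def memoize_async(func):
    """Toy stand-in (NOT Adaptive's daskify): one shared computation
    per argument among concurrent coroutine calls, via cached tasks."""
    cache = {}

    @functools.wraps(func)
    async def wrapper(x):
        if x not in cache:
            # Store the task immediately, so a concurrent caller awaits
            # the same computation instead of starting a duplicate.
            cache[x] = asyncio.create_task(func(x))
        return await cache[x]

    return wrapper


@memoize_async
async def g(x):
    await asyncio.sleep(0.1)  # simulate slow, reusable work
    return x**2


async def main():
    # Both calls share the single computation of g(2).
    return await asyncio.gather(g(2), g(2))
```

Running `asyncio.run(main())` returns `[4, 4]` while sleeping only once; note the toy cache is tied to one event loop, so it is for illustration only.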
**docs/source/tutorial/tutorial.parallelism.md** (+2 −0)

@@ -57,6 +57,8 @@ runner.live_info()
runner.live_plot(update_interval=0.1)
```

Also check out the {ref}`Custom parallelization<CustomParallelization>` section in the {ref}`advanced topics tutorial<TutorialAdvancedTopics>` for more control over caching and parallelization.

## `mpi4py.futures.MPIPoolExecutor`

This makes sense if you want to run a `Learner` on a cluster non-interactively using a job script.