From d372e26dd8beb78a30048c86ce45f4823450b795 Mon Sep 17 00:00:00 2001
From: Bas Nijholt
Date: Mon, 26 Aug 2019 10:06:13 +0200
Subject: [PATCH] remove MPI4PY_MAX_WORKERS where it's not used

---
 docs/source/tutorial/tutorial.parallelism.rst | 2 --
 1 file changed, 2 deletions(-)

diff --git a/docs/source/tutorial/tutorial.parallelism.rst b/docs/source/tutorial/tutorial.parallelism.rst
index f1b9f9c29..9b08d5116 100644
--- a/docs/source/tutorial/tutorial.parallelism.rst
+++ b/docs/source/tutorial/tutorial.parallelism.rst
@@ -98,7 +98,6 @@ Inside the job script using a job queuing system use:
 
 .. code:: python
 
-    export MPI4PY_MAX_WORKERS=15
     mpiexec -n 16 python -m mpi4py.futures run_learner.py
 
 How you call MPI might depend on your specific queuing system, with SLURM for example it's:
@@ -109,5 +108,4 @@ How you call MPI might depend on your specific queuing system, with SLURM for ex
     #SBATCH --job-name adaptive-example
     #SBATCH --ntasks 100
 
-    export MPI4PY_MAX_WORKERS=$SLURM_NTASKS
     srun -n $SLURM_NTASKS --mpi=pmi2 ~/miniconda3/envs/py37_min/bin/python -m mpi4py.futures run_learner.py