
Commit 9c27f57

jingxu10 and svekars authored
update out-of-date URL for Intel optimization guide (#2657)
Co-authored-by: Svetlana Karslioglu <[email protected]>
1 parent ab4e99a commit 9c27f57

File tree

1 file changed: +5 −2 lines changed


Diff for: recipes_source/recipes/tuning_guide.py

+5-2
@@ -193,12 +193,15 @@ def fused_gelu(x):
 #
 # numactl --cpunodebind=N --membind=N python <pytorch_script>
 
+###############################################################################
+# More detailed descriptions can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.
+
 ###############################################################################
 # Utilize OpenMP
 # ~~~~~~~~~~~~~~
 # OpenMP is utilized to bring better performance for parallel computation tasks.
 # ``OMP_NUM_THREADS`` is the easiest switch that can be used to accelerate computations. It determines number of threads used for OpenMP computations.
-# CPU affinity setting controls how workloads are distributed over multiple cores. It affects communication overhead, cache line invalidation overhead, or page thrashing, thus proper setting of CPU affinity brings performance benefits. ``GOMP_CPU_AFFINITY`` or ``KMP_AFFINITY`` determines how to bind OpenMP* threads to physical processing units.
+# CPU affinity setting controls how workloads are distributed over multiple cores. It affects communication overhead, cache line invalidation overhead, or page thrashing, thus proper setting of CPU affinity brings performance benefits. ``GOMP_CPU_AFFINITY`` or ``KMP_AFFINITY`` determines how to bind OpenMP* threads to physical processing units. Detailed information can be found `here <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance_tuning/tuning_guide.html>`_.
 
 ###############################################################################
 # With the following command, PyTorch run the task on N OpenMP threads.
@@ -283,7 +286,7 @@ def fused_gelu(x):
     traced_model(*sample_input)
 
 ###############################################################################
-# While the JIT fuser for oneDNN Graph also supports inference with ``BFloat16`` datatype,
+# While the JIT fuser for oneDNN Graph also supports inference with ``BFloat16`` datatype,
 # performance benefit with oneDNN Graph is only exhibited by machines with AVX512_BF16
 # instruction set architecture (ISA).
 # The following code snippets serves as an example of using ``BFloat16`` datatype for inference with oneDNN Graph:
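The environment variables and the ``numactl`` launcher touched by this diff are typically combined when launching a script. A minimal sketch of such a launch wrapper follows; the thread count, affinity strings, NUMA node, and script name ``my_pytorch_script.py`` are illustrative placeholders, not values recommended by this commit:

```shell
# Illustrative values only -- tune per machine and workload.
export OMP_NUM_THREADS=4                          # number of OpenMP worker threads
export KMP_AFFINITY=granularity=fine,compact,1,0  # Intel OpenMP thread-to-core binding
# export GOMP_CPU_AFFINITY="0-3"                  # GNU OpenMP equivalent binding

echo "OMP_NUM_THREADS=${OMP_NUM_THREADS}"

# Pin both computation and memory allocation to NUMA node 0 (requires numactl):
# numactl --cpunodebind=0 --membind=0 python my_pytorch_script.py
```

On multi-socket machines, keeping compute and memory on the same NUMA node avoids remote-memory accesses, which is the motivation for the ``--cpunodebind``/``--membind`` pairing shown in the tutorial.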
