
Commit dced4fa

Improve readibility/reduce choppiness & a few other textual tweaks
1 parent 57fb65a commit dced4fa

1 file changed: Doc/whatsnew/3.11.rst (+28 -26)

@@ -1162,15 +1162,16 @@ Optimizations
 Faster CPython
 ==============

-CPython 3.11 is on average `25% faster <https://github.com/faster-cpython/ideas#published-results>`_
-than CPython 3.10 when measured with the
+CPython 3.11 is an average of
+`25% faster <https://github.com/faster-cpython/ideas#published-results>`_
+than CPython 3.10 as measured with the
 `pyperformance <https://github.com/python/pyperformance>`_ benchmark suite,
-and compiled with GCC on Ubuntu Linux. Depending on your workload, the speedup
-could be up to 10-60% faster.
+when compiled with GCC on Ubuntu Linux.
+Depending on your workload, the overall speedup could likely be 10-60%.

 This project focuses on two major areas in Python:
 :ref:`whatsnew311-faster-startup` and :ref:`whatsnew311-faster-runtime`.
-Other optimizations not under this project are listed in
+Optimizations not covered by this project are listed separately under
 :ref:`whatsnew311-optimizations`.


@@ -1196,7 +1197,7 @@ Previously in 3.10, Python module execution looked like this:
 In Python 3.11, the core modules essential for Python startup are "frozen".
 This means that their :ref:`codeobjects` (and bytecode)
 are statically allocated by the interpreter.
-This reduces the steps in module execution process to this:
+This reduces the steps in module execution process to:

 .. code-block:: text

@@ -1205,7 +1206,7 @@ This reduces the steps in module execution process to this:
 Interpreter startup is now 10-15% faster in Python 3.11. This has a big
 impact for short-running programs using Python.

-(Contributed by Eric Snow, Guido van Rossum and Kumar Aditya in numerous issues.)
+(Contributed by Eric Snow, Guido van Rossum and Kumar Aditya in many issues.)


 .. _whatsnew311-faster-runtime:
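
As a rough illustration of the startup improvement described in the hunk above, interpreter launch time can be eyeballed by repeatedly spawning a trivial subprocess. This is only a sketch (the run count is an arbitrary choice, and rigorous measurements should use pyperf, which pyperformance builds on):

.. code-block:: python

    import subprocess
    import sys
    import time

    # Crude startup-time estimate; the section above quotes a 10-15%
    # improvement for 3.11. Use pyperf for anything rigorous.
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        subprocess.run([sys.executable, "-c", "pass"], check=True)
    elapsed = time.perf_counter() - start
    print(f"average interpreter startup: {elapsed / runs * 1000:.1f} ms")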
@@ -1218,8 +1219,9 @@ Faster Runtime
 Cheaper, lazy Python frames
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Python frames are created whenever Python calls a Python function. This frame
-holds execution information. The following are new frame optimizations:
+Python frames, holding execution information,
+are created whenever Python calls a Python function.
+The following are new frame optimizations:

 - Streamlined the frame creation process.
 - Avoided memory allocation by generously re-using frame space on the C stack.
@@ -1228,7 +1230,7 @@ holds execution information. The following are new frame optimizations:

 Old-style :ref:`frame objects <frame-objects>`
 are now created only when requested by debuggers
-or by Python introspection functions such as :func:`sys._getframe` or
+or by Python introspection functions such as :func:`sys._getframe` and
 :func:`inspect.currentframe`. For most user code, no frame objects are
 created at all. As a result, nearly all Python functions calls have sped
 up significantly. We measured a 3-7% speedup in pyperformance.
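
The behaviour in the hunk above can be observed from plain Python: a full frame object only has to be materialized when something asks for it, for example via :func:`sys._getframe` or :func:`inspect.currentframe`. A minimal sketch:

.. code-block:: python

    import inspect

    def where_am_i():
        # Requesting the current frame is exactly the kind of introspection
        # that makes CPython create a full frame object on demand.
        frame = inspect.currentframe()
        return frame.f_code.co_name, frame.f_lineno

    print(where_am_i())   # prints the function name and current line number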
@@ -1250,9 +1252,9 @@ In 3.11, when CPython detects Python code calling another Python function,
 it sets up a new frame, and "jumps" to the new code inside the new frame. This
 avoids calling the C interpreting function altogether.

-Most Python function calls now consume no C stack space. This speeds up
-most of such calls. In simple recursive functions like fibonacci or
-factorial, a 1.7x speedup was observed. This also means recursive functions
+Most Python function calls now consume no C stack space, speeding them up.
+In simple recursive functions like fibonacci or
+factorial, we observed a 1.7x speedup. This also means recursive functions
 can recurse significantly deeper
 (if the user increases the recursion limit with :func:`sys.setrecursionlimit`).
 We measured a 1-3% improvement in pyperformance.
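
The deeper recursion mentioned above can be sketched as follows; the limit of 50,000 is an arbitrary illustrative value, and the point is that on 3.11 a pure-Python recursive call no longer consumes C stack space:

.. code-block:: python

    import sys

    def depth(n=0):
        # Recurse until the interpreter's recursion limit is reached.
        try:
            return depth(n + 1)
        except RecursionError:
            return n

    sys.setrecursionlimit(50_000)   # arbitrary example value
    print(depth())                  # roughly the configured limit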
@@ -1265,7 +1267,7 @@ We measured a 1-3% improvement in pyperformance.
 PEP 659: Specializing Adaptive Interpreter
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-:pep:`659` is one of the key parts of the faster CPython project. The general
+:pep:`659` is one of the key parts of the Faster CPython project. The general
 idea is that while Python is a dynamic language, most code has regions where
 objects and types rarely change. This concept is known as *type stability*.

@@ -1278,14 +1280,14 @@ Python caches the results of expensive operations directly in the
 :term:`bytecode`.

 The specializer will also combine certain common instruction pairs into one
-superinstruction. This reduces the overhead during execution.
+superinstruction, reducing the overhead during execution.

 Python will only specialize
 when it sees code that is "hot" (executed multiple times). This prevents Python
-from wasting time for run-once code. Python can also de-specialize when code is
+from wasting time on run-once code. Python can also de-specialize when code is
 too dynamic or when the use changes. Specialization is attempted periodically,
-and specialization attempts are not too expensive. This allows specialization
-to adapt to new circumstances.
+and specialization attempts are not too expensive,
+allowing it to adapt to new circumstances.

 (PEP written by Mark Shannon, with ideas inspired by Stefan Brunthaler.
 See :pep:`659` for more information. Implementation by Mark Shannon and Brandt
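
One way to observe the specialization described in this hunk is :func:`dis.dis` with its ``adaptive`` parameter (new in 3.11): after a function has run enough times to count as "hot", generic instructions may show up in specialized form. A sketch, with an arbitrary warm-up count standing in for "enough":

.. code-block:: python

    import dis

    def add(a, b):
        return a + b

    # Warm the function up so the adaptive interpreter can specialize it ...
    for _ in range(1000):
        add(1, 2)

    # ... then disassemble it; on 3.11 the generic BINARY_OP may now display
    # as a specialized variant such as BINARY_OP_ADD_INT for this int-only use.
    dis.dis(add, adaptive=True)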
@@ -1353,8 +1355,8 @@ Bucher, with additional help from Irit Katriel and Dennis Sweeney.)
 Misc
 ----

-* Objects now require less memory due to lazily created object namespaces. Their
-  namespace dictionaries now also share keys more freely.
+* Objects now require less memory due to lazily created object namespaces.
+  Their namespace dictionaries now also share keys more freely.
   (Contributed Mark Shannon in :issue:`45340` and :issue:`40116`.)

 * A more concise representation of exceptions in the interpreter reduced the
@@ -1372,17 +1374,17 @@ FAQ
 How should I write my code to utilize these speedups?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-You don't have to change your code. Write Pythonic code that follows common
-best practices. The Faster CPython project optimizes for common code
-patterns we observe.
+Write Pythonic code that follows common best practices;
+you don't have to change your code.
+The Faster CPython project optimizes for common code patterns we observe.


 .. _faster-cpython-faq-memory:

 Will CPython 3.11 use more memory?
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-Maybe not. We don't expect memory use to exceed 20% more than 3.10.
+Maybe not; we don't expect memory use to exceed 20% higher than 3.10.
 This is offset by memory optimizations for frame objects and object
 dictionaries as mentioned above.

@@ -1394,8 +1396,8 @@ I don't see any speedups in my workload. Why?

 Certain code won't have noticeable benefits. If your code spends most of
 its time on I/O operations, or already does most of its
-computation in a C extension library like numpy, there won't be significant
-speedup. This project currently benefits pure-Python workloads the most.
+computation in a C extension library like NumPy, there won't be significant
+speedups. This project currently benefits pure-Python workloads the most.

 Furthermore, the pyperformance figures are a geometric mean. Even within the
 pyperformance benchmarks, certain benchmarks have slowed down slightly, while
