- # The Tier 2 Interpreter
-
- The [basic interpreter](interpreter.md), also referred to as the `tier 1`
- interpreter, consists of a main loop that executes the bytecode instructions
- generated by the [bytecode compiler](compiler.md) and their
- [specializations](interpreter.md#Specialization). Runtime optimization in tier 1
- can only be done for one instruction at a time. The `tier 2` interpreter is
- based on a mechanism to replace an entire sequence of bytecode instructions,
+ # The JIT
+
+ The [adaptive interpreter](interpreter.md) consists of a main loop that
+ executes the bytecode instructions generated by the
+ [bytecode compiler](compiler.md) and their
+ [specializations](interpreter.md#Specialization). Runtime optimization in
+ this interpreter can only be done for one instruction at a time. The JIT
+ is based on a mechanism to replace an entire sequence of bytecode instructions,
and this enables optimizations that span multiple instructions.

+ Historically, the adaptive interpreter was referred to as `tier 1` and
+ the JIT as `tier 2`. You will see remnants of this in the code.
+

## The Optimizer and Executors

- The program begins running in tier 1, until a `JUMP_BACKWARD` instruction
- determines that it is `hot` because the counter in its
- [inline cache](interpreter.md#inline-cache-entries) indicates that is
+ The program begins running on the adaptive interpreter, until a `JUMP_BACKWARD`
+ instruction determines that it is "hot" because the counter in its
+ [inline cache](interpreter.md#inline-cache-entries) indicates that it
executed more than some threshold number of times (see
[`backoff_counter_triggers`](../Include/internal/pycore_backoff.h)).
It then calls the function `_PyOptimizer_Optimize()` in
@@ -23,40 +26,41 @@ constructs an object of type
an optimized version of the instruction trace beginning at this jump.
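
The counting scheme behind this trigger can be sketched in plain C. This is only
an illustrative model of an exponential backoff counter, not the actual
`pycore_backoff.h` implementation; the names `counter_t`, `make_counter` and
`counter_triggers`, and the initial threshold of 16, are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative model of a backoff counter: each backward jump decrements
 * the counter; when it runs out, the loop is considered "hot" and the
 * threshold for the next trigger is doubled (exponential backoff). */
typedef struct {
    uint16_t remaining;  /* jumps left before the next "hot" trigger */
    uint16_t backoff;    /* threshold reloaded after a trigger */
} counter_t;

static counter_t make_counter(uint16_t initial_threshold) {
    counter_t c = { initial_threshold, initial_threshold };
    return c;
}

/* Returns 1 when the loop should be handed to the optimizer. */
static int counter_triggers(counter_t *c) {
    if (c->remaining-- == 0) {
        if (c->backoff < UINT16_MAX / 2) {
            c->backoff *= 2;        /* back off: wait longer next time */
        }
        c->remaining = c->backoff;
        return 1;
    }
    return 0;
}

int main(void) {
    counter_t c = make_counter(16);
    for (int jump = 0; jump < 200; jump++) {
        if (counter_triggers(&c)) {
            printf("backward jump %d: loop is hot, try to optimize\n", jump);
        }
    }
    return 0;
}
```

Doubling the threshold after each trigger keeps repeated optimization attempts
from turning into a constant overhead for loops that are not worth optimizing.
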

The optimizer determines where the trace ends, and the executor is set up
- to either return to `tier 1` and resume execution, or transfer control
- to another executor (see `_PyExitData` in Include/internal/pycore_optimizer.h).
+ to either return to the adaptive interpreter and resume execution, or
+ transfer control to another executor (see `_PyExitData` in
+ Include/internal/pycore_optimizer.h).

The executor is stored on the [`code object`](code_objects.md) of the frame,
in the `co_executors` field which is an array of executors. The start
instruction of the trace (the `JUMP_BACKWARD`) is replaced by an
`ENTER_EXECUTOR` instruction whose `oparg` is equal to the index of the
executor in `co_executors`.
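
How the executor is wired to the code object can be pictured with a short C
sketch. The types and helper below (`code_obj`, `executor`, `attach_executor`)
are stand-ins invented for illustration, not the real `PyCodeObject` or
`_PyExecutorObject` layouts, and error handling is omitted.

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified model: a code object keeps an array of executors, and the
 * instruction that started the trace is rewritten to ENTER_EXECUTOR with
 * its oparg set to the executor's index in that array. */
enum { JUMP_BACKWARD = 1, ENTER_EXECUTOR = 2 };

typedef struct { unsigned char opcode, oparg; } instruction;

typedef struct { int dummy_trace_id; } executor;   /* stands in for _PyExecutorObject */

typedef struct {
    instruction *code;
    executor  **executors;   /* stands in for co_executors */
    int n_executors;
} code_obj;

/* Install an executor on the code object and patch the hot instruction.
 * (Allocation failure handling is omitted in this sketch.) */
static void attach_executor(code_obj *co, int instr_index, executor *ex) {
    int index = co->n_executors++;
    co->executors = realloc(co->executors, co->n_executors * sizeof(executor *));
    co->executors[index] = ex;
    co->code[instr_index].opcode = ENTER_EXECUTOR;
    co->code[instr_index].oparg = (unsigned char)index;
}

int main(void) {
    instruction body[] = { {0, 0}, {JUMP_BACKWARD, 3} };
    code_obj co = { body, NULL, 0 };
    executor ex = { 42 };

    attach_executor(&co, 1, &ex);
    printf("opcode=%d oparg=%d -> executor %d\n",
           co.code[1].opcode, co.code[1].oparg,
           co.executors[co.code[1].oparg]->dummy_trace_id);
    free(co.executors);
    return 0;
}
```

Because the `oparg` of the rewritten instruction is the index into
`co_executors`, the interpreter can recover the executor directly from the
instruction itself.
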

- ## The uop optimizer
+ ## The micro-op optimizer

- The optimizer that `_PyOptimizer_Optimize()` runs is configurable
- via the `_Py_SetTier2Optimizer()` function (this is used in test
- via `_testinternalcapi.set_optimizer()`.)
+ The optimizer that `_PyOptimizer_Optimize()` runs is configurable via the
+ `_Py_SetTier2Optimizer()` function (this is used in tests via
+ `_testinternalcapi.set_optimizer()`).

- The tier 2 optimizer, `_PyUOpOptimizer_Type`, is defined in
- [`Python/optimizer.c`](../Python/optimizer.c). It translates
- an instruction trace into a sequence of micro-ops by replacing
- each bytecode by an equivalent sequence of micro-ops
- (see `_PyOpcode_macro_expansion` in
+ The micro-op optimizer (abbreviated `uop` to approximate `μop`) is defined in
+ [`Python/optimizer.c`](../Python/optimizer.c) as the type `_PyUOpOptimizer_Type`.
+ It translates an instruction trace into a sequence of micro-ops by replacing
+ each bytecode by an equivalent sequence of micro-ops (see
+ `_PyOpcode_macro_expansion` in
[pycore_opcode_metadata.h](../Include/internal/pycore_opcode_metadata.h)
which is generated from [`Python/bytecodes.c`](../Python/bytecodes.c)).
The micro-op sequence is then optimized by
`_Py_uop_analyze_and_optimize` in
[`Python/optimizer_analysis.c`](../Python/optimizer_analysis.c)
and a `_PyUOpExecutor_Type` is created to contain it.
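
Schematically, the translation step replaces each bytecode with a short, fixed
sequence of micro-ops taken from an expansion table. The sketch below is a
made-up miniature of that idea; the real table (`_PyOpcode_macro_expansion`)
and the real uop names are generated from `Python/bytecodes.c`.

```c
#include <stdio.h>

/* Made-up micro-ops; the real ones are generated from Python/bytecodes.c. */
typedef enum { UOP_GUARD_BOTH_INT, UOP_BINARY_OP_ADD_INT, UOP_END } uop;

/* Illustrative expansion: one bytecode -> a fixed micro-op sequence. */
static const uop expand_binary_add_int[] = {
    UOP_GUARD_BOTH_INT,      /* deoptimize if the operands are not ints */
    UOP_BINARY_OP_ADD_INT,   /* the unguarded addition itself */
    UOP_END
};

int main(void) {
    /* A trace translator would append each bytecode's expansion to the
     * micro-op buffer that the executor will later run. */
    for (const uop *u = expand_binary_add_int; *u != UOP_END; u++) {
        printf("emit uop %d\n", (int)*u);
    }
    return 0;
}
```
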

- ## Running a uop executor on the tier 2 interpreter
+ ## Debugging a uop executor in the JIT interpreter

- After a tier 1 `JUMP_BACKWARD` instruction invokes the uop optimizer
- to create a tier 2 uop executor, it transfers control to this executor
- via the `GOTO_TIER_TWO` macro.
+ After a `JUMP_BACKWARD` instruction invokes the uop optimizer to create a uop
+ executor, it transfers control to this executor via the `GOTO_TIER_TWO` macro.

- When tier 2 is enabled but the JIT is not (python was configured with
+ When the JIT is configured to run on its interpreter (i.e., python is
+ configured with
[`--enable-experimental-jit=interpreter`](https://docs.python.org/dev/using/configure.html#cmdoption-enable-experimental-jit)),
the executor jumps to `tier2_dispatch:` in
[`Python/ceval.c`](../Python/ceval.c), where there is a loop that
@@ -67,19 +71,19 @@ which is generated by the build script
from the bytecode definitions in
[`Python/bytecodes.c`](../Python/bytecodes.c).
This loop exits when an `_EXIT_TRACE` or `_DEOPT` uop is reached,
- and execution returns to teh tier 1 interpreter.
+ and execution returns to the adaptive interpreter.
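
Conceptually, that loop is a dispatch over micro-op opcodes that keeps running
until it reaches an exit uop. The following sketch shows only the shape of such
a loop; the uop names, the tiny operand stack and the return convention are
invented here, while the real loop operates on the evaluation stack and frame
via the generated `executor_cases.c.h`.

```c
#include <stdio.h>

/* Made-up micro-ops and a tiny operand stack, for illustration only. */
typedef enum { UOP_PUSH_ONE, UOP_ADD, UOP_EXIT_TRACE, UOP_DEOPT } uop_opcode;

/* Run micro-ops until an exit uop is reached.  Returns 0 for a normal
 * return to the adaptive interpreter, 1 for a deoptimization. */
static int run_trace(const uop_opcode *pc) {
    long stack[16];
    int sp = 0;
    for (;; pc++) {
        switch (*pc) {
            case UOP_PUSH_ONE:
                stack[sp++] = 1;
                break;
            case UOP_ADD:
                sp--;
                stack[sp - 1] += stack[sp];
                break;
            case UOP_EXIT_TRACE:            /* hand control back to bytecode */
                printf("exit trace, top of stack = %ld\n", stack[sp - 1]);
                return 0;
            case UOP_DEOPT:                 /* a guard failed: bail out */
                printf("deoptimize\n");
                return 1;
        }
    }
}

int main(void) {
    const uop_opcode trace[] = { UOP_PUSH_ONE, UOP_PUSH_ONE, UOP_ADD, UOP_EXIT_TRACE };
    return run_trace(trace);
}
```
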

## Invalidating Executors

In addition to being stored on the code object, each executor is also
- inserted into a list of all executors which is stored in the interpreter
+ inserted into a list of all executors, which is stored in the interpreter
state's `executor_list_head` field. This list is used when it is necessary
- to invalidate executors because values that their construction depended
- on may have changed.
+ to invalidate executors because values they used in their construction may
+ have changed.
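
The invalidation machinery can be pictured as walking that list and clearing a
validity flag on every executor. The sketch below is schematic; the `executor`
struct and `invalidate_all` helper are invented for the example, whereas the
real code works with `_PyExecutorObject` links hanging off the interpreter state.

```c
#include <stdio.h>

/* Schematic executor with a "valid" flag and a link to the next executor
 * in the interpreter-wide list (modeled after executor_list_head). */
typedef struct executor {
    int valid;
    struct executor *next;
} executor;

/* Walk the whole list and mark every executor invalid, e.g. because
 * something the traces depended on was mutated. */
static void invalidate_all(executor *head) {
    for (executor *e = head; e != NULL; e = e->next) {
        e->valid = 0;
    }
}

int main(void) {
    executor c = {1, NULL}, b = {1, &c}, a = {1, &b};
    invalidate_all(&a);
    printf("valid flags: %d %d %d\n", a.valid, b.valid, c.valid);
    return 0;
}
```
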

## The JIT

- When the jit is enabled (python was configured with
+ When the full JIT is enabled (python was configured with
[`--enable-experimental-jit`](https://docs.python.org/dev/using/configure.html#cmdoption-enable-experimental-jit)),
the uop executor's `jit_code` field is populated with a pointer to a compiled
C function that implements the executor logic. This function's signature is
@@ -89,7 +93,7 @@ the uop interpreter at `tier2_dispatch`, the executor runs the function
that `jit_code` points to. This function returns the instruction pointer
of the next Tier 1 instruction that needs to execute.
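
Entering jitted code is, in essence, an indirect call through that `jit_code`
pointer which hands back the address where bytecode execution should resume.
The sketch below illustrates only that calling pattern; the `code_unit`,
`frame_t` and `jit_func` types are stand-ins and do not reproduce the real
signature described above.

```c
#include <stdio.h>

typedef unsigned short code_unit;            /* stand-in for a bytecode unit */
typedef struct { int dummy; } frame_t;       /* stand-in for the frame */

/* The executor's jit_code field: a pointer to compiled code that runs the
 * trace and returns the address of the next bytecode to execute. */
typedef code_unit *(*jit_func)(frame_t *frame, code_unit *next_instr);

/* A fake "compiled" trace: does nothing and resumes two instructions later. */
static code_unit *example_trace(frame_t *frame, code_unit *next_instr) {
    (void)frame;
    return next_instr + 2;
}

int main(void) {
    code_unit bytecode[8] = {0};
    frame_t frame = {0};
    jit_func jit_code = example_trace;       /* stored on the executor */

    code_unit *resume = jit_code(&frame, &bytecode[3]);
    printf("resume at offset %ld\n", (long)(resume - bytecode));
    return 0;
}
```
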

- The generation of the jitted fuctions uses the copy-and-patch technique
+ The generation of the jitted functions uses the copy-and-patch technique
which is described in
[Haoran Xu's article](https://sillycross.github.io/2023/05/12/2023-05-12/).
At its core are statically generated `stencils` for the implementation
@@ -113,8 +117,8 @@ functions are used to generate the file
that the JIT can use to emit code for each of the bytecodes.

For Python maintainers this means that changes to the bytecodes and
- their implementations do not require changes related to the JIT,
- because everything the JIT needs is automatically generated from
+ their implementations do not require changes related to the stencils,
+ because everything is automatically generated from
[`Python/bytecodes.c`](../Python/bytecodes.c) at build time.
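
The copy-and-patch technique itself can be demonstrated with ordinary bytes:
copy a pre-built template (the "stencil") and patch its holes with values that
are only known at run time. This is a toy illustration; real stencils are
machine code, and their hole offsets and contents come from the build-time
tooling rather than a hand-written table like this one.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Toy stencil: a fixed byte template with a "hole" at a known offset that
 * must be patched with a run-time value before the copy can be used. */
static const uint8_t stencil[8] = { 0x10, 0x20, 0x00, 0x00, 0x00, 0x00, 0x30, 0x40 };
enum { HOLE_OFFSET = 2 };    /* where a 32-bit value must be patched in */

static void emit(uint8_t *out, uint32_t runtime_value) {
    memcpy(out, stencil, sizeof(stencil));              /* copy */
    memcpy(out + HOLE_OFFSET, &runtime_value, 4);       /* patch */
}

int main(void) {
    uint8_t buffer[8];
    emit(buffer, 0xdeadbeef);
    for (size_t i = 0; i < sizeof(buffer); i++) {
        printf("%02x ", (unsigned)buffer[i]);
    }
    printf("\n");
    return 0;
}
```
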

See Also: