gh-107557: Setup abstract interpretation #107847
Conversation
Co-Authored-By: Guido van Rossum <[email protected]>
Co-Authored-By: Jules <[email protected]>
Commits:
* Fix+Refactor: Handling of root nodes in special-cased type prop
* Style: Removed trailing space
* Fix: Inconsistent AbstractInterpContext used in PARTITIONNODE_OVERWRITE and mis-port of PARTITIONNODE_OVERWRITE (#41)
* Style: Removed whitespace
Here are some nits for everything except optimizer_analysis.c
Include/internal/pycore_uops.h:
```c
int32_t opcode;
int32_t oparg;
```
I'm curious about this change. Are you using negative opcodes or opargs? IIUC oparg really is treated as a 32-bit unsigned int in ceval.c and bytecodes.c.
I'm using both negative opcodes and opargs, but only during the analysis phase. They are converted back to unsigned after the last pass. For all intents and purposes outside of the optimizer pass, they can be treated as unsigned.
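To make that concrete, here is a minimal sketch of the scheme; AnalysisInstr and to_runtime_oparg are invented names for illustration, not CPython's actual structs or APIs:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical analysis-time instruction: signed fields leave room for
   negative opcode/oparg values used as markers during analysis passes. */
typedef struct {
    int32_t opcode;
    int32_t oparg;
} AnalysisInstr;

/* After the last pass every value is non-negative again, so it can be
   converted back to the unsigned encoding the interpreter expects. */
static uint32_t
to_runtime_oparg(int32_t analysis_oparg)
{
    assert(analysis_oparg >= 0);
    return (uint32_t)analysis_oparg;
}
```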
```python
if instr.is_viable_uop() and instr.name not in SPECIALLY_HANDLED_ABSTRACT_INSTR:
    self.out.emit("")
    with self.out.block(f"case {thing.name}:"):
        instr.write(self.out, tier=TIER_TWO)
```
Note that in gh-107760 I'm removing Instruction.write altogether.
```python
a.write_abstract_interpreter_instructions(args.abstract_interpreter_cases,
                                          args.emit_line_directives)
```
Heh, I noticed this too. :-)
Looking at optimizer_analysis.c I am wondering if that code is ready for prime time. Maybe there's a way that we can add most of the infrastructure code (that is likely relatively stable) without adding any of the code that's still under heavy development? E.g. I'm fine with the extra optimizer argument and generating a new .c.h file, and even with having a dummy optimizer_analysis.c file, but I worry that the latter will undergo many serious changes that will cause a lot of churn.
It is also by far the largest amount of code, and of a highly algorithmic nature that I won't be able to review meaningfully until you and I have sat down for a review of the algorithm. Compare this to most of the Tier 2 work so far, which is mostly just engineering -- e.g. splitting bytecodes into guard and action uops that use at most one cache entry feels more like a refactoring, and the superblock generator and interpreter are also mostly just hard work, not deep thinking.
Yeah, I'm concerned about the changes in the analysis file as well. The other problem I thought of is that new uops (especially calls) would need modifications to the analysis file, and that requires deep understanding of the algorithm. Would you be amenable to guarding the optimization pass behind a define that is off by default? That way we can still experiment with things without impeding other efforts.
That would totally work. Can you update the PR?
I've decided to use an env var instead, because it's easier for testing. With a define, we would not be able to run any tests except with a custom branch of CPython.
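A minimal sketch of such an env-var gate; the variable name PYTHONUOPSOPTIMIZE and the function are invented for illustration, not the PR's actual code:

```c
#include <stdlib.h>

/* Return nonzero when the experimental pass should run. Unset or "0"
   means off, so default build behavior is unchanged. */
static int
uops_optimize_enabled(void)
{
    const char *flag = getenv("PYTHONUOPSOPTIMIZE");
    return flag != NULL && *flag != '\0' && *flag != '0';
}
```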
Python/bytecodes.c:
```diff
@@ -3743,13 +3743,12 @@ dummy_func(
         return frame;
     }

-        op(INSERT, (--)) {
+        op(INSERT, (stuff[oparg], top -- top, stuff[oparg])) {
             // Inserts TOS at position specified by oparg
             PyObject *tos = TOP();
             for (int i = 1; i < oparg + 1; i++) {
                 stack_pointer[i] = stack_pointer[i - 1];
```
Hm, doesn't this repeat the first stack element over and over? Maybe memmove() would do what you want? (It is supposed to be good at overlaps, unlike memcpy().)
Whoops, yeah, thanks for catching this! I forgot to negate the indexes.
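For reference, a sketch of the corrected shift with negated indexes, plus the memmove() variant suggested above. This mirrors the quoted snippet rather than the exact code that landed; the helper names are invented:

```c
#include <string.h>   /* memmove */
#include "Python.h"   /* PyObject */

/* Insert TOS below the top `oparg` items: shift those items up one
   slot, working from the top down with negative indexes, then drop
   the saved TOS into the freed slot. */
static void
insert_tos(PyObject **stack_pointer, int oparg)
{
    PyObject *tos = stack_pointer[-1];
    for (int i = 1; i < oparg + 1; i++) {
        stack_pointer[-i] = stack_pointer[-i - 1];
    }
    stack_pointer[-oparg - 1] = tos;
}

/* Equivalent using memmove(), which is safe on overlapping ranges
   (unlike memcpy()). */
static void
insert_tos_memmove(PyObject **stack_pointer, int oparg)
{
    PyObject *tos = stack_pointer[-1];
    memmove(&stack_pointer[-oparg], &stack_pointer[-oparg - 1],
            oparg * sizeof(stack_pointer[0]));
    stack_pointer[-oparg - 1] = tos;
}
```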
```python
case parsing.InstDef():
    instr = AbstractInstruction(self.instrs[thing.name].inst)
    if instr.is_viable_uop() and instr.name not in SPECIALLY_HANDLED_ABSTRACT_INSTR:
        self.out.emit("")
        with self.out.block(f"case {thing.name}:"):
            instr.write(self.out, tier=TIER_TWO)
```
I wonder if you even need the AbstractInstruction class. Maybe you could just call a different method on Instruction?
I was thinking of how we might need to expand on it more in the future, so a separate class might be better.
When that happens in the future you can refactor the code. Until then, I recommend less code.
As you may have noticed this code gets refactored a lot. :-) It's easy because the generator is not a public API -- all we care about is whether it generates a handful of files correctly from bytecodes.c at build time. But I worry about copying and pasting code, because that's harder to refactor.
Guido, thanks for the multiple rounds of reviews. It's a gigantic PR, and I truly appreciate your time on this!
I'm still concerned that we have that 1000+ line experimental file. I think requiring you to have a branch with just the changes to that file in it makes sense until it all works reliably. (For comparison, Brandt's copy-and-patch is also still a branch.)
Python/executor_cases.c.h (@@ -2736,14 +2736,14 @@):
```c
case INSERT: {
    PyObject *top;
    PyObject **stuff;
    PyObject **stuff1;
```
Hm, the warnings are kind of annoying -- let's see if we can get rid of those.
Got it. I removed the experimental optimizer part and pushed it to another branch.
Great. Do you think this is in the state you'd like to merge now?
Yes, I'm happy merging this!
Go for it!
This is joint work by @JuliaPoo and me, supported by the NUS TEST Lab.
This implements a subset of partial evaluation (specifically, constant propagation), using the technique described for type propagation in our Tier 2 interpreter report, applied as abstract interpretation over uops. The main goal of upstreaming this PR is to set up the infrastructure for optimization passes over CPython uops. With constant propagation, turning global loads or attribute loads into constants at the region-formation phase will open up further optimizations in subsequent passes. This PR also makes CPython uops ready for type propagation, thus allowing wide-scale typed operations.
Features:
- Partitioning of values on the stack and locals into static/dynamic, using the concept of "partitions of nodes" (see the sketch after this list).
- Introduces the concept of "pure" operations and what can be ascertained about them.
- Constant propagation purely via abstract interpretation of bytecode, with no requirement for SSA or an AST.
- An optimization pass to remove redundant SAVE_IPs after partial evaluation.
- Jump target/instruction numbering, which allows us to freely relocate jump targets/instructions; a final pass fixes them up.
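As a rough illustration of that static/dynamic split, here is a minimal sketch; AbstractValue and abstract_add are invented names for illustration, not the PR's actual data structures:

```c
#include <stdbool.h>

/* Each stack slot or local is tracked as either "static" (its value is
   known to the optimizer) or "dynamic" (only known at run time). */
typedef struct {
    bool is_static;
    long const_value;   /* meaningful only when is_static is true */
} AbstractValue;

/* Abstract addition: two static inputs fold to a static result; any
   dynamic input makes the result dynamic. */
static AbstractValue
abstract_add(AbstractValue a, AbstractValue b)
{
    if (a.is_static && b.is_static) {
        return (AbstractValue){ .is_static = true,
                                .const_value = a.const_value + b.const_value };
    }
    return (AbstractValue){ .is_static = false, .const_value = 0 };
}
```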
Example:
We have the following test case.
This is now essentially simplified down to:
Which roughly halves the trace length.
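To make the effect concrete, here is a toy sketch of the kind of rewrite constant propagation performs on a flat trace; ToyUop, fold_constants, and the opcode names are all invented, and this is not the PR's actual test case:

```c
/* Fold the pattern LOAD_CONST a; LOAD_CONST b; ADD into
   LOAD_CONST (a + b), shortening the trace. A single left-to-right
   pass, for simplicity. */
typedef enum { LOAD_CONST, ADD, STORE_LOCAL } ToyOpcode;
typedef struct { ToyOpcode op; int arg; } ToyUop;

static int
fold_constants(const ToyUop *in, int n, ToyUop *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        if (i + 2 < n &&
            in[i].op == LOAD_CONST &&
            in[i + 1].op == LOAD_CONST &&
            in[i + 2].op == ADD)
        {
            out[m++] = (ToyUop){LOAD_CONST, in[i].arg + in[i + 1].arg};
            i += 2;   /* consume the folded pair and the ADD */
        }
        else {
            out[m++] = in[i];
        }
    }
    return m;   /* new trace length, <= n */
}
```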
TODO:
- Follow the lltrace convention commonly seen in the rest of the uops codebase.