[fatlto] Add coroutine passes when using FatLTO with ThinLTO #134434
It's also possible to do

MPM.addPass(buildModuleSimplificationPipeline(Level, ThinOrFullLTOPhase::ThinLTOPostLink));

That seems to generate better code here, since it allows the tail call in the test to be removed. But it seems a bit expensive just to give the inliner a second chance.
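For context, here's a rough sketch of where that call would sit. The body below is illustrative, not the actual upstream implementation of buildFatLTODefaultPipeline in llvm/lib/Passes/PassBuilderPipelines.cpp:

```cpp
// Illustrative sketch only: the real buildFatLTODefaultPipeline has more
// logic (bitcode embedding, full-LTO handling) around this.
ModulePassManager
PassBuilder::buildFatLTODefaultPipeline(OptimizationLevel Level, bool ThinLTO,
                                        bool EmitSummary) {
  ModulePassManager MPM;
  (void)EmitSummary; // Summary emission / bitcode embedding omitted here.
  if (ThinLTO) {
    // Pre-link pipeline for the embedded ThinLTO bitcode half of the object.
    MPM.addPass(buildThinLTOPreLinkDefaultPipeline(Level));
    // The alternative under discussion: treat the non-LTO half as a ThinLTO
    // post-link compile, re-running module simplification (inliner included)
    // before module optimization.
    MPM.addPass(buildModuleSimplificationPipeline(
        Level, ThinOrFullLTOPhase::ThinLTOPostLink));
    MPM.addPass(buildModuleOptimizationPipeline(
        Level, ThinOrFullLTOPhase::ThinLTOPostLink));
  }
  return MPM;
}
```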
I think the current PR is a good way to start, as it avoids the miscompilation in a minimal way that is suitable for backporting.

Using the module simplification pipeline here (plus the module optimization pipeline) would effectively be the same as using the ThinLTO post-link pipeline.

I ran some tests, and it looks like adding -ffat-lto-objects adds about 10% overhead to the clang build (https://llvm-compile-time-tracker.com/compare.php?from=44923d8631fb28b4de54d4210762f256c3894cef&to=2bdf721c2d37af6fcbd931d963f586478cb55f17&stat=instructions:u). On top of that, switching this from the module optimization pipeline to the ThinLTO post-link pipeline (your original PR) adds an extra 4%: https://llvm-compile-time-tracker.com/compare.php?from=2bdf721c2d37af6fcbd931d963f586478cb55f17&to=63666418fbe19f30bf971796747a751b4e1c57f3&stat=instructions:u

Adding 4% overhead is not great, but also not terrible, so maybe it's worth it to avoid pipeline compatibility issues in a principled way. It does make the codegen for FatLTO ELF diverge more from a normal compilation, though.

Ideally we'd be able to rerun the simplification pipeline but skip the inliner for already-optimized functions. (I think @aeubanks had a prototype that did that for the actual ThinLTO scenario, by looking at available_externally functions. The situation here is somewhat different.)
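To make the "minimal" option concrete, here is a sketch of what running only the coroutine lowering passes on the non-LTO half of the fat object could look like. The helper name is hypothetical and the exact pass list and placement are assumptions based on this PR's title, not the verbatim diff:

```cpp
#include "llvm/Analysis/CGSCCPassManager.h"
#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Coroutines/CoroCleanup.h"
#include "llvm/Transforms/Coroutines/CoroSplit.h"
#include <utility>

using namespace llvm;

// Hypothetical helper: lower coroutines on the non-LTO half of a fat object.
// The ThinLTO pre-link pipeline defers coroutine splitting to the post-link
// stage, so without something like this the coroutine intrinsics survive
// into the non-LTO codegen path and get miscompiled.
static void addCoroutineLoweringPasses(ModulePassManager &MPM) {
  // CoroSplit is a CGSCC pass, so it needs the module-to-CGSCC adaptor.
  CGSCCPassManager CGPM;
  CGPM.addPass(CoroSplitPass());
  MPM.addPass(createModuleToPostOrderCGSCCPassAdaptor(std::move(CGPM)));
  // CoroCleanup removes any remaining coroutine intrinsics module-wide.
  MPM.addPass(CoroCleanupPass());
}
```

Compared to re-running a whole simplification pipeline, this only adds the coroutine passes, which are cheap for functions that contain no coroutine intrinsics.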
Hmm, 10% seems a bit high for build-time overhead, though we haven't used it much with ThinLTO in our toolchain, so maybe that's the difference?

Looking at our build times when we enabled it in our toolchain, we saw about a 2.5% slowdown in total build time but a 22% improvement in test time (ninja check-*). Overall, that worked out to roughly a 4.4% speedup in total time.

So I'm not surprised it slowed down the build alone, but I am surprised it added a full 10%. Then again, I/O can vary a lot between machines, and that may be enough to explain it, since with ThinLTO it probably more than doubles the size of the .o files.