
Properly test cross-language LTO #57438

@michaelwoerister

I'd like us to stabilize cross-language LTO as soon as possible, but before we can do that we need more robust tests. Some background:

  • Cross-language LTO works by emitting LLVM bitcode instead of machine code for RLIBs/staticlibs and then letting the linker do the final ThinLTO pass. Because code coming from other languages (most notably C/C++) is also compiled to LLVM bitcode, the linker can perform inter-procedural optimizations across language boundaries.
  • For this to work properly, the LLVM versions used by rustc and the foreign language compilers must be roughly the same (at least the same major version).
  • The current implementation of cross-language LTO is available via -Zcross-lang-lto in rustc. It has some tests that make sure libraries indeed contain LLVM bitcode instead of object files. This is the bare-minimum requirement for cross-language LTO to work.

However, there is a problem with this approach:

  • Having LLVM bitcode available might not be sufficient for cross-language optimizations to actually happen. For example, LLVM will not inline one function into another if their target-features don't match. For this reason we unconditionally emit the target-cpu attribute for LLVM functions. For now this seems to be sufficient, but future versions of LLVM might introduce additional requirements and break cross-language LTO silently (i.e. everything still compiles but the expected optimizations no longer happen).
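
For illustration, one way a test could guard against this kind of silent breakage is to inspect the IR that rustc emits and assert that the attributes we rely on are actually there. Below is a minimal sketch, not an existing test: it assumes `rustc` is on the `PATH` and that the attribute shows up literally as `"target-cpu"` in the textual IR.

```rust
use std::fs;
use std::process::Command;

fn main() {
    // Hypothetical check: compile a trivial crate to textual LLVM IR and
    // verify that rustc attached a "target-cpu" attribute to the function.
    fs::write(
        "lib.rs",
        "#[no_mangle] pub extern \"C\" fn answer() -> u32 { 42 }",
    )
    .unwrap();

    let status = Command::new("rustc")
        .args(&["--crate-type=lib", "--emit=llvm-ir", "-o", "lib.ll", "lib.rs"])
        .status()
        .expect("failed to run rustc");
    assert!(status.success());

    let ir = fs::read_to_string("lib.ll").unwrap();
    // If this fails, cross-language inlining would likely be inhibited even
    // though everything still compiles.
    assert!(
        ir.contains("\"target-cpu\""),
        "no target-cpu attribute in the emitted IR"
    );
}
```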

So, to test the feature in a more robust way, we need a test that actually compiles some mixed Rust/C++ code and then verifies that function inlining across language boundaries was successful.

The good news is that trivial functions (e.g. ones that just return a constant integer) are reliably inlined, so we have a good way of checking whether inlining happens at all.
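
As an illustration of what such a fixture might look like, here is a sketch of the Rust half of a mixed test case. The file names, the `c_answer` symbol, and the build commands in the comment are made up for the example; the real test would drive rustc and a matching clang from a run-make-fulldeps Makefile.

```rust
// cross_inline.rs -- hypothetical Rust half of the test case.
//
// The C half (answer.c in this sketch) is a trivial function returning a
// constant:
//
//     int c_answer(void) { return 42; }
//
// Both halves are compiled to LLVM bitcode (clang with -flto=thin, rustc
// with -Zcross-lang-lto) and linked with a ThinLTO-capable linker. The test
// then checks the final binary (e.g. via disassembly) and asserts that no
// call to c_answer remains, i.e. the constant was inlined across the
// language boundary.

extern "C" {
    fn c_answer() -> i32;
}

#[no_mangle]
pub fn rust_entry() -> i32 {
    // After a successful cross-language ThinLTO pass this should compile
    // down to a plain `return 42`, with the call to c_answer inlined away.
    unsafe { c_answer() }
}

fn main() {
    std::process::exit(rust_entry());
}
```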

The bad news is that, since Rust's LLVM is ahead of stable Clang releases, we need to build our own Clang in order to get a compatible version. We already have Clang in tree because we need it for building Rust-enabled LLDB, but building it is not enabled by default.

So my current proposal for implementing these tests is:

  • Add a script (available to run-make-fulldeps tests) that checks whether a compatible Clang is available (a sketch of such a check follows this list).
  • Add an option to compiletest that makes it possible to specify whether tests relying on Clang are optional or required.
  • If Clang-based tests are required, fail them when Clang is not available; otherwise just ignore them (so that local testing can be done without also building Clang).
  • Force building Rust LLDB on major platform CI jobs, thus making Clang available.
  • For those same jobs, set the compiletest option that requires Clang to be available.
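
To make the first bullet a bit more concrete, here is a minimal sketch of what such a compatibility check could look like. None of this exists yet: the hard-coded LLVM major version, the version parsing, and the "at least the same major version" comparison are all assumptions made for illustration.

```rust
use std::process::Command;

/// Hypothetical helper: return the major version reported by `clang --version`,
/// or None if clang is missing or its output cannot be parsed.
fn clang_major_version(clang: &str) -> Option<u32> {
    let output = Command::new(clang).arg("--version").output().ok()?;
    let text = String::from_utf8_lossy(&output.stdout);
    // Typical first line: "clang version 8.0.0 (...)"; the parsing here is
    // only a sketch.
    let version = text
        .split_whitespace()
        .skip_while(|word| *word != "version")
        .nth(1)?;
    version.split('.').next()?.parse().ok()
}

fn main() {
    // Assumed: the build system knows which LLVM major version rustc was
    // built against (hard-coded here for the example).
    let rustc_llvm_major = 8;

    match clang_major_version("clang") {
        Some(major) if major >= rustc_llvm_major => {
            println!("compatible clang found (major version {})", major);
        }
        Some(major) => {
            eprintln!("clang {} is older than rustc's LLVM {}", major, rustc_llvm_major);
            std::process::exit(1);
        }
        None => {
            eprintln!("no usable clang found");
            std::process::exit(1);
        }
    }
}
```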

This way cross-language LTO will be tested on CI, and hopefully sccache will make building Clang+LLDB cheap enough. The downside is that we rely on remembering to set the appropriate flags in the CI images; forgetting them would cause things to silently go untested.

@rust-lang/infra & @rust-lang/compiler, do you have any thoughts? Or a better, less complicated approach? It would be great if we could require Clang unconditionally, but I'm afraid that would be too much of a burden for people running tests locally.
