Potential CI improvement for multi-distro testing #179
As far as I understand, what happens there is that the container image is built once and reused. The drawback is that the image might then contain outdated versions of stuff, which we will probably need a way to deal with. But yeah, I agree this would be great.
We could just rebuild the Docker images themselves weekly or monthly. Base OS images don't change a lot.
Yeah, that makes sense. We can have a scheduled CI job to do that.
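A scheduled rebuild could be a small GitHub Actions workflow along these lines. This is only a sketch: the image name, Dockerfile path, and secret names are placeholders, not anything from this repository.

```yaml
name: rebuild-ci-images
on:
  schedule:
    - cron: '0 3 * * 1'  # weekly, Monday 03:00 UTC
  workflow_dispatch:     # also allow manual rebuilds

jobs:
  rebuild:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Placeholder image name and Dockerfile location.
      - name: Build the image from scratch
        run: docker build --no-cache -t example/ci-image:latest ci/
      # Only reached if the build step succeeded.
      - name: Push on success
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" \
            | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
          docker push example/ci-image:latest
```

`--no-cache` forces a from-scratch build so base packages actually get refreshed rather than coming from stale layers.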
Looking at https://github.com/mesonbuild/meson/blob/7912901accaee714fc86febdc72f4347b9397759/.github/workflows/os_comp.yml, I'd say they are complementary. It only has Ubuntu and Arch, but it has some compiler configs and some cross-compiling set up, which is nice. ci-sage covers other platforms such as Fedora, Gentoo, ...
If speed is a concern for ci-sage, yes, incremental builds are a possibility.
#183 makes ci-sage a bit faster. It's still a from-scratch build, but using a much smaller configuration of system packages. Incremental builds will be easier to do after https://trac.sagemath.org/ticket/34081 has been merged.
Meson has a weekly image-builder job that attempts to build the container image from scratch, run the tests, and on success push the newly built image to Docker Hub. So older versions of stuff do get bumped, just not on every build. This also means that we usually avoid dependency updates if those dependencies break the CI, so PRs only go red if the PR itself introduces a problem. This has its advantages and disadvantages. Jussi gets an email if the CI images are failing to build. On the other hand, people may not always look at it. It might be neat if the workflow could somehow submit or update an open issue on failures. Actually there are two broken images, but I am fixing them as we speak...
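The "submit an issue on failures" idea could be a final step in such an image-builder workflow, e.g. using the `gh` CLI that GitHub-hosted runners ship with. The issue title and workflow shape here are illustrative, not taken from Meson's actual setup.

```yaml
      # Runs only when an earlier step in the job failed.
      - name: Open an issue if the image build failed
        if: failure()
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          gh issue create \
            --title "Weekly CI image build failed" \
            --body "See $GITHUB_SERVER_URL/$GITHUB_REPOSITORY/actions/runs/$GITHUB_RUN_ID"
```

To update an existing issue instead of opening duplicates, the step would first have to query for an open issue (e.g. with `gh issue list`) and comment on it if one exists.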
For testing the macOS arm64 wheels I looked into Cirrus CI. They have a very nice service where, given a Dockerfile, they take care of the image build and caching. I tried instantiating a simple Debian unstable image with Python and basic development tools, enough to build a Python extension module. Building the image takes around 90 seconds. However, after the first build the image is cached and instantiated almost instantaneously. This seems like a solid and easy-to-set-up alternative to the Sage test infrastructure that may be better tailored to the needs of meson-python. This would also move some of the jobs from the busy GitHub Actions workers to another service with pretty high usage limits. If there is interest I can try to prepare a PR with a setup that runs the tests on a couple of different distributions and we can see how we like it. Here is a minimal example: https://github.com/dnicolodi/cirrusci
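For reference, Cirrus CI's Dockerfile-as-image feature looks roughly like this in `.cirrus.yml`. The file path and test command are illustrative, not copied from the linked example.

```yaml
task:
  name: tests (debian:unstable)
  container:
    # Cirrus builds this Dockerfile, caches the resulting image,
    # and reuses it until the Dockerfile changes.
    dockerfile: ci/debian-unstable.docker
  test_script:
    - python3 -m pytest
```

This is what makes the caching behavior described above nearly free to adopt: the CI config references the Dockerfile directly, and cache invalidation follows automatically from changes to that file.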
That looks super clean. I've been quite impressed with Cirrus CI so far. +1 for giving this a try for both arm64 and multi-distro testing.
FWIW we’ve been happy with Cirrus in cibuildwheel.
Implemented in #241
We now use the Sage CI jobs in https://github.com/FFY00/meson-python/blob/main/.github/workflows/ci-sage.yml. Those take quite a while:
I recently noticed how Meson does this:
That takes just over 1 minute to initialize a container, and then the tests start running. Given how much faster the `meson-python` test suite is, doing it that way should be ~10x faster than rebuilding the Sage Docker image. I'm not sure if we'd lose anything in terms of coverage of configurations (@mkoeppe please comment if I'm missing something important here). And we could use that extra time to test:
- `--user`
- `sudo`
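The Meson-style approach boils down to running the test suite inside prebuilt distro containers. A hedged GitHub Actions sketch, assuming images that already have Python and the build dependencies installed (the image names here are placeholders):

```yaml
jobs:
  tests:
    strategy:
      fail-fast: false  # let all distros report, even if one fails
      matrix:
        image: ['debian:unstable', 'fedora:latest', 'archlinux:latest']
    runs-on: ubuntu-latest
    container: ${{ matrix.image }}
    steps:
      - uses: actions/checkout@v3
      # Assumes pytest and the build dependencies are already in the image.
      - run: python3 -m pytest
```

Pulling a prebuilt image is what gives the ~1 minute startup mentioned above; the per-distro cost then becomes essentially just the test suite runtime.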