RFC: Higher sample count for benching? #16
Comments
This would be great! It'd also be nice if there were a way of adding the benchmarks to the README stars table somehow 🤔. Another thing along these lines that I added to my repo created from this template (thank you for this, by the way!) was a way to test the solution, not just the example, so that if I go refactoring and optimizing I don't accidentally break my implementation for the real input. I had to leave it ignored so it didn't break CI, which doesn't have the real inputs downloaded, but it's something (see the sketch below). Anyway, I thought I'd send it your way just to get your thoughts on it, since I saw you had some RFCs up :)
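For context, a minimal sketch of what such an ignored test could look like. The input path, the `part_one` signature, and the asserted answer are all illustrative assumptions, not the template's actual layout:

```rust
// Stand-in solver so the sketch compiles on its own; the real one
// would live in the day's solution file.
pub fn part_one(input: &str) -> Option<u32> {
    Some(input.lines().count() as u32)
}

#[cfg(test)]
mod tests {
    use super::*;

    // Runs against the real (gitignored) puzzle input; marked #[ignore]
    // so CI stays green when the input file is absent.
    // Run locally with: cargo test -- --ignored
    #[test]
    #[ignore]
    fn test_part_one_real_input() {
        let input = std::fs::read_to_string("src/inputs/01.txt")
            .expect("real input not downloaded");
        // 142 is a placeholder for the known-correct answer; pinning it
        // means refactors can't silently break the real solution.
        assert_eq!(part_one(&input), Some(142));
    }
}
```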
Cool idea! 👍 Not sure if we should try to extend the table or set up a different block. Maybe (a tweaked version of) the output of Benchmarks?
I'm a bit on the fence about that gitignore - I understand where it's coming from, but in reality most people check in the inputs anyway.
The problem with adding this to the stars table is that the current template does not verify that we generate the correct result, and the CI hook that adds the stars is probably not the best place to benchmark the solution. It should be possible to add a function to the solve program, or even add a submit program that automatically submits the result to the Advent of Code website. I think I have seen a Python template that does that. That submit program could then run the benchmark, submit the result, and update the readme.
I won't argue with anyone who wants to commit the inputs themselves, but if the developer of AoC specifically asked that we don't publish the inputs, maybe a template isn't the best place to ignore that request.
Yeah, this would be great!
If the dev of AoC asked for them not to be checked in, I don't want my stuff checking them in. Another option would be giving aoc-cli a way to pull the session token from an env var instead of a file, so that it could then be stored in Secrets, at which point CI could just run the download itself.
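A rough sketch of that env-var fallback idea in Rust. The variable name `AOC_SESSION` is an assumption, not aoc-cli's actual behavior; the file fallback mirrors the conventional `~/.adventofcode.session` location:

```rust
use std::{env, fs};

// Prefer a session token from the environment (e.g. a CI secret),
// falling back to the token file for local use.
// AOC_SESSION and the file path are illustrative names only.
fn session_token() -> Option<String> {
    env::var("AOC_SESSION").ok().or_else(|| {
        let path = format!("{}/.adventofcode.session", env::var("HOME").ok()?);
        fs::read_to_string(path).ok().map(|s| s.trim().to_string())
    })
}

fn main() {
    match session_token() {
        Some(_) => println!("session token found"),
        None => eprintln!("no session token configured"),
    }
}
```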
Oh yeah, I'm perfectly happy with it not being part of CI, but instead being done locally. So long as these things are opt-in via a parameter, at least, something like a dedicated flag.
The crate we use for downloading inputs supports input submissions, so we could leverage it here. I would accept a PR for that if someone is interested, and I created #19 for this.
This looks like a good way forward. 👍 I have a somewhat working implementation of benchmarks on my repo for this year - I'll try to put it behind a flag.
I think we could implement this on our side with the crate mentioned above.

Edit: forgot to mention that my preference for benchmarking is that it should be done client-side. :)
I am currently learning Rust; that's how I found this repo. I already started to work on a "simple" solution for this using criterion. Right now I have a PoC for criterion working, running my solution for day 1 (roughly along the lines of the sketch below). #20
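Roughly, a criterion benchmark for a single day could look like this. The `advent_of_code` library crate and the `day01::part_one` path are assumptions; criterion needs the solution exposed from a lib target, which is exactly the binary-crate limitation mentioned elsewhere in this thread. The bench file would be registered in Cargo.toml under a `[[bench]]` entry with `harness = false`:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// Assumes the solutions are exposed from a library target, since
// criterion cannot benchmark binary crates directly.
use advent_of_code::day01;

fn bench_day01(c: &mut Criterion) {
    let input = std::fs::read_to_string("src/inputs/01.txt")
        .expect("input file missing");
    c.bench_function("day01::part_one", |b| {
        b.iter(|| day01::part_one(black_box(&input)))
    });
}

criterion_group!(benches, bench_day01);
criterion_main!(benches);
```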
I was just looking at the aoc-cli repo to see about taking a crack at implementing #19, and saw that this PR adding the ability for aoc-cli to use an env var for the session was just opened. So that may be easier to do soon :)
I implemented a proof of concept for readme benchmarks on my 2022 repo to get a better idea of how this would work with the current code layout. The code is still unfinished and a bit terrible, but benchmarks are written to the readme automatically when I run the solutions:

Screen.Recording.2022-12-05.at.17.58.32.mov
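One way such a readme writer could splice generated benchmark output into the file, assuming hypothetical marker comments delimiting the section (a guess at the approach, not the PoC's actual code):

```rust
use std::fs;

// Hypothetical markers delimiting the auto-generated section.
const START: &str = "<!-- BENCHMARKS START -->";
const END: &str = "<!-- BENCHMARKS END -->";

// Replace everything between the markers with the freshly
// generated benchmark table, leaving the rest of the readme intact.
fn update_readme(table: &str) -> std::io::Result<()> {
    let readme = fs::read_to_string("README.md")?;
    let (before, rest) = readme.split_once(START).expect("start marker missing");
    let (_, after) = rest.split_once(END).expect("end marker missing");
    let updated = format!("{before}{START}\n{table}\n{END}{after}");
    fs::write("README.md", updated)
}

fn main() -> std::io::Result<()> {
    update_readme("| Day 1 | 12.3 µs |")
}
```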
Implementation

Currently experimenting with a higher sample count for benching on my solution repo; I might add it to the template after this year concludes. Unfortunately, using criterion with binary crates is not possible yet, which would make this a bit simpler.

Advantages:

Disadvantages: (not an issue so far).

Example output:
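Absent criterion, a higher sample count can be approximated with std alone. A minimal sketch of that idea; the sample count, warmup count, input path, and the `part_one` stand-in are all illustrative, not the template's actual code:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

// Time `f` over `samples` runs and report the average; warmup runs keep
// a cold start from skewing the result, and black_box stops the
// optimizer from deleting the unused return values.
fn bench<T>(samples: u32, f: impl Fn() -> T) -> Duration {
    for _ in 0..3 {
        black_box(f());
    }
    let start = Instant::now();
    for _ in 0..samples {
        black_box(f());
    }
    start.elapsed() / samples
}

// Stand-in solver so the sketch compiles on its own.
fn part_one(input: &str) -> usize {
    input.lines().count()
}

fn main() {
    let input = std::fs::read_to_string("src/inputs/01.txt").expect("input missing");
    let avg = bench(1000, || part_one(&input));
    println!("part_one: {avg:?} (avg over 1000 samples)");
}
```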