Which tests do I have to run before submitting a pull request for a new feature? #15157
Settheworldonfireiii asked this question in Q&A
Which tests must the new feature pass before the pull request is made?
I locally implemented output of hidden states from the LLM's `generate()` function, based on a new flag attribute that I added to `SamplingParams`, in a separate sandboxed version of vLLM.
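For concreteness, this is roughly how I intend the feature to be used. It is only a minimal sketch: `return_hidden_states` and the `hidden_states` attribute are my working names from the sandboxed branch, not existing vLLM APIs.

```python
from vllm import LLM, SamplingParams

# Working names from my sandboxed branch: `return_hidden_states` is the new
# flag I added to SamplingParams; it does not exist in upstream vLLM.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(max_tokens=16, return_hidden_states=True)

outputs = llm.generate(["The capital of France is"], params)
for request_output in outputs:
    print(request_output.outputs[0].text)
    # In my branch the request output additionally carries the hidden states;
    # again, the attribute name is my own choice.
    print(getattr(request_output, "hidden_states", None))
```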
Before I even tried to merge the code into the official dev version, I tried to run the tests as described here:
on an editable vLLM version that I installed using these commands, before implementing my changes:
and I got a ton of errors, even though that is the official dev repo's code. When I run all my university research scripts, and when I run all the commands mentioned above **before** `pre-commit run --all-files`, within the same environment (that is, with the dev version of vLLM), I do not get any errors, and `pre-commit run --all-files` produces only a few lint-related ones. When I run models as part of my research in the same environment, I never have any issues. Apparently, some of the errors are due to peculiarities of my environment, or to lack of access to certain Hugging Face models that I never use. In any case, those errors pertain to aspects I never encounter, and the feature I implemented never interacts with them.

Therefore, I wonder: do I really have to pass all the tests in the `pytest/` directory, especially given that currently all of the errors I get are related to my conda environment rather than to the code I wrote or the feature I implemented (since I have not yet merged them into the repository in question)? In addition, the tests are notoriously slow to run, especially with limited compute resources.
If so, I wonder which subset of the tests I need to pass before I create the pull request. How do I determine which tests are related to my feature (apart from those that I write myself)?
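For example, would it be enough to run only the tests whose paths or names look related to the change? Something like the sketch below is what I have in mind; the `tests/samplers` path and the `-k` keyword expression are only my guesses about what is relevant, not an established vLLM convention.

```python
import sys

import pytest

# Collect and run only the tests under a sampler-related directory whose names
# mention hidden states or sampling params; both the path and the -k expression
# are my assumptions about which tests my feature could affect.
exit_code = pytest.main(["tests/samplers", "-k", "hidden or sampling_params", "-q"])
sys.exit(exit_code)
```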
Also, how many tests of my own should I write to test my feature extensively? I know the criterion is that they should cover the usage of the feature, but still, on average, how many tests does it usually take?
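For instance, would a couple of tests along the following lines be enough? This is only a sketch of what I have in mind; `return_hidden_states` and the `hidden_states` attribute are again my own working names from the sandboxed branch, not existing vLLM APIs.

```python
import pytest

from vllm import LLM, SamplingParams


@pytest.mark.parametrize("flag", [True, False])
def test_hidden_states_flag(flag):
    # Small model just to keep the test cheap; the flag and the attribute
    # checked below are my working names, not part of upstream vLLM.
    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(max_tokens=4, return_hidden_states=flag)
    request_output = llm.generate(["hello"], params)[0]

    hidden = getattr(request_output, "hidden_states", None)
    if flag:
        assert hidden is not None  # flag on: hidden states are exposed
    else:
        assert hidden is None      # flag off: default behaviour is unchanged
```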
Just for reference, here are the errors and the environment snapshot. I did not post this in Issues or on Stack Overflow because my primary question is not about getting help with the errors below (although, if I do have to pass all the tests, that would be appreciated), but rather about figuring out which tests, if not all of them, my code has to pass:
Environment snapshot: