Feature: Automatic generation of StrictDoc's qualification data package #2147
Any feedback is greatly appreciated! cc @haxtibal @johanenglund @thseiler @richardbarlow @nicpappler
This looks like a great plan! Thank you for putting in the time and effort to help ensure that StrictDoc is easy to adopt. Before getting into the details, I thought I'd make clear what we (gaitQ) need as a Medical Device (MD) manufacturer: for any tool that we use in the development of our MDs, we must ensure that the tool meets our intended use. As discussed in the office hours the other week, our preferred approach would be to map our requirements to StrictDoc's requirements.
In addition to the documentation of the qualification data package generation, we feel that there are a couple of additional pieces of documentation missing to ensure that quality is maintained in general (unless we're just looking in the wrong place?):
Something that I wanted to ask in the most recent office hours call was what do you consider to be the 'Device Under Test' when it comes to testing a StrictDoc release? We would prefer that all of the qualification steps are performed against the actual artifact that will be pushed to PyPI. Have you considered attestation of both the source release pushed to PyPI and the qualification data pack?
If PyPI is the repository for release artifacts, then to me it would make the most sense for it to live alongside those. I think it's outside the scope of this issue, but we also need the traceability of StrictDoc's requirements through to the source - at least for the requirements that we depend on. We are in the process of defining our requirements for StrictDoc and should be able to get something to you in the next week or two.
Thanks. This is very good input. Now that you are creating requirements for StrictDoc in a dedicated document, please consider including all these points there, so that we can address them in a structured and traceable manner. To support the first round of discussion, my comments are below. Later on, I will provide my answers in SDoc documents, traceable from your document.
Understood, and as far as I know, this aligns with how safety-related developments are generally handled. I am also familiar with how the RTEMS RTOS prepares its Qualification Data Package (QDP) for a specific minimal profile across a set of pre-qualified hardware platforms. A user then has to integrate the RTEMS QDP into their larger project and demonstrate that the intended use remains within the envelope of the QDP. I consider the RTEMS QDP a good reference for many aspects of OSS qualification.
Understood and agree with the approach. Please include these as requirements in your document.
Within the small core team (@mettta and myself), we haven't been too strict about this so far, as many things were discussed directly. However, we have always reviewed and approved contributions from users. Now that there are more contributors and we want to move towards a more formal development process, it would be a good idea to configure the GitHub settings to require at least one approval, and to document this approach in the development plan.
This is an important open point. StrictDoc has four groups of tests, and none of them individually achieves 100% code coverage. The current coverage threshold for unit tests is set at 60%, mainly because some Python classes are primarily exercised through higher-level end-to-end tests for the CLI and web interface. I am considering a solution where a custom Python script would validate that the combined code coverage does not drop below a certain percentage. However, the challenge is that the GitHub CI jobs are split between CLI and web end-to-end tests for performance reasons. They have to be parallelized; otherwise, a single test job would take much longer for each PR.
I need to think through a good solution to this. Maybe we could make an exception for GitHub CI Linux jobs because they are the fastest. If all tests on Linux are merged into one job that calculates the combined coverage and sets a limit, this would give a day-to-day validation of the code coverage threshold.
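To make the idea concrete, here is a minimal sketch of what such a combined-coverage gate could look like, assuming the parallel jobs upload their coverage data files into a shared directory. The directory name and the threshold value are illustrative assumptions, not StrictDoc's actual configuration.

```python
# A minimal sketch of a combined-coverage gate, assuming the CI jobs place
# their .coverage data files into a shared "coverage-data/" directory.
# The threshold below is an assumed value, not StrictDoc's real limit.
import glob
import sys

import coverage

COMBINED_THRESHOLD = 80.0  # assumed value


def check_combined_coverage() -> None:
    data_files = glob.glob("coverage-data/.coverage.*")
    if not data_files:
        sys.exit("No coverage data files found to combine.")

    cov = coverage.Coverage()
    # Merge the per-job data files (unit, CLI end-to-end, web end-to-end).
    cov.combine(data_files, keep=True)
    cov.save()

    # report() returns the total coverage percentage across the merged data.
    total = cov.report(show_missing=False)
    if total < COMBINED_THRESHOLD:
        sys.exit(
            f"Combined coverage {total:.1f}% is below the "
            f"{COMBINED_THRESHOLD}% threshold."
        )
    print(f"Combined coverage {total:.1f}% meets the threshold.")


if __name__ == "__main__":
    check_combined_coverage()
```

A script along these lines could run in the single merged Linux job mentioned above, so the combined threshold would be enforced on every PR.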
This one is easy; I opened an issue: #2169.
Yes, the plan was to use PyPI as the device under test.
That was the plan.
It is an interesting trade-off between development convenience and precise version pinning. StrictDoc used to have all its direct dependencies pinned, but that created some overhead because it required manually updating all dependencies and reacting to security update notifications. If strict pinning were required, we would need a complete list of dependencies, frozen recursively. I would suggest keeping this as a separate exercise, to be done only if a more rigorous approach becomes necessary.
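For illustration, a recursively frozen list could be produced from the environment in which StrictDoc is installed, using only the standard library. This is just a sketch; the output file name is an assumption and not part of any existing process.

```python
# A minimal sketch of producing a fully frozen dependency list: every
# distribution installed in the environment (direct plus transitive
# dependencies), pinned to exact versions. The output file name is illustrative.
from importlib.metadata import distributions
from pathlib import Path


def write_frozen_requirements(output_path: str = "requirements.frozen.txt") -> None:
    pins = sorted(
        f"{dist.metadata['Name']}=={dist.version}" for dist in distributions()
    )
    Path(output_path).write_text("\n".join(pins) + "\n")


if __name__ == "__main__":
    write_frozen_requirements()
```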
It is a great idea, and I need to think about how it could be implemented. I used to have an automatic release and deployment process triggered by a GitHub release, but I had to disable it at some point because:
Ideally, the StrictDoc PIP package should already be available when the qualification tasks start running. However, I have observed that after a release to PyPI, it sometimes takes a few seconds or minutes before the new package becomes fully downloadable. A related issue: after releasing, one needs to trigger one [...]. I am open to discussing the best approach for improving this. So far, solving it has not been the highest priority because the manual process has worked reliably enough.
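One possible way to bridge the gap between publishing and availability would be to poll PyPI's public JSON API before the qualification tasks start. The following is only a sketch; the timeout and poll interval are arbitrary choices.

```python
# A minimal sketch of waiting for a freshly released version to become
# visible on PyPI before starting the qualification tasks. It polls the
# public PyPI JSON API (https://pypi.org/pypi/<project>/<version>/json).
import time
import urllib.error
import urllib.request


def wait_for_pypi_release(project: str, version: str, timeout_sec: int = 600) -> None:
    url = f"https://pypi.org/pypi/{project}/{version}/json"
    deadline = time.monotonic() + timeout_sec
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url) as response:
                if response.status == 200:
                    return  # Release metadata is visible; downloads should work.
        except urllib.error.HTTPError:
            pass  # Most likely 404: the new version is not published yet.
        time.sleep(15)
    raise TimeoutError(f"{project}=={version} did not appear on PyPI in time.")


# Example (hypothetical version number):
# wait_for_pypi_release("strictdoc", "0.0.60")
```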
We need to think about connecting the PyPI stable release and the qualification data pack. I am not sure whether directly coupling the PyPI release and the qualification data pack is a good idea from the perspective of release-process convenience.
We are planning to trace requirements to both code and tests. We have created this diagram to give more structure to how StrictDoc's own documentation has to be traced: #2167. With a few exceptions, we are almost certain this structure should cover everything StrictDoc has. The rest is mechanical work of adding the traces.
This is great, and we are looking forward to reviewing it. Thanks for your comments!
Description
StrictDoc needs to demonstrate its capability to support safety- and security-related developments. The simplest way to consolidate all supporting evidence is to create a single executable Python task that generates and merges all evidence packages into a known location.
Problem
Currently, StrictDoc only has the following artifacts generated automatically:
What is missing is a combined report that bundles together all reports and artifacts in one place with all items cross-linked with each other.
Solution
The following tasks must be accomplished:
- Create a single task, qualification. The task must perform the following functions:
  - Collect all generated reports into the reports/ folder.
  - Collect code coverage data into the coverage folder. The expected format is gcov/JSON.
Additional Information
The qualification task must support running its subtasks within or outside StrictDoc's Docker container.
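To illustrate the intended shape of the solution, below is a minimal sketch of such a task, assuming an Invoke-style tasks.py. The subtask commands, flags, Docker image name, and output paths are assumptions made for the example, not StrictDoc's actual interface.

```python
# A minimal sketch of a top-level "qualification" task, assuming an
# Invoke-style tasks.py. The subtask commands and output layout below are
# placeholders; the real task would call StrictDoc's existing test and
# report tasks.
from pathlib import Path

from invoke import task

OUTPUT_DIR = Path("output/qualification")


@task
def qualification(context, use_docker=False):
    """Generate and merge all qualification evidence into one known location."""
    reports_dir = OUTPUT_DIR / "reports"
    coverage_dir = OUTPUT_DIR / "coverage"
    reports_dir.mkdir(parents=True, exist_ok=True)
    coverage_dir.mkdir(parents=True, exist_ok=True)

    # Hypothetical wrapper: prefix commands with a Docker invocation when the
    # subtasks must run inside StrictDoc's container.
    def run(command: str) -> None:
        if use_docker:
            command = f"docker run --rm -v $(pwd):/workspace strictdoc {command}"
        context.run(command, echo=True)

    # Each subtask drops its artifacts into the shared output folders
    # (names and flags are illustrative).
    run(f"invoke test-unit --coverage --output-dir {coverage_dir}")
    run(f"invoke test-end2end --report-dir {reports_dir}")
    run(f"strictdoc export docs/ --output-dir {reports_dir}/documentation")
```

Run either natively or in the container, e.g. `invoke qualification` or `invoke qualification --use-docker`, so the same entry point covers both environments mentioned above.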