Metrics Initiative #260
This issue is intended for status updates only. For general questions or comments, please contact the owner(s) directly.
I'm very excited to see that this got accepted as a project goal 🥰 🎉 Let me go ahead and start by giving an initial status update of where I'm at right now.
from:

```json
{"lib_features":[{"symbol":"variant_count"}],"lang_features":[{"symbol":"associated_type_defaults","since":null},{"symbol":"closure_track_caller","since":null},{"symbol":"let_chains","since":null},{"symbol":"never_type","since":null},{"symbol":"rustc_attrs","since":null}]}
```

*[image] Snippet of unstable feature usage metrics post conversion to line protocol*

*[image] Snippet of feature status metrics post conversion to line protocol*
Run with:

```sql
SELECT
  COUNT(*) TotalCount, "featureStatus".name
FROM
  "featureStatus"
INNER JOIN "featureUsage" ON
  "featureUsage".feature = "featureStatus".name
GROUP BY
  "featureStatus".name
ORDER BY
  TotalCount DESC
```
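For reference, the JSON-to-line-protocol conversion described above can be sketched roughly like this. This is a minimal sketch, not the actual converter: the `feature_usage` measurement name and `kind` tag are placeholders of my own, and a real version would parse the compiler's JSON dump (e.g. with serde_json) rather than hard-coding records.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// One unstable-feature usage record pulled out of the compiler's
/// JSON dump (mirrors the dump's `symbol` entries).
struct FeatureUsage {
    feature: &'static str,
    kind: &'static str, // "lib" or "lang" (hypothetical tag)
}

/// Render a record as an InfluxDB line-protocol point:
///   measurement,tag=value field=value timestamp_ns
fn to_line_protocol(rec: &FeatureUsage, count: u64, ts_ns: u128) -> String {
    format!(
        "feature_usage,feature={},kind={} count={}u {}",
        rec.feature, rec.kind, count, ts_ns
    )
}

fn main() {
    // Hard-coded stand-ins for the parsed dump above.
    let records = [
        FeatureUsage { feature: "variant_count", kind: "lib" },
        FeatureUsage { feature: "let_chains", kind: "lang" },
    ];
    // The dump carries no timestamps, so one is attached at conversion
    // time (this is exactly the "faked timestamp" gap mentioned above).
    let ts_ns = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before UNIX epoch")
        .as_nanos();
    for rec in &records {
        println!("{}", to_line_protocol(rec, 1, ts_ns));
    }
}
```

Each emitted line is one point, e.g. `feature_usage,feature=let_chains,kind=lang count=1u 1700000000000000000`, which can be written to InfluxDB as-is.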
My next step is to revisit the output format, which is currently a direct JSON serialization of the data as it is represented internally within the compiler. This has already proven inadequate in practice: I needed an additional ad-hoc conversion into another format, with faked timestamp data that wasn't present in the original dump. The same conclusion came out of a conversation with @badboy (Jan-Erik), who recommended we explicitly avoid ad-hoc definitions of telemetry schemas, which can lead to hard-to-manage chaos. I'm currently evaluating the available options, such as a custom system built around InfluxDB's line protocol or OpenTelemetry's metrics API. Either way, I want to use Firefox's telemetry system as inspiration and a basis for requirements when evaluating the output format options.

Relevant notes from my conversation w/ Jan-Erik
After further review, I've decided to limit scope initially rather than get ahead of myself, so I can make sure the schemas I'm working with support the kinds of queries and charts we'll eventually want in the final version of the unstable feature usage metrics. I'm hoping that by limiting scope I can finish most of the items currently outlined in this project goal ahead of schedule, then move on to building proper foundations based on the proof of concept and start designing more permanent components. As such I've opted for the following:
For the second item above I need to have more detailed conversations with both @rust-lang/libs-api and @rust-lang/lang.
Small progress update: following the plan mentioned above, plus some extra bits, I've implemented the following changes.
Next Steps:
Posting this here so I can link to it in other places. I've set up the basic usage-over-time chart using synthesized data that emulates quadratically increasing feature usage for my given feature over the course of a week (the generated data starts at 0 usages per day and ends at 1000 usages per day). The chart counts the usage over each day-long period and plots those counts across the week. The dip at the end is the gap between when I generated the data, after which there is zero usage data, and when I queried it. With this I should be ready to upload the data once we've gathered it from docs.rs; all that remains is to polish the dashboards I've made and export them from Grafana to the rust-lang Grafana instance, connect that instance to the rust-lang InfluxDB instance, and upload the data to InfluxDB once we've gathered it.
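The synthesized data described above can be sketched roughly as follows. This is a minimal sketch under my own assumptions (the function name and the exact ramp shape are mine): a quadratic ramp over seven days from 0 to 1000 usages per day, where each day's value would then be expanded into individual timestamped points before uploading to InfluxDB.

```rust
/// Synthesize one week of daily usage counts that ramp quadratically
/// from 0 on day 0 up to `max_per_day` on day 6.
fn synthesize_week(max_per_day: f64) -> Vec<u64> {
    let days = 7usize;
    (0..days)
        .map(|d| {
            // Normalize the day index to t in [0, 1], then apply
            // the quadratic ramp: count = max * t^2.
            let t = d as f64 / (days - 1) as f64;
            (max_per_day * t * t).round() as u64
        })
        .collect()
}

fn main() {
    for (day, count) in synthesize_week(1000.0).iter().enumerate() {
        println!("day {day}: {count} usages");
    }
}
```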
Summary
Build out support for metrics within the Rust compiler, starting with a proof-of-concept dashboard for viewing unstable feature usage statistics over time.
Tasks and status