Convert to class to reduce argument passing #9490


Merged: 5 commits merged into main on Apr 28, 2022

Conversation

@mads-hartmann (Contributor) commented Apr 22, 2022

Description

This promotes installMonitoringSatellite to be a class to reduce the need for passing the same arguments around to the functions.
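
For illustration, a minimal sketch of the pattern (option names are borrowed from the old installMonitoring signature; the Werft type is reduced to a stub, so this is not the repo's actual code):

    // Illustrative sketch only: the constructor captures the shared arguments
    // once, and every method reads them from this.options instead of
    // receiving them as parameters.
    interface Werft {
        log(slice: string, msg: string): void;
    }

    interface MonitoringSatelliteInstallerOptions {
        werft: Werft;
        kubeconfigPath: string;
        namespace: string;
        nodeExporterPort: number;
        withVM: boolean;
    }

    class MonitoringSatelliteInstaller {
        constructor(private readonly options: MonitoringSatelliteInstallerOptions) {}

        install(): void {
            const { werft, kubeconfigPath } = this.options;
            werft.log("observability", `Installing monitoring-satellite using ${kubeconfigPath}`);
            this.deployGitpodServiceMonitors();
        }

        private deployGitpodServiceMonitors(): void {
            // No parameter threading: shared state comes from this.options.
            const { werft, namespace } = this.options;
            werft.log("observability", `Deploying service monitors to ${namespace}`);
        }
    }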

Related Issue(s)

Fixes #9059

How to test

  • In core-dev with observability enabled: werft run github -f -a with-helm=true -a with-observability=true (job). Was able to port-forward to Prometheus and Grafana.
  • With with-vm: werft run github -f -a with-vm=true (job). Was able to port-forward to Prometheus and Grafana, but had to use the following commands instead of our own script. I suspect there's a bug in our proxy; will open a follow-up issue.
    kubectl --context=k3s-preview-environment port-forward svc/prometheus-k8s "9090:9090"
    kubectl --context=k3s-preview-environment port-forward svc/grafana "3000:3000"
    

Release Notes

NONE

Documentation

N/A

@ArthurSens (Contributor) left a comment

Noice! I'm really liking this pattern for using those "installer" classes.

I have a few questions but I don't know if they make sense 😅, so I'm approving with a hold. Please decide whether we still need to make changes here, leave them for another PR, or not do them at all.

/hold

Comment on lines 107 to 139
    - async function deployGitpodServiceMonitors(kubeconfig: string) {
    -     const werft = getGlobalWerftInstance()
    + private async deployGitpodServiceMonitors() {
    +     const { werft, kubeconfigPath } = this.options;
@ArthurSens (Contributor) commented Apr 22, 2022

Back when I implemented this I had no idea what this async was for and was just adding it because it was being used everywhere.

Does it make sense in this class? Almost every method uses async, although we're not using Promises anywhere. Would it make sense to just remove it?

    installMonitoringSatelliteParams.stackdriverServiceAccount = stackdriverServiceAccount
    installMonitoringSatelliteParams.withVM = withVM
    installMonitoringSatellite(installMonitoringSatelliteParams);
    async function installMonitoring(werft: Werft, kubeconfig: string, namespace: string, nodeExporterPort: number, domain: string, stackdriverServiceAccount: any, withVM: boolean, observabilityBranch: string) {
@ArthurSens (Contributor) commented:

Same thing here: we declare an async function only to use await when calling it. Would it make sense to just remove it?
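
For reference, a minimal sketch of the semantics under discussion: an async function with no await inside still returns a Promise, so callers have to await it even though nothing in the body is asynchronous. The function names here are illustrative:

    // An `async` body with no `await` still runs synchronously, but wraps
    // its result in a Promise that callers are expected to await.
    async function deployAsync(): Promise<void> {
        console.log("body runs synchronously; caller still gets a Promise");
    }

    // Without `async`, the call is an ordinary synchronous invocation.
    function deploySync(): void {
        console.log("plain synchronous call");
    }

    async function main(): Promise<void> {
        await deployAsync(); // await needed only because of the async keyword
        deploySync();        // no await required
    }

    main();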

@mads-hartmann (Contributor, Author) replied:

I'm keeping this async as it's my understanding that we want to block in core-dev (when observability is enabled) but not block when installing it in VM-based preview environments ☺️

@mads-hartmann (Contributor, Author) commented Apr 26, 2022

@ArthurSens Thanks for the pointer! I removed the async keyword from the methods and also the async option for the exec invocations, in effect making observability installation blocking. Previously we were blocking using await in core-dev but not in Harvester. As we generally want observability ready before we install Gitpod, to avoid errors about Gitpod not being able to send telemetry, I think it makes sense to always have the function be blocking. But I will test the code tomorrow and do another pass to see if it makes sense ☺️
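
A hedged sketch of the resulting ordering (the names are stand-ins, not the repo's actual code):

    // With the installation blocking, Gitpod is only installed once
    // monitoring-satellite is up, in both core-dev and VM-based previews,
    // so Gitpod never starts with a missing telemetry endpoint.
    interface Installer {
        install(): void;
    }

    function deployPreviewEnvironment(monitoring: Installer, gitpod: Installer): void {
        monitoring.install(); // blocks until monitoring-satellite is installed
        gitpod.install();     // observability is already available here
    }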

@mads-hartmann force-pushed the mads-classify-monitoring-satellite branch from 193a8e7 to 794d226 on April 27, 2022 09:04
@ArthurSens (Contributor) commented Apr 27, 2022

I believe this is good to go? Can I just remove the hold? 🙂

@mads-hartmann (Contributor, Author) commented Apr 27, 2022

@ArthurSens When testing earlier it looked like it had broken tracing but I didn't have time to debug further. I will move this back to draft ☺️

@mads-hartmann force-pushed the mads-classify-monitoring-satellite branch from 794d226 to 7913e4d on April 28, 2022 08:58
@mads-hartmann mads-hartmann marked this pull request as ready for review April 28, 2022 12:30
@mads-hartmann (Contributor, Author) commented Apr 28, 2022

Okay @ArthurSens this is finally ready for another review. In the future I think we want to separate the installation of monitoring satellite from the waiting for it to be ready. That way we'd avoid the problem of having a slice potentially spanning multiple phases, but I didn't want to make that change now as I've already spent too long on this 😅
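
For illustration, the split suggested above might look something like this (hypothetical names, not a commitment to any API):

    // Separating the two steps would let installation run in one werft phase
    // and the readiness check in another, so no single slice has to span phases.
    interface MonitoringSatellite {
        install(): Promise<void>;      // kick off the deployment
        waitForReady(): Promise<void>; // poll until the pods are up
    }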

This PR should also fix the problem Chris has been experiencing with failed monitoring-satellite deployments breaking the build, as explained here.

The last job has some missing spans, but I think that's more likely to be due to burst protection, as you also called out here - that would also explain why I saw the issue sometimes yesterday but not always.

@roboquat roboquat merged commit fc40211 into main Apr 28, 2022
@roboquat roboquat deleted the mads-classify-monitoring-satellite branch April 28, 2022 12:32
@mads-hartmann (Contributor, Author) commented Apr 28, 2022

@ArthurSens lol sorry, I have to get used to commits not dismissing previous reviews - I did want you to have a look at this before merging 😅 I don't want to revert it as I tested it pretty well, but if you have time please have a look and I can create a follow-up PR with any changes you'd like to see.

@ArthurSens (Contributor) left a comment

No worries about the merge, nothing really important to change :)

Comment on lines +154 to +163
    // Deploying monitoring satellite to VM-based preview environments is currently best-effort.
    // That means we currently don't wait for the promise here, and should the installation fail
    // we'll simply log an error rather than failing the build.
    //
    // Note: Werft currently doesn't support slices spanning across multiple phases, so running this
    // can result in many 'observability' slices. Currently we close all the spans in a phase
    // when we complete a phase. This means we can't currently measure the full duration or the
    // success rate of installing monitoring satellite, but we can at least count and debug errors.
    // In the future we can consider not closing spans when closing phases, or restructuring our phases
    // based on parallelism boundaries.
@ArthurSens (Contributor) commented:

This is a golden comment 🤌, it perfectly explains the code below!
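
For reference, a minimal sketch of the best-effort pattern that comment describes (the installer and werft objects are illustrative stand-ins):

    declare const installer: { install(): Promise<void> };
    declare const werft: { log(slice: string, msg: string): void };

    // Deliberately not awaited: a failed installation logs an error in the
    // 'observability' slice instead of failing the werft job.
    installer.install().catch((err) => {
        werft.log("observability", `Failed to install monitoring-satellite: ${err}`);
    });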

    @@ -462,7 +496,18 @@ async function deployToDevWithHelm(werft: Werft, jobConfig: JobConfig, deploymen
          werft.log(`observability`, "Installing monitoring-satellite...")
          if (deploymentConfig.withObservability) {
              try {
    -             await installMonitoring(CORE_DEV_KUBECONFIG_PATH, namespace, nodeExporterPort, monitoringDomain, STACKDRIVER_SERVICEACCOUNT, false, jobConfig.observability.branch);
    +             const installMonitoringSatellite = new MonitoringSatelliteInstaller({
@ArthurSens (Contributor) commented:

Suggested change:

    - const installMonitoringSatellite = new MonitoringSatelliteInstaller({
    + const monitoringSatelliteInstaller = new MonitoringSatelliteInstaller({

I'd change that just to be consistent with how you name it in other places.

Linked issue: .werft/monitoring-satellite: Clean up logic

3 participants