Convert to class to reduce argument passing #9490

Merged: 5 commits, Apr 28, 2022
68 changes: 50 additions & 18 deletions .werft/jobs/build/deploy-to-preview-environment.ts
@@ -2,7 +2,7 @@ import { createHash, randomBytes } from "crypto";
import * as shell from 'shelljs';
import * as fs from 'fs';
import { exec, ExecOptions } from '../../util/shell';
import { InstallMonitoringSatelliteParams, installMonitoringSatellite } from '../../observability/monitoring-satellite';
import { MonitoringSatelliteInstaller } from '../../observability/monitoring-satellite';
import { wipeAndRecreateNamespace, setKubectlContextNamespace, deleteNonNamespaceObjects, findFreeHostPorts, createNamespace, helmInstallName, findLastHostPort, waitUntilAllPodsAreReady, waitForApiserver } from '../../util/kubectl';
import { issueCertificate, installCertificate, IssueCertificateParams, InstallCertificateParams } from '../../util/certs';
import { sleep, env } from '../../util/util';
@@ -13,6 +13,7 @@ import * as VM from '../../vm/vm'
import { Analytics, Installer } from "./installer/installer";
import { previewNameFromBranchName } from "../../util/preview";
import { createDNSRecord } from "../../util/gcloud";
import { SpanStatusCode } from '@opentelemetry/api';

// used by both deploys (helm and Installer)
const PROXY_SECRET_NAME = "proxy-config-certificates";
@@ -150,8 +151,41 @@ export async function deployToPreviewEnvironment(werft: Werft, jobConfig: JobCon
exec('exit 0')
}

installMonitoring(PREVIEW_K3S_KUBECONFIG_PATH, deploymentConfig.namespace, 9100, deploymentConfig.domain, STACKDRIVER_SERVICEACCOUNT, withVM, jobConfig.observability.branch);
werft.done('observability')
// Deploying monitoring satellite to VM-based preview environments is currently best-effort.
// That means we currently don't wait for the promise here, and should the installation fail
// we'll simply log an error rather than failing the build.
//
// Note: Werft currently doesn't support slices spanning across multiple phases so running this
// can result in many 'observability' slices. Currently we close all the spans in a phase
// when we complete a phase. This means we can't currently measure the full duration or the
// success rate of installing monitoring satellite, but we can at least count and debug errors.
// In the future we can consider not closing spans when closing phases, or restructuring our phases
// based on parallelism boundaries.
Review comment (Contributor) on lines +154 to +163: "This is a golden comment 🤌, perfectly explains the code below!"
const monitoringSatelliteInstaller = new MonitoringSatelliteInstaller({
kubeconfigPath: PREVIEW_K3S_KUBECONFIG_PATH,
branch: jobConfig.observability.branch,
satelliteNamespace: deploymentConfig.namespace,
clusterName: deploymentConfig.namespace,
nodeExporterPort: 9100,
previewDomain: deploymentConfig.domain,
stackdriverServiceAccount: STACKDRIVER_SERVICEACCOUNT,
withVM: withVM,
werft: werft
});
const sliceID = "observability"
monitoringSatelliteInstaller.install()
.then(() => {
werft.log(sliceID, "Succeeded installing monitoring satellite")
})
.catch((err) => {
werft.log(sliceID, `Failed to install monitoring: ${err}`)
const span = werft.getSpanForSlice(sliceID)
span.setStatus({
code: SpanStatusCode.ERROR,
message: err
})
})
.finally(() => werft.done(sliceID));
}

werft.phase(phases.PREDEPLOY, "Checking for existing installations...");
@@ -462,7 +496,18 @@ async function deployToDevWithHelm(werft: Werft, jobConfig: JobConfig, deploymen
werft.log(`observability`, "Installing monitoring-satellite...")
if (deploymentConfig.withObservability) {
try {
await installMonitoring(CORE_DEV_KUBECONFIG_PATH, namespace, nodeExporterPort, monitoringDomain, STACKDRIVER_SERVICEACCOUNT, false, jobConfig.observability.branch);
const installMonitoringSatellite = new MonitoringSatelliteInstaller({
Review comment (Contributor) with a suggested change:
-    const installMonitoringSatellite = new MonitoringSatelliteInstaller({
+    const monitoringSatelliteInstaller = new MonitoringSatelliteInstaller({
"I'd change that just to be consistent with how you call it in other places."
kubeconfigPath: CORE_DEV_KUBECONFIG_PATH,
branch: jobConfig.observability.branch,
satelliteNamespace: namespace,
clusterName: namespace,
nodeExporterPort: nodeExporterPort,
previewDomain: domain,
stackdriverServiceAccount: STACKDRIVER_SERVICEACCOUNT,
withVM: false,
werft: werft
});
await installMonitoringSatellite.install()
} catch (err) {
if (!jobConfig.mainBuild) {
werft.fail('observability', err);
@@ -771,19 +816,6 @@ async function installMetaCertificates(werft: Werft, branch: string, withVM: boo
await installCertificate(werft, metaInstallCertParams, { ...metaEnv(), slice: slice });
}

async function installMonitoring(kubeconfig: string, namespace: string, nodeExporterPort: number, domain: string, stackdriverServiceAccount: any, withVM: boolean, observabilityBranch: string) {
const installMonitoringSatelliteParams = new InstallMonitoringSatelliteParams();
installMonitoringSatelliteParams.kubeconfigPath = kubeconfig
installMonitoringSatelliteParams.branch = observabilityBranch;
installMonitoringSatelliteParams.satelliteNamespace = namespace
installMonitoringSatelliteParams.clusterName = namespace
installMonitoringSatelliteParams.nodeExporterPort = nodeExporterPort
installMonitoringSatelliteParams.previewDomain = domain
installMonitoringSatelliteParams.stackdriverServiceAccount = stackdriverServiceAccount
installMonitoringSatelliteParams.withVM = withVM
installMonitoringSatellite(installMonitoringSatelliteParams);
}

// returns the static IP address
function getCoreDevIngressIP(): string {
return "104.199.27.246";
Expand Down Expand Up @@ -812,4 +844,4 @@ function generateToken(): [string, string] {
const tokenHash = createHash('sha256').update(token, "utf-8").digest("hex")

return [token, tokenHash]
}
}
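The first file wires the installation up as best-effort: the promise is not awaited, a failure is logged and recorded on the tracing span, and the slice is always closed. That pattern can be sketched in isolation; `Slice`, `SpanStatus`, and `bestEffortInstall` below are stand-ins for the werft/OpenTelemetry APIs, not the real implementation.

```typescript
// Minimal sketch of the best-effort install pattern above, with stand-in types.
type SpanStatus = { code: "OK" | "ERROR"; message?: string };

class Slice {
    logs: string[] = [];
    status: SpanStatus = { code: "OK" };
    done = false;
    log(msg: string) { this.logs.push(msg); }
}

async function bestEffortInstall(slice: Slice, install: () => Promise<void>): Promise<void> {
    return install()
        .then(() => slice.log("Succeeded installing monitoring satellite"))
        .catch((err) => {
            // Failure is logged and recorded on the span, but never re-thrown,
            // so the surrounding build keeps running.
            slice.log(`Failed to install monitoring: ${err}`);
            slice.status = { code: "ERROR", message: String(err) };
        })
        .finally(() => { slice.done = true; }); // slice is closed on every path
}

// Usage: a failing installer still resolves, closes the slice, and records the error.
(async () => {
    const slice = new Slice();
    await bestEffortInstall(slice, async () => { throw new Error("jsonnet render failed"); });
    console.log(slice.done, slice.status.code); // → true ERROR
})();
```

The `.finally()` is what makes the slice bookkeeping reliable: `werft.done(sliceID)` runs whether the install succeeded or failed, so the phase can close cleanly either way.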
268 changes: 157 additions & 111 deletions .werft/observability/monitoring-satellite.ts
@@ -1,126 +1,172 @@
import { exec } from '../util/shell';
import { getGlobalWerftInstance } from '../util/werft';
import * as shell from 'shelljs';
import * as fs from 'fs';
import { exec } from "../util/shell";
import { getGlobalWerftInstance, Werft } from "../util/werft";
import * as fs from "fs";

type MonitoringSatelliteInstallerOptions = {
werft: Werft;
kubeconfigPath: string;
satelliteNamespace: string;
clusterName: string;
nodeExporterPort: number;
branch: string;
previewDomain: string;
stackdriverServiceAccount: any;
withVM: boolean;
};

const sliceName = "observability";

/**
* Monitoring satellite deployment bits
* Installs monitoring-satellite, while updating its dependencies to the latest commit in the branch it is running.
*/
export class InstallMonitoringSatelliteParams {
kubeconfigPath: string
satelliteNamespace: string
clusterName: string
nodeExporterPort: number
branch: string
previewDomain: string
stackdriverServiceAccount: any
withVM: boolean
}

const sliceName = 'observability';

/**
* installMonitoringSatellite installs monitoring-satellite, while updating its dependencies to the latest commit in the branch it is running.
*/
export async function installMonitoringSatellite(params: InstallMonitoringSatelliteParams) {
const werft = getGlobalWerftInstance()

werft.log(sliceName, `Cloning observability repository - Branch: ${params.branch}`)
exec(`git clone --branch ${params.branch} https://roboquat:$(cat /mnt/secrets/monitoring-satellite-preview-token/token)@github.com/gitpod-io/observability.git`, {silent: true})
let currentCommit = exec(`git rev-parse HEAD`, {silent: true}).stdout.trim()
let pwd = exec(`pwd`, {silent: true}).stdout.trim()
werft.log(sliceName, `Updating Gitpod's mixin in monitoring-satellite's jsonnetfile.json to latest commit SHA: ${currentCommit}`);

let jsonnetFile = JSON.parse(fs.readFileSync(`${pwd}/observability/jsonnetfile.json`, 'utf8'));
jsonnetFile.dependencies.forEach(dep => {
if(dep.name == 'gitpod') {
dep.version = currentCommit
}
});
fs.writeFileSync(`${pwd}/observability/jsonnetfile.json`, JSON.stringify(jsonnetFile));
exec(`cd observability && jb update`, {slice: sliceName})

let jsonnetRenderCmd = `cd observability && jsonnet -c -J vendor -m monitoring-satellite/manifests \
--ext-code config="{
namespace: '${params.satelliteNamespace}',
clusterName: '${params.satelliteNamespace}',
tracing: {
honeycombAPIKey: '${process.env.HONEYCOMB_API_KEY}',
honeycombDataset: 'preview-environments',
},
previewEnvironment: {
domain: '${params.previewDomain}',
nodeExporterPort: ${params.nodeExporterPort},
},
${params.withVM ? '' : "nodeAffinity: { nodeSelector: { 'gitpod.io/workload_services': 'true' }, }," }
stackdriver: {
defaultProject: '${params.stackdriverServiceAccount.project_id}',
clientEmail: '${params.stackdriverServiceAccount.client_email}',
privateKey: '${params.stackdriverServiceAccount.private_key}',
},
prometheus: {
resources: {
requests: { memory: '200Mi', cpu: '50m' },
export class MonitoringSatelliteInstaller {
constructor(private readonly options: MonitoringSatelliteInstallerOptions) {}

public async install() {
const {
werft,
branch,
satelliteNamespace,
stackdriverServiceAccount,
withVM,
previewDomain,
nodeExporterPort,
} = this.options;

werft.log(sliceName, `Cloning observability repository - Branch: ${branch}`);
exec(
`git clone --branch ${branch} https://roboquat:$(cat /mnt/secrets/monitoring-satellite-preview-token/token)@github.com/gitpod-io/observability.git`,
{ silent: true },
);
let currentCommit = exec(`git rev-parse HEAD`, { silent: true }).stdout.trim();
let pwd = exec(`pwd`, { silent: true }).stdout.trim();
werft.log(
sliceName,
`Updating Gitpod's mixin in monitoring-satellite's jsonnetfile.json to latest commit SHA: ${currentCommit}`,
);

let jsonnetFile = JSON.parse(fs.readFileSync(`${pwd}/observability/jsonnetfile.json`, "utf8"));
jsonnetFile.dependencies.forEach((dep) => {
if (dep.name == "gitpod") {
dep.version = currentCommit;
}
});
fs.writeFileSync(`${pwd}/observability/jsonnetfile.json`, JSON.stringify(jsonnetFile));
exec(`cd observability && jb update`, { slice: sliceName });

let jsonnetRenderCmd = `cd observability && jsonnet -c -J vendor -m monitoring-satellite/manifests \
--ext-code config="{
namespace: '${satelliteNamespace}',
clusterName: '${satelliteNamespace}',
tracing: {
honeycombAPIKey: '${process.env.HONEYCOMB_API_KEY}',
honeycombDataset: 'preview-environments',
},
},
kubescape: {},
pyrra: {},
}" \
monitoring-satellite/manifests/yaml-generator.jsonnet | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {} && \
find monitoring-satellite/manifests -type f ! -name '*.yaml' ! -name '*.jsonnet' -delete`

werft.log(sliceName, 'rendering YAML files')
exec(jsonnetRenderCmd, {silent: true})
if(params.withVM) {
postProcessManifests()
}
previewEnvironment: {
domain: '${previewDomain}',
nodeExporterPort: ${nodeExporterPort},
},
${withVM ? "" : "nodeAffinity: { nodeSelector: { 'gitpod.io/workload_services': 'true' }, },"}
stackdriver: {
defaultProject: '${stackdriverServiceAccount.project_id}',
clientEmail: '${stackdriverServiceAccount.client_email}',
privateKey: '${stackdriverServiceAccount.private_key}',
},
prometheus: {
resources: {
requests: { memory: '200Mi', cpu: '50m' },
},
},
kubescape: {},
pyrra: {},
}" \
monitoring-satellite/manifests/yaml-generator.jsonnet | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {} && \
find monitoring-satellite/manifests -type f ! -name '*.yaml' ! -name '*.jsonnet' -delete`

werft.log(sliceName, "rendering YAML files");
exec(jsonnetRenderCmd, { silent: true });
if (withVM) {
this.postProcessManifests();
}

// The correct kubectl context should already be configured prior to this step
// Only checks node-exporter readiness for harvester
ensureCorrectInstallationOrder(params.kubeconfigPath, params.satelliteNamespace, params.withVM)
}
this.ensureCorrectInstallationOrder()
this.deployGitpodServiceMonitors();
await this.waitForReadiness()
}

async function ensureCorrectInstallationOrder(kubeconfig: string, namespace: string, checkNodeExporterStatus: boolean){
const werft = getGlobalWerftInstance()
private ensureCorrectInstallationOrder() {
const { werft, kubeconfigPath } = this.options;

werft.log(sliceName, 'installing monitoring-satellite')
exec(`cd observability && hack/deploy-satellite.sh --kubeconfig ${kubeconfig}`, {slice: sliceName})
werft.log(sliceName, "installing monitoring-satellite");
exec(`cd observability && hack/deploy-satellite.sh --kubeconfig ${kubeconfigPath}`, { slice: sliceName });
}

deployGitpodServiceMonitors(kubeconfig)
checkReadiness(kubeconfig, namespace, checkNodeExporterStatus)
}
private async waitForReadiness() {
const { kubeconfigPath, satelliteNamespace } = this.options;

const checks: Promise<any>[] = [];
// For some reason prometheus' statefulset always take quite some time to get created
// Therefore we wait a couple of seconds
checks.push(
exec(
`sleep 30 && kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${satelliteNamespace} statefulset prometheus-k8s`,
{ slice: sliceName, async: true },
),
);
checks.push(
exec(`kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${satelliteNamespace} deployment grafana`, {
slice: sliceName,
async: true,
}),
);
checks.push(
exec(
`kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${satelliteNamespace} deployment kube-state-metrics`,
{ slice: sliceName, async: true },
),
);
checks.push(
exec(
`kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${satelliteNamespace} deployment otel-collector`,
{ slice: sliceName, async: true },
),
);

// core-dev is just too unstable for node-exporter
// we don't guarantee that it will run at all
if (this.options.withVM) {
checks.push(
exec(
`kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${satelliteNamespace} daemonset node-exporter`,
{ slice: sliceName, async: true },
),
);
}

async function checkReadiness(kubeconfig: string, namespace: string, checkNodeExporterStatus: boolean) {
// For some reason prometheus' statefulset always take quite some time to get created
// Therefore we wait a couple of seconds
exec(`sleep 30 && kubectl --kubeconfig ${kubeconfig} rollout status -n ${namespace} statefulset prometheus-k8s`, {slice: sliceName, async: true})
exec(`kubectl --kubeconfig ${kubeconfig} rollout status -n ${namespace} deployment grafana`, {slice: sliceName, async: true})
exec(`kubectl --kubeconfig ${kubeconfig} rollout status -n ${namespace} deployment kube-state-metrics`, {slice: sliceName, async: true})
exec(`kubectl --kubeconfig ${kubeconfig} rollout status -n ${namespace} deployment otel-collector`, {slice: sliceName, async: true})

// core-dev is just too unstable for node-exporter
// we don't guarantee that it will run at all
if(checkNodeExporterStatus) {
exec(`kubectl --kubeconfig ${kubeconfig} rollout status -n ${namespace} daemonset node-exporter`, {slice: sliceName, async: true})
await Promise.all(checks);
}
}

async function deployGitpodServiceMonitors(kubeconfig: string) {
const werft = getGlobalWerftInstance()
private deployGitpodServiceMonitors() {
const { werft, kubeconfigPath } = this.options;

werft.log(sliceName, 'installing gitpod ServiceMonitor resources')
exec(`kubectl --kubeconfig ${kubeconfig} apply -f observability/monitoring-satellite/manifests/gitpod/`, {silent: true})
}
werft.log(sliceName, "installing gitpod ServiceMonitor resources");
exec(`kubectl --kubeconfig ${kubeconfigPath} apply -f observability/monitoring-satellite/manifests/gitpod/`, {
silent: true,
});
}

function postProcessManifests() {
const werft = getGlobalWerftInstance()
private postProcessManifests() {
const werft = getGlobalWerftInstance();

// We're hardcoding nodeports, so we can use them in .werft/vm/manifests.ts
// We'll be able to access Prometheus and Grafana's UI by port-forwarding the harvester proxy into the nodePort
werft.log(sliceName, 'Post-processing manifests so it works on Harvester')
exec(`yq w -i observability/monitoring-satellite/manifests/grafana/service.yaml spec.type 'NodePort'`)
exec(`yq w -i observability/monitoring-satellite/manifests/prometheus/service.yaml spec.type 'NodePort'`)
// We're hardcoding nodeports, so we can use them in .werft/vm/manifests.ts
// We'll be able to access Prometheus and Grafana's UI by port-forwarding the harvester proxy into the nodePort
werft.log(sliceName, "Post-processing manifests so it works on Harvester");
exec(`yq w -i observability/monitoring-satellite/manifests/grafana/service.yaml spec.type 'NodePort'`);
exec(`yq w -i observability/monitoring-satellite/manifests/prometheus/service.yaml spec.type 'NodePort'`);

exec(`yq w -i observability/monitoring-satellite/manifests/prometheus/service.yaml spec.ports[0].nodePort 32001`)
exec(`yq w -i observability/monitoring-satellite/manifests/grafana/service.yaml spec.ports[0].nodePort 32000`)
}
exec(
`yq w -i observability/monitoring-satellite/manifests/prometheus/service.yaml spec.ports[0].nodePort 32001`,
);
exec(`yq w -i observability/monitoring-satellite/manifests/grafana/service.yaml spec.ports[0].nodePort 32000`);
}
}
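The core refactoring of this PR, per its title, replaces a long positional parameter list with a single typed options object captured by the constructor, so methods destructure only what they need instead of re-threading every argument. A minimal sketch of that pattern, with illustrative names rather than the real installer:

```typescript
// Before: helpers took 6+ positional arguments each.
// After: one options object, held as a readonly constructor property.
type InstallerOptions = {
    kubeconfigPath: string;
    namespace: string;
    nodeExporterPort: number;
};

class Installer {
    // TypeScript parameter property: declares and assigns the field in one step.
    constructor(private readonly options: InstallerOptions) {}

    // Each method pulls only the fields it needs from this.options.
    rolloutCommand(deployment: string): string {
        const { kubeconfigPath, namespace } = this.options;
        return `kubectl --kubeconfig ${kubeconfigPath} rollout status -n ${namespace} deployment ${deployment}`;
    }
}

const installer = new Installer({
    kubeconfigPath: "/tmp/kubeconfig",
    namespace: "preview-123",
    nodeExporterPort: 9100,
});
console.log(installer.rolloutCommand("grafana"));
```

Named fields in the options object also make call sites self-documenting, which is why the diff's `new MonitoringSatelliteInstaller({ ... })` calls read more clearly than the old eight-argument `installMonitoring(...)` invocation.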