[ws-manager] Make cluster selection filter by application cluster #14050

Merged: 7 commits merged into main from af/make-cluster-selection-app-cluster-aware on Oct 26, 2022

Conversation

@andrew-farries (Contributor) commented Oct 20, 2022

Description

Redo of #14000 (reverted in #14029). The original lacked the changes to components/gitpod-db/src/typeorm/workspace-cluster-db-impl.ts. This PR adds that filtering and adds tests for it.

Context

As part of #9198 we want to start syncing the `d_b_workspace_cluster` table with `db-sync`.

Currently, the table differs between US and EU regions because each table contains only the data relevant to that region. For example, in the EU table, the eu70 workspace cluster is marked as available and the us70 cluster is cordoned. In the US cluster eu70 is cordoned and us70 is available.

In order to sync the table we need to get to a point where there is no difference in the data in the table between EU and US regions.

To do that we will introduce a new field in the table called applicationCluster which records the name of the application cluster to which the record belongs. Thus, for each workspace cluster there will be two rows in Gitpod SaaS:

| name | applicationCluster | url | tls | state | ... |
|------|--------------------|-----|-----|-------|-----|
| eu70 | eu02 | url | tls info | available | ... |
| eu70 | us02 | url | tls info | cordoned | ... |

Effectively the new applicationCluster column gives the table an extra dimension so that we can combine both tables (EU and US) into one.
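As a rough sketch (the row shape and helper below are illustrative, not the actual gitpod-db types), the combined table can be read per region by filtering on `applicationCluster`:

```typescript
// Illustrative row shape; the real entity lives in gitpod-db's
// workspace-cluster definitions and has more fields (url, tls, scores, ...).
interface WorkspaceClusterRow {
  name: string;               // workspace cluster, e.g. "eu70"
  applicationCluster: string; // application cluster owning this view, e.g. "eu02"
  state: "available" | "cordoned" | "draining";
}

// After merging the EU and US tables, the same workspace cluster
// appears once per application cluster.
const combined: WorkspaceClusterRow[] = [
  { name: "eu70", applicationCluster: "eu02", state: "available" },
  { name: "eu70", applicationCluster: "us02", state: "cordoned" },
];

// Each region sees only the rows registered to its own application cluster.
function rowsFor(appCluster: string, rows: WorkspaceClusterRow[]): WorkspaceClusterRow[] {
  return rows.filter((r) => r.applicationCluster === appCluster);
}
```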

#13722 added the column to the table and made gpctl populate it when registering a new workspace cluster. The value is taken from the GITPOD_INSTALLATION_SHORTNAME environment variable in ws-manager-bridge.

This PR makes the cluster selection mechanism in ws-manager-api aware of the application cluster to which each workspace cluster is registered. Previously, selection was by workspace cluster name only. Soon, as described in the context section, each region in Gitpod SaaS will store database records for all workspace clusters, not just the ones in its own region. This PR ensures that workspace clusters registered to a different application cluster are not considered.
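A minimal sketch of the filtering change (names here are illustrative, not the actual ws-manager-api code): candidates must now belong to the current application cluster as well as being available:

```typescript
// Illustrative only; the real logic lives in ws-manager-api's
// WorkspaceManagerClientProvider and also considers scores,
// admission constraints, and more.
interface ClusterCandidate {
  name: string;
  applicationCluster: string;
  state: "available" | "cordoned" | "draining";
  score: number;
}

// Pick the highest-scoring available cluster registered to the
// current application cluster; previously the applicationCluster
// check did not exist.
function selectWorkspaceCluster(
  clusters: ClusterCandidate[],
  currentAppCluster: string,
): ClusterCandidate | undefined {
  return clusters
    .filter((c) => c.applicationCluster === currentAppCluster && c.state === "available")
    .sort((a, b) => b.score - a.score)[0];
}
```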

Related Issue(s)

Part of #9198 and #13800

How to test

Edit the server deployment and modify the WSMAN_CFG_MANAGERS environment variable to change the applicationCluster to which the default workspace cluster in the preview environment is registered:

```json
[
  {
    "name": "",
    "url": "dns:///ws-manager:8080",
    "tls": {
      "ca": "/ws-manager-client-tls-certs/ca.crt",
      "crt": "/ws-manager-client-tls-certs/tls.crt",
      "key": "/ws-manager-client-tls-certs/tls.key"
    },
    "state": "available",
    "maxScore": 100,
    "score": 50,
    "govern": true,
    "admissionConstraints": null,
    "applicationCluster": "dev" // <- change this from "dev" to anything else
  }
]
```

The WSMAN_CFG_MANAGERS environment variable is base64 encoded.
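Since the value is base64-encoded JSON, editing it means decoding, changing the field, and re-encoding. A small helper sketch (function names are hypothetical; Node's Buffer handles the encoding):

```typescript
// Hypothetical helpers for round-tripping the base64-encoded
// WSMAN_CFG_MANAGERS value.
function decodeManagers(encoded: string): Array<Record<string, unknown>> {
  // base64 -> UTF-8 JSON text -> parsed manager entries
  return JSON.parse(Buffer.from(encoded, "base64").toString("utf8"));
}

function encodeManagers(managers: Array<Record<string, unknown>>): string {
  // parsed manager entries -> JSON text -> base64
  return Buffer.from(JSON.stringify(managers), "utf8").toString("base64");
}
```

Note that the annotated example above contains a `//` comment, which is not valid JSON; strip it before decoding an actual value.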

Once the server deployment is edited it should no longer be possible to start a workspace as the only workspace cluster is now associated with a different (non-existent) application cluster:

(Screenshot: starting a workspace fails because no workspace cluster is available.)

Edit the WSMAN_CFG_MANAGERS environment variable once more and set the applicationCluster field back to "dev". Starting a workspace should now proceed as normal.

Release Notes

NONE

Documentation

Werft options:

- `/werft with-local-preview` — if enabled, builds `install/preview`
- `/werft with-preview`
- `/werft with-integration-tests=all` — valid options are `all`, `workspace`, `webapp`, `ide`

@werft-gitpod-dev-com

started the job as gitpod-build-af-make-cluster-selection-app-cluster-aware.2 because the annotations in the pull request description changed
(with .werft/ from main)

@github-actions github-actions bot added team: webapp Issue belongs to the WebApp team labels Oct 20, 2022
@andrew-farries andrew-farries marked this pull request as draft October 21, 2022 07:25
@andrew-farries (Contributor, Author)

I plan to close this draft in favour of a series of smaller PRs using a DI approach, starting with the dependency injection PR: #14084

I'll address your comments here in each of those PRs @geropl.

@geropl (Member) commented Oct 21, 2022

@andrew-farries Why not pass the applicationCluster in parallel to name? Then there is no need for DI, at all, no?

@andrew-farries andrew-farries force-pushed the af/make-cluster-selection-app-cluster-aware branch from d1bafaf to 37eb088 Compare October 21, 2022 13:49
@roboquat roboquat added size/XL and removed size/L labels Oct 21, 2022
@andrew-farries (Contributor, Author)

> @andrew-farries Why not pass the applicationCluster in parallel to name? Then there is no need for DI, at all, no?

@geropl The approach we synced about is implemented in 37eb088; passing the application cluster explicitly everywhere rather than injecting a new dependency into WorkspaceManagerClientProvider (as in #14084).

This approach results in more changes across the codebase, but it's more explicit.

Still not sure which I prefer tbh.

@andrew-farries andrew-farries force-pushed the af/make-cluster-selection-app-cluster-aware branch 3 times, most recently from 0abf8ea to ea2e38f Compare October 24, 2022 11:25
@andrew-farries andrew-farries requested a review from geropl October 24, 2022 12:03
Andrew Farries added 2 commits October 25, 2022 13:35
Check that it's possible to find workspace clusters by name and
application cluster.
@andrew-farries andrew-farries force-pushed the af/make-cluster-selection-app-cluster-aware branch from ea2e38f to 16545a0 Compare October 25, 2022 13:53
Andrew Farries added 4 commits October 25, 2022 14:40
Connect the configured installationShortname aka applicationCluster from
server, ws-manager-bridge, and the image-builder-api to workspace
cluster provider.
Finding a workspace cluster by name only makes sense in the context of a
particular application cluster.
Deleting a workspace cluster by name only makes sense in the context of
a particular application cluster.
@andrew-farries andrew-farries force-pushed the af/make-cluster-selection-app-cluster-aware branch from 16545a0 to 3cdaa1d Compare October 25, 2022 14:41
@andrew-farries (Contributor, Author)

Taking this out of draft as I think we are past the 'use DI/don't use DI' discussion now.

@andrew-farries andrew-farries marked this pull request as ready for review October 25, 2022 14:46
@geropl (Member) commented Oct 25, 2022

Code looks good, starting to test now.

@geropl (Member) left a comment

Code LGTM, tested and works as expected. Thx @andrew-farries !

@geropl (Member) commented Oct 26, 2022

@andrew-farries Are we waiting for another review? Or can we start rolling it out?

@andrew-farries (Contributor, Author) commented Oct 26, 2022

It's waiting for review from @aledbf or @sagor999

@roboquat roboquat merged commit 076f35a into main Oct 26, 2022
@roboquat roboquat deleted the af/make-cluster-selection-app-cluster-aware branch October 26, 2022 15:26
@roboquat roboquat added deployed: webapp Meta team change is running in production deployed Change is completely running in production labels Oct 28, 2022