Install web console server #6359
Conversation
@jcantrill FYI, this impacts the metrics and logging install.
It is looking like we're going to change extensions so we're not serving them from the master filesystem. See openshift/origin-web-console-server#11
@sdodson Do you mind taking a look? It'd be good to get feedback since feature freeze is next week. We might need to do some of this in phases. The console proxy in master can't be enabled until the install changes are there, but we can't remove asset config from the master-config.yaml until the proxy is enabled.
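For context, a rough sketch of the kind of assetConfig stanza in master-config.yaml that can only be dropped once the console proxy is enabled; the hostnames are placeholders and the fields are illustrative, not taken from this PR:

# Illustrative only: the asset (web console) configuration historically embedded
# in the master configuration file.
assetConfig:
  logoutURL: ""
  masterPublicURL: https://master.example.com:8443     # placeholder host
  publicURL: https://master.example.com:8443/console/  # placeholder host
  servingInfo:
    bindAddress: 0.0.0.0:8443
    certFile: master.server.crt
    keyFile: master.server.key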
Good point, I forgot that was how we triggered it; fine with me. We need to make sure we update the "how to disable the console" docs.
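As a hypothetical example of what those docs might cover, a group_vars/inventory entry along these lines; the variable name is an assumption based on the role introduced here, not a documented interface:

# Hypothetical: controls whether the web console role installs or removes the console.
openshift_web_console_install: false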
The console is accessed via a service, and the service is in a restricted namespace.
On Thu, Jan 4, 2018 at 4:41 PM, Scott Dodson commented on this pull request, in roles/openshift_web_console/tasks/remove.yml (#6359 (comment)):
> @@ -0,0 +1,27 @@
+---
+- command: mktemp -d /tmp/console-ansible-XXXXXX
+ register: mktemp
+ changed_when: False
+ become: no
+
+- copy:
+ src: "{{ __console_files_location }}/{{ item }}"
+ dest: "{{ mktemp.stdout }}/{{ item }}"
+ with_items:
+ - "{{ __console_template_file }}"
+
+- name: Delete web console objects
Nevermind, I'm not sure why that came to mind unless there's some risk of
a rogue pod hijacking traffic in the event that the console has been
removed.
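For reference, a minimal sketch of how the truncated "Delete web console objects" task above might be completed, assuming the copied template is processed and the resulting objects deleted; the namespace, parameter handling, and error handling here are assumptions, not the PR's actual code:

- name: Delete web console objects
  shell: >
    oc process -f "{{ mktemp.stdout }}/{{ __console_template_file }}"
    --param API_SERVER_CONFIG=''
    | oc delete -n openshift-web-console -f -
  failed_when: false  # best effort: the objects may already be gone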
@deads2k @sdodson @michaelgugino Any objection to making each asset config option you can customize a variable in the template? That probably makes both the Ansible and cluster up changes easier and gets rid of some of the yedit edits.
I'm concerned about that taking us in the wrong direction with regard to template guidance. If you think about what component developers are looking for when writing these templates, they will have a set of specific inputs provided by the installer. These inputs are standard across all components and include things like serving_cert_ca, docker registry url, log level, and image prefix/suffix. The set is small, common to all components, and provides overall information about the cluster.

By contrast, the information for the config is generally specific to the component being configured. Keeping them separate makes the distinction between the kinds of config very clear, and it helps to manage the information flow in the templates, in ansible, and in cluster-up. I haven't started writing the general cluster-up code yet, but I suspect we'll keep that strong separation in our information flow to help us organize information gathering and plumbing.

Trying to templatize yaml embedded inside of yaml doesn't seem like a winning proposition from the component developer's perspective either. The separate config file cleanly represents the flow our processes have for "read the config from disk", matches the administrative view that "this file on disk is what you're using", and matches the support need to know "this file on disk is the config you're using". If we separate the actors in play, I see:

I'd want to be careful about doing something that would make life harder for nearly everyone. Is there a concrete way that combining the files makes it easier for the other four people that I'm not seeing?
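To make the separation concrete, an illustrative (not actual) template fragment: the installer supplies only a small set of standard parameters, while the component-specific config is passed through as a single opaque blob that lands in a ConfigMap and is read from disk by the console process. All names below are assumptions for illustration.

apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-web-console            # hypothetical template name
parameters:
- name: IMAGE                          # standard installer input: image
  value: openshift/origin-web-console:latest
- name: LOGLEVEL                       # standard installer input: log level
  value: "0"
- name: API_SERVER_CONFIG              # component-specific config passed through untouched
objects:
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: webconsole-config            # hypothetical object names
    namespace: openshift-web-console
  data:
    webconsole-config.yaml: "${API_SERVER_CONFIG}"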
782ae04
to
afa60c1
Compare
After reviewing the discussion around yedit a bit more, we'll stick with the established pattern of using yedit to set configmap values. We'll open a follow-up to clean up the configuration of the metrics and logging URLs in the future, but we can move forward with that as implemented.
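For context, a small sketch of the established yedit pattern referred to above; the file path, key, and value are assumptions for illustration, not the task as implemented:

- name: Set the logging public URL in the console config
  yedit:
    src: "{{ mktemp.stdout }}/{{ __console_config_file }}"  # assumed staging path
    key: clusterInfo.loggingPublicURL                       # assumed key path
    value: "https://kibana.example.com"                     # placeholder URL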
@@ -0,0 +1,2 @@
---
openshift_web_console_nodeselector: {"region":"infra"}
Unfortunately the default we use for this is defined in the openshift_hosted role, but we can redefine it here. This pattern would align it with other components while still allowing them to override it specifically for the console:
openshift_hosted_infra_selector: "region=infra"
openshift_web_console_nodeselector: "{{ openshift_hosted_infra_selector }}"
@michaelgugino agreed, or is there a better way?
I spoke with @sdodson, and we agreed to work on upgrade in a follow-on PR.
@sdodson I've done some additional testing and pushed my updates. Thanks for your help. PTAL
/lgtm
/test all [submit-queue is verifying that this PR is safe to merge]
/retest
/retest
flaked on openshift/origin#17556
@spadgett: The following tests failed, say /retest to rerun them all.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Another flake; I reviewed the install logs and the console is clearly installed successfully. Merging.
Reading through this code, it's pretty inconsistent between the use of "web-console" and "webconsole". Is there a reason "web-console" wasn't used throughout? Also, the labels for the deployment should be app: openshift-web-console across the board.
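For illustration, consistent labeling would look roughly like the sketch below, with the same app: openshift-web-console label on the Deployment metadata, its selector, and the pod template; the object name, namespace, and image are placeholders rather than the PR's actual manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webconsole                      # placeholder name
  namespace: openshift-web-console
  labels:
    app: openshift-web-console
spec:
  replicas: 1
  selector:
    matchLabels:
      app: openshift-web-console
  template:
    metadata:
      labels:
        app: openshift-web-console
    spec:
      containers:
      - name: webconsole
        image: openshift/origin-web-console:latest   # placeholder image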
@@ -0,0 +1,21 @@
kind: AssetConfig |
This predates this PR, but this isn't the right name for this directory. And the directories should be versioned with a major-version directory, because even with 3.10 we're going to have to change these config files across API versions.
Probably should be
files/components/web-console/3.9/config.yaml
files/components/web-console/3.9/deployment.yaml
files/components/web-console/3.9/rbac.yaml
Am I missing something obvious? Why aren't these files in the files dir of their respective roles, i.e., role/openshift-web-console/files/{config,deployment,rbac}.yaml?
As for versioning, I'd expect the customer to only be consuming these from the RPMS for the specific version they're installing.
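For illustration, a role-local layout would let tasks reference the files by bare name, since Ansible resolves a relative copy src against the role's own files/ directory; the task below is a hypothetical sketch, not code from this PR:

# roles/openshift_web_console/files/ would hold config.yaml, deployment.yaml, rbac.yaml
- name: Stage web console manifests
  copy:
    src: "{{ item }}"                     # resolved from the role's files/ directory
    dest: "{{ mktemp.stdout }}/{{ item }}"
  with_items:
  - config.yaml
  - deployment.yaml
  - rbac.yaml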
Automatic merge from submit-queue (batch tested with PRs 18529, 18214). Update web console template labels
Consistently use `app: openshift-web-console` for labels in the web console template. Per openshift/openshift-ansible#6359 (comment):
> Also, the labels for the deployment should be app: openshift-web-console across the board.
/assign @smarterclayton
/cc @deads2k @jwforres
https://trello.com/c/9oaUh8xP
Work in progress PR for installing the web console as a deployment on the platform based on the template service broker install.
Related cluster up changes here: openshift/origin#17575
TODO:
Follow-on tasks:
cc @sdodson @jwforres