
ENV variable manager sluggish with lots of variables #620

Closed · andrewklau opened this issue Oct 4, 2016 · 16 comments

Labels: area/performance, kind/bug, lifecycle/rotten, priority/P2

@andrewklau (Contributor)

We had a case where a pod had two containers, each with 20+ ENV variables set. Scrolling through and modifying these ENV variables became very sluggish for the user.

Perhaps they could be split by container with a dropdown, similar to the terminal and logs views. Although if an individual container still had a lot of ENV variables, it would remain quite slow.

@spadgett added the kind/bug, priority/P2, and area/performance labels Oct 4, 2016
@jwforres (Member) commented Oct 6, 2016

We definitely shouldn't be sluggish at 40 env vars; it's not that many. 1,000 would have been a different story. @benjaminapetersen consider this next up after the membership work is done.

@benjaminapetersen (Contributor)

Agreed. Standalone, the KVE has no problem with multiple instances on a page and many vars. It's likely related to the number of $digest loops. I'll look into this.
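
A quick way to check that hypothesis is to count watcher evaluations, which rise and fall with digest activity. A diagnostic sketch, assuming an AngularJS 1.x app; the module name "app" is a placeholder, not the console's actual module:

```js
// Diagnostic sketch (hypothetical module name "app"): the no-op watch
// function below runs on every dirty-check pass, so its call count is
// a rough proxy for digest churn. Note that $interval itself fires one
// digest per tick, so expect a small baseline even when idle.
angular.module('app').run(function($rootScope, $interval) {
  var passes = 0;
  $rootScope.$watch(function() {
    passes++;
  });
  $interval(function() {
    console.log(passes + ' watch passes in the last second');
    passes = 0;
  }, 1000);
});
```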

@benjaminapetersen (Contributor)

@andrewklau did you mean "sluggish" as in browser lag/response? I'm creating a few scenarios with a fairly absurd number of env vars and haven't experienced that so far. However, I can definitely see that the user experience is less than ideal at a certain point.

@andrewklau (Contributor, Author)

@benjaminapetersen yeah, browser response: the page kept jolting as I scrolled up and down the list (also when using the browser's search function).

@spadgett (Member)

@andrewklau Any chance there was a deployment in progress or something that could trigger many page updates?

@andrewklau (Contributor, Author)

@spadgett there were a few deployments happening in the project; I can't remember if one was for the deployment being modified. This occurred around the same time we were seeing #621, so it's possibly related.

@jwforres (Member)

Ahhh OK, I can buy this causing a problem. So Ben, you will need to set up a pod crash-loop example that has a bunch of env vars on it.

@benjaminapetersen (Contributor) commented Oct 17, 2016

Right. I don't think the editor itself has this performance issue so much as we are allowing it to be used while things are in flux. Thinking about how to solve it: is it reasonable to disable the editor during, or in between, certain states?
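
One possible shape for that, sketched under assumptions: the controller name, the deployment.status.phase expression, and the phase strings below are illustrative stand-ins, not the console's actual model.

```js
// Hypothetical sketch only: "deployment.status.phase" and the phase
// strings are placeholders for whatever state the console tracks.
angular.module('app').controller('EnvEditorCtrl', function($scope) {
  $scope.editingDisabled = false;
  $scope.$watch('deployment.status.phase', function(phase) {
    // Lock the editor while a rollout is in flux; re-enable once it
    // settles into a terminal state.
    $scope.editingDisabled = (phase === 'Pending' || phase === 'Running');
  });
});
```

The template would then wire the editor's inputs to ng-disabled="editingDisabled", or hide the editor entirely with ng-if.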

@spadgett (Member)

@benjaminapetersen You might even use $timeout to simulate frequent digest loops and profile the KVE in Chrome to see where the time is spent.
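
A minimal sketch of that idea, again assuming a placeholder module name "app": a self-rescheduling $timeout forces a digest roughly every 100ms, so a Chrome DevTools CPU profile recorded while scrolling the KVE shows where the digest time goes.

```js
// Minimal sketch (hypothetical module name "app"): each $timeout
// callback runs inside $apply, so this triggers a full digest cycle
// about every 100ms, similar churn to an active deployment pushing
// frequent updates to the page, without touching any real data.
angular.module('app').run(function($timeout) {
  function churn() {
    $timeout(churn, 100);
  }
  churn();
});
```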

@jwforres jwforres added this to the 1.4.0 milestone Oct 25, 2016
@jwforres jwforres modified the milestones: 1.5.0, 1.4.0 Jan 19, 2017
@spadgett (Member)

We have no track by here, which is probably contributing to this.
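
For context: without track by, ng-repeat destroys and rebuilds every row whenever the watched collection is replaced by a fresh array, which happens on each update from the server. A hypothetical sketch of the fix; the directive and template below are illustrative, since the actual KVE markup isn't shown in this thread:

```js
// Hypothetical directive sketch, not the actual KVE markup: adding
// "track by $index" lets ng-repeat reuse existing DOM nodes across
// digests instead of destroying and recreating every row when the
// env array is replaced by a fresh object.
angular.module('app').directive('envVarList', function() {
  return {
    restrict: 'E',
    scope: { entries: '=' },
    template:
      '<div ng-repeat="entry in entries track by $index">' +
        '<input ng-model="entry.name"> ' +
        '<input ng-model="entry.value">' +
      '</div>'
  };
});
```

Tracking by $index is the pragmatic choice here, since keys are user-editable and can't serve as stable identifiers.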

@spadgett spadgett removed this from the 1.5.0 milestone Jul 31, 2017
benjaminapetersen added a commit to benjaminapetersen/origin-web-console that referenced this issue Aug 23, 2017
- issue openshift#1863 could not pull second-to-last env var below last env var in kve
- issue openshift#620 possibly helped (?)
- bugzilla #1428991 reordering env vars down only works by twos
  - example: third moves to fifth, then seventh
- bugzilla #1369315 possibly helped (?)

I've mentioned 2 additional issues that may be helped by this fix as many quirky behaviors seem to resolve.  That said, I will test more before closing them.
benjaminapetersen added a commit to benjaminapetersen/origin-web-console that referenced this issue Aug 23, 2017
benjaminapetersen added a commit to benjaminapetersen/origin-web-console that referenced this issue Aug 23, 2017
@spadgett (Member)

#2416 should at least partially fix this.

f0x11 pushed a commit to f0x11/origin-web-console that referenced this issue Mar 26, 2018
f0x11 pushed a commit to f0x11/origin-web-console that referenced this issue Mar 26, 2018
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label Aug 15, 2020
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 14, 2020
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
