This repository was archived by the owner on May 29, 2019. It is now read-only.

Potential performance regression between 0.14.3 and 1.1.2 #5496

Closed
crisbeto opened this issue Feb 17, 2016 · 6 comments

Comments

@crisbeto
Contributor

EDIT: I managed to narrow it down to the 1.0.2 release, which consists of this commit: 74be568

I recently updated from 0.14.3 to 1.1.2 and noticed that my unit tests are running 1 to 1.5 seconds slower than usual. The only change was the UI-Bootstrap update; if I switch between the two versions, the test times go back to normal. Note that I'm not testing any UI-Bootstrap functionality, but I'm loading it together with the rest of my code.

I'm using a custom build with the following modules:

  • accordion
  • collapse
  • modal
  • stackedMap
  • position
  • pagination
  • tooltip
  • typeahead
  • dateparser
  • datepicker
  • timepicker

Also note that I'm using all of the prefixed providers/directives that were introduced in 0.14.0.
I'm running the tests with Karma 0.13.9 on PhantomJS 2.1.4.

@icfantv
Contributor

icfantv commented Feb 17, 2016

What would you like us to do?

@crisbeto
Contributor Author

I'm just reporting it as a potential issue; I'm not sure what might be causing it. I can try to narrow down exactly which version introduced it.

@crisbeto
Contributor Author

Looks like it got introduced in 1.0.2. Kinda strange considering that the whole release is only a Gruntfile change (74be568).

@varunvs

varunvs commented Feb 22, 2016

I have observed the same. The test completion time increased to 1 minute (it was previously 10 s), which is drastic.

@icfantv
Contributor

icfantv commented Feb 22, 2016

Hey guys, one thing you could do is list which UIBS components you are using in your tests. This might help us narrow down which components have become "slow." Additionally, posting the specific test or tests (as much as you can) would help as well. Your testing framework should give you individual test times so you can see which one(s) have gotten worse.

I'm afraid that without this information there's not really anything we can do, as we cannot just take stabs in the dark and guess.
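For Karma users, per-test timing can be surfaced with Karma's built-in `reportSlowerThan` option, which logs every spec that exceeds a threshold. A minimal sketch of such a config; the file paths, framework, and browser below are placeholders, not this project's actual setup:

```javascript
// karma.conf.js - minimal sketch; paths and plugins are placeholders.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'bower_components/angular/angular.js',
      'bower_components/angular-bootstrap/ui-bootstrap-tpls.js',
      'src/**/*.js',
      'test/**/*.spec.js',
    ],
    browsers: ['PhantomJS'],
    // Log any individual spec slower than 500 ms, which helps
    // pinpoint where a regression like this one shows up.
    reportSlowerThan: 500,
    singleRun: true,
  });
};
```

Running `karma start` with this config will then print the slow specs by name, making it possible to compare timings between the two UI-Bootstrap versions.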

@wesleycho
Contributor

I'm going to close this - what the check does is insert the appropriate CSS into a new style tag in the head. The prior CSP check was broken, so it would never insert the CSS, but now it inserts it properly.
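For context, a check like the one described behaves roughly like the idempotent injector sketched below. This is a hypothetical reconstruction, not the library's actual code; `makeCssInjector` and the fake `document` stand-in are invented for illustration so the sketch runs outside a browser:

```javascript
// Hypothetical sketch: on first call, append a <style> tag to <head>;
// every later call is a no-op because the CSS is already present.
function makeCssInjector(doc, css) {
  let injected = false;
  return function injectCssOnce() {
    if (injected) return false;        // already inserted, skip
    const style = doc.createElement('style');
    style.textContent = css;
    doc.head.appendChild(style);
    injected = true;
    return true;
  };
}

// Minimal stand-in for the browser's `document`, just enough
// to exercise the sketch without a real DOM.
const fakeDoc = {
  head: { children: [], appendChild(el) { this.children.push(el); } },
  createElement: (tag) => ({ tag, textContent: '' }),
};

const inject = makeCssInjector(fakeDoc, '.uib-example { position: absolute; }');
inject();  // inserts the style tag
inject();  // no-op: the tag is already there
```

If the broken CSP check made the first call silently do nothing, fixing it means the style insertion (and any associated DOM work) now actually runs, which could plausibly account for the timing difference under a test runner that bootstraps the module repeatedly.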
