This repository was archived by the owner on Jul 29, 2024. It is now read-only.

Pass the current runner config into the onPrepare hook #1708

Closed
wants to merge 17 commits into from

Conversation

crrobinson14
Contributor

Currently the onPrepare() hook has no arguments passed into it. This means that, even though it is called once per capability, it has no idea which capability is being executed so it can't make context-sensitive decisions. This PR passes that configuration block into onPrepare() so it can do that, like including requirements that only certain tests need (for performance reasons, or where only certain environments require/support them).

Since runFilenameOrFn_() already supports the concept of passing arguments to the callback, this only requires a 1-line code change.
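To make the mechanism concrete, here is a toy stand-in for runFilenameOrFn_ showing only the argument-forwarding behavior this PR relies on; the real helper lives in Protractor's source and also handles filenames, so treat this strictly as an illustration:

```javascript
// Simplified stand-in for Protractor's runFilenameOrFn_ helper, showing only
// the argument-forwarding behavior (illustrative, not the real implementation).
function runFilenameOrFn_(fn, args) {
  if (typeof fn === 'function') {
    return Promise.resolve(fn.apply(null, args || []));
  }
  return Promise.resolve();
}

var config = { capabilities: { browserName: 'chrome' } };

// An onPrepare hook that can now make capability-aware decisions.
var onPrepare = function(cfg) {
  return 'preparing for ' + cfg.capabilities.browserName;
};

// The proposed one-line change amounts to passing [config] here instead of nothing.
runFilenameOrFn_(onPrepare, [config]).then(function(result) {
  console.log(result); // prints "preparing for chrome"
});
```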

@googlebot

Thanks for your pull request.

It looks like this may be your first contribution to a Google open source project, in which case you'll need to sign a Contributor License Agreement (CLA) at https://cla.developers.google.com/.

If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check the information on your CLA or see this help article on setting the email on your git commits.

Once you've done that, please reply here to let us know. If you signed the CLA as a corporation, please let us know the company's name.

@crrobinson14
Contributor Author

Signed CLA.

@googlebot

CLAs look good, thanks!

@sjelin
Contributor

sjelin commented Jan 11, 2015

If you're going to pass anything like this, you should be passing this.config_.capabilities, not the whole config object.

Also, I don't like the idea of exposing internal variables like that regardless. This seems like a bit of an edge case, which you could always resolve (albeit inelegantly) by having multiple config files.

@crrobinson14
Contributor Author

I'm not sure I understand the concern. These aren't "internal" variables - they came FROM the very same config that onPrepare() is being executed in. It's not passing `this` (which has the driver provider and all sorts of other things)... just the config:

var Runner = function(config) {    // <-- I give you the config in the first place...
  log.set(config);
  this.preparer_ = null;
  this.driverprovider_ = null;
  this.config_ = config;           // <-- You're just giving it back as a convenience...
  // ...
};

The only vars in that block that don't come from the caller are:

    // The CURRENT capabilities being run
    capabilities: {},
    // The command used to start Protractor
    '$0': 'node ./node_modules/protractor/bin/protractor',
    // An array of args used to start Protractor
    _: [ 'test-client/protractor.conf.js' ],
    // ... There may be an additional var or two depending on how the Selenium config was interpreted

I would still argue that these additional vars are useful. onPrepare()'s purpose is very well aligned with the need for these variables. To do anything context-sensitive when you aren't using multiCapabilities you need to take an internal reference to your own config object before you export it. It's just a sensible convenience to pass it back for that case. That's especially valuable when onPrepare is an external script - a nice feature added recently but it's super awkward to pass config data to it...
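The workaround described above - taking a reference to your own config object before exporting it - looks like this (a minimal config-file sketch; field values are illustrative):

```javascript
// Keep a reference to the config object before exporting it, so onPrepare
// can close over it without any injection (sketch; names are illustrative).
var config = {
  specs: ['spec/smoke.js'],
  capabilities: { browserName: 'chrome' },
  onPrepare: function() {
    // The hook can still reach the config it lives in via the closure.
    return 'running specs: ' + config.specs.join(', ');
  }
};
exports.config = config;

console.log(config.onPrepare()); // prints "running specs: spec/smoke.js"
```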

Finally, I would argue that multiple configs is a terrible answer. This whole multiCapabilities feature must have been added for a reason, but there are half a dozen issues posted over the last year asking how to do things with it where the answer was "sorry, you can't, use multiple configs". When it comes down to the trade-offs, what advantage does Protractor gain by NOT making this easier in exchange for avoiding the perceived risk of... making this easier?

@juliemr
Member

juliemr commented Jan 12, 2015

In general, we try to avoid feature creep in the configuration file. multiCapabilities was added because it was highly requested, addresses a common use case, and provides a lot of benefits. On the other hand, it has significant costs, as does anything that complicates the config. The idea of multiCapabilities is that you want to run the same setup on a couple different browsers. We don't want, and cannot support, one config file being able to support all cases. If what you're requesting is easy to do with multiple config files, that's a great solution!

For this specific request - you can get at the current capabilities using browser.getCapabilities (returns a promise). Could you let us know if this doesn't cover your use case? If so, what other context are you looking for?

If we do introduce a way to get the config (which is not out of the question), I'd like it to be more general than being passed as a parameter to onPrepare functions.

@crrobinson14
Contributor Author

Well... this is certainly disheartening. :(

First, it's not one-browser-many-cases. It's one-case-many-browsers. But browser.getCapabilities() doesn't address it because it returns Selenium's block, not Protractor's. Suppose you wanted to use a configuration similar to the one discussed in this issue:

#1354

Suppose you wanted your screenshot tool at the end of each test to drop the file in a directory whose name contained the resolution being captured. There's no way to do this currently - browser.getCapabilities will give you the browser name and version back, but not the other options. But with a single additional option in the capabilities block you could pass a parameter to each test that was specific to that block. This makes it a single line of code to marry these up.
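For illustration, the desired capability-aware screenshot path could be computed like this, assuming a hypothetical custom resolution key in the capabilities block (it is not a standard WebDriver capability, which is exactly why browser.getCapabilities won't return it):

```javascript
// Build a screenshot directory name from the current capability block.
// "resolution" is a hypothetical custom key from the config, per this thread.
function screenshotDir(capability) {
  return 'build/screens/' + capability.browserName + '-' + capability.resolution;
}

console.log(screenshotDir({ browserName: 'chrome', resolution: '1024x768' }));
// prints "build/screens/chrome-1024x768"
```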

Over the past year it seems like this kind of thing has been repeatedly requested:

#1594
("I would definitely benefit from this...")

#1195
var capsPromise = browser.getCapabilities(); // does not contain useful stuff

http://stackoverflow.com/questions/22305297/run-protractor-tests-with-different-window-sizes

etc. Various requests to be able to pass in dynamic elements to a test like user/pass, test data, browser resolutions, etc. are all over StackOverflow and here.

config.params is messy because it's hard-coded. onPrepare is given as the specific answer for how to do things like set resolution in a number of places as well, e.g. here:

http://ramonvictor.github.io/protractor/slides/#/34

I'm not saying Ramon is the de-facto source for this information... but there certainly isn't anything to contradict it elsewhere...

Multiple configs is a bad answer because 95% of my protractor config is identical between each test - it's only the elements in each capabilities block that change. (It's the same app... just across a range of browsers and resolutions...) Having to duplicate 200 lines of configuration data just to change ONE line of "put THESE screen shots in THIS folder" is a huge maintenance hassle.

Finally, it's disheartening because this was the first in a collection of contributions I had planned to make. I thought it was perfectly in line with a testing framework designed specifically for a JS framework that values declarative definitions, dependency injection, DRYness, etc. Discovering that the TESTING framework prefers basically the opposite - inferential connections between items, significant repetition, and pulling in rather than injecting dependencies - makes me wonder if the rest of what I had planned to suggest was misguided.

Is there a document somewhere that outlines the design principles behind this project? I have what I believe are some valuable ideas, but I don't want to be a nuisance... and I certainly don't want to repeatedly trigger 8-hour discussion about 8-byte code changes! :) I'm happy to comply with whatever those principles are... but where can I find them outlined?

@juliemr
Member

juliemr commented Jan 13, 2015

I don't mean to be disheartening, and I definitely don't want to discourage you from submitting PRs! We just want to carefully consider the complexity of changes, guard against making breaking changes in the future, and make sure that PRs solve real use cases. There is no doc with design principles specifically laid out. This change is small in bytes, but conceptually it's a big question ("what information should the test be able to use about the configuration?"). I think that you bring up some legitimate concerns - let's work together to find a good solution.

FYI re multiple configs: Since configs are node objects, you can create a base config and require it to share code. This is what the angular.js project does.
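A minimal sketch of that pattern, with illustrative file names: the shared settings live in one base module, and each variant config requires it and overrides only what differs:

```javascript
// In base.conf.js the shared settings live once (file names illustrative):
var base = {
  framework: 'jasmine',
  specs: ['spec/**/*.spec.js']
};
// module.exports = base;

// Each variant config then requires the base and overrides only what differs,
// e.g. chrome-hd.conf.js:
// var base = require('./base.conf');
base.capabilities = {
  browserName: 'chrome',
  resolution: '1400x1050'  // hypothetical custom key your own onPrepare reads
};
exports.config = base;
```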

@hankduan what do you think about adding a browser.getProcessedConfig (name could be better) function in Runner.prototype.createBrowser? This would give the config post TaskScheduler/TaskRunner processing so that the specs and capabilities line up with what's actually being run by this one browser instance.

@hankduan
Contributor

@crrobinson14
From the examples you provided, it seems like you don't actually need the entire config. Rather, you want to use Protractor to help you distribute custom capability-specific configs like this?

multiCapabilities: [{
  'browserName': 'chrome',
  // count, maxInstances, specs, shardTestFiles, ... (configs used by protractor/webdriver)
  // resolution, browser-specific data, ...          (configs not used by protractor/webdriver)
}],

and use the custom configs in your onPrepare to customize your test:

onPrepare: function(capability) {
  browser.resolution = capability.resolution;
  // jasmine.on_fail = take a screenshot at capability.browserName + capability.resolution
}

@crrobinson14
Contributor Author

Thanks, all, for bearing with me, especially @juliemr for being so diplomatic!

Yes, @hankduan, that's roughly the goal, although I'm trying to be a little generic with my description because running multi-resolution tests is just one use-case. I can think of dozens of others when you get into enterprise-app space.

My position is that onPrepare is perfectly suited for test-run-specific setup - that could mean configuring the browser's resolution, setting up environment variables such as where test outputs will be dropped, setting up timing analytics, etc. These are things that are not ideally done before starting Protractor itself (e.g. via environment variables) because of the added value of multiCapabilities. multiCapabilities is perfect for setting up a collection of browser combinations that you want to test against. Using Chrome + Firefox and adding Safari later, or IE on a box that supports it, is the obvious fit there, and it's the one the documentation suggests.

But as soon as you can say "I want BOTH Firefox and Chrome", the logical next step is to say "Chrome at 1400x1050 and also Chrome at 1024x768". This is a single line of code in onPrepare... as long as you know which resolution is desired. And resolution is not the only use-case - as I said above, you could be testing i18n functionality and setting the browser's language. You can do that with the --lang=LANG flag in Chrome's arguments in the capabilities, but how do you tell your test to go to LANG.mydomain.com, and that you want your test reports dumped out to build/reports/LANG?

Yes, multiple configs are possible - but they're almost identical, so they're a pain to maintain if you decide to switch to SauceLabs one day. On the other hand, a one-line solution is possible if you make the 'current' capabilities config accessible somewhere onPrepare (and other hooks?) can get to it. You just browser.get('http://' + currentCapabilityConfig.lang + '.mydomain.com/'); in your test script, or browser.takeScreenshot().then(... /* write the PNG to 'build/screens/' + currentCapabilityConfig.lang + '/home-page.png' */).

I also still think this would be very valuable when you start using Protractor's recently-added support for making these hooks call external scripts. You could have some very elegant and powerful configurations with very little work.

I could probably imagine several dozen cases where you might want to test some browser-specific aspect of your site/app, something whose only functional difference is a slight change in the capabilities config block.

I suppose the generic feature-request is to provide some way of enabling context-sensitive work to be performed inside hooks (and test scripts?) And I believe the single thing required to make all that happen is to have access to the current capability configuration object being executed. If you prefer not to inject it, perhaps a simple call to obtain it? IMO it's a little weird because there's already browser.getCapabilities() which suggests that it does this (but returns something else), but anything in this category would totally solve the problem for me.

Just brainstorming here... This may be just a horrific idea (I blame a sleepless night) but it seems like Angular's own dependency injection is a clean way to address a lot of these kinds of things. The provider/injector concept is already very familiar to Angular authors, and it seems like it wouldn't be much more than an else case and an array iterator (with calls to providers to obtain the things to inject) in the spot where hooks are being called. Totally understand if you hate the idea, but it does seem like another way to address this kind of problem in a relatively generic way that doesn't lead to hard-coding of hook arguments - and potentially even allows test authors/maintainers to create their own providers...

Or maybe just browser.getCapabilitiesConfig() and we're done here? :) Would you accept a PR that provided that?

hankduan and others added 2 commits January 13, 2015 10:30

Protractor crashes when one uses locators with findElementsOverride (i.e.
any custom protractor locator like by.binding/repeater/etc) in
map/filter/then/each/reduce
@juliemr
Member

juliemr commented Jan 13, 2015

I'm leaning towards browser.get[...]Config since that has the flexibility of also being available inside a test (where we couldn't use dependency injection, since we shouldn't change the arguments of the test `describe` and `it` functions). It sounds like that would cover all of your use cases. I'd take a PR to do that!

I think a good place to add the function would be at https://github.com/angular/protractor/blob/master/lib/runner.js#L189

We should also add some documentation inside the reference config saying how it will be available.
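A sketch of how such a function might be consumed, assuming the proposed name browser.getProcessedConfig and a promise-based return (the exact API is still under discussion in this thread):

```javascript
// Hypothetical usage of the proposed browser.getProcessedConfig() inside a
// Protractor config's onPrepare (name per this discussion; not yet real at
// the time of this thread).
onPrepare: function() {
  return browser.getProcessedConfig().then(function(config) {
    // config.capabilities here would be the single processed capability for
    // this browser instance, after TaskScheduler/TaskRunner processing.
    console.log('Preparing for ' + config.capabilities.browserName);
  });
}
```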

@hankduan did you have an alternate idea?

@hankduan
Contributor

No, I think this is good, although it might be good for the config to be a bit more structured (i.e. browser.config.getCapabilities(), browser.config.getSpecs()), because we switch those up a bit internally; someone passing multiCapabilities into the config might be surprised that their current capabilities are found in capabilities.

@juliemr
Member

juliemr commented Jan 13, 2015

Yeah, that's why I was thinking calling it browser.getProcessedCapabilities might hint that things have been processed :)

EDIT: I typod above, should be browser.getProcessedConfig

@crrobinson14
Contributor Author

Let me know what naming convention you'd both prefer and I'd be happy to re-submit the PR around that!

@hankduan
Contributor

It seems like @crrobinson14 is talking about wanting all configs, not just capabilities though. Is that right?

Alternatively, we can make our config parser more featured and parse all configs that we use internally through the config parser instead of accessing configs as key/value pairs. Then we would just pass around an instance of the config parser.

@juliemr
Member

juliemr commented Jan 13, 2015

Bah I meant browser.getProcessedConfig.

Let's just pass it as an object and use that.

@crrobinson14
Contributor Author

A lot of these methods return promises... but this method doesn't require it. Should it return one anyway (immediately resolved) just to follow the same convention as browser.getCapabilities()?
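For illustration, returning an immediately resolved promise would keep the convention; a sketch using a standard Promise (the actual implementation would presumably use WebDriverJS's promise library instead):

```javascript
// Sketch: return the processed config wrapped in an immediately resolved
// promise, mirroring the promise-based style of browser.getCapabilities().
function getProcessedConfig(config) {
  return Promise.resolve(config);
}

getProcessedConfig({ capabilities: { browserName: 'firefox' } })
  .then(function(config) {
    console.log(config.capabilities.browserName); // prints "firefox"
  });
```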

@googlebot

We found a Contributor License Agreement for you (the sender of this pull request) and all commit authors, but as best as we can tell these commits were authored by someone else. If that's the case, please add them to this pull request and have them confirm that they're okay with these commits being contributed to Google. If we're mistaken and you did author these commits, just reply here to confirm.

@crrobinson14
Contributor Author

Actually I'm going to close this PR while I fix this up. I borked my local repo reverting the original commit and don't want to keep triggering Travis. Back shortly.
