In the test report, if test failed on all devices. Mark [All x test devices] #784


Merged
sunmou99 merged 2 commits into main from feature/report-device on Dec 4, 2021

Conversation

sunmou99
Contributor

@sunmou99 sunmou99 commented Dec 2, 2021

Description

In the report, if a test failed on all devices, use [All test devices] instead of listing each device (see the sketch after the examples below).
e.g.
[emulator_target, emulator_latest, android_target, android_latest, emulator_32bit]
-> [All 5 android_device]

[emulator_target, android_target, android_latest, emulator_32bit]
-> [All 2 FTL Devices, emulator_target, emulator_32bit]
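As a rough illustration of the idea (not the actual report code in this PR), the collapsing can be done by comparing the failed devices against the full device list for the matrix; FULL_DEVICE_MATRIX and summarize_failed_devices below are made-up names for this sketch, and the real list would come from print_matrix_configuration.py.

```python
# Minimal sketch with made-up names; the real device list comes from
# print_matrix_configuration.py.
FULL_DEVICE_MATRIX = ["emulator_target", "emulator_latest", "android_target",
                      "android_latest", "emulator_32bit"]

def summarize_failed_devices(failed_devices):
    """Collapse the failed-device list when it covers the whole matrix."""
    if set(failed_devices) == set(FULL_DEVICE_MATRIX):
        return "[All %d android_device]" % len(FULL_DEVICE_MATRIX)
    return "[%s]" % ", ".join(failed_devices)

print(summarize_failed_devices(FULL_DEVICE_MATRIX))
# -> [All 5 android_device]
print(summarize_failed_devices(["emulator_target", "emulator_32bit"]))
# -> [emulator_target, emulator_32bit]
```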


Testing

Describe how you've tested these changes. Link any manually triggered Integration tests or CPP binary SDK Packaging GitHub Action workflows, if applicable.


Type of Change

Place an x in the applicable box:

  • Bug fix. Add the issue # below if applicable.
  • New feature. A non-breaking change which adds functionality.
  • Other, such as a build process or documentation change.

Notes

  • Bug fixes and feature changes require an update to the Release Notes section of release_build_files/readme.md.
  • Read the contribution guidelines CONTRIBUTING.md.
  • Changes to the public API require an internal API review. If you'd like to help us make Firebase APIs better, please propose your change in a feature request so that we can discuss it together.

@google-cla google-cla bot added the cla: yes label Dec 2, 2021
@sunmou99 sunmou99 added the tests-requested: full Trigger a FULL set of integration tests (uses expanded test matrix). label Dec 2, 2021
@github-actions github-actions bot added tests: in-progress This PR's integration tests are in progress. and removed tests-requested: full Trigger a FULL set of integration tests (uses expanded test matrix). labels Dec 2, 2021
@github-actions

github-actions bot commented Dec 2, 2021

Integration test with FLAKINESS (succeeded after retry)

Requested by @sunmou99 on commit 959ac24
Last updated: Fri Dec 3 23:57 PST 2021
View integration test log & download artifacts

Failures Configs
firestore [TEST] [FLAKINESS] [Android] [ubuntu, windows] [android_target]
(1 failed tests)  CRASH/TIMEOUT

Add flaky tests to go/fpl-cpp-flake-tracker

@github-actions github-actions bot added the tests: failed This PR's integration tests failed. label Dec 2, 2021
@firebase-workflow-trigger firebase-workflow-trigger bot removed the tests: in-progress This PR's integration tests are in progress. label Dec 2, 2021
@sunmou99 sunmou99 added the tests-requested: full Trigger a FULL set of integration tests (uses expanded test matrix). label Dec 2, 2021
@github-actions github-actions bot added tests: in-progress This PR's integration tests are in progress. and removed tests-requested: full Trigger a FULL set of integration tests (uses expanded test matrix). tests: failed This PR's integration tests failed. labels Dec 2, 2021
@jonsimantov
Contributor

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

@sunmou99 sunmou99 changed the title In the test report, if test failed on all devices. Mark [All test devices] In the test report, if test failed on all devices. Mark [All x test devices] Dec 2, 2021
@sunmou99
Contributor Author

sunmou99 commented Dec 2, 2021

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

Count added.
However, we only use [All *] when len(devices) > 1. If there is only one device, the report just uses the device name.
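A tiny hypothetical sketch of that rule (the helper name is made up, not code from this PR): collapse only when more than one device is configured.

```python
# Hypothetical: collapse to "[All N ...]" only when more than one device is
# configured; for a single configured device, keep the device name.
def describe_failures(failed_devices, all_devices):
    if len(all_devices) > 1 and set(failed_devices) == set(all_devices):
        return "[All %d test devices]" % len(all_devices)
    return "[%s]" % ", ".join(failed_devices)

print(describe_failures(["android_target"], ["android_target"]))
# -> [android_target]  (only one configured device, so just its name)
```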

@jonsimantov
Contributor

jonsimantov commented Dec 3, 2021

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

Count added. However, we only use [All *] when len(devices) > 1. If there is only one device, the report just uses the device name.

It would still be good to know if the ONLY device failed, as opposed to if 1 device out of many failed.

@github-actions github-actions bot added the tests: failed This PR's integration tests failed. label Dec 3, 2021
@sunmou99
Contributor Author

sunmou99 commented Dec 3, 2021

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

Count added. However, we only use [All *] when len(devices) > 1. If there is only one device, the report just uses the device name.

It would still be good to know if the ONLY device failed, as opposed to if 1 device out of many failed.

I see where you're coming from. If we could provide a number like "x/y", meaning "x" out of "y" configs have errors, that might be even better.
But enabling this feature requires some more effort:
Currently, the [All *] logic reads config info from print_matrix_configuration.py, so it works correctly for the nightly test.
However, to simplify the logic, we ignore all succeeded configs. As a result, the report may not apply [All *] to a customized matrix; instead, it just lists all the failed configs.

I feel that's fine, because we don't generate a report in a PR comment for a manually triggered customized matrix. And for a label trigger, the author knows what he/she triggered.

Please let me know what you think, and I can address your concern in a new PR.
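For what the "x/y" form could look like, here is a minimal hypothetical sketch; it assumes the total config list is available (e.g. read from print_matrix_configuration.py), which is exactly the extra effort mentioned above. summarize_ratio is an illustrative name, not code from this PR.

```python
# Hypothetical "x/y" summary: x failed configs out of y total configs.
# The total list would have to come from print_matrix_configuration.py,
# since succeeded configs are currently not recorded.
def summarize_ratio(failed_configs, all_configs):
    return "[%d/%d test devices]" % (len(failed_configs), len(all_configs))

print(summarize_ratio(
    ["android_target"],
    ["emulator_target", "emulator_latest", "android_target",
     "android_latest", "emulator_32bit"]))
# -> [1/5 test devices]
```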

@jonsimantov
Contributor

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

Count added. However, we only use [All *] when len(devices) > 1. If there is only one device, the report just uses the device name.

It would still be good to know if the ONLY device failed, as opposed to if 1 device out of many failed.

I see where you're coming from. If we could provide a number like "x/y", meaning "x" out of "y" configs have errors, that might be even better. But enabling this feature requires some more effort: currently, the [All *] logic reads config info from print_matrix_configuration.py, so it works correctly for the nightly test. However, to simplify the logic, we ignore all succeeded configs. As a result, the report may not apply [All *] to a customized matrix; instead, it just lists all the failed configs.

I feel that's fine, because we don't generate a report in a PR comment for a manually triggered customized matrix. And for a label trigger, the author knows what he/she triggered.

Please let me know what you think, and I can address your concern in a new PR.

I like the idea of doing x/y. It would be fine to do that in a follow-up. So this version will just do ALL DEVICES for 2+ devices, and just name the device when there is only 1 device?

@firebase-workflow-trigger firebase-workflow-trigger bot removed the tests: in-progress This PR's integration tests are in progress. label Dec 3, 2021
@sunmou99
Contributor Author

sunmou99 commented Dec 3, 2021

Would it be possible to put the count of devices there? It would mean something different to me if it failed on ALL 3 DEVICES vs ALL 1 DEVICES. Maybe phrase that better, too - ALL DEVICES (3) vs ALL DEVICES (1)

Count added. However, we only use [All *] when len(devices) > 1. If there is only one device, the report just uses the device name.

It would still be good to know if the ONLY device failed, as opposed to if 1 device out of many failed.

I see where you're coming from. If we could provide a number like "x/y", meaning "x" out of "y" configs have errors, that might be even better. But enabling this feature requires some more effort: currently, the [All *] logic reads config info from print_matrix_configuration.py, so it works correctly for the nightly test. However, to simplify the logic, we ignore all succeeded configs. As a result, the report may not apply [All *] to a customized matrix; instead, it just lists all the failed configs.
I feel that's fine, because we don't generate a report in a PR comment for a manually triggered customized matrix. And for a label trigger, the author knows what he/she triggered.
Please let me know what you think, and I can address your concern in a new PR.

I like the idea of doing x/y. It would be fine to do that in a follow-up. So this version will just do ALL DEVICES for 2+ devices, and just name the device when there is only 1 device?

Sure, I will apply this x/y to all configs in a follow-up PR.

And, no. This version will show [ALL *] only when the failed config values match all the config values defined in print_matrix_configuration.py.
This version cannot do [ALL DEVICES] for just any 2+ devices, because we ignore all succeeded configs (devices in this case). The count of succeeded devices is critical here: I cannot be sure that [ALL DEVICES] failed until I know the count of succeeded devices == 0.
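To illustrate why the succeeded count (or, equivalently, the full list from print_matrix_configuration.py) matters, here is a small hypothetical example: the same failed list can mean "all devices failed" in a 2-device customized matrix or "2 of 5 failed" in the nightly matrix, and the report cannot tell them apart from the failed configs alone.

```python
# Hypothetical: the report only sees the failed configs.
failed = ["emulator_target", "emulator_32bit"]

# Nightly matrix (known from print_matrix_configuration.py): 5 devices,
# so this is NOT an all-devices failure.
nightly = ["emulator_target", "emulator_latest", "android_target",
           "android_latest", "emulator_32bit"]
print(set(failed) == set(nightly))      # False -> list the failed devices

# Customized matrix with only these 2 devices: the same failed list now
# means every device failed, but that device list is not recorded anywhere.
customized = ["emulator_target", "emulator_32bit"]
print(set(failed) == set(customized))   # True -> could print "[All 2 ...]"
```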

@sunmou99 sunmou99 added skip-release-notes Skip release notes check and removed tests: failed This PR's integration tests failed. labels Dec 4, 2021
@sunmou99 sunmou99 enabled auto-merge (squash) December 4, 2021 04:47
@sunmou99 sunmou99 merged commit 959ac24 into main Dec 4, 2021
@github-actions github-actions bot added the tests: in-progress This PR's integration tests are in progress. label Dec 4, 2021
@github-actions github-actions bot added the tests: succeeded This PR's integration tests succeeded. label Dec 4, 2021
@firebase-workflow-trigger firebase-workflow-trigger bot removed the tests: in-progress This PR's integration tests are in progress. label Dec 4, 2021
@sunmou99 sunmou99 deleted the feature/report-device branch December 4, 2021 08:54
@firebase firebase locked and limited conversation to collaborators Jan 4, 2022
Labels
cla: yes, skip-release-notes, tests: succeeded