Unverified Commit 93c581a7 authored by gaaclarke, committed by GitHub

Formatted and removed lints from devicelab README.md (#117239)

parent ebeb4918
@@ -7,29 +7,36 @@
the tests are referred to as "tasks" in the API, but since we primarily use it
for testing, this document refers to them as "tests".

Current statuses for the devicelab are available at
<https://flutter-dashboard.appspot.com/#/build>. See the [dashboard user
guide](https://github.com/flutter/cocoon/blob/master/app_flutter/USER_GUIDE.md)
for information on using the dashboards.

## Table of Contents

* [How the DeviceLab runs tests](#how-the-devicelab-runs-tests)
* [Running tests locally](#running-tests-locally)
* [Writing tests](#writing-tests)
* [Adding tests to continuous
  integration](#adding-tests-to-continuous-integration)
* [Adding tests to presubmit](#adding-tests-to-presubmit)

## How the DeviceLab runs tests

DeviceLab tests are run against physical devices in Flutter's lab (the
"DeviceLab").

Tasks specify the type of device they are to run on (`linux_android`,
`mac_ios`, `mac_android`, `windows_android`, etc.). When a device in the lab is
free, it will pick up tasks that need to be completed.

1. If the task succeeds, the test runner reports the success and uploads its
   performance metrics to Flutter's infrastructure. Not all tasks record
   performance metrics.
2. If the task fails, it is automatically rerun. If the last rerun succeeds,
   the task is reported as a success, and a flake is flagged on the test
   result.
3. If the task fails in all reruns, the test runner reports the failure to
   Flutter's infrastructure and no performance metrics are collected.

## Running tests locally

@@ -63,10 +70,11 @@ To run a test, use option `-t` (`--task`):

Where `NAME_OR_PATH_OF_TEST` can be either of:

* the _name_ of a task, which is a file's basename in `bin/tasks`. Example:
  `complex_layout__start_up`.
* the path to a Dart _file_ corresponding to a task, which resides in
  `bin/tasks`. Tip: most shells support path auto-completion using the Tab key.
  Example: `bin/tasks/complex_layout__start_up.dart`.

To run multiple tests, repeat option `-t` (`--task`) multiple times, as in the
sketch below.

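The commands themselves sit in an elided portion of this excerpt; the sketch
below assumes the runner is `bin/run.dart` under `dev/devicelab`, invoked with
the Dart SDK cached in the Flutter checkout (the paths are assumptions, not
shown above):

```sh
# From the dev/devicelab directory of a Flutter checkout (assumed location).
# Run a single task by name:
../../bin/cache/dart-sdk/bin/dart bin/run.dart -t complex_layout__start_up

# Run several tasks in one invocation by repeating -t; the name and path
# forms can be mixed freely:
../../bin/cache/dart-sdk/bin/dart bin/run.dart \
  -t complex_layout__start_up \
  -t bin/tasks/flutter_gallery__start_up.dart
```
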
@@ -107,19 +115,19 @@ Example:

The `--ab=10` tells the runner to run an A/B test 10 times.

`--local-engine=host_debug_unopt` tells the A/B test to use the
`host_debug_unopt` engine build. `--local-engine` is required for A/B tests.

`--ab-result-file=filename` can be used to provide an alternate location to
output the JSON results file (defaults to `ABresults#.json`). A single `#`
character can be used to indicate where to insert a serial number if a file
with that name already exists; otherwise, the file will be overwritten.

A/B can run exactly one task. Multiple tasks are not supported.

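Putting the flags above together, a full A/B invocation might look like this
sketch (the runner path is an assumption carried over from the earlier
example; the task name is taken from the sample output below):

```sh
# Run the benchmark 10 times against a locally built engine and write the
# scores to ABresults#.json ('#' becomes a serial number on collision).
../../bin/cache/dart-sdk/bin/dart bin/run.dart \
  --ab=10 \
  --local-engine=host_debug_unopt \
  --ab-result-file=ABresults#.json \
  -t bench_card_infinite_scroll
```
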
Example output:

```text
Score                                                           Average A (noise)  Average B (noise)  Speed-up
bench_card_infinite_scroll.canvaskit.drawFrameDuration.average  2900.20 (8.44%)    2426.70 (8.94%)    1.20x
bench_card_infinite_scroll.canvaskit.totalUiFrame.average       4964.00 (6.29%)    4098.00 (8.03%)    1.21x
```

@@ -142,13 +150,14 @@ Summarize tool example:

```
ABresults.json ABresults1.json ABresults2.json ...
```

`--[no-]tsv-table` tells the tool to print the summary in a table with tabs
for easy spreadsheet entry (defaults to on).

`--[no-]raw-summary` tells the tool to print all per-run data collected by the
A/B test, formatted with tabs for easy spreadsheet entry (defaults to on).

Multiple trailing filenames can be specified; each such results file will be
processed in turn.

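As a hedged sketch of how the summarize flags combine (the `bin/summarize.dart`
entry point and the runner path are assumptions here; the actual command
appears in an elided part of this README):

```sh
# Print one tab-separated summary table across several A/B result files,
# omitting the per-run raw data.
../../bin/cache/dart-sdk/bin/dart bin/summarize.dart \
  --tsv-table --no-raw-summary \
  ABresults.json ABresults1.json ABresults2.json
```
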
## Reproducing broken builds locally

@@ -208,7 +217,7 @@ _TASK_- the name of your test that also matches the name of the

1. Add target to
   [.ci.yaml](https://github.com/flutter/flutter/blob/master/.ci.yaml)
   * Mirror an existing one that has the recipe `devicelab_drone` (see the
     sketch after this list)

If your test needs to run on multiple operating systems, create a separate
target for each operating system.

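For illustration only, a target entry might look roughly like the following.
The field names mirror common `.ci.yaml` entries, but treat the exact shape
(including the recipe path) as an assumption and copy a real neighboring
target instead:

```yaml
# Hypothetical .ci.yaml target; mirror an existing devicelab_drone target
# rather than copying this sketch verbatim.
targets:
  - name: Linux_android complex_layout__start_up
    recipe: devicelab/devicelab_drone   # the devicelab_drone recipe
    presubmit: false
    timeout: 60
    properties:
      tags: >
        ["devicelab", "android", "linux"]
      task_name: complex_layout__start_up
```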