# Flutter DeviceLab

DeviceLab is a physical lab that tests Flutter on real devices.

This package contains the code for the test framework and tests. More
generally, the tests are referred to as "tasks" in the API, but since we
primarily use the framework for testing, this document refers to them as
"tests".

Current statuses for the DeviceLab are available at
<https://flutter-dashboard.appspot.com/#/build>. See the [dashboard user
guide](https://github.com/flutter/cocoon/blob/master/app_flutter/USER_GUIDE.md)
for information on using the dashboards.

## Table of Contents

* [How the DeviceLab runs tests](#how-the-devicelab-runs-tests)
* [Running tests locally](#running-tests-locally)
* [Reproducing broken builds locally](#reproducing-broken-builds-locally)
* [Writing tests](#writing-tests)
* [Adding tests to continuous
  integration](#adding-tests-to-continuous-integration)
* [Adding tests to presubmit](#adding-tests-to-presubmit)

## How the DeviceLab runs tests

DeviceLab tests are run against physical devices in Flutter's lab (the
"DeviceLab").

Tasks specify the type of device they are to run on (`linux_android`,
`mac_ios`, `mac_android`, `windows_android`, etc.). When a device in the lab
is free, it will pick up tasks that need to be completed.

1. If the task succeeds, the test runner reports the success and uploads its
   performance metrics to Flutter's infrastructure. Not all tasks record
   performance metrics.
2. If the task fails, it is automatically rerun. If the last rerun succeeds,
   the task is reported as a success, and the earlier failures are flagged as
   a flake in the test result.
3. If the task fails in all reruns, the test runner reports the failure to
   Flutter's infrastructure, and no performance metrics are collected.

## Running tests locally

Make sure your tests pass locally before submitting them to the CI
environment. Below is a handful of commands that run tests in a way similar to
how the CI environment runs them. These commands are also useful when you need
to reproduce a CI test failure locally.

### Prerequisites

You must set the `ANDROID_SDK_ROOT` environment variable to run
tests on Android. If you have a local build of the Flutter engine, then you have
a copy of the Android SDK at `.../engine/src/third_party/android_tools/sdk`.

You can find where your Android SDK is located by running `flutter doctor -v`.
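
For example (the path below is illustrative; use the "Android toolchain"
location that `flutter doctor -v` reports on your machine):

```sh
# Hypothetical SDK location; substitute the path from `flutter doctor -v`.
export ANDROID_SDK_ROOT="$HOME/Android/Sdk"
```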

### Warnings

Running the devicelab will modify your local environment. For example, it
will start and stop Gradle.

### Running specific tests

To run a test, use the `-t` (`--task`) option:

```sh
# from the .../flutter/dev/devicelab directory
../../bin/cache/dart-sdk/bin/dart bin/test_runner.dart test -t {NAME_OR_PATH_OF_TEST}
```

Where `NAME_OR_PATH_OF_TEST` can be either:

* the _name_ of a task, which is a file's basename in `bin/tasks`. Example:
  `complex_layout__start_up`.
* the path to a Dart _file_ corresponding to a task, which resides in
  `bin/tasks`. Tip: most shells support path auto-completion using the Tab key.
  Example: `bin/tasks/complex_layout__start_up.dart`.
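
For example, both of the following invocations run the same task:

```sh
# By task name ...
../../bin/cache/dart-sdk/bin/dart bin/test_runner.dart test -t complex_layout__start_up
# ... or by the path to its Dart file.
../../bin/cache/dart-sdk/bin/dart bin/test_runner.dart test -t bin/tasks/complex_layout__start_up.dart
```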

To run multiple tests, pass the `-t` (`--task`) option multiple times:

```sh
../../bin/cache/dart-sdk/bin/dart bin/run.dart -t test1 -t test2 -t test3
```

### Running tests against a local engine build

To run devicelab tests against a local engine build, pass the appropriate
flags to `bin/run.dart`:

```sh
../../bin/cache/dart-sdk/bin/dart bin/run.dart --task=[some_task] \
  --local-engine-src-path=[path_to_local]/engine/src \
  --local-engine=[local_engine_architecture]
```

An example of a local engine architecture is `android_debug_unopt_x86`.
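
For example, a full invocation might look like this (the engine checkout path
is illustrative; point `--local-engine-src-path` at your own checkout):

```sh
../../bin/cache/dart-sdk/bin/dart bin/run.dart --task=complex_layout__start_up \
  --local-engine-src-path=$HOME/engine/src \
  --local-engine=android_debug_unopt_x86
```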

### Running an A/B test for engine changes

You can run an A/B test that compares the performance of the default engine
against a local engine build. The test runs the same benchmark a specified
number of times against both engines, then outputs a tab-separated spreadsheet
with the results and stores them in a JSON file for future reference. The
results can be copied to a Google Spreadsheet for further inspection and the
JSON file can be reprocessed with the `summarize.dart` command for more detailed
output.

Example:

```sh
../../bin/cache/dart-sdk/bin/dart bin/run.dart --ab=10 \
  --local-engine=host_debug_unopt \
  -t bin/tasks/web_benchmarks_canvaskit.dart
```

The `--ab=10` option tells the runner to run the A/B test 10 times.

`--local-engine=host_debug_unopt` tells the A/B test to use the
`host_debug_unopt` engine build. `--local-engine` is required for A/B tests.

`--ab-result-file=filename` can be used to provide an alternate location to
output the JSON results file (defaults to `ABresults#.json`). A single `#`
character can be used to indicate where to insert a serial number if a file
with that name already exists; otherwise, the file will be overwritten.
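
For example, extending the command above with a custom results file (the
filename is illustrative):

```sh
# `#` is replaced with a serial number when a file with that name already
# exists: ABresults_canvaskit.json, ABresults_canvaskit1.json, and so on.
../../bin/cache/dart-sdk/bin/dart bin/run.dart --ab=10 \
  --local-engine=host_debug_unopt \
  --ab-result-file=ABresults_canvaskit#.json \
  -t bin/tasks/web_benchmarks_canvaskit.dart
```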

An A/B test can run exactly one task; multiple tasks are not supported.

Example output:

```text
Score	Average A (noise)	Average B (noise)	Speed-up
bench_card_infinite_scroll.canvaskit.drawFrameDuration.average	2900.20 (8.44%)	2426.70 (8.94%)	1.20x
bench_card_infinite_scroll.canvaskit.totalUiFrame.average	4964.00 (6.29%)	4098.00 (8.03%)	1.21x
draw_rect.canvaskit.windowRenderDuration.average	1959.45 (16.56%)	2286.65 (0.61%)	0.86x
draw_rect.canvaskit.sceneBuildDuration.average	1969.45 (16.37%)	2294.90 (0.58%)	0.86x
draw_rect.canvaskit.drawFrameDuration.average	5335.20 (17.59%)	6437.60 (0.59%)	0.83x
draw_rect.canvaskit.totalUiFrame.average	6832.00 (13.16%)	7932.00 (0.34%)	0.86x
```

The output contains the average and noise for each score. More importantly,
it contains the speed-up value, i.e. how much _faster_ the local engine is
than the default engine. Values less than 1.0 indicate a slow-down. For
example, 0.5x means the local engine is twice as slow as the default engine,
and 2.0x means it's twice as fast. Higher is better.

Summarize tool example:

```sh
../../bin/cache/dart-sdk/bin/dart bin/summarize.dart --[no-]tsv-table --[no-]raw-summary \
    ABresults.json ABresults1.json ABresults2.json ...
```

`--[no-]tsv-table` tells the tool to print the summary in a table with tabs
for easy spreadsheet entry (defaults to on).

`--[no-]raw-summary` tells the tool to print all per-run data collected by
the A/B test, formatted with tabs for easy spreadsheet entry (defaults to on).

Multiple trailing filenames can be specified; each results file will be
processed in turn.

## Reproducing broken builds locally

To reproduce the breakage locally, `git checkout` the corresponding Flutter
revision and note the name of the test that failed; in the example below, the
failing test is `flutter_gallery__transition_perf`. The test name can be
passed to the `run.dart` command. For example:

```sh
../../bin/cache/dart-sdk/bin/dart bin/run.dart -t flutter_gallery__transition_perf
```

## Writing tests

A test is a simple Dart program that lives under `bin/tasks` and uses
`package:flutter_devicelab/framework/framework.dart` to define and run a _task_.

Example:

```dart
import 'dart:async';

import 'package:flutter_devicelab/framework/framework.dart';

Future<void> main() async {
  await task(() async {
    // ... do something interesting ...

    // Aggregate results into a JSONable Map structure.
    final Map<String, dynamic> testResults = ...;

    // Report success.
    return TaskResult.success(testResults);

    // Or report a failure.
    return TaskResult.failure('Something went wrong!');
  });
}
```

Only one `task` is permitted per program, but that task can run any number of
tests internally. A task has a name; it succeeds or fails, and is reported to
the dashboard, independently of other tasks.

A task runs in its own standalone Dart VM and reports results via the Dart VM
service protocol. This ensures that tasks do not interfere with each other and
lets the CI system time out and clean up tasks that get stuck.
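
For instance, here is a minimal self-contained sketch of a complete task. The
workload and metric name are made up for illustration; a real task would drive
a Flutter app and collect real benchmark data:

```dart
import 'dart:async';

import 'package:flutter_devicelab/framework/framework.dart';

Future<void> main() async {
  await task(() async {
    // Hypothetical workload standing in for real test or benchmark work.
    final Stopwatch stopwatch = Stopwatch()..start();
    await Future<void>.delayed(const Duration(milliseconds: 100));
    stopwatch.stop();

    // Report failure if the (made-up) workload took unreasonably long.
    if (stopwatch.elapsedMilliseconds > 10000) {
      return TaskResult.failure('Workload took too long!');
    }

    // Otherwise report success with a JSONable map of metrics.
    return TaskResult.success(<String, dynamic>{
      'elapsed_millis': stopwatch.elapsedMilliseconds,
    });
  });
}
```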

## Adding tests to continuous integration

Host-only tests should be added to `flutter_tools`.

There are several PRs needed to add a DeviceLab task to CI.

_TASK_: the name of your test, which also matches the name of the file in
`bin/tasks` without the `.dart` extension.

1. Add a target to
   [.ci.yaml](https://github.com/flutter/flutter/blob/master/.ci.yaml).
   * Mirror an existing target that uses the recipe `devicelab_drone`.

If your test needs to run on multiple operating systems, create a separate
target for each operating system.

## Adding tests to presubmit

Flutter's DeviceLab has limited capacity in presubmit. File an infra ticket
to investigate the feasibility of adding a test to presubmit.