Jackson Gardner authored
By default, the browser fuzzes the timer APIs so that they have a granularity of approximately 100 microseconds (this is a Spectre mitigation technique). However, many of the things we are trying to measure are actually much finer-grained than 100 microseconds, so many of our benchmarks are extremely noisy and don't provide accurate data. By serving the initial script files with the `Cross-Origin-Opener-Policy: same-origin` and `Cross-Origin-Embedder-Policy: require-corp` HTTP headers, the browser runs the benchmarks in a `crossOriginIsolated` context, which restores the fine granularity of APIs such as `performance.now()` to microsecond precision.

Also, we were previously treating anything more than one standard deviation away from the mean as an outlier. In a normal distribution, that keeps only about 68% of the data and discards the rest as outliers, which is not ideal. Using a threshold of two standard deviations keeps about 95% of the data and treats only the remaining 5% as outliers, which seems much more reasonable.
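A minimal sketch of serving the benchmark bundle with those headers, using the `shelf` and `shelf_static` packages; the middleware name, served directory, and port here are illustrative assumptions, not the actual benchmark server:

```dart
import 'package:shelf/shelf.dart';
import 'package:shelf/shelf_io.dart' as io;
import 'package:shelf_static/shelf_static.dart';

/// Adds the COOP/COEP headers to every response so that the page runs
/// with `crossOriginIsolated == true` and fine-grained timers.
Middleware crossOriginIsolationHeaders() {
  return (Handler inner) {
    return (Request request) async {
      final Response response = await inner(request);
      return response.change(headers: <String, String>{
        'Cross-Origin-Opener-Policy': 'same-origin',
        'Cross-Origin-Embedder-Policy': 'require-corp',
      });
    };
  };
}

Future<void> main() async {
  final Handler handler = const Pipeline()
      .addMiddleware(crossOriginIsolationHeaders())
      .addHandler(
          createStaticHandler('build/web', defaultDocument: 'index.html'));
  await io.serve(handler, 'localhost', 8080);
}
```

Before trusting `performance.now()` readings, the page can also check the browser's `crossOriginIsolated` flag at runtime to confirm the headers actually took effect.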
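And a small sketch of the two-standard-deviation outlier rule described above; the function name and the plain list of sample times are assumptions for illustration, not the actual scoring code:

```dart
import 'dart:math' as math;

/// Keeps samples within [threshold] standard deviations of the mean and
/// treats everything else as an outlier. With threshold = 2, roughly 95%
/// of normally distributed samples are kept.
List<double> discardOutliers(List<double> samples, {double threshold = 2}) {
  final double mean = samples.reduce((a, b) => a + b) / samples.length;
  final double variance = samples
          .map((double s) => (s - mean) * (s - mean))
          .reduce((a, b) => a + b) /
      samples.length;
  final double stdDev = math.sqrt(variance);
  return samples
      .where((double s) => (s - mean).abs() <= threshold * stdDev)
      .toList();
}
```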
| Name |
|---|
| android |
| assets |
| ios |
| lib |
| macos |
| test |
| test_driver |
| test_memory |
| web |
| .gitignore |
| README.md |
| pubspec.yaml |