Some notes about the measurement methodology:
- The benchmark repeats a series of operations on a small template, and appends that template to a table. It repeats this times.
- Each operation is measured using `performance.now()`. The resolution of this timer varies with the browser and with the page's cross-origin isolation (COOP) state. When the site is cross-origin isolated, the resolution is 5us for Chromium, 20us for WebKit, and 100us for Gecko. When non-isolated, these numbers are 100us for Chromium and 1000us for WebKit and Gecko. (A sketch of how this quantization can be measured appears after these notes.)
- Because each operation being measured runs on the order of ~1us, but is being measured with a timer whose resolution is coarser than 1us, we must rely on "dithering": averaging across N measurements, which improves the effective resolution by a factor of N (see the averaging sketch after these notes).
- This page is currently , and the measured quantization is microseconds. We have chosen a value of N = , which should give an effective timer resolution of microseconds.
- Due to differences in timer precision across browsers, this benchmark takes a different amount of total time to run in different browsers. The overall runtime of the benchmark should therefore not be used to compare browsers.
- Garbage collection can occur during the benchmark run, adding to whatever measurement is in progress at the time. It is hard to control GC timing on non-Chromium browsers (pointers appreciated if you know how!). To be fair, no attempt is made to control GC on any browser; GC is instead treated as a fixed cost that lands at random points across the measurements, and is otherwise ignored. Note that noise-reduction methods such as taking the median, which might otherwise be able to remove the periodic GC noise, cannot be used because of the resolution issue mentioned above.
- Each measurement is marked "trivial" or "non-trivial". This is only a guess at the implementation complexity of the underlying C++ for the operation, and nothing more than a guess, especially across browsers. A "trivial" operation is something like `firstChild`, which should be just a pointer lookup. A "non-trivial" operation is something like setting `innerHTML`, which needs to run the HTML parser.
- Because each measurement is made using two calls to `performance.now()`, the cost of *one* of those calls is included in each measurement. To subtract this cost, a calibration loop is run at the beginning of the benchmark to measure it, and the result is then subtracted from every measurement (see the calibration sketch after these notes). On this browser, that cost was measured to be us.
- This benchmark is hosted at https://main--earnest-semifreddo-71f4ac.netlify.app/benchmark.html
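
The timer quantization referred to above is measured by the page itself. The benchmark's own code is not reproduced here, but a minimal sketch of one common way to measure it, by busy-waiting until `performance.now()` ticks over, looks like this (the function name and `samples` parameter are illustrative, not taken from the benchmark):

```js
// Hypothetical sketch: estimate the quantization step of performance.now()
// by spinning until the reported time changes and keeping the smallest
// observed increment.
function measureTimerQuantization(samples = 50) {
  let minDelta = Infinity;
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    let next = start;
    while (next === start) {
      next = performance.now(); // busy-wait until the clock ticks over
    }
    minDelta = Math.min(minDelta, next - start);
  }
  return minDelta; // in milliseconds, e.g. 0.005 (5us) when cross-origin isolated in Chromium
}
```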
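The averaging ("dithering") approach can be sketched as follows. This is not the benchmark's actual code; `measureAveraged` and its parameters are hypothetical, and the example operation in the comment is illustrative only:

```js
// Hypothetical sketch of the averaging approach: each run of the operation is
// timed individually with the coarse timer, and the N quantized samples are
// averaged to recover sub-resolution precision.
function measureAveraged(operation, n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    operation();
    const end = performance.now();
    total += end - start; // each sample is quantized to the timer resolution
  }
  return total / n; // mean duration, finer than any single sample
}

// Example (illustrative names, not the benchmark's own):
// const mean = measureAveraged(() => template.firstChild, 100000);
```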
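Finally, a sketch of the calibration loop that estimates the cost of a single `performance.now()` call, under the assumption that timing back-to-back calls captures roughly one call's cost; the names here are again illustrative:

```js
// Hypothetical sketch of the baseline correction: time back-to-back calls to
// performance.now() to estimate the cost of one call, then subtract that
// constant from every start/stop measurement.
function measureTimerOverhead(iterations = 100000) {
  let total = 0;
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    const end = performance.now(); // nothing in between: only the timer is paid for
    total += end - start;
  }
  return total / iterations;
}

const timerOverhead = measureTimerOverhead();
// Each raw sample then becomes: (end - start) - timerOverhead
```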