Summary
I've been curious about the impact of automatically detecting host component names. In the absence of a better large test suite, I've used RNTL's own tests.
Here is my process:
- Disable the tests that fail with hardcoded `hostComponentNames` using `test.skip()`.
- Measure execution time with `time yarn test` in the relevant scenarios.
- Add a `console.log()` to `detectHostComponentNames()` and count how many times it runs (in a separate run from the time measurement); see the instrumentation sketch below.

Note: we also have `beforeEach(() => resetToDefaults())`, so I measured with and without it.
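For context, the counting instrumentation looked roughly like this (a minimal sketch, not RNTL's actual implementation; the `HostComponentNames` fields and the counter are illustrative):

```ts
// Illustrative sketch of the counting instrumentation. The counter and the
// log line are my additions for measurement; the detection body is elided.
type HostComponentNames = {
  text: string;
  textInput: string;
  switch: string;
};

let callCount = 0;

export function detectHostComponentNames(): HostComponentNames {
  callCount += 1;
  console.log(`detectHostComponentNames() run #${callCount}`);
  // ...actual detection: render probe elements and read the resulting
  // host component names back from the rendered tree...
  return { text: 'Text', textInput: 'TextInput', switch: 'Switch' };
}
```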
The results:
- Without `detectHostComponentNames`: 39.4 s / 0 runs
- With `detectHostComponentNames`, without `beforeEach(() => resetToDefaults())`: 44.6 s / 63 runs => 13% overhead
- With `detectHostComponentNames`, with `beforeEach(() => resetToDefaults())`: 47.1 s / 585 runs => 20% overhead
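(Overhead is computed relative to the baseline: (44.6 - 39.4) / 39.4 ≈ 13.2% and (47.1 - 39.4) / 39.4 ≈ 19.5%.)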
It seems that Jest by default resets the runtime state for each test file, so `detectHostComponentNames()` gets called once per file.
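One alternative worth researching might be memoizing the detection result at module level, so repeated calls within a file (e.g. after each `resetToDefaults()`) hit a cache. A rough sketch, assuming RNTL's `detectHostComponentNames()`; the wrapper name and module path are hypothetical:

```ts
// Sketch of module-level memoization (hypothetical, not RNTL API).
// Jest still gives each test file a fresh module registry, so the cache is
// rebuilt once per file, but within a file the expensive detection runs at
// most once, even across resetToDefaults() calls.
import {
  detectHostComponentNames,
  type HostComponentNames,
} from './detect-host-component-names'; // hypothetical module path

let cachedNames: HostComponentNames | null = null;

export function getHostComponentNamesCached(): HostComponentNames {
  if (cachedNames === null) {
    cachedNames = detectHostComponentNames(); // expensive: renders a probe tree
  }
  return cachedNames;
}
```

This wouldn't remove the per-file cost Jest imposes, but it could cut the 585 runs in the `resetToDefaults()` scenario down to roughly once per file (i.e. closer to the 63 runs observed without the reset).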
Here are my questions:
- Is my methodology correct?
- Did you observe the same or a different effect on your own test base? (Here is a link to the previous perf-impact issue of `detect...`.)
- Is that number significant enough to warrant research into alternative approaches?