Description
Describe the bug
If a file is covered by thousands of contexts, the HTML file produced for it is too big to open in Firefox - the browser simply hangs.
To Reproduce
Our codebase's .coverage is around 55MiB and looks like this:
sqlite> select count(*) from file;
1249
sqlite> select count(*) from context;
26184
sqlite> select count(*) from line_bits;
567073
Running coverage html with show_contexts = True takes a long while.
without show_contexts:
HTML size = 124MiB
gen time = 1m30s
with show_contexts:
HTML size = 1.5GiB
gen time = 2m40s
The bulk of the size in generated HTML is occupied by:
<span class="ctxs">
<span>python_fname::test_name|run</span>
...
</span>
Expected behavior / Proposed solution
Maybe some kind of ad-hoc compression could work?
Replace the span with:
<span class="ctxs">
4,8,15,16,23,42
...
</span>
Then embed a JS block with a dictionary that maps the numbers above to python_fname::test_name|run
. On expanding the ctxs span, JS could populate it with the full context names.
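A minimal sketch of the proposed scheme, in Python for illustration (all file and context names here are hypothetical, and this is not coverage.py's actual report code):

```python
import json

# Hypothetical context names as they currently appear verbatim in every span.
contexts = [
    "tests/test_alpha.py::test_one|run",
    "tests/test_beta.py::test_two|run",
]

# Assign each distinct context name a small integer index.
index = {name: i for i, name in enumerate(contexts)}

def ctxs_span(names):
    """Render the compact span: comma-separated indices instead of full names."""
    nums = ",".join(str(index[n]) for n in names)
    return f'<span class="ctxs">{nums}</span>'

# One JS dictionary embedded per page maps indices back to full names;
# on expanding a span, JS would look up each index here.
js_block = f"<script>var contexts = {json.dumps(contexts)};</script>"

span = ctxs_span(contexts)  # '<span class="ctxs">0,1</span>'
```

Each full context name is then stored once per page instead of once per covered line, which is where the bulk of the savings would come from.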
I've estimated that at 120 chars per pytest context name, with all 26184 contexts included in the JS block, the block would be around 3.14MiB. That is far less than the 192MiB HTML file that Firefox can't open. And that's just for the largest file.
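The arithmetic behind that estimate, using the figures above (120 chars per name is an assumed average; the result lands on ~3.14 in decimal megabytes):

```python
# Rough size estimate for the embedded JS dictionary.
n_contexts = 26184     # contexts in our .coverage database (from the issue)
avg_name_len = 120     # assumed average pytest context name length, in bytes

block_bytes = n_contexts * avg_name_len   # 3,142,080 bytes
block_mb = block_bytes / 1_000_000        # ~3.14 MB, paid once per page
```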
Smaller files would shrink too. One 3.5MiB file that I looked at has a single context repeated 45 times; compressing it with tar.gz reduced it to 292K.
A simple alternative is to add an option that renders only the first 10 contexts in the HTML. It feels a bit bad to generate a lossy report, though.