I hacked asv to log the total execution time, including setup, for each benchmark. Some of these are parametrized over several cases, so they may not actually be slow. Time is in seconds.
| | time | file | klass | method |
| --- | --- | --- | --- | --- |
68 | 00:03:11.379267 | frame_ctor | FrameConstructorDTIndexFromOffsets | time_frame_ctor |
394 | 00:01:01.671905 | inference | to_numeric_downcast | time_downcast |
199 | 00:00:46.330012 | groupby | GroupBySuite | time_describe |
559 | 00:00:24.698904 | replace | replace_large_dict | time_replace_large_dict |
204 | 00:00:24.212386 | groupby | GroupBySuite | time_mad |
210 | 00:00:22.481497 | groupby | GroupBySuite | time_pct_change |
215 | 00:00:18.909368 | groupby | GroupBySuite | time_skew |
200 | 00:00:18.732072 | groupby | GroupBySuite | time_diff |
212 | 00:00:18.317290 | groupby | GroupBySuite | time_rank |
219 | 00:00:16.845357 | groupby | GroupBySuite | time_unique |
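
I'm not including the asv patch itself, but the idea is roughly the following standalone sketch: time `setup()` together with the timed method, rather than only the section asv normally reports. `ExampleSuite` and `total_times` are made-up names for illustration, not part of asv or the pandas benchmark suite.

```python
import csv
import inspect
import time


class ExampleSuite:
    """Hypothetical benchmark class in the asv style."""

    def setup(self):
        # Expensive data construction that asv excludes from the reported time.
        self.data = list(range(10**6))

    def time_sum(self):
        sum(self.data)


def total_times(suite_cls):
    """Yield (method_name, wall_seconds) including setup for each time_* method."""
    for name, _ in inspect.getmembers(suite_cls, inspect.isfunction):
        if not name.startswith("time_"):
            continue
        suite = suite_cls()
        start = time.perf_counter()
        if hasattr(suite, "setup"):
            suite.setup()
        getattr(suite, name)()
        yield name, time.perf_counter() - start


if __name__ == "__main__":
    with open("total_times.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["method", "seconds"])
        for name, seconds in total_times(ExampleSuite):
            writer.writerow([name, f"{seconds:.6f}"])
```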
Ideally we could also optimize the setup time on these (one option is sketched below). We could modify the benchmarks themselves to do less work and run faster, but I'd like to avoid that if possible.
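
For the setup-heavy suites, one option (where it fits) is asv's `setup_cache`, which builds the input once per benchmark instead of before every repeat; if I remember the asv semantics right, the return value is pickled and passed as the first argument to the timed methods. A rough sketch, with a made-up class name and sizes, not the actual GroupBySuite code:

```python
import numpy as np
import pandas as pd


class GroupByCached:
    # Illustrative only; not the real pandas benchmark code.

    def setup_cache(self):
        # Built once per benchmark and reused across repeats.
        n = 1_000_000
        return pd.DataFrame({
            "key": np.random.randint(0, 100, size=n),
            "value": np.random.randn(n),
        })

    def time_describe(self, df):
        df.groupby("key").describe()

    def time_rank(self, df):
        df.groupby("key").rank()
```

This only helps where the repeats don't mutate the cached data, so it wouldn't necessarily apply to every case in the table above.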
Link to the full CSV: https://gist.github.com/9d80aa45750224d7453863f2f754160d