“Your code is slow” is something that is easily said, but it can take a lot of trial and error and testing to find out which part of the code is slow, and how slow is slow? Once the bottleneck of the code is found, does it scale well with an input that is 100 times or 1000 times larger, with results averaged across 10 iterations?
This is where pytest-benchmark comes in handy. Complementing the idea of unit testing, which is to test a single unit or small part of the codebase, we can expand on this and easily measure code performance with pytest-benchmark.
This article will touch on how to set up, run, and interpret the benchmark timing results of pytest-benchmark. To properly implement benchmarking in a project, the advanced sections also cover how to compare benchmark timing results across runs and reject commits if they fail certain thresholds, and how to store and view historical benchmark timing results in a histogram!
This can simply be done with `pip install pytest-benchmark` in the Terminal.
To enable more features, such as visualizing the benchmark results, we can run `pip install 'pytest-benchmark[histogram]'` to install the additional packages required.
Writing a benchmark test is just like writing a normal pytest test, with the added `benchmark` fixture.
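As a quick preview, the test function takes `benchmark` as an argument and passes the function under test (plus its arguments) to it; pytest-benchmark calls the function repeatedly and records the timing statistics. Below is a minimal sketch, where `time_consuming_function` is a hypothetical stand-in for your own code:

```python
import time


def time_consuming_function(n):
    # Hypothetical function under test; the sleep simulates slow work
    time.sleep(0.01)
    return sum(range(n))


def test_time_consuming_function(benchmark):
    # benchmark(...) runs the function many times and collects timing stats
    result = benchmark(time_consuming_function, 1000)

    # The return value of the benchmarked function is passed through,
    # so the benchmark can double as an ordinary unit test
    assert result == sum(range(1000))
```

Running `pytest` on this file will execute the test as usual and additionally print a timing summary table (min, max, mean, etc.) for each benchmarked function.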