
Mr. Bunce is asking for examples of how NYTProf told you what to fix in poorly performing code.

Instead, I’m going to give an example of how NYTProf told me why I shouldn’t fix something a particular way.

I have an app that needs to analyze a bunch of logs. NYTProf had previously shown me that creating DateTime objects was my biggest cost, so I was looking for ways to improve the situation. One idea was to construct each DateTime from an epoch time rather than from individual date components (year, month, etc.), on the theory that deriving the epoch time from the components was the expensive part (I can affect the format of some of the logs, so I could have the epoch time emitted directly). I quickly threw together a little Benchmark program comparing the cost of constructing DateTimes from each kind of data, and was shocked to discover that creating objects from epoch times was over 20% more expensive than creating them from date components.
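The comparison can be sketched with Benchmark's cmpthese(). Since DateTime is a CPAN module, this self-contained version uses core stand-ins (Time::Local's timegm for the components-to-epoch direction, gmtime for the reverse); to reproduce the actual measurement, replace the sub bodies with DateTime->new(year => ..., month => ..., ...) and DateTime->from_epoch(epoch => ...).

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);
use Time::Local qw(timegm);

# 2000-01-01T00:00:00Z as an epoch time, derived once up front.
my $epoch = timegm(0, 0, 0, 1, 0, 2000);

# Compare constructing a time value from individual date components
# against constructing it from an epoch time, 100,000 iterations each.
cmpthese(100_000, {
    from_components => sub { my $e = timegm(0, 0, 0, 1, 0, 2000) },
    from_epoch      => sub { my @t = gmtime $epoch },
});
```

cmpthese() prints a table of rates with pairwise percentage differences, which is where a figure like "over 20% more expensive" comes from.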

What was going on? NYTProf provided the answer. I wrote two simple programs that each created 100,000 DateTime objects, one for each kind of constructor arguments, profiled both, and compared the NYTProf output. That led me to the from_epoch() code in the NYTProf-generated HTML, where I found that DateTime passes the epoch time to gmtime to get the date components and then calls the regular constructor with those. The extra cost isn't really in the gmtime calls; a good part of it comes from the use of Params::Validate to check arguments in every public method, and the epoch path takes the code through one additional public method.
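The shape of the code path NYTProf exposed looks roughly like this. This is a simplified toy, not DateTime's actual source: the point is that from_epoch() is a second public entry point that validates its own arguments, converts via gmtime, and then delegates to new(), which validates again, so the epoch route pays for argument checking twice.

```perl
use strict;
use warnings;

package Toy::DateTime;

# Stand-in for the per-call Params::Validate overhead in each
# public method of the real module.
sub _validate {
    my %args = @_;
    defined $args{$_} or die "missing required argument: $_"
        for qw(year month day);
    return %args;
}

sub new {
    my ($class, %args) = @_;
    %args = _validate(%args);                # validation pass #1
    return bless {%args}, $class;
}

sub from_epoch {
    my ($class, %args) = @_;
    die "missing epoch" unless defined $args{epoch};   # validation pass #2
    my @t = gmtime $args{epoch};             # epoch -> date components
    return $class->new(                      # delegates to new()
        year  => $t[5] + 1900,
        month => $t[4] + 1,
        day   => $t[3],
    );
}

package main;

my $dt = Toy::DateTime->from_epoch(epoch => 946684800);
print "$dt->{year}-$dt->{month}-$dt->{day}\n";   # 2000-1-1
```

Profiling a script like this under NYTProf attributes the time spent in each validation pass to its enclosing public method, which is how the double cost shows up in the HTML report.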

Benchmark told me that I shouldn’t make the change, and NYTProf let me understand why that result was correct rather than an artifact of how I was using Benchmark.