Exceptions and Performance

Note: This page has now been superseded by a newer version which is significantly different. As the new version is responding to some flaws pointed out elsewhere, it seemed unfair to just change the contents of this page and make the comments look foolish. The newer page is probably a more balanced one - this page should be considered obsolete now.

Almost every time exceptions are mentioned in mailing lists and newsgroups, people say they're really expensive, and should be avoided in almost all situations. As an idea of just how expensive some people think exceptions are, in one newsgroup post someone asked whether the fact that his web application was throwing about 200 exceptions an hour was likely to be harming its performance. Various people replied saying that it would indeed be causing a problem. Let's examine that claim, shall we?

The True Cost of Exceptions

Here's a short program which just throws exceptions and catches them, just to see how fast it can do it:

using System;

public class Test
{
    const int Iterations = 5000000;
    
    static void Main()
    {
        DateTime start = DateTime.UtcNow;
        for (int i=0; i < Iterations; i++)
        {
            try
            {
                throw new ApplicationException();
            }
            catch (ApplicationException)
            {
            }
        }
        DateTime end = DateTime.UtcNow;
        long millis = (long) (end-start).TotalMilliseconds;
        
        Console.WriteLine ("Total time taken: {0}", end-start);
        Console.WriteLine ("Exceptions per millisecond: {0}", Iterations/millis);
    }
}

Now, the above isn't geared towards absolute accuracy - it uses DateTime.UtcNow to measure time, just for convenience - but if you give it enough iterations to make the test run for a fair time (half a minute or so), any inaccuracies due to a low resolution timer and the JIT compiler are likely to get lost in the noise. The main thing is to see roughly how expensive exceptions are. Here are the results on my laptop, using .NET 1.1, running outside the debugger (see later for why that matters):

Total time taken: 00:00:42.0312500
Exceptions per millisecond: 118

Update (2006): .NET 2.0 makes exceptions significantly slower. It depends on the exact hardware involved, but on my laptop it can "only" throw about 21 exceptions per millisecond. The rest of the figures in the article still refer to 1.1.
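
As an aside, if you rerun this kind of test on .NET 2.0 and want something a bit more precise than DateTime, System.Diagnostics.Stopwatch is convenient. Here's a minimal sketch of the same timing loop using it - it's not the program the figures above came from, just the obvious adaptation:

using System;
using System.Diagnostics;

public class StopwatchTest
{
    const int Iterations = 5000000;
    
    static void Main()
    {
        // Stopwatch uses a high-resolution timer where one is available
        Stopwatch stopwatch = Stopwatch.StartNew();
        for (int i=0; i < Iterations; i++)
        {
            try
            {
                throw new ApplicationException();
            }
            catch (ApplicationException)
            {
            }
        }
        stopwatch.Stop();
        
        Console.WriteLine ("Total time taken: {0}", stopwatch.Elapsed);
        Console.WriteLine ("Exceptions per millisecond: {0}",
                           Iterations/stopwatch.ElapsedMilliseconds);
    }
}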

Now, the original test doesn't involve any significant depth of stack, and indeed if you change it to recurse until it reaches a certain stack depth, it does become significantly slower - recursing to a depth of 20 takes the results down to about 42 exceptions per millisecond. Also, running with .NET 2.0 beta 2 gives fairly different results - even the shallow test only manages to throw about 40 exceptions per millisecond. However, those differences only amount to a factor of three or so - not enough to change the conclusion that the overall performance of exceptions is clearly pretty good.
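
For reference, the deeper-stack variant is only a small change to the program above. This is a sketch of the idea rather than the exact code I used - ThrowAtDepth is just an illustrative helper, and the depth is arbitrary:

using System;

public class DeepStackTest
{
    const int Iterations = 5000000;
    const int Depth = 20;
    
    static void Main()
    {
        DateTime start = DateTime.UtcNow;
        for (int i=0; i < Iterations; i++)
        {
            try
            {
                ThrowAtDepth(Depth);
            }
            catch (ApplicationException)
            {
            }
        }
        DateTime end = DateTime.UtcNow;
        long millis = (long) (end-start).TotalMilliseconds;
        
        Console.WriteLine ("Total time taken: {0}", end-start);
        Console.WriteLine ("Exceptions per millisecond: {0}", Iterations/millis);
    }
    
    // Recurses until the requested depth is reached, then throws
    static void ThrowAtDepth(int depth)
    {
        if (depth <= 0)
        {
            throw new ApplicationException();
        }
        ThrowAtDepth(depth-1);
    }
}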

Let's look back at the example from the newsgroups - 200 exceptions being thrown in an hour. Even assuming a server which was 10 times slower than my laptop (which seems unlikely) and assuming a fairly deep stack, those 200 exceptions would still only take about 50ms. That's less than 0.002% of the hour. In other words, those exceptions weren't significant at all when it came to performance.
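
Spelling the arithmetic out (these are just the numbers from the paragraph above, not new measurements - the "ten times slower" server is purely hypothetical):

using System;

public class BackOfEnvelope
{
    static void Main()
    {
        // ~42 exceptions/ms with a deep stack, divided by 10 for a slower server
        double exceptionsPerMillisecond = 42.0 / 10;
        // Time spent throwing 200 exceptions, in milliseconds
        double millisSpentThrowing = 200 / exceptionsPerMillisecond;
        // As a percentage of the hour (3,600,000ms)
        double percentageOfHour = millisSpentThrowing / (60 * 60 * 1000) * 100;
        
        Console.WriteLine ("Time spent throwing: {0}ms", millisSpentThrowing);   // ~48ms
        Console.WriteLine ("Percentage of the hour: {0}", percentageOfHour);     // ~0.0013%
    }
}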

Performance In The Debugger Is Unimportant (But Poor)

I received an email in response to this article claiming that throwing 5000 exceptions to a stack depth of 1000 was taking nearly ten minutes. Now, the correspondent acknowledged that a stack depth of 1000 was very rare, but even so the figure seemed very high to me. He had kindly sent me the short but complete program he'd used to get his figures, along with the results he'd seen. (This is very much the kind of thing I look for when someone disagrees with me - the cause of the disagreement and the means of reproducing it. It's a far more productive basis for discussion than just words.) So of course, I ran his program.

The same program that took over nine minutes on his box took about 47 milliseconds to run on mine. Now, while I'm very happy with my recent laptop purchase, I find it hard to believe it's over 10,000 times faster than my correspondent's box. Looking for explanations, I ran the same program having recompiled it under .NET 2.0 beta 2, and it took 250 milliseconds. That's significantly slower, but not nearly slow enough to explain the discrepancy.

I then ran it under the Visual Studio 2003 debugger. It took nearly six minutes. Now, that's much closer - close enough to easily be explicable by hardware differences. So, it's reasonable to assume that my correspondent was running his code under a debugger. Why is that important? Because the performance when running under a debugger is unimportant.

You should usually only run code under a debugger when you're debugging - hence the name! When you're debugging, performance shouldn't usually be important - the only case I can immediately think of where it is important is where you have an error which only crops up once in a million runs, and you need to run some code a million times before you can diagnose the problem. Well, if you're in that situation and each of those million runs requires an exception to be thrown from a very deep stack, you may have a problem unless you're willing to leave a machine running the test overnight. Personally, I can't remember the last time I was in that situation.

I am utterly convinced that the cost of exceptions in the debugger is the cause of a lot of the myth about the general cost of exceptions. People assume that a similar cost is present when running outside the debugger when it just isn't. (It doesn't matter much whether you run a debug version of the code or not - it's whether a debugger is attached or not that matters.) This is made worse by the way that the first exception thrown under a debugger can take a few seconds (although I still haven't worked out why this happens - it doesn't seem to involve hard disk or CPU activity). I've seen developers claim that exceptions each take a second or two to be thrown - something that would render them much, much less useful.

Note that it's not only exceptions which fall foul of this. The debugger disables lots of optimisations the JIT can normally make, including inlining (in at least some places). If you take too much notice of performance in the debugger without testing outside the debugger, you could easily end up making optimisations which only make a significant difference in the debugger, but which hurt your design and readability.
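
Incidentally, it's cheap to make a benchmark refuse to report figures when a debugger is attached, which guards against exactly this mistake. A minimal sketch, using System.Diagnostics.Debugger - the class name here is just illustrative, and the actual benchmark is assumed rather than shown:

using System;
using System.Diagnostics;

public class BenchmarkGuard
{
    static void Main()
    {
        // Timings taken with a debugger attached won't reflect normal
        // performance, so don't produce any at all in that case
        if (Debugger.IsAttached)
        {
            Console.WriteLine ("Debugger attached - any timings would be misleading");
            return;
        }
        
        // ... run the exception-throwing test from earlier here ...
    }
}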

So Should I Throw Millions Of Exceptions?

My rebuttal of the claims of the cost of exceptions has sometimes been misinterpreted as a belief that exceptions should be used anywhere and everywhere, and that you shouldn't be worried if your application throws millions of them. This is not the case - I merely think that in almost every situation where the performance of exceptions would be significant, there are bigger design problems to worry about. While it really wouldn't matter in performance terms if every so often you threw several thousand exceptions in a second, it would be suggestive of a situation where either exceptions shouldn't be used, or the first exception should have triggered a back-off of some description.

As an example of the first, if you're trying to parse a huge list of strings which were meant to be integers, knowing in advance that many of them won't be valid integers, it would be wise to use some kind of validation prior to the actual parsing. (For simplicity, the validation would probably just be a very simple filter which weeds out obviously bad data - you'd still want to handle a possible exception in the real parsing.) Alternatively, if a suitable library call were available, you could use a version which doesn't throw exceptions: in .NET 2.0, for example, each of the numeric types has a TryParse method for precisely this kind of situation.
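
As a sketch of the TryParse alternative (the sample data here is made up purely for illustration):

using System;

public class ParsingExample
{
    static void Main()
    {
        string[] candidates = { "10", "oops", "-3", "", "42abc" };
        
        foreach (string candidate in candidates)
        {
            int value;
            // TryParse reports failure through its return value rather than
            // by throwing, so bad data doesn't cost an exception
            if (int.TryParse(candidate, out value))
            {
                Console.WriteLine ("Parsed {0}", value);
            }
            else
            {
                Console.WriteLine ("Skipping invalid entry '{0}'", candidate);
            }
        }
    }
}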

As an example of where back-off (or indeed total failure) should be used, you shouldn't have a loop which constantly tries to access a database, retrying the same operation eternally if something goes wrong. Here, using exceptions is the right way of indicating that something has failed, but you'd want to put some kind of limited retry mechanism in, with the overall operation failing if any part failed more than three times, or something similar. Whether that overall operation then goes into a retry state (after a suitable back-off delay) or is aborted permanently depends on the situation - but changing the interface not to use exceptions to indicate failure wouldn't help anything, and would just make the error handling more complicated.
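
The shape of the limited retry I have in mind looks something like this - a sketch only, where AccessDatabase, the attempt limit and the back-off delay are all placeholders rather than recommendations:

using System;
using System.Threading;

public class RetryExample
{
    const int MaxAttempts = 3;
    
    static void Main()
    {
        ExecuteWithRetry();
    }
    
    static void ExecuteWithRetry()
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                AccessDatabase();   // hypothetical operation which may throw
                return;             // success - we're done
            }
            catch (Exception)
            {
                if (attempt == MaxAttempts)
                {
                    throw;          // give up and let the caller decide what to do
                }
                // Simple back-off before the next attempt
                Thread.Sleep(1000 * attempt);
            }
        }
    }
    
    static void AccessDatabase()
    {
        // Placeholder for the real database call
    }
}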

This article isn't meant to be an in-depth style guide for exceptions - there are many, many articles on that topic, and frankly I'm not sufficiently convinced that I have "the right answer" to try to write another one. The crucial point is that although exceptions are indeed slower than simply returning an error code, unless you are abusing exceptions to start with, you're very, very unlikely to find that they are a performance bottleneck. When designing the contract of a method, it's worth considering whether it's reasonable to call that method in a fairly tight loop, continuing to call it even if the previous call fails. In that case, it may be worth having two methods, one of which uses exceptions to indicate errors and one of which doesn't (like Parse and TryParse). In all other situations, using exceptions is unlikely to cause performance problems, and is likely to make the calling code much easier to understand.
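
To make that Parse/TryParse shape concrete, here's a sketch of such a pair for a hypothetical Duration type - the type and its parsing rules are made up for illustration, the point is just the two contracts:

using System;

public class Duration
{
    readonly int seconds;
    
    public Duration(int seconds)
    {
        this.seconds = seconds;
    }
    
    public int Seconds
    {
        get { return seconds; }
    }
    
    // Throws on bad input - convenient when failure is genuinely exceptional
    public static Duration Parse(string text)
    {
        Duration result;
        if (!TryParse(text, out result))
        {
            throw new FormatException("Invalid duration: " + text);
        }
        return result;
    }
    
    // Reports failure through the return value instead of throwing -
    // convenient when the caller expects bad input as a matter of course
    public static bool TryParse(string text, out Duration result)
    {
        result = null;
        int seconds;
        if (!int.TryParse(text, out seconds))
        {
            return false;
        }
        result = new Duration(seconds);
        return true;
    }
}

The throwing version keeps the common "this really should work" case simple, while the Try version keeps the tight-loop case cheap - which is exactly the trade-off described above.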

