If you are a frequent C/C++ programmer, you know how annoying code plagued with preprocessor conditionals can be: it often hides build problems caused by, for example, trivial syntax errors or unused/undefined variables.

I was recently given some C++ code to rewrite^Wclean up, and one of the things I did not like was a macro called DPRINT along with its use of fprintf. Why? First, because this is C++, so you should be using iostreams. Second, because by using iostreams you do not have to think about the correct printf formatter for every type you need to print. And third, because it obviously relied on the preprocessor and, sure enough, debug builds were already broken.

I wanted to come up with an approach to print debug messages that involved the preprocessor as little as possible. This application (a simulator) needs to be extremely efficient in non-debug builds, so leaving calls to printf all around that internally turned into no-ops at runtime was not a nice option: serious overhead would still remain. So, if you don't use the preprocessor, how can you achieve this? Simple: current compilers have very good optimizers, so you can rely on them to do the right thing for release builds.

The approach I use is as follows: I define a custom debug_stream class that contains a reference to a std::ostream object. Then, I provide a custom operator<< that delegates the insertion to the output stream. Here is the only place where the preprocessor is involved: a small conditional is used to omit the delegation in release builds:
template< typename T >
debug_stream&
operator<<(debug_stream& d, const T& t)
{
#if !defined(NDEBUG)
    d.get() << t;
#endif // !defined(NDEBUG)
    return d;
}
There is also a global instance of a debug_stream called debug. With this in mind, I can later print debugging messages anywhere in the code as follows:
debug << "This is a debug message!\n";
So how does this not introduce any overhead in release builds?

In release builds, operator<< is effectively a no-op. It does nothing. As long as the compiler can determine this, it will strip out the calls to the insertion operator.

But there is an important caveat. This approach requires you to be extremely careful about what you insert into the stream. Any object you construct as part of the insertion, or any function you call, may have side effects. If it does, the compiler must generate the call anyway because it cannot predict what those effects will be. How do you avoid that? There are two approaches. The first is to define everything involved in the debug call as inline or static; the trick is to let the compiler see all the code involved so that it can prove there are no side effects and strip it out. The second is simply to avoid such object constructions and function calls altogether. Debug-specific code should not have side effects anyway; otherwise you risk different application behavior in debug and release builds! Not nice at all.

A last note: the above is just a proof of concept. The code we have right now is more complex than what I showed above: it supports debug classes, lets you select at runtime which classes to print, and prefixes every line with the class name. All of this requires some inline magic to get right, but it seems to be working just fine now :-)

So, the conclusion: in most situations, you do not need to use the preprocessor. Find a way around it and your developers will be happier. Really.


Comments from the original Blogger-hosted post: