• Comments for old posts now moderated

    After waking up today and finding 80+ spam comments scattered across old posts in this blog, I have decided that all new comments on posts older than 14 days will be moderated. It took me half an hour to clean them all up. Thank you, spammers. [Continue reading]

  • What are unnamed namespaces for in C++?

    In the past, I had come across some C++ code that used unnamed namespaces everywhere, as in the following snippet, and I didn't really know what it meant: namespace { class something { ... }; } // namespace. Until now.

    Not using unnamed namespaces in my own code bit me with name clash errors. How? Take ATF. Some of its files declare classes in .cpp files (not headers). I copy/pasted some ATF code into another project and linked the libraries produced by each project together. Boom! Link error because of duplicate symbols. And the linker is quite right to complain!

    For some reason, I had always assumed that classes declared in .cpp files would be private to the module. But if you think about it for just a moment, this cannot ever be the case: how could the compiler tell the difference between a class definition in a header file and a class definition in a source file? The compiler sees preprocessed sources, not what the programmer wrote, so all class definitions look the same!

    So how do you resolve this problem? Can you have a static class, much like you can have a static variable or function? No, you cannot. Then how do you make implementation-specific classes private to a module? Put them in an unnamed namespace, as the code above shows, and you are all set. Every translation unit has its own unnamed namespace, and everything you put in it will not conflict with any other translation unit. [Continue reading]
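    A minimal sketch of the idiom described above: the helper class lives in an unnamed namespace, so it gets internal linkage and an identically named class in another translation unit cannot collide with it at link time. The names here (helper, twice) are illustrative, not from ATF:

    ```cpp
    // impl.cpp -- a hypothetical implementation file.
    #include <iostream>

    namespace {  // unnamed namespace: everything inside is private to this TU

    class helper {
    public:
        int twice(const int x) const { return x * 2; }
    };

    }  // namespace

    int main() {
        // Another .cpp file in the program could define its own class named
        // 'helper' inside its own unnamed namespace without any conflict.
        helper h;
        std::cout << h.twice(21) << "\n";
        return 0;
    }
    ```

    Had helper been declared at global scope in two different .cpp files of the same program, the result would be the duplicate-symbol clash (or outright undefined behavior) described above.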

  • Making ATF 'compiler-aware'

    For a long time, ATF has shipped with build-time tests for its own header files to ensure that these files are self-contained and can be included from other sources without having to manually pull in obscure dependencies. However, the way I wrote these tests was a hack from the first day: I use automake to generate a temporary library that builds small source files, each one including one of the public header files. This approach works but has two drawbacks. First, if you do not have the source tree, you cannot reproduce these tests -- and one of ATF's major features is the ability to install tests and reproduce them even if you install from binaries, remember? And second, it's not reusable: I now find myself needing to do this exact same thing in another project... what if I could just use ATF for it?

    Even if the above were not an issue, build-time checks are a nice thing to have in virtually every project that installs libraries. You need to make sure that the installed library is linkable from new source code and, currently, there is no easy way to do this. As a matter of fact, the NetBSD tree has such tests, and they haven't been migrated to ATF for a reason.

    I'm trying to implement this in ATF at the moment. However, running the compiler in a transparent way is a tricky thing. Which compiler do you execute? Which flags do you need to pass? How do you provide a portable-enough interface for the callers?

    The approach I have in mind involves caching the compiler and flags used to build ATF itself and using those as defaults anywhere ATF needs to run the compiler. Then, I can make ATF provide some helper check functions that call the compiler for specific purposes and hide all the required logic inside them. That should work, I expect. Any better ideas? [Continue reading]
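    To make the idea concrete, here is a rough sketch, not ATF's actual API, of what such a helper check function might do: write a tiny source file that includes only the header under test, then invoke the cached compiler on it. The compiler name and flags passed in stand for whatever was recorded when ATF itself was built:

    ```cpp
    // Sketch of a header self-containment check. All names are hypothetical;
    // assumes a "c++" compiler driver is available on the PATH.
    #include <cstdlib>
    #include <fstream>
    #include <iostream>
    #include <string>

    // Compiles a one-line source file that includes only the given header.
    // Returns true iff the compilation succeeds, i.e. the header pulled in
    // everything it needs by itself.
    bool header_is_self_contained(const std::string& header,
                                  const std::string& cxx,
                                  const std::string& cxxflags)
    {
        const std::string src = "check_header.cpp";
        {
            std::ofstream out(src.c_str());
            out << "#include \"" << header << "\"\n";
        }
        // -c: compile only; we just care that the header parses on its own.
        const std::string cmd = cxx + " " + cxxflags + " -c " + src +
                                " -o check_header.o";
        return std::system(cmd.c_str()) == 0;
    }

    int main() {
        // For illustration, create a trivial self-contained header and test it.
        {
            std::ofstream out("demo.hpp");
            out << "inline int answer(void) { return 42; }\n";
        }
        if (header_is_self_contained("demo.hpp", "c++", ""))
            std::cout << "demo.hpp is self-contained\n";
        else
            std::cout << "demo.hpp is not self-contained\n";
        return 0;
    }
    ```

    The real implementation would of course need temporary directories, proper quoting, and cross-compilation awareness, which is exactly why hiding this behind helper functions is attractive.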

  • Debug messages without using the C++ preprocessor

    If you are a frequent C/C++ programmer, you know how annoying code plagued with preprocessor conditionals can be: they frequently hide build problems caused by, for example, trivial syntax errors or unused/undefined variables.

    I was recently given some C++ code to rewrite^Wclean up, and one of the things I did not like was a macro called DPRINT along with its use of fprintf. Why? First, because this is C++, so you should be using iostreams. Second, because by using iostreams you do not have to think about the correct printf formatter for every type you need to print. And third, because it obviously relied on the preprocessor and, of course, debug builds were already broken.

    I wanted to come up with an approach to printing debug messages that involved the preprocessor as little as possible. This application (a simulator) needs to be extremely efficient in non-debug builds, so leaving calls to printf all around that internally translated to noops at runtime wasn't a nice option because some serious overhead would still be left. So, if you don't use the preprocessor, how can you achieve this? Simple: current compilers have very good optimizers, so you can rely on them to do the right thing for release builds.

    The approach I use is as follows: I define a custom debug_stream class that contains a reference to a std::ostream object. Then, I provide a custom operator<< that delegates the insertion to the output stream. Here is the only place where the preprocessor is involved: a small conditional omits the delegation in release builds: template< typename T > inline debug_stream& operator<<(debug_stream& d, const T& t) { #if !defined(NDEBUG) d.get() << t; #endif // !defined(NDEBUG) return d; }

    There is also a global instance of a debug_stream called debug. With this in mind, I can later print debugging messages anywhere in the code as follows: debug << "some message" << std::endl;

    So how does this not introduce any overhead in release builds? In release builds, operator<< is effectively a noop. It does nothing. As long as the compiler can determine this, it will strip out the calls to the insertion operator.

    But there is an important caveat. This approach requires you to be extremely careful about what you insert into the stream. Any object you construct as part of the insertion, or any function you call, may have side effects. Therefore, the compiler must generate the call to that code anyway, because it cannot predict what its effects will be. How do you avoid that? There are two approaches. The first involves defining everything involved in the debug call as inline or static; the trick is to make the compiler see all the code involved and thus be able to strip it out after seeing it has no side effects. The second is simply to avoid such object constructions and function calls completely. Debug-specific code should not have side effects, or otherwise you risk having different application behavior in debug and release builds! Not nice at all.

    A last note: the above is just a proof of concept. The code we have right now is more complex than what I showed above, as it supports debug classes, the selection at runtime of which classes to print, and prefixing every line with the class name. All of this requires quite some inline magic to get right, but it seems to be working just fine now :-)

    So, the conclusion: in most situations, you do not need to use the preprocessor. Find a way around it and your developers will be happier. Really. [Continue reading]
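    A self-contained sketch of the technique described above; the debug_stream and debug names follow the post, while the rest (the std::cerr backing stream, the sample message) is assumed for illustration:

    ```cpp
    // Compile without -DNDEBUG to see the debug output on stderr;
    // compile with -DNDEBUG and the insertions become noops the
    // optimizer can strip.
    #include <iostream>
    #include <ostream>

    class debug_stream {
        std::ostream& _os;  // the real stream we delegate to
    public:
        explicit debug_stream(std::ostream& os) : _os(os) {}
        std::ostream& get() { return _os; }
    };

    // The only place the preprocessor appears: in release builds the body
    // does nothing, so the whole insertion expression can be optimized away
    // (provided the inserted expressions have no side effects).
    template< typename T >
    inline debug_stream& operator<<(debug_stream& d, const T& t)
    {
    #if !defined(NDEBUG)
        d.get() << t;
    #endif  // !defined(NDEBUG)
        return d;
    }

    // Global instance, as in the post.
    static debug_stream debug(std::cerr);

    int main() {
        debug << "simulator starting, value=" << 42 << "\n";  // stderr, debug only
        std::cout << "done\n";  // regular output, always printed
        return 0;
    }
    ```

    Note that the sample insertion only involves literals, which is exactly the side-effect-free style the caveat above calls for.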