• Paella in NYC

    These days I'm starting to cook by myself (i.e. learning), and yesterday I made paella for 6 people while staying in NYC (leaving on Sunday...). This is the third time in two weeks that I have cooked this Spanish dish, and I think the results were pretty good despite the lack of ingredients. After all, cooking is not as hard as I originally thought! And it's pretty fun too! Just blogging this because the results look nice. P.S. I'm now eating the leftovers from yesterday. Yummm! :-) [Continue reading]

  • Mailing lists for commit notifications

    The project I'm currently working on at university uses Subversion as its version control system. Unfortunately, the project itself has no mailing list to receive notifications on every commit, and the managers refuse to set one up. They do not see the value of such a list, and they are probably scared of it because they assume that everyone would have to be subscribed to it.

    Having worked on projects that do have a commit notification mailing list, I strongly advise setting one up whenever you have more than one developer working on a project[1]. Bonus points if every commit message comes with a bundled copy of the change's diff (in unified form!). This list must be independent from the regular development mailing list, and it must be opt-in: never subscribe anyone by default; let people subscribe themselves if they want to! Not everyone will need to receive this information, but it comes in very handy... and it's extremely valuable for the project managers themselves!

    Why is this useful? When you are subscribed to the commit notification mailing list, it is extremely easy to know what is going on in the project[2]. It is also really easy to review code submissions as soon as they are made which, with proper reviews by other developers, trains the authors and improves their skills. And if the revision diff is inlined, it is trivial to pinpoint mistakes in it (be they style errors, subtle bugs, or serious design problems) by replying to the email.

    So, to my current project managers: if you are reading this, here is a wish-list item. And, for everyone else, if you need to set up a new project, consider creating this mailing list as soon as possible. Maybe few developers will subscribe to it, but those that do will pay attention and will provide very valuable feedback in the form of replies.

    1: Shame on me for not having such a mailing list for ATF. I haven't investigated how to do so with Monotone.
    2: Of course, the developers must be disciplined enough to commit early and often, and to provide well-formed changesets: i.e. self-contained and with descriptive logs. [Continue reading]

  • DEBUG.EXE dropped in Windows 7

    Wow. DEBUG.EXE is finally being phased out in Windows 7. I can't believe it was still there.

    This brings back two different memories. I had used this program in the past (a long while ago!) and it caused me both pain and joy.

    Regarding pain: I had an MS-DOS 5.x book that spent a whole section on DEBUG.EXE, and one of the examples in it contained a command that wrote the program in memory to some specific sectors of the floppy disk. Guess what I tried? I executed that same command but told it to use my hard disk instead of the floppy drive. Result: a corrupted file system. I had to run scandisk (remember it?), which marked some sectors as faulty, and I thought I had ruined my precious 125MB WD Caviar hard disk. It wasn't until much, much, much later that I learned that such permanent damage was not possible, and that reformatting the disk with a tool that kept no memory of "bad" sectors (e.g. mkfs on Linux) could revert the disk to a clean state. (Actually, I kept that hard disk until very recently.)

    Regarding joy: on a boring weekend away from home, I used DEBUG.EXE on an old portable machine without an internet connection to hack a version of PacMan. I disassembled the code until I found where it kept track of the player's lives and tweaked the counter to be infinite (or extra large; I can't remember). That was fun. I got to levels that my father (who used to be an avid player) and I had never seen before!

    It's a pity this tool is going, but it must go. It is way too outdated compared to current debuggers. I wonder if anyone still uses it.

    Edit (Apr 1st, 2011): This is not a support forum for Windows issues. I've had to disable commenting on this particular article because it was receiving lots of traffic and I don't want to moderate posts any more. [Continue reading]

  • Using C++ templates to optimize code

    As part of the project I'm currently involved in at university, I started (re)writing a Pin tool to gather run-time traces of applications parallelized with OpenMP. This tool has to support two modes: one to generate a single trace for the whole application and one to generate one trace per parallel region of the application.

    In the initial versions of my rewrite, I followed the idea of the previous version of the tool: have a -split flag in the frontend that enables or disables the behavior described above. This flag was backed by an abstract class, Tracer, and two implementations: PlainTracer and SplittedTracer. The thread-initialization callback of the tool then allocated one of these objects for every new thread, and the per-instruction injected code used a pointer to the interface to call the appropriate specialized instrumentation routine. It pretty much looked like this:

        void
        thread_start_callback(int tid, ...)
        {
            if (splitting)
                tracers[tid] = new SplittedTracer();
            else
                tracers[tid] = new PlainTracer();
        }

        void
        per_instruction_callback(...)
        {
            Tracer* t = tracers[PIN_ThreadId()];
            t->instruction_callback(...);
        }

    I knew from the very beginning that such an implementation was going to be inefficient due to the pointer dereference at each instruction and the vtable lookup for the correct virtual method implementation. However, it was a very quick way to move forward because I could reuse some small parts of the old implementation.

    There were two ways to optimize this: the first involved writing different versions of per_instruction_callback, one for plain tracing and the other for split tracing, and then deciding which one to insert depending on the flag. The other was to use template metaprogramming.

    As you can imagine, this being C++, I opted for template metaprogramming to heavily abstract the code in the Pin tool. Now I have an abstract core parametrized on the Tracer type. When instantiating it, I provide the correct Tracer class and the compiler does all the magic for me. With this design there is no need for a parent Tracer class (though I'd welcome having C++0x concepts available), and the callbacks can easily be inlined because there is no run-time vtable lookup. It looks something like this:

        template< class Tracer >
        class BasicTool {
            Tracer* tracers[MAX_THREADS];

            virtual Tracer* allocate_tracer(void) const = 0;

        public:
            Tracer* get_tracer(int tid) { return tracers[tid]; }
        };

        class PlainTool : public BasicTool< PlainTracer > {
            PlainTracer* allocate_tracer(void) const
            {
                return new PlainTracer();
            }

        public:
            ...
        } the_plain_tool;

        // This is tool-specific; not templated.
        void
        per_instruction_callback(...)
        {
            the_plain_tool.get_tracer(PIN_ThreadId())->instruction_callback(...);
        }

    This design also forces me to have two different Pin tools: one for plain tracing and another one for split tracing. Of course, I chose it to be this way because I'm not a fan of run-time options (the -split flag). Having two separate tools with well-defined, non-optional features makes testing much, much easier and... follows the Unix philosophy of having each tool do exactly one thing, but doing it right!

    Result: around a 15% speedup. And C++ was supposed to be slow? ;-) You just need to know what the language provides and choose wisely. (Read: my initial, naive prototype had a run-time of 10 minutes to trace part of a small benchmark; after several rounds of optimizations, it's down to 1 minute and 50 seconds to trace the whole benchmark!)

    Disclaimer: The code above is an oversimplification of what the tool contains. It is completely fictitious and omits many details. I will admit, though, that the real code is too complex at the moment; I'm looking for ways to simplify it. [Continue reading]

  • Numeric limits in C++

    By pure chance, while trying to understand a build error in some C++ code I'm working on, I came across the correct C++ way of checking for numeric limits. Here is how.

    In C, when you need to check the limits of native numeric types, such as int or unsigned long, you include the limits.h header file and then use the INT_MIN/INT_MAX and ULONG_MAX macros, respectively. In the C++ world, there is a corresponding climits header file that provides the definitions of these macros, so I always thought this was the way to go.

    However, it turns out that the C++ standard defines a limits header file too, which provides the numeric_limits<T> template. This template class is specialized in T for every numeric type and provides a set of static methods to query properties of the corresponding type. The simplest ones are min() and max(), which are what we need to replace the old-style *_MIN and *_MAX macros.

    As an example, this C code:

        #include <limits.h>
        #include <stdio.h>
        #include <stdlib.h>

        int
        main(void)
        {
            printf("Integer range: %d to %d\n", INT_MIN, INT_MAX);
            return EXIT_SUCCESS;
        }

    becomes the following in C++:

        #include <cstdlib>
        #include <iostream>
        #include <limits>

        int
        main(void)
        {
            std::cout << "Integer range: "
                      << std::numeric_limits< int >::min() << " to "
                      << std::numeric_limits< int >::max() << "\n";
            return EXIT_SUCCESS;
        }

    Check out the documentation for more details on additional methods! [Continue reading]

  • The NetBSD Blog

    The NetBSD Project recently launched an official blog. From here, I'd like to invite you to visit it and subscribe to it. It's only with your support (through reading and, especially, commenting) that developers will post more entries! Enjoy :-) [Continue reading]