• Kyua: Weekly status report

    The post title should mention ATF instead of Kyua... but I'm keeping the usual one for consistency.

    - Integrated timestamps into the XML and HTML reports generated by atf-report. These should soon show up in the continuous tests of NetBSD.
    - Worked on integrating the use of POSIX timers into atf-run after Christos Zoulas performed these changes in the NetBSD tree. The result is quite awful because I need to keep compatibility with systems that do not provide the "new" interface... [Continue reading]

  • Kyua: Weekly status report

    Unfortunately, not much activity this week due to travel reasons. Anyway, some work went in:

    - Preliminary code to generate HTML reports from kyua report. This is easy peasy but boring. The current code was written as a proof of concept and is awful, which is why it was not committed. I'm now working on cleaning it up.
    - Backported test program and test case timestamping into ATF based on a patch from Paul Goyette. This is a very useful feature to have, and it will have to be added to Kyua later. (It has always been planned, but I have not had the time yet.) [Continue reading]

  • Kyua: Weekly status report

    Some significant improvements this week:

    - Finally submitted the code to store and load full test case definitions. This is quite tricky (and currently very, very ugly) but it works, and it will allow the reports to include all kinds of information from the test cases.
    - Removed the Atffiles from the tree; yay! For a long time, I had been using atf-run to run broken tests because atf-run allowed me to watch the output of the test case being debugged. However, this has been unnecessary since the introduction of the debug command in late August, so I now feel confident that these files can go. (And debug is much more powerful than atf-run because you can target a single test case instead of a whole test program.)
    - Some crazy work attempting to hide the names of SQLite types from the sqlite::statement interface. I've only been able to do so somewhat decently for bind; all my attempts at doing the same with column have resulted in horrible code so far. So no, such changes have not been submitted.
    - As of a few minutes ago, kyua test now records the output of the test cases (stdout and stderr) into the database. This will be invaluable for debugging test cases, particularly when the reports are posted online.
    - Some preliminary work at implementing HTML reports. This, however, has not received much progress because the previous item required completion first.

    I'm quite excited at this point. HTML reports are a few weeks away at most. Once that happens, it will be time to start considering replacing the atf-run / atf-report duo for good, particularly within NetBSD. This will certainly not be easy... but all the work that has gone into Kyua so far has this sole goal! [Continue reading]
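    To illustrate the kind of interface being discussed, here is a minimal sketch of how C++ overloading can hide SQLite's native type codes behind a bind call. This is purely hypothetical code (the class, method names and in-memory storage are invented for the example, and SQLite itself is mocked out); it is not the real sqlite::statement:

    ```cpp
    // Hypothetical sketch: hide SQLite type codes behind C++ overloads.
    // SQLite is mocked with in-memory maps so the example is self-contained.
    #include <cassert>
    #include <map>
    #include <string>

    class statement {
        std::map<std::string, std::string> _texts;
        std::map<std::string, long> _ints;

    public:
        // Callers bind native C++ types; the SQLITE_* constants and the
        // sqlite3_bind_* entry points would remain hidden inside these bodies.
        void bind(const std::string& name, const long value)
        { _ints[name] = value; }
        void bind(const std::string& name, const std::string& value)
        { _texts[name] = value; }

        // For column the reverse mapping is harder: the caller must still
        // pick a typed accessor, because the statement only learns the
        // stored type at runtime.
        long column_int(const std::string& name) const
        { return _ints.at(name); }
        std::string column_text(const std::string& name) const
        { return _texts.at(name); }
    };

    int main() {
        statement stmt;
        stmt.bind(":count", 3L);
        stmt.bind(":name", std::string("kyua"));
        assert(stmt.column_int(":count") == 3);
        assert(stmt.column_text(":name") == "kyua");
        return 0;
    }
    ```

    Note the asymmetry: bind can dispatch on the argument's static type, but column must return a value whose type is only known at runtime, which matches the difficulty described in the post.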

  • Kyua: Weekly status report

    Some more work towards allowing storing and loading of full test case definitions (which in turn will allow us to provide detailed HTML reports). I have local changes that do this, but they are gated by the lack of some additional tests and probably by some missing optimizations, because they slow down kyua report significantly. Regarding commits, I have only submitted some related cleanup changes.

    I got distracted by invalid Doxygen comments and traced down why they were not being correctly validated (which is the whole point of running Doxygen during the build). It turns out I had to enable some extra EXTRACT_* settings in the Doxyfile. After doing so, I realized there are many, many documentation problems in the code... and I have been fixing them since. It's a tough operation, but I'm more than half-way through already. To give you an idea, the current diffstat shows about 650 new comment lines (!). [Continue reading]
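    For context, which entities Doxygen looks at (and therefore validates) is governed by the EXTRACT_* family of options in the Doxyfile. The post does not say exactly which ones were enabled, but a sketch of the kind of settings involved might be:

    ```
    # Sketch of Doxyfile settings that widen what Doxygen validates.
    EXTRACT_PRIVATE       = YES   # also inspect private class members
    EXTRACT_STATIC        = YES   # also inspect file-static functions/variables
    WARN_IF_UNDOCUMENTED  = YES   # complain about anything lacking a comment
    ```

    With EXTRACT_PRIVATE and EXTRACT_STATIC off, undocumented private or static entities are simply skipped, so the build-time Doxygen run silently misses them.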

  • Kyua: Weekly status report

    My goal for this past week was to change the database layer to be able to store full definitions of the test cases, and to later be able to load these while scanning an action. This is to allow the report command to provide all kinds of information about the executed tests, not just their names and their results. However, adding this functionality has proven to be more complex than I wished because the current types used to represent test programs and test cases are kinda broken: the abstractions chosen a while ago do not seem to be appropriate, and this is complicating further changes.

    Due to this, I ended up doing some cleanups. First, I reimplemented the way in which test programs that fail to list their test cases are represented. And second, I got rid of the useless test_case_id class, which exposed even further problems in the data types that represent test cases.

    It's now time to sit down and think about whether the current representations of test programs and test cases make sense and, if not, how to better redo them. Not going to be easy, but I hope to have some time for this cleanup during the upcoming week. [Continue reading]

  • Kyua: Weekly status report

    I only have one thing to report this week, but oh boy it's big: the report command finally reports the results of a run of a test suite! Yes, you heard well: Kyua is, finally, able to keep a record of all the previous executions of test suites and allows you to extract reports of any of them a posteriori.

    At the moment, the report is just some disorganized plain text. For example:

        $ kyua test
        [... wait for the tests to run ...]
        Committed action 82
        $ kyua report
        ===> Skipped tests
        [... too long to show ...]
        ===> Expected failures
        integration/cmd_report_test:output__console__change_file  ->
            expected_failure: --output not implemented yet:
            atf-check failed; see the output of the test for details
        ===> Failed tests
        store/transaction_test:put_test_case__ok  ->
            failed: Line 663: stmt.step() not met
        ===> Summary
        Action: 82
        Test cases: 934 total, 15 skipped, 1 expected failures, 0 broken, 1 failed

    I'm now working on changing the database schema to be able to really store all the data about test cases, because at the moment I'm only storing their names. Once all the original data is stored, the report command will have lots more information to work with, and then it will be time to start improving the format of the reports. As mentioned earlier, having interactive HTML dashboards is high on the priority list, and a very important goal of Kyua altogether.

    Stay tuned :-) [Continue reading]

  • Kyua: Weekly status report

    Kyua has finally gained a report subcommand, aimed at processing the output data of an action (stored in the database) and generating a user-friendly report in a variety of formats. This is still extremely incomplete, so don't get your hopes too high yet ;-) The current version of the report command takes an action, and all it does is dump its runtime context (run directory, environment variables, etc.). Consider it just a proof of concept.

    I have now started work on loading the data of test case results for a particular action, and once that is done, the report command will start yielding really useful data: i.e. it will actually tell you what happened during a particular execution of a test suite. The way I'm approaching the work these days is by building the skeleton code to implement the basic functionality first (which actually involves writing a lot of nasty code), with the goal of adding the missing pieces later, bit by bit.

    For example, at this moment I'm only targeting text-based outputs with a limited set of data. However, when that is done, adding extra data or different formats will be relatively easy. Generating HTML dashboards (without going through XML, as was the case with atf-report!) is definitely high on the priority list.

    By the way: I just realized it has already been one year since Kyua saw life. Wow, time flies. And only now are we approaching a point where killing the atf-run / atf-report pair is doable. I'm excited. [Continue reading]

  • Kyua: Weekly status report

    Many things have happened this week, but they can all be summarized in one single sentence: kyua test now records the results of the execution of a test suite into the SQLite database.

    "Why is this important?", you ask. Well, that's a good question. Recording test results opens the gate to many long-awaited features that should be coming soon, such as the ability to inspect the history of a particular test, to query all available data of a test result and/or to generate a dashboard of test results. It's interesting to realize that most of these features are just one SQL query away. If you install Kyua, you can already run a few tests and then use kyua db-exec to issue arbitrary SQL queries against the database; the schema (see store/schema.sql) might look a bit convoluted, but a bunch of NATURAL JOINs will yield the desired output.

    The feature requests that have the highest priority at this point are the ability to generate a report of the last tests run, both as a text file and as an HTML dashboard, because having these features means we can finally kill the atf-run and atf-report pair. At this point I'm, once again, "stuck" while figuring out how to best organize the code to make all these things possible while still keeping a nice separation across the existing layers (cli, engine and store)... all without introducing much unnecessary complexity. But exciting times lie ahead! [Continue reading]
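    As an illustration of the "one SQL query away" idea, a query of roughly this shape could pull per-test-case results out of the store via kyua db-exec. The table and column names below are invented for the example; the authoritative schema lives in store/schema.sql:

    ```sql
    -- Hypothetical: results of every test case in the most recent action.
    SELECT test_programs.absolute_path, test_cases.name, test_results.result_type
    FROM test_programs
    NATURAL JOIN test_cases
    NATURAL JOIN test_results
    WHERE test_programs.action_id = (SELECT max(action_id) FROM actions);
    ```

    The NATURAL JOINs match rows on their shared key columns, which is what collapses the normalized schema back into one human-readable table.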

  • Kyua: Weekly status report

    - Submitted a placeholder implementation of the new persistence layer (store top-level directory). This only supports opening a database and ensuring its metadata is valid. See r253.
    - Added a db-exec CLI command to execute arbitrary SQL commands onto the database. This is handy during development and testing, but may also help users to extract information out of the database in those cases where the CLI does not cover their needs just yet. See r255.
    - Miscellaneous fixes and improvements to utils::env and utils::sqlite.
    - Preliminary code to support putting objects (like actions and contexts) into the database. I've been thinking about this for a while and finally came up with a design that completely decouples the persistence needs from the higher-level classes in the engine layer. I haven't submitted the code yet though, as it lacks tests. (Still thinking how to write the loading of objects though.) [Continue reading]

  • Kyua: Weekly status report

    - Moved all the logic code of the "debug", "list" and "test" commands from the CLI layer to the engine layer. Up until now, the CLI modules implementing these commands contained all the logic to load Kyuafiles, iterate over them to find matching tests, and apply the desired operation to them. This kind of code belongs in "driver" modules (aka controllers) of the engine layer, because there is nothing UI-related in them. After this refactoring, the code left in the CLI modules is purely presentation-related, and the code in the engine implements all the logic. The goal of these changes is to be able to hide the interactions with the database in these controllers. The CLI layer has no business dealing with the database connection (other than allowing the user to specify which database to talk to, of course).
    - Implemented a very simple RAII model for SQLite transactions.
    - Some additions to the utils::sqlite bindings to simplify some common calling patterns (e.g. binding statement parameters by name).
    - Preliminary prototypes of database initialization. This involves creating new databases and populating them with the initial schema, plus dealing with database metadata to, e.g., detect if we are dealing with the correct schema version. The code for this is still too crappy to be submitted, so don't look for it in the repository just yet! The design document details many things that should be part of the schema (e.g. "sessions"), but I've decided that I'll start easy with a simplified schema and later build on top of it. Otherwise there will be too many clunky moving parts to deal with while the fundamental ideas are not yet completely clear.
    - Fixes to let the code build and run again on NetBSD (macppc at least).

    I've now been stuck for a few days trying to figure out what the best way to implement the conversion of (new) in-memory objects to database objects is, and how to later recover these objects; e.g. what the correct abstractions are to take test case results and put them in the database, and how to retrieve these results to generate reports later on. I now start to have a clear mental picture of how this should look, but I have yet to see how it will scale. [Continue reading]
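    A "very simple RAII model for SQLite transactions" can be sketched like this. This is a hypothetical illustration (the class names are invented and the database is mocked with a log so the code is self-contained), not the actual Kyua code:

    ```cpp
    // Hypothetical RAII transaction guard: BEGIN on construction, and
    // ROLLBACK on scope exit unless commit() was called first.
    #include <cassert>
    #include <string>
    #include <vector>

    struct database {
        std::vector<std::string> log;  // stands in for real SQL execution
        void exec(const std::string& sql) { log.push_back(sql); }
    };

    class transaction {
        database& _db;
        bool _done;
    public:
        explicit transaction(database& db) : _db(db), _done(false)
        { _db.exec("BEGIN"); }
        ~transaction()
        { if (!_done) _db.exec("ROLLBACK"); }  // undo on error paths
        void commit()
        { _db.exec("COMMIT"); _done = true; }
    };

    int main() {
        database db;
        {
            transaction tx(db);
            db.exec("INSERT ...");
            tx.commit();                       // explicit success path
        }
        assert(db.log.back() == "COMMIT");
        {
            transaction tx(db);                // no commit: e.g. an exception
            db.exec("INSERT ...");
        }
        assert(db.log.back() == "ROLLBACK");   // destructor cleaned up
        return 0;
    }
    ```

    The point of the pattern is that any early return or exception between BEGIN and commit() automatically rolls the transaction back, the same idea the stack_cleaner applies to the Lua stack.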

  • Kyua: Weekly status report

    - Added support to prepare and execute statements to utils::sqlite.
    - Added the UTILS_PURE definition to tag functions as pure. I should now sweep through old code to apply the attribute where possible.
    - Created a pkgsrc package for Vera++. Investigating if I can use this tool for coding style validation, as the current code of Kyua is a bit messy in this regard.
    - Made a quick attempt at getting kyua test to record test results in a simple database; success! This was just a proof of concept, so it has not been submitted yet.
    - Started to refactor the code to move many logic constructions from the cli to the engine. With the need to store test results in the database, it's clear that these semantics do not belong in the CLI, but the current code structure does not easily allow this. Need to add some "controller" classes in the engine to hide all the interaction among internal components. [Continue reading]

  • Kyua: Weekly status report

    - Some more reading on SQLite. The Using SQLite book is close to awesome; very easy to read and full of insightful tips.
    - Cleaned up my preliminary SQLite wrapper code. Still very incomplete, but I've bitten the bullet and decided to commit the new library as is; otherwise I'll just keep procrastinating. So, ladies and gentlemen, welcome the new utils::sqlite module in Kyua. At the moment, this just provides the barebones to open and close a database. I need to start playing around with preparing statements before I can model these in my wrapper classes. (Yes, the API structure is extremely similar to that in Lutok.)

    Update (23:40): I sent this too early, thinking I would not have anything nice to report this week either. But as it turns out, I finally had some spare time this late evening and got a chance to submit the extremely-incomplete SQLite C++ bindings I've been toying around with. See the update above! :-) [Continue reading]

  • Kyua: Weekly status report

    Unfortunately, not much time this week either :-(

    I am currently working on some adjustments to the design document of the database to describe new ideas; the previous design was incomplete in some areas and/or not accurate enough to support the described use cases. However, I've not had the time to finish these edits and publish them. [Continue reading]

  • Kyua: Weekly status report

    Unfortunately, no activity this week other than some brainstorming on the database design. Why? My shipment container from Dublin arrived and I spent most of the weekend organizing stuff, setting up my little NetBSD box and attending a friend's wedding! [Continue reading]

  • VirtualBox and port forwarding (and Fusion)

    I have been a VMware Fusion user for a long time and I think I am about to become a VirtualBox convert. Let me tell you my story.

    With the release of the Windows 8 Developer Preview, I felt like giving the system a try. Not because I particularly like or care about Windows, but just because I am genuinely curious about operating systems and wanted to check out their new Metro interface. So I went ahead, downloaded the ISO image, created a new virtual machine under Fusion 3, started the virtual machine, and saw Fusion crash while the Windows installer was booting.

    Oh well, an emulation problem. I tried to fiddle with the virtual machine settings a bit, selecting different CPU virtualization settings and different hardware devices. No luck. After that, I resorted to Google to look for a solution to the problem and found that other people were experiencing the same issue. But, at the same time, I also found that VMware had just released Fusion 4 and that this new release works with Windows 8 just fine. Ah, cool! Or not... $50 for the upgrade. For my use cases (which amount to booting a few BSD and Linux virtual machines in console mode), it is hard to justify the upgrade, especially when it would just be to fulfill my curiosity and then delete the virtual machine 10 minutes later.

    But wait! There is that thing called VirtualBox. I had used this product a while ago and did not enjoy doing so because its interface was utterly (really) confusing. It was nowhere near the polish of Fusion or Parallels. However, to my surprise, most of my major itches have been fixed and the new interface of VirtualBox is bearable. And it boots the Windows 8 preview just fine!

    Right. So after installing Windows 8, playing with it for 10 minutes, and later deleting it, I decided to look around and see what else VirtualBox has to offer. It has many settings. Yes, I know, Fusion has many settings too, but most are hidden from the UI and can only be accessed by modifying the vmx text file. In the case of VirtualBox they are easily accessible.

    I kept looking around... and there it was, the "Port Forwarding" option staring at me under the Network settings. One of the features I've been wishing for a long, long, long time is the ability to easily configure a NATed (not bridged) virtual machine on my laptop, assign a host name to it from the graphical interface and then be able to SSH into the virtual machine from the laptop. As it turns out, this is doable in Fusion but it requires some nasty tinkering: one has to edit a dhcpd configuration file by hand to add a new stanza for the virtual machine (/Library/Application Support/VMware Fusion/vmnet8/dhcpd.conf). This involves manually copy/pasting the MAC address of the virtual machine into that file (which you must first extract from the vmx file), assigning it an IP in the corresponding subnet, and sticking the IP plus host name into the /etc/hosts file. This gets boring pretty quickly.

    VirtualBox may or may not support this kind of setting, but it has the aforementioned "Port Forwarding" option, which is interesting on its own. As you can imagine, this sets up forwarding of a port from the host system to a port of the virtual machine. Creating an entry for SSH is as easy as clicking a couple of buttons, choosing a local port on the host system (e.g. 2022) and forwarding it to port 22 of the guest system. Voila. You can now access your virtual machine locally and remotely without having to go through any of the host name or DHCP madness I experienced before:

        $ ssh -p 2022 localhost

    You can later create an alias in your ~/.ssh/config file to make accessing the virtual machine a breeze:

        Host myvm
            Hostname localhost
            Port 2022

    And later:

        $ ssh myvm

    It couldn't be easier. This trivial feature is a killer for the kind of environment I am interested in. And I also know that VirtualBox provides some other interesting features, like headless mode, that are not (easily) available with Fusion.

    So yes, I think I am a convert.
[Continue reading]

  • Kyua: Weekly status report (db designdoc edition!)

    - Moved the code of utils::lua to a new project, Lutok.
    - Attempted to integrate a copy of Lutok into the Kyua source code to simplify installing Kyua. I have been playing with Subversion externals and Autoconf/Automake hacks to make this happen, but unfortunately haven't got a pleasant solution yet.
    - Modified Lutok to not expose the Lua C API at all from header files and cleaned up the Kyua code to cope with the changes.
    - Been chewing through the first chapters of the "Using SQLite" book to refresh my SQL skills.

    And, most importantly, wrote a preliminary design document for the database store of Kyua and the reporting features. Comments certainly welcome! Be aware that this is how atf-report will be replaced, so once this is done we should be able to finally kill atf-run and atf-report altogether :-) [Continue reading]

  • Name your C++ auto-cleaners

    As you may already know, RAII is a very powerful and popular pattern in the C++ language. With RAII, you can wrap non-stack-managed resources into a stack-managed object such that, when the stack-managed object goes out of scope, it releases the corresponding non-stack-managed object. Smart pointers are just one example of this technique, but so are IO streams too.

    Before getting into the point of the article, bear with me for a second while I explain what the stack_cleaner object of Lutok is. The "stack cleaner" takes a reference to a Lua state and records the height of the Lua stack on creation. When the object is destroyed (which happens when the declaring function exits), the stack is returned to its previous height, thus ensuring it is clean. It is always a good idea for a function to prevent side-effects by leaving its outside world as it was. And, like it or not, the Lua state is part of the outside world because it is an input/output parameter to many functions.

    Let's consider a piece of code without using the stack cleaner:

        void
        my_function(lutok::state& state, const int foo)
        {
            state.push_integer(foo);
            ... do something else in the state ...
            const int bar = state.to_integer();
            if (bar != 3) {
                state.pop(1);
                throw std::runtime_error("Invalid data!");
            }
            state.pop(1);
        }

    Note that we have had to call state.pop(1) from "all" exit points of the function to ensure that the stack is left unmodified upon return of my_function. Also note that "all exit points" may not be accurate: in a language that supports exceptions, any statement may potentially raise an exception, so to be really safe we should do:

        void
        my_function(lutok::state& state, const int foo)
        {
            state.push_integer(foo);
            try {
                ... do something else in the state ...
                const int bar = state.to_integer();
                if (bar != 3)
                    throw std::runtime_error("Invalid data!");
            } catch (...) {
                state.pop(1);
                throw;
            }
            state.pop(1);
        }

    ... which gets old very quickly. Writing this kind of code is error-prone and boring.

    With an "auto-cleaner" object such as the stack_cleaner, we can simplify our code like this:

        void
        my_function(lutok::state& state, const int foo)
        {
            lutok::stack_cleaner cleaner(state);
            state.push_integer(foo);
            ... do something else in the state ...
            const int bar = state.to_integer();
            if (bar != 3)
                throw std::runtime_error("Invalid data!");
        }

    And we leave the boring task of determining when to actually call state.pop(1) to the compiler and the runtime environment. In this particular case, no matter how my_function terminates, we ensure that the Lua stack will be left at the same size it was before.

    But, as I said earlier, all this was just an introduction to the idea that made me write this post. When you declare an auto-cleaner object of any kind, be sure to give it a name. It has happened to me a few times already that I have written the following construct:

        lutok::stack_cleaner(state);

    ... which is syntactically correct, harmless and "looks good" if you don't look closely. The compiler will chew along just fine because, even though we are declaring an anonymous object, its constructor and destructor may be doing who-knows-what, so their code must be called and thus the "unused variable" warning cannot really be raised.

    However, this does not give us the desired behavior. The cleaner object will be constructed and destroyed in the same statement without having a chance to wrap any of the following code, because its scope is just the statement in which it was defined. In other words, the cleaner will have absolutely no effect on the rest of the function and thus will be useless.

    So, moral of the story: always give a name to your auto-cleaner objects so that their scope is correctly defined and their destructor runs when you actually expect:

        lutok::stack_cleaner ANY_NAME_HERE(state); [Continue reading]

  • Introducing Lutok: A lightweight C++ API for Lua

    It has finally happened. Lutok is the result of what was promised in the "Splitting utils::lua from Kyua" post. Quoting the project web page:

    Lutok provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a small subset of miscellaneous utility functions built on top of the wrappers.

    Lutok focuses on providing a clean and safe C++ interface; the drawback is that it is not suitable for performance-critical environments. In order to implement error-safe C++ wrappers on top of a Lua C binary library, Lutok adds several layers of abstraction and error checking that go against the original spirit of the Lua C API and thus degrade performance.

    Lutok was originally developed within Kyua but was later split into its own project to make it available to general developers.

    Coming up with a name for this project was quite an odyssey, and is what has delayed its release more than I wanted. My original candidate was "luawrap" which, although not very original, was to-the-point and easy to understand. Unfortunately, that name did not clear with the legal department and I had to propose several other names, some of which were not acceptable either. Eventually, I settled on "Lutok", which comes from "LUa TOolKit".

    At this point, the source tree of Lutok provides pretty much the same code as the utils::lua module of Kyua. While it may be enough to get you started, I'm pretty sure you will find some functions missing from the state class. If that is the case, don't hesitate to file a bug report to let me know what is missing.

    In case you missed the link above, the project page is here: Lutok in Google Code. [Continue reading]

  • Using va_copy to safely pass ap arguments around

    Update (2014-12-19): The advice provided in this blog post is questionable and, in fact, probably incorrect. The bug described below must have happened for some unrelated reason (like, maybe, reuse of ap), but at this point (three years later!) I do not really remember what was going on here nor have much interest in retrying. [Continue reading]

  • Kyua: Weekly status report

    My plan for this week was to release utils::lua as a separate module. Unfortunately, this has not been possible because I've been fighting with legal to clear the name for the project. I don't have an approved name yet, so this will have to wait a bit more :-(

    On another order of things, I have started writing a design document for the database that will collect test case results and other information. Will share it as soon as it is readable and more or less complete. [Continue reading]

  • Diversions in Autoconf (actually, in M4sugar)

    Have you ever wondered how Autoconf reorganizes certain parts of your script regardless of the order in which you invoke the macros in your configure.ac script? For example, how come you can define --with-* and --enable-* flags anywhere in your script and these are all magically moved to the option-processing section of the final shell script? After all, Autoconf is just a collection of M4 macros, and a macro preprocessor's only job is to expand macros in the input with predefined output texts. Isn't it?

    Enter M4sugar's diversions. Diversions are a mechanism that allows M4 macros to output code to different text blocks, which are later concatenated in a specific order to form the final script.

    Let's consider an example (based on a few M4 macros to detect the ATF bindings from your own configure scripts). Suppose you want to define a macro FROB_ARG to provide a --with-frob argument whose argument must be either "yes" or "no". Also suppose you want to have another macro FROB_CHECK to detect whether libfrob exists. Lastly, you want the user to be able to use these two independently: when FROB_CHECK is used without invoking FROB_ARG first, you want it to unconditionally look for the library; otherwise, if FROB_ARG has been used, you want to honor its value.

    We could define these macros as follows:

        AC_DEFUN([FROB_ARG], [
            AC_ARG_WITH([frob],
                        [AS_HELP_STRING([--with-frob=yes|no], [enable frob])],
                        [with_frob=${withval}], [with_frob=yes])
        ])

        AC_DEFUN([FROB_CHECK], [
            m4_divert_text([DEFAULTS], [with_frob=yes])
            if test "${with_frob}" = yes; then
                ... code to search for libfrob ...
            elif test "${with_frob}" = no; then
                :  # Nothing to do.
            else
                AC_MSG_ERROR([--with-frob must be yes or no])
            fi
        ])

    Note the m4_divert_text call above: this macro invocation tells M4sugar to store the given text (with_frob=yes) in the DEFAULTS diversion. When the script is later generated, this text will appear at the beginning of the script, before the command-line options are processed, completely separated from the shell logic that consumes this value later on.

    With this we ensure that the with_frob shell variable is always defined regardless of the call to the FROB_ARG macro. If this macro is called, with_frob will be defined during the processing of the options and will override the value of the variable defined in the DEFAULTS section. However, if the macro has not been called, the variable will keep its default value for the duration of the script.

    Of course, this example is fictitious and could be simplified in other ways. But, as you can see in the referred change and in the Autoconf code itself, diversions are extensively used for trickier purposes. In fact, Autoconf uses diversions to topologically sort macro dependencies in your script and output them in a specific order to satisfy cross-dependencies.

    Isn't that cool? I can't cease to be amazed, but I also don't dare to look at how this works internally, for my own sanity... [Continue reading]

  • Introducing 'Dirt and the City'

    A friend of mine recently started a blog called Dirt and the City and, because he has invited me to contribute to it, I feel like posting a reference here ;-)

    Dirt and the City is a blog that intends to show you the "ugly" (literally) side of NYC. The blog is named after a famous TV show filmed in this same city; I haven't watched the show myself, but I bet that there is no dirt to be seen in it. If you are wondering what the city really looks like, this blog is your place.

    But, before visiting the blog, be sure to take it as it is meant to be: just a fun collection of photos! If you ever want to visit NYC, don't let the contents of the blog change your plans. You will definitely enjoy your visit regardless.

    Without further ado, and just in case you missed the link above, click here to visit Dirt and the City. [Continue reading]

  • Kyua: Weekly status report

    Slow week. We had some team-related events at work and I have friends over from Dublin, so it has been hard to find time to work on Kyua. Regardless, here comes the weekly report:

    - Split utils::lua into its own package, per some user's request. I'm still setting up the separate project and have to do lots of cleanup on the code, so nothing is available yet.
    - Started experimenting on the long-promised "tests results database". So far, I have started writing a small utils::sqlite3 C++ RAII-based wrapper for SQLite 3 and a SQL schema for the database. My rusty SQL skills don't help :-P

    However, all this work is local, so there have been no commits to the repository this week; sorry! [Continue reading]

  • Splitting utils::lua from Kyua

    If you remember a post from January titled C++ interface to Lua for Kyua (wow, time flies), the Kyua codebase includes a small library to wrap the native Lua C library into a more natural C++ interface. You can take a look at the current code as of r129. Quoting the previous post:

    The utils::lua library provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a place (the operations module) to add miscellaneous utility functions built on top of the wrappers.

    While the RAII wrappers and other C++-specific constructions are a very nice thing to have, this library has to jump through a lot of hoops to interact with binary Lua versions built for C. This makes utils::lua unusable in performance-critical environments. Things would be way easier if utils::lua linked to a Lua binary built for C++, but unfortunately that is not viable in most, if not all, systems with binary packaging systems (read: most Linux distributions, BSD systems, etc.).

    That said, I've had requests from a bunch of people to provide utils::lua separately from Kyua regardless of any performance shortcomings it may have, and this is what I have started doing this weekend. So far, I already have a pretty clumsy standalone package (I'll keep the name to myself for now ;-) that provides this library on its own with the traditional Automake, Autoconf and Libtool support. Once this is of a bit better quality, and once I modify Kyua to link against this external library and assess that things work fine, I'll make the decision on how to publish it (but most likely it will be a separate project in Google Code).

    Splitting the code does not come free of issues, though: maintaining a separate package will involve more work and, hopefully/supposedly, dealing with quite a few feature requests to add missing functionality! Also, it means that utils::lua cannot use any of the other Kyua libraries (utils::sanity for example), so I lose a bit of consistency across the Kyua codebase. I am also not sure about how to share non-library code (in particular, the m4 macros for Autoconf) across the two packages.

    So, my question is: are you interested in utils::lua being shipped separately? :-) Do you have any cool use cases for it that you can share here?

    Thanks! [Continue reading]

  • Kyua: Weekly status report

    Not a very active week: I've been on-call four days and they have been quite intense. Plus, I have had to go through a "hurricane" in NYC. That said, I had some time to do a bit of work on Kyua and the results have been nice :-) Made calls to getopt_long(3) work with GNU Getopt by using the correct value of optind to reset option processing. Improved the configure script to error out in a clearer way when required dependencies (pkg.m4 and Lua) are not found. Made some portability fixes. And released Kyua 0.2! (along with a pkgsrc package) At this point, I have to start thinking about how to implement test suite reporting within Kyua (i.e. how to replace atf-report). This probably means learning SQLite and refreshing my incredibly rusty SQL skills. Also, it's time to (probably) split the utils::lua library into a separate package, because there are several people interested in it. [Continue reading]

  • Kyua 0.2 released

    Dear readers, I am very proud to announce the second formal release of Kyua, 0.2.  This release comes with lots of internal changes and a few user-visible changes.  Instead of listing all the changes and news here, I'll just recommend that you visit the 0.2 release page and read all the notes. This release has been tested under NetBSD-current, Mac OS X Snow Leopard, Debian sid and Ubuntu 10.04.1 LTS. Please report any problems that you encounter. Have fun! [Continue reading]

  • Kyua: Weekly status report

    Implemented the "debug" command. Still very rudimentary, this command allows the user to run a test case without capturing its stdout or stderr, to aid in the debugging of failed test cases. In the future, this command will also allow things like keeping the work directory for manual inspection, or spawning a shell or a debugger in the work directory after a test case is executed. Many build fixes under different platforms in preparation for a 0.2 release. In particular, Kyua now builds under Ubuntu 10.04.1 LTS, but some tests fail. Had to disable the execution of the bootstrap test suite within Kyua because it stalls on systems where the default shell is not bash.  I presume this is a bug in GNU Autotest, so I filed a report. [Continue reading]

  • Kyua: Weekly status report

    Changed the --config and --variable options to be program-wide instead of command-specific. The configuration file should be able to hold properties that tune the behavior of Kyua, not just the execution of tests, so this makes sense. Added the config subcommand, which provides a way to inspect the configuration as read by Kyua. Got rid of the test_suites_var function from configuration files and replaced it with simple assignments to variables in the test_suites global table. Enabled detection of unused parameters by the compiler and fixed all warnings. Changed developer mode to only control whether warnings are enforced (not to enable the warnings themselves) and made developer mode disabled on formal releases. Barring release testing, Kyua 0.2 should be ready soon :-) [Continue reading]

  • Kyua: Weekly status report

    Added the ability to explicitly define timeouts for plain test programs. This completes last week's work, which made it possible to run plain test programs at all but carried an ugly TODO for this missing feature. The bootstrap test suite now runs as a single test case within the whole Kyua test suite. Demonstrates the plain test programs interface functionality :-) Started reshuffling code to make the --config and related flags program-wide. The goal is to allow the configuration file to tune the behavior of all of Kyua, so these flags must be made generic. They were previously specific to the test subcommand only. [Continue reading]

  • Kyua: Weekly status report

    Implemented the "plain" test interface. This allows plugging "foreign test programs" into a Kyua-based test suite. (A foreign test program is a program that does not use any testing framework: it reports success and failure by means of an exit code.) Generalized code between the atf and plain interfaces and did some cleanups. Attempted to fix the ATF_REQUIRE_EQ macros in ATF to evaluate their arguments only once. This has proven to be tricky and therefore it is not done yet. At the moment, the macros reevaluate the arguments on a failure condition, which is not too big of a deal. [Continue reading]

  • Kyua: Weekly status report

    Finished splitting the atf-specific code from the generic structures in the engine. The engine now supports the addition of extra test interfaces with minimal effort. Started implementing a "plain" test interface for test programs that do not use any test framework. This is to allow mixing non-ATF tests into ATF-based test suites, which is required in the NetBSD tree. [Continue reading]

  • Kyua: Weekly status report

    Slow week. I've been busy moving to NYC! Kept working on the splitting of ATF-specific code from the core test abstractions. The work is now focused on refactoring the results-reporting pieces of the code, which are non-trivial. [Continue reading]

  • Kyua: Weekly status report

    One of the major features I want in place for Kyua 0.2 is the ability to run "foreign" test programs as part of a test suite: i.e. to be able to plug non-ATF test programs into a Kyuafile. The rationale for this is to lower the entry barrier for newcomers to Kyua and, also, to allow running some foreign test suites that exist in the NetBSD source tree but are currently not run. The work this week has gone in the direction outlined above, among other things: Created an abstract base class to represent test programs and provided an implementation for ATF. Did the same thing for test cases. Moved the kyua-cli package from pkgsrc-wip into pkgsrc head. Installing Kyua is now a breeze under NetBSD (and possibly under other platforms supported by pkgsrc!) The next steps are to generalize the test case results, clearly separate the ATF-specific code from the general abstractions, and add an implementation to run simple test programs. [Continue reading]

  • Kyua: Weekly status report

    Belated update: Created a pkgsrc package for kyua-cli. Still in pkgsrc-wip though, because pkgsrc is in a feature freeze. Wrote a little tutorial on how to run NetBSD tests using Kyua. Started work on 0.2 by doing a minor UI fix in the about command. I've now started to look at how to split the engine into different "runners" to add support for test programs written without ATF. Not that I plan to use this feature... but having it in place will ensure that the internal interfaces are clean and may help in adoption of Kyua. [Continue reading]

  • Kyua: Weekly status report

    This has been the big week: Wrote some user documentation for the kyua binary. Fixed some distcheck problems. Released Kyua 0.1! The next immediate thing to do is to write a short tutorial on how to run the NetBSD tests with Kyua and get some people to actually try it. After that, there are many things to improve and features to add :-) [Continue reading]

  • Kyua 0.1 released!

    Dear readers, I'm very proud to announce that the very first release of Kyua, obviously numbered 0.1, has been published! The release page for kyua-cli-0.1 contains more details about what this release offers, and don't hesitate to join the kyua-discuss mailing list and ask if you have questions. Kyua was started sometime this past October and it has taken over six months to get the first public version. (It is true that my free time has fluctuated heavily during this period though, so it does not mean that six months of intensive coding have gone into the release ;-) Kyua 0.1 hovers at almost 40K lines of code according to Ohloh and comes with 800 test cases, yet it only implements a replacement for the atf-run component of ATF. I have a package ready for pkgsrc, but unfortunately I can't submit it because pkgsrc is in a freeze in preparation for the next stable branch. No matter what, expect detailed instructions on how to install Kyua and how to run the NetBSD-current test suite with it very soon. Enjoy, and keep the bug reports coming! [Continue reading]

  • Kyua: Weekly status report

    A couple of things have happened: Released ATF 0.14. This release was followed by an import into NetBSD and the fixing of subsequent fallout. Made some performance improvements to atf-sh. After killing a bunch of complex shell constructions and removing lots of obsolete functions, the performance gains are significant. There is still room for improvement, of course, and I still need to quantify how these optimizations behave on single-core machines. I certainly expected more progress this past week... but in case you don't know: I am moving countries very soon now, and as the move date approaches, there is more and more stuff to be done at home, so less and less time for hacking. [Continue reading]

  • Validating format strings in custom C functions

    In C, particularly due to the lack of dynamic strings, it's common to pass format strings around together with a variable set of arguments. A prototype like this is very common:

        void my_printf(const char*, ...);

    For the standard printf and similar functions, some compilers will ensure that the variable list of arguments matches the positional parameters in the format string and, if they don't match, raise a warning.  This is, however, just a warning "hardcoded" to match these functions, as the compiler can't know how the variable arguments of our custom my_printf function relate to the first argument. Or can it? I was made aware of a nice GCC attribute that allows developers to tag printf-like functions in a manner that lets the compiler perform the same validation of variable arguments and format strings.  This comes in the form of a GCC __attribute__ that also happens to work with Clang.  Let's see an example to illustrate how this works:

        #include <stdarg.h>
        #include <stdio.h>

        static void my_printf(const char*, ...)
            __attribute__((format(printf, 1, 2)));

        static void
        my_printf(const char* format, ...)
        {
            va_list ap;

            printf("Custom printf: ");
            va_start(ap, format);
            vprintf(format, ap);
            va_end(ap);
        }

        int
        main(void)
        {
            my_printf("this is valid %d\n", 3);
            my_printf("but this is not %f\n", 3);
        }

    If we compile the code above:

        $ clang example.c
        example.c:22:33: warning: conversion specifies type 'double' but
        the argument has type 'int' [-Wformat]
            my_printf("but this is not %f\n", 3);
                                       ~^     ~
        1 warning generated.

    Very useful.  This function attribute has been applied to many functions in the NetBSD tree, and many bugs have been spotted thanks to it. Instead of me explaining how the format attribute works, I'll refer you to the official documentation. The attribute recognizes several format styles and takes different arguments depending on them, so it is a bit tricky to explain. Plus, if you look at the extensive list of attributes, you may find some useful stuff ;-) Happy debugging! [Continue reading]

  • Kyua: Weekly status report

    Added support for recursion from the top-level Kyuafile. This Kyuafile should not reference any directories explicitly because the directories at the top level are supposed to be created by the installation of packages. Closed issue 9. Improved error messages when the test programs are bogus. Closed issue 13. Backported format-printf attribute improvements from NetBSD head to ATF. Made miscellaneous build and run fixes for both Kyua and ATF on NetBSD and OS X. Cut a release candidate for atf-0.14 and started testing on NetBSD. The kyua-cli codebase is now feature complete. Blocking the 0.1 release are the need to polish the release documents and the requirement of releasing atf-0.14 beforehand. Should happen soon :-) [Continue reading]

  • Kyua: Weekly status report

    Some long-standing bug fixes / improvements have gone in this week: Improvements to the cleanup routine, which is used to destroy the work directory of a test case after the test case has terminated: Heavy refactoring to be tolerant to failures. These failures may arise when a child of the test case does not exit immediately and holds temporary files in the work directory open for longer than expected. Any file systems that the test case leaves mounted within the work directory will now be unmounted, just as the ATF test interface mandates. I realize that this adds a lot of complexity to the runtime engine for very little gain. If/when we revise the tests interface, it will be worth reconsidering this and maybe leaving the cleanup of mounted file systems to the test case altogether. As a result, issue 17 has been fixed! Kyua now captures common termination signals (such as SIGINT) and exits in a controlled manner. What this means is that Kyua will now kill any active test programs and clean up any existing work directories before exiting. What this also means is that issue 4 is fixed. To add to the amusement, a little FYI: the above points have never worked correctly in ATF, and the codebase of ATF makes it extremely hard to implement them right. I have to confess that it has been tricky to implement the above in Kyua as well, but I feel much more confident that the implementation works well. Of course, there may be some corner cases left... but, all in all, it's more robust and easier to manage. The list of pending tasks for 0.1 shortens! [Continue reading]

  • Kyua: Weekly status report

    Some cool stuff this week, albeit not large-scale: Implemented the --variable flag in the test command. This flag, which can be specified multiple times, allows a user to override any configuration variable (be it a built-in or a test-suite variable) from the command line. This is actually the same as atf-run's -v flag, but with a clear separation between built-in configuration settings and test-suite specific settings. Added support for several environment variables to allow users (and tests) to override built-in paths. I can't imagine right now any legitimate use for these variables, but hardcoded values are bad in general, atf-run provided these same variables, and these variables are very handy for testing purposes. Added support for the new require.files test-case metadata property to both ATF and Kyua. This new property allows tests to specify a set of files that they require in order to run, and is useful for those tests that can't run before make install is executed. The functionality planned for the 0.1 release is now pretty much complete. There are still a few rough edges to clean, some documentation to write, and some little features to implement/fix. See the open bugs for 0.1 to get an idea of the remaining tasks. [Continue reading]

  • Kyua: Weekly status report

    This week: Cleaned up the internal code of the "list" command and added a few unit tests. Added integration tests for the "test" command that focus mostly on the behavior of the "test" command itself. There is still a need for full integration tests that validate the execution of the test cases themselves and their cleanup, and these will be tricky to write. Changed atf-c, atf-c++ and atf-sh to show a warning when a test program is run by hand. Users should really be using atf-run to execute the tests, or otherwise things like isolation or timeouts will not work (and they'll conclude that atf is broken!). [Continue reading]

  • Joining the Board of Directors of The NetBSD Foundation

    Dear readers, it is a great pleasure for me to announce that I have just joined the Board of Directors of The NetBSD Foundation. If you are curious about how this all happened, here it goes: as described in the election procedure, someone who I don't know nominated me back in November of 2010 to become part of the new board composition. After the Nomination Committee made their way through the long list of nominees, interviews and deliberation, they proposed a slate for the new members of the board. This slate included two people renewing their previous term (tron@ and reed@) and two new members (spz@ and jmmv@) to replace the two members stepping down (agc@ and david@). The final approval of the proposed slate happened in April 2011 and, yesterday, it became official. Looking back, I can't believe it has already been ~10 years since I first started using NetBSD. Ah, those were the times of 1.5. My responsibilities within the project have shifted a lot during this time, ranging from the maintenance of GNOME and several web site tasks, to the development of the testing framework (which is still ongoing today), and, from today, to my new duties within the board. I'm looking forward to working on this new area of the project and I hope to meet the requirements of such a position. The two members being replaced will be missed, and living up to the high bar they left behind will be tough. That said, I hope spz@ and I will be able to meet the expectations. If you have any ideas or concerns regarding the direction of The NetBSD Project, don't hesitate to send them to board@! [Continue reading]

  • Kyua: Weekly status report, BSDCan 2011 edition

    I spent past week in Ottawa, Canada, attending the BSDCan 2011 conference. The conference was composed of lots of interesting content and hosted many influential and insightful BSD developers. While the NetBSD presence was very reduced, I had some valuable talks with both NetBSD and FreeBSD developers. Anyway. As part of BSDCan 2011, I gave a talk titled "Automated testing in NetBSD: past, present and future". The talk focused on explaining what led to the development of ATF in the context of NetBSD, what related technologies exist in NetBSD (rump, anita and dashboards), what ATF's shortcomings are and how Kyua plans to resolve them. (Video coming soon, I hope.) The talk was later followed by several questions and off-session conversations about testing in general in the BSDs. Among these, I gathered a few random ideas / feelings: The POSIX 1003.3 standard defines the particular results a test can emit (see the corresponding DejaGnu documentation). Both ATF and Kyua already implement all the results defined in the standard, but they use different names and extend the standard with many extra results. Given that the standard does not define useful concepts like "expected failures", an idea that came up is to provide a flag to force POSIX compliance at the cost of being less useful. Why? Just for the sake of saying that Kyua conforms to this standard. The audience seemed to like the idea of a "tests results store" quite a bit, and the suggestion of SQLite for the implementation was not frowned upon. This is something I'm eager to work on, but not before I publish a 0.1 release. I highlighted the possibility of allowing Kyua to run "foreign" test programs so that we could integrate the results into the database. This could be useful to run tests over which we (*BSD) have no control (e.g. gcc) in an integrated manner. Nobody objected to that idea either. FreeBSD has already been looking at ATF / Kyua and they are open to collaboration. OpenBSD won't import any new C++ code, and adding C-based tests to the tree while relegating the C++ runtime to the ports is not an option. Somehow I expected this. Junos (the FreeBSD-based operating system from Juniper Networks) recently imported ATF and they are happy with it so far. Yay! It would be nice to have a feature to run tests remotely after, maybe, deploying a single particular test and its dependencies. This is gonna be tricky and is not in my current immediate plans. Other than that, I had little time to do some coding: Fixed a problem in which both ATF and Kyua were not correctly resetting the timezone of the executed tests. I only found this because, after arriving in Canada, some Kyua tests would start to fail. (Yes, the fix is in both code bases!) Added some support to capture deadly signals that terminate Kyua so that Kyua can print an informational message stating that something went wrong and which log file contains more information. See r121. That's it, folks! Thanks to those attending the conference and, in particular, to those who came to my talk :-) [Continue reading]

  • Kyua: Weekly status report

    Unfortunately, no progress whatsoever this week :-( Too busy at work and preparing my upcoming trips. Time to fly to BSDCan 2011 tomorrow. [Continue reading]

  • Kyua: Weekly status report

    Unfortunately, I have had no time for coding this week. The only things I could do were: Fixed a few build problems on NetBSD introduced during past week's changes. Built Kyua and ran a few tests on NetBSD/macppc (just for the joy of it). Coding has been eclipsed by the preparation of my presentation for BSDCan 2011; at this point, this has priority over any code changes. I'd argue that preparing the presentation is also part of the project, so some time has been invested ;-) More next week, hopefully, but I don't expect to be able to do any big code improvements until after BSDCan. [Continue reading]

  • Use explicit conditionals

    In C — or, for that matter, several other languages such as Python or C++ — most native types can be coerced to a boolean type: expressions that deliver integers, pointers or characters are automatically treated as boolean values whenever needed. For example: non-zero integer expressions and non-NULL pointers evaluate to true, whereas zero and NULL evaluate to false. Many programmers take advantage of this fact by stating their conditionals like this:

        void func(const int in) {
            if (in) {
                ... do something when in != 0 ...
            } else {
                ... do something else when in == 0 ...
            }
        }

    ... or even do something like:

        bool func(const int in) {
            int out = calculate_out(in);
            ... do something more with out ...
            return out;  // The return type is bool though; is this ok?
        }

    ... but such idioms are sloppy and can introduce confusion or bugs. Yes, things work in many cases, but "work" is not a synonym for "good" ;-) Taking advantage of coercions (automatic type conversions) typically makes the code harder to follow. In statically-typed languages, whenever you define a variable to be of a particular type, you should adhere to said type by all means so that no confusion is possible. Conceptually, an integer is not a boolean and therefore it should not be treated as such. But this is just a matter of style. On the other hand, the automatic conversions can lead to hidden bugs; for example, in the second function above, did we really want to convert the "out" value to a boolean, or was that a mistake?  This is not so much a matter of style but a matter of careful programming, and having as much state as possible "in your face" can help prevent these kinds of trivial errors. In contrast, one could argue that having to provide explicit checks or casts on every expression is a waste of time. But keep in mind that code is supposed to be written once and read many times. Anything you can do to communicate your intent to the reader will surely be appreciated. Disclaimer: the above is my personal opinion only. Not following the style above should have no implications on the resulting binary code. (Wow, this post had been sitting as a draft around here for too long.) [Continue reading]

  • Kyua: Weekly status report

    Ouch; I'm exhausted. I just finished a multi-hour hacking session to get the implementation of the list subcommand under control. It is now in a very nice user-facing shape, although its code deserves a little bit of house cleaning (coming soon). Anyway, this week's progress: Added the kyuaify.sh script. This little tool takes a test suite (say, NetBSD's /usr/tests directory) and converts all its Atffiles into Kyuafiles. The tool is not sophisticated at all; in fact, it is a pretty simple script that I haven't tested with any other test suites so far. See the announcement of kyuaify for some extra details. Added logging support for the Lua code and changed the Lua modules to spit out some logging information while processing Kyuafiles and configuration files. Added a mechanism in the user interface module to consistently print informational, warning and error messages. Implemented proper test filtering (after several iterations). What does proper mean? Well, for starters, test filters are always relative to the test suite's root (although we already saw this in last week's report). But the most important thing is that the filters are now validated: nice, user-friendly errors will be reported when the collection of filters is non-disjoint, when it includes duplicate names or when any of the provided filters does not match any test case. I really need to document the rationale for these in the manual, but for now the r118 commit message includes a few details. Drafted some notes for the BSDCan 2011 conference. I am quite tempted to reuse parts of the presentation from NYCBSDCon 2010, but I really want to give more emphasis to Kyua this time. In case you don't know, Kyua was first announced at NYCBSDCon 2010 and it was still a very immature project. The project has changed a lot since then. The wishful plan for next week is to clean up the internals of the list command (by refactoring and adding unit tests) and implement preliminary integration tests for the test subcommand. The latter scares me quite a bit. But... hmm... I guess preparing the presentation for BSDCan 2011 has priority. [Continue reading]

  • Kyua: Weekly status report

    This week started easy: Added integration tests for the about and help subcommands. These were pretty easy to do. Added integration tests for the list subcommand. I initially added these tests as expected failures to reason about the appearance and behavior of this command from the point of view of the user before actually working on the code... and I am still writing such code to make these tests pass! This is where things got a bit awry. Polishing the behavior of the list command so that its interface is consistent among all flag and argument combinations is tricky and non-trivial. I ended up having to change code deep down in the source tree to implement missing features and to change existing bits and pieces: Implemented support to print all test case properties, not only the user-specific ones. The properties recognized by the runtime engine are stored as individual members of a structure, so these required some externalization code. Implemented "test case filtering". This allows users to select which tests to run on the command line at the test case granularity. For example, foo/bar selects all tests in a subdirectory if bar is a directory, or all the tests in the bar test program if it is a binary. But what is new (read: not found in ATF) is that you can even do foo/bar:test-1, where test-1 is the name of a test case within the foo/bar test program. I've been silently wishing for this feature to be available in ATF because it shortens the build/test/edit cycle, but it was not easy to add. Changed the internal representation of test suites to ensure all test program names are relative to the root of the test suite. The root of the test suite is considered to be the directory in which the top-level Kyuafile lives (/usr/tests/ in NetBSD). The whole point of doing this is to provide some consistency to the filters when the user decides to execute tests that are not in the current directory. For example, the following are now equivalent:

        $ cd /usr/tests && kyua list fs/tmpfs
        $ kyua list -k /usr/tests/Kyuafile fs/tmpfs

    The plans for the upcoming week are to finish the cleanup of the list command (which involves adding proper error reporting, refactoring of the command module and the addition of unit tests) and to start cleaning up the test command. Also, remember that BSDCan 2011 is now around the corner and that I will be talking about ATF, Kyua and NetBSD at it!  My plan was to have a kyua-cli-0.1 release ready by the time of the conference, although this will be tricky to achieve... [Continue reading]

  • Kyua: Weekly status report

    Few things worth mentioning this week, as reviewing Summer of Code student applications has taken priority. The good thing is that there are several strong applications for NetBSD; the bad thing is that none relate directly to testing. Anyway, the work this week: Added a pkg-config file for atf-sh as well as an Autoconf macro to detect its presence. This is needed by Kyua to easily find atf-sh. (Yes, I know: this is an abuse of pkg-config, but it works pretty well and is consistent with atf-c and atf-c++.) Implemented basic integration tests for Kyua in r98 using atf-sh. These tests are still very simple but provide a placeholder into which new tests will be plugged. Having good integration test coverage is key in preparation for a 0.1 release. Oh, and by the way, this revision has bumped the number of tests to 601, crossing the 600 barrier :-) That's pretty much it. Now, back to attempting to fix my home server, as a fresh installation of NetBSD/macppc has decided not to boot any more.  (Yes, this has blocked most of my weekend...) [Continue reading]

  • Kyua: Weekly status report

    This week's work has been quite active on the ATF front but not so much on the Kyua one. I keep being incredibly busy on the weekends (read: traveling!), so it's hard to get any serious development work done. What has happened? Finally tracked down and fixed some random atf-run crashes that had been haunting the NetBSD test suite for months (see PR bin/44176). The fix is in reality an ugly workaround for the fact that a work directory cannot be considered "stable" even after the test case terminates. There may be dying processes around that touch the work directory contents, and the cleanup code in atf-run was not coping well with those. As it turns out, this problem also exists in Kyua (even though it's not as pronounced, because arbitrary failures when running a test case do not crash the runtime engine), so I filed issue 17 to address it. Released ATF 0.13 and imported it both into NetBSD-current and pkgsrc. As a side note, Kyua requires the new features in this release, so putting it out there is a requirement for releasing Kyua 0.1. This new release does not have a big effect on NetBSD though, because the copy of ATF in NetBSD has been constantly receiving cherry-picks of the upstream fixes. Replaced several TODO items in the Kyua code with proper calls to the logging subsystem. These TODO items were referring to conditions in the code that should not happen, but for which we cannot do any proper recovery (like errors in a destructor). Sure, these could be better signaled as an assertion... but these code paths can be triggered in extremely tricky conditions and having Kyua crash because of them is not nice (particularly when the side effects of executing those code paths are non-critical). So, in retrospect, I have fulfilled the goal set past week of releasing ATF 0.13, but I haven't got to the addition of integration tests. Oh well... let's see if this upcoming week provides more spare time. [Continue reading]

  • Kyua: Weekly status report

    This has been a slow week. In the previous report, I set the goal of getting Kyua to run the NetBSD test suite accurately (i.e. to report the same results as atf-run), and this has been accomplished. Actually, the changes required in Kyua to make this happen were minimal, but I got side-tracked fixing issues in NetBSD itself (both in the test suite and in the kernel!). So, the things done:

    - Fixed Kyua to correctly kill any dangling subprocesses of a test case and thus match the behavior of atf-run. This was the only change required to fulfill issue 16: i.e. to get Kyua to report the same results as atf-run for the NetBSD test suite.

      Based on a suggestion from Antti Kantee, an alternative way to handle this would be to not kill any processes and just report the test case as broken if it fails to clean up after itself. The rationale is that the runtime engine can kill dangling subprocesses in 99% of the occasions, but not always: the exceptions are those subprocesses that change their process group. It would be better to make all cleanups explicit instead of hiding this corner case, as it can lead to confusion. Addressing this will have to wait, though, as it is a pretty invasive change.

      Before closing issue 16, I want to implement some integration tests for Kyua to ensure that the whole system behaves as we expect (which is what the NetBSD test suite is currently doing implicitly).

    - Kyua is pickier than atf-run: if the cleanup routine of a test case fails or crashes, Kyua will (correctly) report the test case as broken while atf-run will silently ignore this situation. Some NetBSD tests had crashing cleanup routines, so I fixed them.

    - Some test programs in NetBSD were leaving unkilled subprocesses behind. These subprocesses are daemons and thus fall out of the scope of what Kyua can detect and kill during the cleanup phase. I mistakenly tracked the problem down to rump, but Antti Kantee kindly found the real culprit in the kernel (not in rump!).

    - As a side effect of processes being left behind, I extended the functionality of pidfile(3) and implemented pid file support in bozohttpd. This is to make the tests that spawn a bozohttpd in the background more robust, by giving them a way to forcibly kill the server during cleanup. These changes are still under review and not committed yet.

    For the upcoming week, I plan to add some basic integration tests to Kyua and release ATF 0.13. I have been running a NetBSD system with the latest ATF code integrated for a while (because Kyua requires it) and things have been working well. [Continue reading]
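    The pid file trick mentioned in the report boils down to a simple contract: the daemon records its own PID in a well-known file, and the test's cleanup routine reads that file back to signal the server. The real NetBSD interface for this is pidfile(3); the sketch below is a portable, simplified illustration of the idea, with made-up helper names and file location:

    ```cpp
    #include <cassert>
    #include <cstdio>
    #include <fstream>
    #include <string>

    #include <unistd.h>

    // Hypothetical stand-in for what pidfile(3) does on the daemon side:
    // record our own PID in a file so that an external cleanup step can
    // read it back and signal the process.
    static void
    write_pidfile(const std::string& path)
    {
        std::ofstream file(path.c_str());
        file << ::getpid() << "\n";
    }

    // Counterpart used by the test's cleanup routine: recover the PID so
    // that it can issue, e.g., kill(pid, SIGTERM).
    static pid_t
    read_pidfile(const std::string& path)
    {
        std::ifstream file(path.c_str());
        long pid = -1;
        file >> pid;
        return static_cast< pid_t >(pid);
    }

    int
    main(void)
    {
        const std::string path = "example.pid";  // made-up location
        write_pidfile(path);

        // A real cleanup would now do: kill(read_pidfile(path), SIGTERM).
        assert(read_pidfile(path) == ::getpid());
        std::printf("pid file round-trip ok\n");
        return 0;
    }
    ```

    The point of the extra file is that daemons detach from their parent, so the runtime engine cannot find them by walking the process tree; the pid file gives the cleanup code an explicit handle on them.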

  • Kyua: Weekly status report

    These days, I find myself talking about Kyua to "many" people. In particular, whenever a new feature request for ATF comes in, I promise the requester that the feature will be addressed as part of Kyua. However, I can imagine that this behavior leaves the requester with mixed feelings: it is nice that the feature will be implemented but, at the same time, it is very hard to know when, because the web site of Kyua does not provide many details about its current status.

    In an attempt to give Kyua some more visibility, I will start posting weekly activity reports in this blog. These reports will also include any work done on the ATF front, as the two projects are highly related at this point. I write these reports regularly at work and I feel like it is a pretty good habit: every week, you have to spend some time thinking about what you did for the project, and you feel guilty if the list of tasks is ~zero ;-) It also, as I said, gives more visibility to the work being done so that outsiders know that the project is not being ignored.

    Before starting with what has happened this week, a bit of context. I have been traveling like crazy and hosting guests over for the last 2 months. This has given me virtually no time to work on Kyua but, finally, I have had a chance to do some work this past week. So, what are the news?

    - Implemented the --loglevel command line flag, which closes issue 14. Kyua now generates run-time logs of its internal activity to aid in postmortem debugging, and this flag allows the user to control the verbosity of such logs.

    - Antti Kantee hacked support into atf-run in the NetBSD source tree to dump a stack trace of any crashing test program. I have backported this code to the upstream ATF code and filed issue 15 to implement the same functionality in Kyua.

    - Fixed a hang in atf-run that made it get stuck when a test case spawned a child process and atf-run failed to terminate it. A quick test seems to indicate that Kyua is affected by a similar problem: it does not get stuck, but it does not correctly kill the subprocesses. The problem will be addressed as part of issue 16.

    Oh, and by the way: Kyua will be presented at BSDCan 2011.

    My plans for this week are to make Kyua run the full NetBSD test suite without regressions when compared to ATF. Basically, the results of a test run with Kyua should be exactly the same as those of a test run with ATF, and no dangling processes should be left behind.

    Lastly, if you are interested in these reports and other Kyua news, you can subscribe to the kyua label feed and, if you want to stay up to date with any changes performed to the code, subscribe to the kyua-log mailing list. [Continue reading]

  • A teeny tiny review of Twitterville

    After about two months, I finally finished reading Twitterville by Shel Israel (@shelisrael). One of my followers (@drio) asked for a review of the book, so here is my attempt to do so.

    But first, a quick summary: Twitterville is a book that focuses on the dynamics of Twitter. It starts by explaining how Twitter works, but that is only a tiny introductory part of the book. The majority of the contents explain how people and businesses interact with each other by means of Twitter, and it does so by providing lots of real-life stories. The stories range over topics as diverse as businesses offering deals and individuals raising funds for specific causes.

    The book is easy to read and well structured, and as you read through it you will realize that the author had to do some major research to collect all the stories that he presents. I personally enjoyed the first half of the book a lot, but at some point I ran out of time for reading and my interest dropped. It was hard to pick up reading again because the book becomes quite repetitive after a few chapters; just keep in mind that it is a collection of personal experiences organized by major topics and you won't be disappointed.

    Twitterville has changed my view on Twitter. I have discovered many "use cases" for Twitter that I could not have imagined and, as many people do, I used to disregard Twitter as a useless "status updates" system. Today, however, I have set up several Twitter searches to monitor some topics of my interest and I engage with people that I did not know beforehand. It kinda feels like a world-wide unorganized chat room to me... but, as the author mentions many times, the way you see and use Twitter is up to you and you alone! [Continue reading]

  • Injecting C++ functions into Lua

    The C++ interface to Lua implemented in Kyua exposes a lua::state class that wraps the lower-level lua_State* type. This class completely hides the internal C type of Lua to ensure that all calls that affect the state go through the lua::state class.

    Things get a bit messy when we want to inject native functions into the Lua environment. These functions follow the prototype represented by the lua_CFunction type:

        typedef int (*lua_CFunction)(lua_State*);

    Now, let's consider this code:

        int
        awesome_native_function(lua_State* state)
        {
            // Uh, we have access to the raw lua_State*,
            // so we bypass the lua::state!
            ... do something nasty ...

            // Oh, and we can throw an exception here...
            // with bad consequences.
        }

        void
        setup(...)
        {
            lua::state state;
            state.push_c_function(awesome_native_function);
            state.set_global("myfunc");
            ... run some script ...
        }

    The fact that we must pass a lua_CFunction prototype to the lua_pushcfunction call means that such a function must have access to the raw lua_State* pointer... which we want to avoid. What we really want is for the caller code to define a function such as:

        typedef int (*cxx_function)(lua::state&);

    In an ideal world, the lua::state class would implement a push_cxx_function that took a cxx_function, generated a thin C wrapper and injected such a generated wrapper into Lua. Unfortunately, we are not in an ideal world: C++ does not have higher-order functions and thus the "generate a wrapper function" part of the previous proposal does not really work.

    What we can do instead, though, is to make the creation of C wrappers for these C++ functions trivial. And this is what r42 did. The approach I took is similar to this overly-simplified (and broken) example:

        template< cxx_function Function >
        int
        wrap_cxx_function(lua_State* state)
        {
            try {
                lua::state state_wrapper(state);
                return Function(state_wrapper);
            } catch (...) {
                luaL_error(state, "Geez, don't go into C's land!");
            }
        }

    This template wrapper takes a cxx_function object and generates a corresponding C function at compile time. This wrapper function ensures that C++ state does not propagate into the C world, as that often has catastrophic consequences. (Due to language limitations, the input function must have external linkage. So no, it cannot be static.)

    As a result, we can rewrite our original snippet as:

        int
        awesome_native_function(lua::state& state)
        {
            // See, we cannot access lua_State* now.
            ... do something ...
            throw std::runtime_error("And we can even do this!");
        }

        void
        setup(...)
        {
            lua::state state;
            state.push_c_function(wrap_cxx_function< awesome_native_function >);
            state.set_global("myfunc");
            ... run some script ...
        }

    Neat? I think so, but maybe not so much. I'm pretty sure there are cooler ways of achieving the above purpose in a cleaner way, but this one works nicely and has little overhead. [Continue reading]
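    The trick above does not actually depend on Lua: the pattern is "a function template parametrized on a function pointer, whose instantiations are plain functions with a C-compatible signature that contain any C++ exceptions". Here is a standalone, Lua-free sketch of just that pattern (all names are invented for illustration; this is not Kyua's code):

    ```cpp
    #include <cstdio>
    #include <stdexcept>

    // Stand-in for the C++-friendly callback type (think cxx_function).
    typedef int (*cxx_function)(int);

    // Compile-time generated wrapper: one C-compatible function per
    // wrapped C++ function, with exceptions contained inside so nothing
    // ever unwinds across an (imaginary) C boundary.
    template< cxx_function Function >
    int
    wrap(int arg)
    {
        try {
            return Function(arg);
        } catch (...) {
            return -1;  // report the error instead of propagating it
        }
    }

    // The wrapped function must have external linkage (it is used as a
    // template argument), but it is free to throw.
    int
    doubler(int x)
    {
        if (x < 0)
            throw std::runtime_error("negative!");
        return x * 2;
    }

    int
    main(void)
    {
        // The instantiation has a plain C-style signature, so it could be
        // handed to a C API that expects a function pointer.
        int (*fp)(int) = wrap< doubler >;
        std::printf("%d %d\n", fp(21), fp(-1));
        return 0;
    }
    ```

    Each distinct template argument produces a distinct ordinary function at compile time, which is exactly the "generate a wrapper per function" step that C++ cannot do at run time.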

  • Error handling in Lua: the Kyua approach

    About a week ago, I detailed the different approaches I encountered to deal with errors raised by the Lua C API. Later, I announced the new C++ interface for Lua implemented within Kyua. And today, I would like to talk about the specific mechanism I implemented in this library to deal with the Lua errors.

    The first thing to keep in mind is that the whole purpose of Lua in the context of Kyua is to parse configuration files. This is an infrequent operation, so high performance does not matter: it is more valuable to me to be able to write robust algorithms fast than to have them run at optimal speed. The other key point to consider is that I want Kyua to be able to use prebuilt Lua libraries, which are built as C binaries.

    The approach I took is to wrap every single unsafe Lua C API call in a "thin" (FSVO thin, depending on the case) wrapper that gets called by lua_pcall. Anything that runs inside the wrapper is safe from Lua errors, as they are caught and safely reported to the caller.

    Let's examine how this works by taking a look at an example: the wrapping of lua_getglobal. We have the following code (copy-pasted from the utils/lua/wrap.cpp file but hand-edited for publishing here):

        static int
        protected_getglobal(lua_State* state)
        {
            lua_getglobal(state, lua_tostring(state, -1));
            return 1;
        }

        void
        lua::state::get_global(const std::string& name)
        {
            lua_pushcfunction(_pimpl->lua_state, protected_getglobal);
            lua_pushstring(_pimpl->lua_state, name.c_str());
            if (lua_pcall(_pimpl->lua_state, 1, 1, 0) != 0)
                throw lua::api_error::from_stack(_pimpl->lua_state,
                                                 "lua_getglobal");
        }

    The state::get_global method is my public wrapper for the lua_getglobal Lua C API call. This wrapper first prepares the Lua stack by pushing the address of the C function to call and its parameters, and then issues a lua_pcall call that executes the C function in a Lua protected environment.

    In this case, the argument preparation for protected_getglobal is trivial because the lua_getglobal call does not require access to any preexisting values on the Lua stack. Things get much trickier when that is not the case, as happens with the lua_gettable wrapper. I'll leave understanding how to do this as an exercise to the reader (but you can cheat by looking at line 154).

    Anyway. The above looks all very nice and safe, and the tests for the state::get_global function, even the ones that intentionally cause a failure, all work fine. So we are good, right? Nope! Unfortunately, the code above is not fully safe from Lua errors.

    In order to prepare the lua_pcall execution, the code must push values onto the stack. As it turns out, both lua_pushcfunction and lua_pushstring can fail if they run out of memory (OOM). Such a failure would of course be captured inside a protected environment... but we have a little chicken'n'egg problem here. That said, OOM failures are rare, so I'm going to leverage this fact and not worry about it. (Note to self: install a lua_atpanic handler to complain loudly if that ever happens.)

    Addendum: bundling Lua within my program and building it as a C++ binary with exception reporting enabled in luaconf.h would magically solve all my issues. I know. But I don't fancy the idea of bundling the library into my source tree for a variety of reasons. [Continue reading]

  • C++ interface to Lua for Kyua

    Finally! After two weeks of holiday work, I have finally been able to submit Kyua's r39: a generic library that implements a C++ interface to Lua. The code is hosted in the utils/lua/ subdirectory. From the revision description:

    The utils::lua library provides thin C++ wrappers around the Lua C API to ease the interaction between C++ and Lua. These wrappers make intensive use of RAII to prevent resource leakage, expose C++-friendly data types, report errors by means of exceptions and ensure that the Lua stack is always left untouched in the face of errors. The library also provides a place (the operations module) to add miscellaneous utility functions built on top of the wrappers.

    In other words: this code aims to decouple all details of the interaction with the Lua C API from the main code of Kyua so that the high-level algorithms do not have to worry about Lua C API idiosyncrasies.

    Further changes to Kyua to implement the new configuration system will follow soon, as all the basic code to talk to Lua has been ironed out. Also expect some extra posts regarding the design decisions that went into this helper code and, in particular, about error reporting as mentioned in the previous post.

    (Yep, Lua and Kyua sound similar. But that was never intended; promise!) [Continue reading]

  • Error handling in Lua

    Some of the methods of the Lua C API can raise errors. To get an initial idea of what these are, take a look at the Functions and Types section and pay attention to the third field of a function description (the one denoted by 'x' in the introduction).

    Dealing with the errors raised by these functions is tricky, not to say a nightmare. Also, the ridiculously short documentation on this topic does not help. This post is dedicated to explaining how these errors may be handled, along with the advantages and disadvantages of each approach.

    The Lua C API provides two modes of execution: protected and unprotected. When in protected mode, all errors caused by Lua are caught and reported to the caller in a controlled manner. When in unprotected mode, the errors just abort the execution of the calling process by default. So, one would think: just run the code in protected mode, right? Yeah, well... entering protected mode is nontrivial and it has its own particularities that make interaction with C++ problematic.

    Let's analyze error reporting by considering a simple example: the lua_gettable function. The following Lua code would error out when executed:

        my_array = nil
        return my_array["test"]

    ... which is obvious because indexing a non-table object is a mistake. Now let's consider how this code would look in C (modulo the my_array assignment):

        lua_getglobal(state, "my_array");
        lua_pushstring(state, "test");
        lua_gettable(state, -2);

    Simple, huh? Sure, but as it turns out, any of the API calls (not just lua_gettable) in this code can raise errors (I'll call them unsafe functions). What this means is that, unless you run the code with a lua_pcall wrapper, your program will simply exit in the face of a Lua error. Uh, your scripting language can "crash" your host program out of your control? Not nice.

    What would be nice is if each of the Lua C API unsafe functions reported an error (as a return value or whatever) and allowed the caller to decide what to do. Ideally, no state would change in the face of an error. Unfortunately, that is not the case, but it is exactly what I would like to have. I am writing a C++ wrapper for Lua in the context of Kyua, and fine granularity in error reporting means that automatic cleanup of resources managed by RAII is trivial.

    Let's analyze the options that we have to control errors caused within the Lua C API. I will explain in a later post the one I have chosen for the wrapper in Kyua (it has to be later because I'm not settled yet!).

    Install a panic handler

    Whenever Lua code runs in an unprotected environment, one can use lua_atpanic to install a handler for errors. The function provided by the user is executed when the error occurs and, if the panic function returns, the program exits. To prevent exiting prematurely, one could opt for two mechanisms:

    - Make the panic handler raise a C++ exception. Sounds nice, right? Well, it does not work. The Lua library is generally built as a C binary, which means that our panic handler will be called from within a C environment. As a result, we cannot throw an exception from our C++ handler and expect things to work: the exception won't propagate correctly from a C++ context to a C context and then back to C++. Most likely, the program will abort as soon as we leave the C++ world and enter C to unwind the stack.

    - Use setjmp before the call to the unsafe Lua function and recover with longjmp from within the panic handler. It turns out that this does work, but with one important caveat: the stack is completely cleared before the call to the panic handler. As a result, this violates the requirement of "leave the stack unmodified on failure" that is desirable of any function (report errors early, before changing state).

    Run every single call in a protected environment

    This is doable but complex and not completely right: to do this, we need to write a C wrapper function for every unsafe API function and run it with lua_pcall. The overhead of this approach is significant: something as simple as a call to lua_gettable turns into several stack manipulation operations, a call to lua_pcall and then further stack modifications to adjust the results.

    Additionally, in order to prepare the call to lua_pcall, one has to use the various lua_push* functions to set up the stack for the call. And, guess what: most of these functions that push values onto the stack can themselves fail. So... in order to prepare the environment for a safe call, we are already executing unsafe calls. (Granted, the errors in this case are only due to memory exhaustion... but still, the solution is not fully robust.)

    Lastly, note that we cannot use lua_cpcall because it discards all return values of the executed function, which means that we can't really wrap single Lua operations. (We could wrap a whole algorithm though.)

    Run the whole algorithm in a protected environment

    This defeats the whole purpose of the per-function wrapping. We would need to provide a separate C/C++ function that runs all the unsafe code and then call it by means of lua_pcall (or lua_cpcall) so that errors are captured and reported in a controlled manner. This seems very efficient... albeit not transparent, and it will surely cause issues.

    Why is this problematic? Errors that happen inside the protected environment are managed by means of a longjmp. If the code wrapped by lua_pcall is a C++ function, it can instantiate objects. These objects have destructors. A longjmp out of the function means that no destructors will run... so objects will leak memory, file descriptors, and anything else you can imagine. Doom's day.

    Yes, I know Lua can be rebuilt to report internal errors by means of exceptions, which would make this particular problem a non-issue... but that rules out any pre-packaged Lua binaries (the default is to use longjmp, and hence that is what packaged binaries use). I do not want to embed Lua into my source tree. I want to use the Lua binary packages shipped with pretty much any OS (hey, including NetBSD!), which means that my code needs to be able to cope with Lua binaries that use setjmp/longjmp internally.

    Closing remarks

    I hope the above description makes any sense, because I had to omit many, many details in order to keep the post reasonably short. It could also be that there are other alternatives I have not considered, in which case I'd love to know them. Trying to find a solution to the above problem has already sucked up several days of my free time, which translates into Kyua not seeing any further development until a solution is found! [Continue reading]
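    To make the setjmp-based recovery from the first option more concrete, here is a Lua-free sketch of the control flow described above: a fake C-style library that invokes a registered panic callback on error (mimicking lua_atpanic), plus a caller that installs a handler which longjmps back to a checkpoint instead of letting the process die. All names are made up; this only illustrates the mechanism, not any real Lua code:

    ```cpp
    #include <csetjmp>
    #include <cstdio>
    #include <cstdlib>

    // Fake library mimicking Lua's panic machinery: on error, call the
    // registered handler; if the handler returns, the process aborts.
    typedef void (*panic_fn)(void);
    static panic_fn panic_handler = 0;

    static void
    lib_set_panic(panic_fn fn)
    {
        panic_handler = fn;
    }

    static void
    lib_unsafe_op(bool fail)
    {
        if (fail) {
            if (panic_handler != 0)
                panic_handler();
            std::abort();  // the handler returned: give up
        }
    }

    // Caller side: checkpoint before the unsafe call and recover from
    // within the panic handler via longjmp.
    static std::jmp_buf checkpoint;

    static void
    recovering_panic(void)
    {
        std::longjmp(checkpoint, 1);  // never returns to the library
    }

    int
    main(void)
    {
        lib_set_panic(recovering_panic);
        if (setjmp(checkpoint) == 0) {
            lib_unsafe_op(true);
            std::printf("not reached\n");
        } else {
            std::printf("recovered from panic\n");
        }
        return EXIT_SUCCESS;
    }
    ```

    Note that the longjmp skips the rest of lib_unsafe_op entirely, which is exactly the caveat from the post: any state the library had half-built (such as values on Lua's stack) is abandoned rather than unwound.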

  • Understanding setjmp/longjmp

    For a long time, I have been aware of the existence of the standard C functions setjmp and longjmp and of the fact that they can be used to simulate exceptions in C code. However, it wasn't until yesterday that I had to use them... and it was not trivial. The documentation for these functions tends to be confusing, and understanding them required looking for additional documents and a bit of experimentation. Let's see if this post helps in clarifying how these functions work.

    The first call to setjmp causes the process state (stack, CPU registers, etc.) to be saved in the provided jmp_buf structure and, then, a value of 0 to be returned. A subsequent call to longjmp with the same jmp_buf structure causes the process to go "back in time" to the state stored in said structure. The way this is useful is that, when going back in time, we tweak the return value of the setjmp call so we can actually run a second (or third or more) path as if nothing had happened.

    Let's see an example:

        #include <setjmp.h>
        #include <stdio.h>
        #include <stdlib.h>

        static jmp_buf buf;

        static void
        myfunc(void)
        {
            printf("In the function.\n");

            ... do some complex stuff ...

            /* Go back in time: restore the execution context of setjmp
             * but make the call return 1 instead of 0. */
            longjmp(buf, 1);

            printf("Not reached.\n");
        }

        int
        main(void)
        {
            if (setjmp(buf) == 0) {
                /* Try block. */
                printf("Trying some function that may throw.\n");
                myfunc();
                printf("Not reached.\n");
            } else {
                /* Catch block. */
                printf("Exception caught.\n");
            }
            return EXIT_SUCCESS;
        }

    The example above shows the following when executed:

        Trying some function that may throw.
        In the function.
        Exception caught.

    So, what happened above? The code starts by calling setjmp to record the execution state, and the call returns 0, which causes the first branch of the conditional to run. You can think of this clause as the "try" part of exception-based code. At some point during the execution of myfunc, an error is detected and is "thrown" by a call to longjmp with a value of 1. This causes the process to go back to the execution of setjmp, but this time the call returns 1, which causes the second branch of the conditional to run. You can think of this second clause as the "catch" part of exception-based code.

    It is still unclear to me what the "execution context" stored in jmp_buf is: the documentation does not explain what kind of resources are correctly unwound when the call to longjmp is made... which makes me wary of using this technique for exception-like handling purposes. Oh, and this is even less clear in the context of C++ code and, e.g., calls to destructors. It would be nice to expand the description of these APIs in the manual pages. [Continue reading]
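    Since longjmp's second argument becomes setjmp's return value, the same mechanism can distinguish between several "exception types" by switching on that value. A small extension of the example above shows this (one caveat worth remembering: passing 0 to longjmp makes setjmp return 1 instead, so 0 stays reserved for the initial call):

    ```cpp
    #include <setjmp.h>
    #include <stdio.h>
    #include <stdlib.h>

    static jmp_buf buf;

    static void
    may_fail(int error)
    {
        if (error != 0)
            longjmp(buf, error);  /* "Throw" the given error code. */
        printf("No error.\n");
    }

    int
    main(void)
    {
        switch (setjmp(buf)) {
        case 0:  /* Initial call: the "try" block. */
            printf("Trying.\n");
            may_fail(2);
            printf("Not reached.\n");
            break;
        case 1:  /* One "catch" clause per error code. */
            printf("Caught error 1.\n");
            break;
        case 2:
            printf("Caught error 2.\n");
            break;
        }
        return EXIT_SUCCESS;
    }
    ```

    Running this prints "Trying." followed by "Caught error 2.", because may_fail longjmps with 2 and the switch dispatches on the replayed setjmp return value.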

  • Happy new year!

    Dear readers,

    2011 is here so... Happy new year! I hope you all are having a nice holiday season and enjoyed the new year's eve celebration, should it be something special for you.

    My tentative resolutions for this year, related to non-work and non-personal areas, would be:

    First, to revive this blog. I have lately been posting more frequently than usual and it has been a pleasant task. I would like to recover the habit of blogging several times per week and, for that, I need topics! Keep'em coming! I already have some topics in the queue but they need a bit of research on my side first.

    Second, to bring Kyua to reality (i.e. to deprecate ATF). Work is continuing intensively and a preliminary release should be ready during Q1. This release will bring the much-needed replacement for atf-run.

    And third... well, I haven't thought that much about resolutions ;-) We will see. [Continue reading]