  • Software bloat, 2

    A long while ago, just before buying the MacBook Pro, I already complained about software bloat. A year and two months later, it is time to complain again.

    I am thinking of renewing my MacBook Pro, assuming I can sell this one for a good price. The reasons for this are to get slightly better hardware (more disk, a better GPU and maybe 4GB of RAM) and software updates. The problem is: if I am able to find a buyer, I will be left without a computer for some days, and that's not a good scenario. I certainly don't want to order the new one without being certain that I will be paid enough for the current one.

    So yesterday I started assembling some old components I had lying around, aiming to have an old but functional computer to work with. But today I realized that I also had the PlayStation 3 with Fedora 8 already installed, and that it'd be enough to use as a desktop for a week or so. I had trimmed down the installation to the bare minimum so that it'd boot as fast as possible and leave free resources for testing Cell-related stuff. But if I wanted to use the PS3 as a desktop, I needed, for example, GNOME.

    Ew. Doing a yum groupinstall "GNOME Desktop Environment" took quite a while, and not because of the network connection. But even if we leave that aside, starting the environment was painful. Really painful. And Mono was not there at all! It is amazing how unusable the desktop is with "only" 256MB of RAM; the machine is constantly going to swap, and the slow disk does not help either. I still remember the days when 256MB was a lot, and desktop machines were snappy enough with only half of that, or even less.

    OK, so GNOME is too much for 256MB of RAM. I am now writing this from the PS3 itself running WindowMaker, which unfortunately does not solve all the problems; the biggest one is that it is not a desktop environment. Firefox also requires lots of resources to start, and doing something else in the background still makes the machine use swap. (Note that I have disabled almost all of the system services enabled by default in Fedora, including SELinux.)

    If I finally sell my MBP, this will certainly be enough for a few days... but it's a pity to see how unusable it is. (Yeah, by today's standards the PS3 is extremely short on RAM, I know, but GNOME used to run quite well with this amount of RAM just a few years ago.) [Continue reading]

  • ATF's error handling in C

    One of the things I miss a lot when writing the C-only code bits of ATF is an easy way to raise and handle errors. In C++, the normal control flow of the execution is not disturbed by error handling because any part of the code is free to report error conditions by means of exceptions. Unfortunately, C has no such mechanism, so errors must be handled explicitly.

    At the very beginning I just made functions return integers as error codes, reusing the standard error codes of the C library. However, that turned out to be too simple for my needs, and it was not easily applicable when the natural return value of a function was not an integer.

    What I ended up doing was defining a new type, atf_error_t, which must be returned by all functions that can raise errors. This type is a pointer to a memory region whose contents (and size) can vary depending on the error raised by the code. For example, if the error comes from libc, I mux the original error code and an informative message into the error type so that the original, non-mangled information is available to the caller; or, if the error is caused by the user's misuse of the application, I simply return a string that contains the reason for the failure. The error structure contains a type field that the receiver can query to know which specific information is available and, based on that, cast down the structure to the specific type that contains the detailed information. Yes, this is very similar to how you work with exceptions.

    In the case of no errors, a null pointer is returned. This way, checking for an error condition is just a simple pointer check, which is no more expensive than an integer check. Handling error conditions is more costly, but given that they are rare, that is certainly not a problem.

    What I don't like too much about this approach is that any other return value must be passed back through an output parameter, which makes things a bit confusing. Furthermore, robust code ends up cluttered with error checks all around, given that virtually any call to the library can produce an error somewhere. This, together with the lack of RAII, complicates error handling a lot. But I can't think of any other way that could be simpler and, at the same time, as flexible as this one. Ideas? :P

    More details are available in the atf-c/error.h and atf-c/error.c files. [Continue reading]
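
    To make the scheme above concrete, here is a minimal sketch of how such an extensible error type can look in C. All names and layouts here are illustrative only, not ATF's real interface (for that, see atf-c/error.h):

        /*
         * Sketch of an extensible error type along the lines described
         * in the post.  Names are made up; they do NOT match ATF's code.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct err {
            const char *type;            /* Tag the caller queries, e.g. "libc". */
        };
        typedef struct err *err_t;

        /* Specific error type: wraps a libc error code plus a message. */
        struct libc_err {
            struct err base;             /* Must come first: enables downcasts. */
            int code;                    /* Original, non-mangled errno value. */
            char msg[128];               /* Informative message. */
        };

        #define no_error() ((err_t)NULL)

        static err_t
        libc_error(int code, const char *msg)
        {
            struct libc_err *e = malloc(sizeof(*e));
            if (e == NULL)
                abort();                 /* Simplification for this sketch. */
            e->base.type = "libc";
            e->code = code;
            snprintf(e->msg, sizeof(e->msg), "%s", msg);
            return &e->base;
        }

        /* Caller side: a plain pointer check in the common case; a query
         * of the type field plus a downcast when an error did happen. */
        static void
        handle(err_t err)
        {
            if (err == no_error())
                return;                  /* Cheap: just a pointer comparison. */
            if (strcmp(err->type, "libc") == 0) {
                const struct libc_err *le = (const struct libc_err *)err;
                fprintf(stderr, "%s (errno %d)\n", le->msg, le->code);
            }
            free(err);
        }

    The downcast works because each specific error embeds the generic header as its first member, which is what the type field described above makes safe to rely on.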

  • Rewriting parts of ATF in C

    I have spent part of the past week and this whole weekend working on a C-only library for ATF test programs. An extremely exhausting task. However, I wanted to do it because there is reluctance in NetBSD to write test programs in C++, which is understandable, and delaying the work would only have made things worse in the future. I ran into this situation myself some days ago when writing tests for very low-level stuff; using C++ there felt clunky, even though it was still possible, of course.

    I have had to reimplement lots of stuff that is given for free in any other, higher-level (not necessarily high-level) language. This includes, for example, a "class" to deal with dynamic strings, another one for dynamic linked lists and iterators, a way to propagate errors up to the point where they can be managed... and I have spent quite a bit of time debugging crashes due to memory management bugs, something that I rarely encountered in the C++ version.

    However, the new interface is, I believe, quite neat. This is not because of the language per se, but because the C++ interface has grown "incorrectly". It was the first code in the project and it shows. The C version has been written from the ground up with all the requirements known beforehand, so it is cleaner. This will surely help in cleaning up the C++ version later on, which cannot die anyway.

    The code for this interface is in a new branch, org.NetBSD.atf.src.c, and will hopefully make it into ATF 0.5; it still lacks a lot of features, hence why it is not on mainline yet. Ah, the joys of a distributed VCS: I have been able to develop this experiment locally and privately until it was decent enough to be published, and now it is online with all its history available!

    From now on, C++ use will be restricted to the ATF tools inside ATF itself, and to those users who want to use it in their projects. Test cases will be written using the C library, except for those that unit-test C++ code. [Continue reading]
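
    As an illustration of the kind of machinery this involves, here is a sketch of what a dynamic string "class" can look like in plain C. The names are made up for this example and do not match ATF's actual interface; note how every operation that may fail has to report it through its return value, and how cleanup is entirely manual:

        /* Illustrative dynamic string "class" in plain C; not ATF's code. */
        #include <stdlib.h>
        #include <string.h>

        struct dynstr {
            char *data;
            size_t len;
            size_t cap;
        };

        static int
        dynstr_init(struct dynstr *s)
        {
            s->cap = 16;
            s->len = 0;
            s->data = malloc(s->cap);
            if (s->data == NULL)
                return -1;               /* No exceptions: report and return. */
            s->data[0] = '\0';
            return 0;
        }

        static int
        dynstr_append(struct dynstr *s, const char *text)
        {
            size_t tlen = strlen(text);
            if (s->len + tlen + 1 > s->cap) {
                size_t ncap = s->cap;
                while (s->len + tlen + 1 > ncap)
                    ncap *= 2;           /* Grow geometrically. */
                char *ndata = realloc(s->data, ncap);
                if (ndata == NULL)
                    return -1;           /* Caller must propagate the error. */
                s->data = ndata;
                s->cap = ncap;
            }
            memcpy(s->data + s->len, text, tlen + 1);
            s->len += tlen;
            return 0;
        }

        static void
        dynstr_fini(struct dynstr *s)
        {
            free(s->data);               /* Manual cleanup: what RAII gives for free. */
        }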

  • C++: Little teaser about std::set

    This does not build. Can you guess why? Without testing it?

        std::set< int > numbers;
        for (int i = 0; i < 10; i++)
            numbers.insert(i);
        for (std::set< int >::iterator iter = numbers.begin();
             iter != numbers.end(); iter++) {
            int& i = *iter;
            i++;
        }

    Update (23:40): John gave a correct answer in the comments. [Continue reading]

  • BenQ RMA adventures, part 2

    My monitor is back from service! It was picked up on January 30th and has been returned today, after just 6 days (4 work days). Note that the technical service's office is located in Portugal, at the opposite side of the peninsula.

    And best of all, the monitor is fixed: the firmware was updated, so I can now disable the Overscan feature and get a perfect 1:1 pixel mapping on the HDMI input. Kudos to the BenQ RMA service for this quick and effective work! [Continue reading]

  • ATF 0.4 released

    I'm pleased to announce that the fourth release of ATF, 0.4, has just seen the light. The NetBSD source tree has also been updated to reflect this new release.

    For more details, please see the announcement. [Continue reading]

  • Home-made build farm

    I'm about to publish the 0.4 release of ATF. It has been delayed more than I wanted due to the difficulty of getting time-limited test cases working and due to my laziness in testing the final tarball on multiple operating systems (because I knew I'd have to fight portability problems).

    But finally, this weekend I have been setting up a rather automated build farm at home, which is composed so far of 13 systems. Yes, 13! But do I really use that many machines? Of course not! Ah, the joys of virtualization.

    What I have done is set up a virtual machine for each system I want to test using VMware Fusion. If possible, I configure both 32-bit and 64-bit versions of the same system, because different problems can arise in each. Each virtual machine has a builder user, and that user is configured to allow passwordless SSH logins by using a private key. It also has full sudo access to the machine, so that it can run root-only tests and can shut down the virtual machine. As for software, I only need a C++ compiler, the make tool and pkg-config.

    Then I have a script that, for a given virtual machine:

      1. Starts the virtual machine.
      2. Copies the distfile inside the virtual machine.
      3. Unpacks the distfile.
      4. Configures the sources.
      5. Builds the sources.
      6. Installs the results.
      7. Runs the build-time tests.
      8. Runs the install-time tests as a regular user.
      9. Runs the install-time tests as root.
      10. Powers down the virtual machine.

    (A rough sketch of such a driver is shown after this entry.) Ideally I should also run different combinations of compilers inside each system (for example, SUNpro and GCC in Solaris) and make tools (BSD make and GNU make). I'm also considering replacing some of the steps above with a simple make distcheck.

    I take a log of the whole process for later manual inspection. This way I can simply call this script for all the virtual machines I have and get the results of all the tests for all the platforms. I still need to do some manual testing on non-virtual machines such as my PS3 or Mac OS X, but these are minor cases (though yes, they should also be automated).

    Starting and stopping the virtual machines was the trickiest part, but in the end I got it working. Now I would like to adapt the code to work with other virtual machines (Parallels and qemu), clean it up and publish it somehow. Parts of it certainly belong inside ATF (such as the formatting of all logs into HTML for later publication on a web server), and I hope they will make it into the next release.

    For the curious, I currently have virtual machines for: Debian 4.0r2, Fedora 8, FreeBSD 6.3, NetBSD-current, openSUSE 10.2, Solaris Express Developer Edition 2007/09 and Ubuntu Server 7.10. All of them have 32-bit and 64-bit variants except for Solaris, which is only 64-bit. Setting all of them up manually was quite a tedious and boring process. And the testing process is slow: each system takes around 10 minutes to run through the whole "start, do stuff, stop" process, and SXDE almost doubles that. In total, more than 2 hours to do all the testing. Argh, an 8-way Mac Pro could be so sweet now :-) [Continue reading]
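
    Below is a rough sketch of such a per-VM driver, written in C here to match the other examples (the real thing would more likely be a shell script). Every command, path, host name and make target in it is an assumption about the setup described above, not the actual script:

        /* Rough, hypothetical sketch of the per-VM build driver. */
        #include <stdio.h>
        #include <stdlib.h>

        /* Runs a shell command and aborts the whole build on failure so
         * that the log clearly shows the first broken step. */
        static void
        run(const char *cmd)
        {
            printf(">>> %s\n", cmd);
            if (system(cmd) != 0) {
                fprintf(stderr, "FAILED: %s\n", cmd);
                exit(EXIT_FAILURE);
            }
        }

        int
        main(void)
        {
            run("vmrun start vms/netbsd-current.vmx nogui");
            run("scp atf-0.4.tar.gz builder@netbsd-current:");
            run("ssh builder@netbsd-current 'tar xzf atf-0.4.tar.gz'");
            run("ssh builder@netbsd-current 'cd atf-0.4 && ./configure'");
            run("ssh builder@netbsd-current 'cd atf-0.4 && make'");
            run("ssh builder@netbsd-current 'cd atf-0.4 && make check'");
            run("ssh builder@netbsd-current 'cd atf-0.4 && sudo make install'");
            /* Install-time tests: first as the builder user, then as
             * root through the passwordless sudo mentioned above. */
            run("ssh builder@netbsd-current 'cd atf-0.4 && make installcheck'");
            run("ssh builder@netbsd-current 'cd atf-0.4 && sudo make installcheck'");
            /* Powering down kills the SSH connection, so a non-zero
             * exit status is not an error here. */
            (void)system("ssh builder@netbsd-current 'sudo shutdown -p now'");
            return EXIT_SUCCESS;
        }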

  • unlink(2) can actually remove directories

    I had always thought that unlink(2) was meant to remove files only but, yesterday, SunOS (SXDE 200709) proved me wrong. I was sanity-checking the source tree for the imminent ATF 0.4 release under this platform, which is always scary, and the tests for the atf::fs::remove function were failing — only when run as root.

    The failure happened in the cleanup phase of the test case, in which ATF attempts to recursively remove the temporary work directory. When it attempted to remove one of the directories inside it, it failed with an EEXIST error, which in SunOS may mean that the directory is not empty. Strangely, when inspecting the left-over work tree, that directory was indeed empty, yet it could not be removed with rm -rf nor with rmdir.

    The manual page for unlink(2) finally gave me the clue to what was happening:

        If the path argument is a directory and the filesystem supports
        unlink() and unlinkat() on directories, the directory is unlinked
        from its parent with no cleanup being performed.  In UFS, the
        disconnected directory will be found the next time the filesystem
        is checked with fsck(1M).  The unlink() and unlinkat() functions
        will not fail simply because a directory is not empty.  The user
        with appropriate privileges can orphan a non-empty directory
        without generating an error message.

    The solution was easy: as my custom remove function is supposed to remove files only, I added a check before the call to unlink(2) to ensure that the path name does not point to a directory. Not the prettiest possibility (because it is subject to race conditions, even though none of this is critical), but it works. [Continue reading]
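
    A minimal sketch of the kind of check described above, assuming a POSIX lstat(2) call; the function name is made up for this example and is not ATF's actual code:

        #include <sys/stat.h>

        #include <errno.h>
        #include <unistd.h>

        /* Removes the file at 'path', refusing to touch directories even
         * on systems (such as SunOS with UFS) where root may unlink(2)
         * them.  Returns 0 on success, -1 on error with errno set.  As
         * noted in the post, the check is subject to a race condition
         * between the lstat(2) and the unlink(2) calls. */
        static int
        remove_file(const char *path)
        {
            struct stat sb;

            if (lstat(path, &sb) == -1)
                return -1;

            if (S_ISDIR(sb.st_mode)) {
                errno = EPERM;   /* Mimic the usual unlink(2) failure. */
                return -1;
            }

            return unlink(path);
        }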

  • Linux is just an implementation detail

    You can't imagine how happy I was today when I read the interview with KDE 4 developer Sebastian Kuegler. Question 6 asks him:

        6. Are there any misconceptions about KDE 4 you see regularly and would like to address?

    And around the middle of the answer, he says:

        Frankly, I don’t like the whole concept of the “Linux Desktop”. Linux is really just a kernel, and in this case very much a buzzword. Having to mention Linux (which is just a technical implementation detail of a desktop system) suggests that something is wrong. Should it matter to the user if he runs Linux or BSD on his machine? Not at all. It only matters because things just don’t work so well (mostly caused by driver problems, often a matter of ignorance on some vendor’s side).

    Thanks, Sebastian. I couldn't have said it better.

    What virtually all application developers are targeting —or should be targeting— is KDE or GNOME. These are the development platforms; i.e. the things that provide the libraries and services required for easy development and deployment. It doesn't make any sense to "write a graphical application for Linux", because Linux has no standard graphical interface (unless you mean the framebuffer!) and, again, Linux is just a kernel.

    I think I have already blogged about the problems of software redistribution under Linux... I will look for that post and, if it is not there, it is worth a future entry. [Continue reading]