• Hibernating a Mac

    Mac OS X has supported putting Macs to sleep for a very long time. This is a must-have feature for laptops, but it is also convenient for desktop machines. However, it wasn't until the transition to Intel-based Macs that it also gained support for hibernation, also called deep sleep. When entering hibernation, the system stores all memory contents to disk, as well as the status of the devices, and then powers off the machine completely. Later on, the on-disk copy is used to restore the machine to its previous state when it is powered on. This takes longer than resuming from sleep, but all your applications will be there as you left them.

    Now, every time you put your Intel Mac to sleep it also prepares itself to hibernate. This is why Intel Macs take longer than PowerPC-based ones to enter sleep mode. This way, if the machine's battery drains completely in the case of notebooks, or the machine is unplugged in the case of desktops, it will be able to quickly recover itself to a safe state and you won't lose data.

    As I mentioned yesterday, I've been running my MacBook Pro for a while without the battery, so I had an easy chance to experiment with hibernation. And it's marvelous. No flaws so far.

    The thing is that I always powered down my Mac at night. The reason is that keeping it asleep during the whole night consumed little battery, but enough to require a recharge the next morning to bring it back to 100%, so I didn't do it. But now I usually put it to hibernate; this way, on the next boot, I can continue working straight from where I left off and I don't have to restart any applications.

    Now... putting a Mac notebook into this mode is painful if you have to remove the battery every time to force it to enter hibernation, and unfortunately Mac OS X does not expose any "Hibernate" option. But... there is this sweet Dashboard widget called Deep Sleep that lets you do exactly that! No more cold boots :-) [Continue reading]
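    For the command-line inclined, the same behavior can apparently be tweaked through pmset's hibernatemode key. A sketch, assuming your Tiger build exposes this key (the values below are the commonly documented ones; check pmset(1) before changing anything):

        pmset -g | grep hibernatemode    # Show the current setting.
        sudo pmset -a hibernatemode 1    # 1: write the memory image to disk and power off (true hibernation).
        sudo pmset -a hibernatemode 3    # 3: regular sleep plus a safe-sleep image (the usual laptop default).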

  • SoC: Status report

    It has already been a week since the last SoC-related post, so I owe you a status report.

    Development has continued at a constant rate and, although I work a lot on the project, it may seem to advance slowly from an external point of view. The thing is that getting the ATF core components complete and right is a tough job! Just look at the current and incomplete TODO list to see what I mean.

    Some things worth noting:

    - The NetBSD cross-build tool-chain no longer requires a C++ compiler to build the atf-compile host tool. I wrote a simplified version in POSIX shell to be used as the host tool alone (not to be installed). This is also used by ATF's distfile to allow "cross-building" its own test programs.
    - Improved the cleanup procedure for the test cases' work directories by handling mount points inside them. This is done through a new tool called atf-cleanup.
    - Added a property that lets test cases specify whether they require root privileges; see the sketch below.
    - Many bug fixes, cleanups and new test cases; these are driving development right now.

    On the NetBSD front, there have also been several cosmetic improvements and bug fixes but, most importantly, I've converted the tmpfs test suite to ATF. This conversion is what has spotted many bugs and missing features in ATF's code. The TODO file has grown basically because of it.

    So, at the moment, both the regress/bin and regress/sys/fs/tmpfs trees in NetBSD have been converted to ATF. I think that's enough for now and that I should focus on adding the necessary features to ATF to improve these tests. One of them is support for a configuration file to let the user specify how certain tests should behave; e.g. how to become root or which specific file system to use for certain tests.

    I also have a partial implementation of a "fork" property for test cases that executes them in subprocesses. This way they will be able to mess all they want with the open file descriptors without disturbing the main test program. But to get there, I first need to clean up the reporting of test case results.

    On the other hand, I also started preparing manual pages for the user tools, as some of them should remain fairly stable at this point. [Continue reading]
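    As a hypothetical illustration of the root-privileges property in a shell-based test case (the property name below follows what ATF later shipped as require.user; the 2007 prototype may have spelled it differently):

        # Hypothetical header for a test case that must run as root;
        # the "require.user" property name is an assumption.
        mount_head() {
            atf_set "descr" "Checks mounting a file system inside the work directory"
            atf_set "require.user" "root"
        }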

  • Processor speed and desktop usage

    Back on July 7th, I disassembled my MacBook Pro to see if I could easily replace its hard disk with a faster one. I hadn't bought the disk yet because I first wanted to check that the whole process was easy. After a couple of problems, I managed to disassemble it, so I then ran to the local store to buy the new drive. But oh! They didn't have it. I decided not to reassemble the computer, as one of the disassembly steps was quite scary and I didn't want to repeat it unless really necessary.

    Stupid me. Three weeks have already passed and they have not yet received a single unit; I hate them at this point. And yes, I've spent all this time with the laptop partly disassembled, working with external peripherals and without the battery. Which is very annoying because, even though I didn't think I really needed mobility, it is important once you get used to it.

    Anyway. I have been using the machine as usual all these three weeks, and have kept working on my SoC project intensively. Lately, I noticed that my builds were running slower than I remembered: for example, I went away for two hours and when I came back a full NetBSD/i386 release build had not finished yet. That was strange, but I blamed the software: things keep growing continuously, and a change in, e.g., GCC could easily slow down everything.

    But yesterday, based on this thread, I installed CoreDuoTemp because I wanted to see how the processor's frequency throttling behaved. I panicked. The frequency meter was constantly at 1GHz (and the laptop carries a 2.16GHz processor) no matter what I did. Thinking that it'd be CoreDuoTemp's fault, I rebooted into Windows and installed CPU-Z. Same results. For a moment I was worried that the machine could be faulty or that I had broken it in the disassembly process. Fortunately, I later remembered another post that mentioned that MacBook Pros without a battery installed will run the processor at its minimum speed; it seems to be a firmware bug.

    Sure enough: I reassembled the machine today (with the old, painful, slow, stupid, ugly, etc. disk!), installed the battery, and all is fine again.

    Why am I mentioning all this, though? Well, the thing is... if it weren't for the software rebuilds, I wouldn't have noticed any slowdown in typical desktop usage tasks such as browsing the web, reading email, chatting, editing photos or watching videos. And the processor was running at half of its full power! In other words, it confirms to me that extra MHz are worthless for most people. It is "annoying" to see companies throwing away lots of perfectly capable desktop machines, replacing them with more powerful ones that won't be used to their full capacity. (OK, there are other reasons for the switch aside from the machine's speed.)

    Just some numbers. Building ATF inside a Parallels NetBSD/i386 virtual machine took "real 4m42.004s, user 1m20.466s, sys 3m16.839s" without the battery, and with it: "real 2m9.984s, user 0m22.725s, sys 1m39.053s". Here, the speed difference is noticeable :-)

    I will blog again when I have the replacement disk, and I will possibly post some pictures of the whole procedure. [Continue reading]

  • Parallels Desktop and VMware Fusion

    Back in February, I bought a copy of Parallels Desktop 2 and have been a very happy user of it since then. However, when Parallels 3 appeared, I hesitated to pre-order it (even at a very low price) and I did well: after it was released, I tried it on my MacBook Pro and its 3D support is useless for me. I could not play either Half-Life 2 or Doom 3 at acceptable speeds, with the former being much worse than the latter in this regard.

    Now, I'm evaluating VMware Fusion RC1, and I'm almost convinced to pre-order it. This product is very similar to Parallels and in fact several of its features seem "inspired" by it, such as Unity (Coherence in Parallels speak). But it has some important features that Parallels cannot currently match, namely: support for 64-bit guests, support for 2 virtual CPUs in guests and support for network-booting the virtual machine. All of these are cool from a development point of view. The first two allow one to run some more versions of specific operating systems, such as NetBSD/amd64, as well as enabling SMP support in them. The last one makes it easy to boot development kernels without modifying any virtual disk (haven't tried this yet, though).

    All is not good, though. Fusion is also supposed to have experimental 3D support inside the guest machines (up to DirectX 8.1). However, when trying to launch Half-Life 2 inside a Windows XP SP2 virtual machine, Windows crashed with a BSOD. At least I could launch it in Parallels, although it was simply unplayable. But as neither of the two products makes for a good gaming experience, I personally don't care about this feature.

    Let's conclude with some numbers about the speed of each product. I installed Debian GNU/Linux lenny under Parallels and Fusion and built monotone-0.35 from scratch under them. The virtual machines were configured with 768MB of RAM out of a total of 2GB, and the machine was idle aside from the build jobs. Obviously, in the case of Parallels I could only run the test with the i386 port, but in Fusion I used both the i386 and amd64 ports with 1 and 2 virtual CPUs. I also ran the same tests on the native machine using Mac OS X 10.4.10. The timings only include the make command, not ./configure:

    - Parallels, 32-bit, 1 virtual CPU, 'make': real 17m33.048s, user 14m24.342s, sys 3m4.080s
    - Fusion, 32-bit, 1 virtual CPU, 'make': real 16m35.507s, user 14m57.016s, sys 1m29.134s
    - Fusion, 32-bit, 2 virtual CPUs, 'make -j2': real 10m0.341s, user 17m23.541s, sys 2m12.604s
    - Fusion, 64-bit, 2 virtual CPUs, 'make -j2': real 10m24.617s, user 18m26.985s, sys 1m26.133s
    - Native, Mac OS X, 'make': real 12m50.640s, user 11m12.997s, sys 1m20.344s
    - Native, Mac OS X, 'make -j2': real 7m3.536s, user 11m22.875s, sys 1m26.366s

    See this thread for other opinions. [Continue reading]
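    If you want to reproduce such timings, a sketch of the measurement (the post does not state which configure flags were used in the guests, so none are assumed here):

        cd monotone-0.35
        ./configure          # Excluded from the timings.
        time make            # Single-CPU runs.
        time make -j2        # 2-CPU runs (after cleaning the tree first).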

  • Death star!

    A cool photo I found today (see the original post linked below).

    I think I can say: don't be scared! That seems to be a power adapter so, supposedly, all those plugs are switched off when one of them is connected. If they weren't... this would not pass any quality assurance control... So, it is a pretty nice product :-)

    See the original post for more details.

    Edit (23rd July): Corrected the (invented) title. [Continue reading]

  • SoC: ATF self-testing

    ATF is a program and, as happens with any application, it must be (automatically) tested to ensure it works according to its specifications. But as you already know, ATF is a testing framework, so... is it possible to automatically test it? Can it test itself? Should it do so? The thing is: it can and it should, but things are not so simple.

    ATF can test itself because it is possible to define test programs through ATF that check the ATF tools and libraries. ATF should test itself because the resulting test suite will be a great source of example code and because its execution will, on its own, be a good stress test for the framework. See the tests/atf directory to check what I mean; especially, the unit tests for the fs module, which I've just committed, are quite nice :-) (For the record: there are currently 14 test programs in that directory, which account for a total of 60 test cases.)

    However, ATF should not be tested exclusively by means of itself. If it were, any failure (even the most trivial one) in ATF's code could result in false positives or false negatives during the execution of the test suite, leading to wrong results that are hard to discover and diagnose. Imagine, for example, that a subtle bug caused test failures to be reported as passes. All tests could start to succeed immediately, nobody would easily notice, and further modifications would surely introduce errors.

    This is why a bootstrapping test suite is required: one that ensures that the most basic functionality of ATF works as expected, but which does not use ATF to run itself. This additional test suite is already present in the source tree and is written using GNU Autotest, given that I'm using the GNU Autotools as the build system. Check the tests/bootstrap directory to see what all this is about.

    ATF's self-testing is, probably, the hardest thing I've encountered in this project so far. It is quite tricky and complex to get right, but it's cool! Despite being hard, having a complete test suite for ATF is a must, so it cannot be left aside. Would you trust a testing framework if you could not quickly check that it worked as advertised? I couldn't ;-) [Continue reading]
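    To give an idea of the shape of such a bootstrap check, here is a minimal GNU Autotest sketch; the test group and the expected exit status are assumptions for illustration, not taken from the real tests/bootstrap suite:

        dnl Minimal Autotest sketch. The checked invocation and its expected
        dnl exit code (1) are hypothetical, not the real bootstrap tests.
        AT_INIT([bootstrap])
        AT_SETUP([atf-compile: fails cleanly when given no arguments])
        AT_CHECK([atf-compile], [1], [ignore], [ignore])
        AT_CLEANUP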

  • Daggy fixes (in Monotone)

    If you inspect the history of ATF's source code, you'll see a lot of merges. But why is that, if I'm the only developer working on the project? Shouldn't the revision history be linear?

    Well, the thing is it needn't and it shouldn't; the subtle difference is important here :-) It needn't be linear because Monotone is a VCS that stores history in a DAG, so it is completely natural to have a non-linear history. In fact, distributed development requires such a model if you want to preserve the original history (instead of stacking changes on top of revisions different from the original ones).

    On the other hand, it shouldn't be linear because there are better ways to organize the history. As the DaggyFixes page in the Monotone Wiki says:

        All software has bugs, and not all changes that you commit to a source tree are entirely good. Therefore, some commits can be considered "development" (new features), and others can be considered "bugfixes" (redoing or sometimes undoing previous changes). It can often be advantageous to separate the two: it is common practice to try and avoid mixing new code and bugfixes together in the same commit, often as a matter of project policy. This is because the fix can be important on its own, such as for applying critical bugfixes to stable releases without carrying along other unrelated changes.

    The key idea here is that you should group bug fixes alongside the original change that introduced them, when it is clear which commit that was and you can easily locate it. And if you do that, you end up with a non-linear history that requires a merge for each bug fix to resolve the divergences inside a single branch (a command-level sketch follows below).

    I certainly recommend that you read the DaggyFixes page. One more reason to switch to Monotone (or any other DAG-based VCS, of course)? ;-) Oh, I now notice I once blogged about this same idea, but that page is far clearer than my explanation.

    That is why you'll notice lots of merges in the ATF source tree: I've started applying this methodology to see how well it behaves and I find it very interesting so far. I'd now hate switching to CVS and losing all the history of the project (because attempting to convert it to CVS's model could be painful), even if that history is not that interesting. [Continue reading]
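    As a sketch of a daggy fix with monotone's command-line client (the revision ID and message are placeholders):

        # Go back to the revision that introduced the bug.
        mtn update --revision 1234abcd
        # ... edit the files to fix the bug ...
        # Committing here creates a second head on the branch, diverging
        # from the current development head.
        mtn commit --message "Fix bug introduced by 1234abcd"
        # Merge the two heads so the fix joins current development.
        mtn merge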

  • Recovering two old Macs

    Wow, it has already been three years since a friend and I found a couple of old Macintoshes in a trash container(1). Each of us picked one and, maybe a year ago or so, I gave mine to him as I had no space at home to keep it. Given that he did not use them and that I enjoy playing with old hardware, I have now exchanged those two machines for an old Pentium 3 I had lying around :-) The plan is to install NetBSD-current on at least one of them and some other system (or NetBSD version) on the other one, to let me ensure ATF is really portable to bizarre hardware (running sane systems, though).

    The machines are these:

    - A Performa 475: Motorola 68LC040, 4MB of RAM, 250MB SCSI hard disk, no CD-ROM, Ethernet card.
    - A Performa 630: Motorola 68LC040, 40MB of RAM, 500-something-MB IDE hard disk (I will replace it with something bigger), CD-ROM, Ethernet card.

    I originally kept the Performa 630 and already played with it when we found the machines. Among other things, I replaced the PRAM battery with a home-grown solution, added support for changing the console colors in NetBSD (because the black-on-white default on NetBSD/mac68k is annoying, to say the least) and imported the softfloat support for this platform.

    Then, the Performa 475's turn came last week. When I tried to boot it, it failed miserably. I could hear the typical Mac boot-time chime, but after that the screen stayed black and the machine was completely unresponsive. After Googling a bit, I found that the black screen could be caused by the dead PRAM battery, which meant the machine could still work; the thing is, I could not hear the hard disk at all, and therefore I was reluctant to put a new battery in it. Anyway, I finally bought the battery (very expensive, around 7€!), put it in and the machine booted!

    Once it was up, I noticed that there was a huge amount of software installed: Microsoft Office, LaTeX tools, Internet utilities (including Netscape Navigator), etc. And then, when checking what hardware was in the machine, I was really, really surprised. All these programs were working with only 250MB of hard disk space and 4MB of RAM! Software bloat nowadays? Maybe...

    Well, if I want this second machine to be usable, I'll have to find some more RAM for it. But afterwards I hope it'll be able to run another version of NetBSD or maybe a Linux system.

    (1) That also reminds me that this blog is three years old too! [Continue reading]

  • SoC: Web site for ATF

    While waiting for a NetBSD release build to finish, I've prepared the web site for ATF. It currently lacks information in a lot of areas, but the most important ones for now (the RSS feed for news and the Repository page) are quite complete.

    Hope you like it! Comments welcome, of course :-) [Continue reading]

  • SoC: Converting NetBSD 'regress' tests

    I've finally reached a point where I can start converting some of the current regression tests in the NetBSD tree to the new ATF system. To prove this, I've migrated all the tests that currently live in regress/bin to the new framework; they now live in /usr/tests/util/. This has not been a trivial task (and it is not completely done yet, as there are still some rough edges) but I'm quite happy with the results. They show me that I'm on the right track :-) and, more importantly, they show outsiders how things are supposed to work.

    If you want more information on this specific change, you can look at the revision that implements it or simply inspect the corresponding patch file. By the way, some of the tests already fail! That's because they were not run often enough in the past, something that ATF is supposed to resolve.

    While waiting for a NetBSD release build to complete, I have started working on a real web site for ATF. I don't know if I'll keep working on it now because it's a tough job and there is still a lot of coding to do. Really! But, on the other hand, having a nice project page is very good marketing. [Continue reading]

  • Book: Producing Open Source Software

    This year, Google sent all the Summer of Code students the book Producing Open Source Software: How to Run a Successful Free Software Project by Karl Fogel (ISBN 0-596-00759-0) as a welcome present.

    I've just finished reading it and I can say that it was a very nice read. The book is very easy to follow and very complete: it covers areas such as the project's start-up, how to set things up to promote it, how to behave on mailing lists, how to prepare releases, how to deal with volunteers or with paid developers, etc. Everything you need to drive your project correctly and without making many enemies.

    While many of the things stated in the book are obvious to anyone who has been in the open source world for a while (and has already started a project of their own or contributed to an existing one), it is still a worthwhile read. I wish all the people involved in NetBSD (some more than others) would read it and apply the suggestions given there. We'd certainly improve in many key areas and reduce pointless (or rather, unpleasant) discussions!

    Oh, and by the way: you can read the book online at its web page, as it is licensed under a Creative Commons Attribution-ShareAlike license. Kudos to Karl Fogel. [Continue reading]

  • SoC: Code is public now

    Just in time for the mid-term evaluation (well, with one day of delay), I've made ATF's source code public. This is possible thanks to the public and free monotone server run by Timothy Brownawell. It's nice to stay away from CVS ;-)

    See the How to get it section on the ATF page for more details on how to download the code from that server and how to trust it (it may take an hour or two for the section to appear). You can also go straight to the source code browser. If, by any chance, you decide to download the code, be sure to read the README files, as they contain very important information.

    And... don't start nitpicking yet! I haven't had a chance to clean up the code, and some parts of it really suck. Cleaning it up is the next thing I'll be doing, and I have already started with the shell library :-) [Continue reading]
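    For reference, fetching a project from a public monotone server usually boils down to the following; the server address and branch name here are placeholders, so take the real values from the How to get it section:

        # Placeholder server and branch; substitute the real values.
        mtn db init --db=atf.mtn
        mtn pull --db=atf.mtn monotone.example.org "com.example.atf*"
        mtn checkout --db=atf.mtn --branch=com.example.atf atf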

  • Degree completed

    After five years of intensive work, I've finally completed my degree in Informatics Engineering (I think Computer Science is a valid synonym) at the FIB faculty. This concluded today after I defended my PFC, the final project of the degree. So you can now call me an engineer :-) Yay!

    In other words: I'm free until October, when I'll start a Master's in Computer Architecture, Networks and Systems (CANS). Time to work intensively on SoC! [Continue reading]

  • SoC: Short-term planning

    SoC 2007's mid-term evaluation is around the corner: I have to present some code on June 9th. In fact, it'd already be public if we used a decent VCS instead of CVS, but for now I'll keep the sources in a local monotone database. We'll see how they'll be made public later on.

    Summarizing what has been done so far: I've already got a working prototype of the core atf functionality. I have to confess that the code is still very ugly (and with that I mean you really don't want to see it!) and that it is incomplete in many areas, but it is good enough as a proof of concept and as a base for further incremental improvement.

    What it currently allows me to do is:

    - Write test programs (a collection of test cases) in C. In reality it is C++, as we already saw, but I've added several macros to hide the internals of the library and simplify the definition of test cases, so the test writer will not need to know that he's using C++ under the hood. (This is also meant to mimic the shell-based interface as much as possible.)
    - Write test programs in POSIX shell. Similar to the above, but for tests written as shell scripts. (Which I think will be the majority; see the sketch below.)
    - Define a "tree of tests" in the file system and recursively run them all, collecting the results in a single log. This can be done without the source tree or the build tools (in particular make), and by the end user.
    - Test atf by means of itself: I have already written many such tests. More on this in tomorrow's post.

    What I'm planning to do now, before the mid-term evaluation deadline, is to integrate the current code into NetBSD's build tree (I'm not talking about adding it to the official CVS yet, though) to show how all these ideas are applicable to NetBSD testing, and to ensure everything works with build.sh and cross-compilation.

    Once this is done, which I hope shouldn't take very long, I will start polishing the current core implementation of atf. This means rewriting several parts of the code (especially to make it more error-safe), adding more tests, adding manual pages for the tools and the interfaces, etc. This is something I'm willing to do, even though it'll be a hard and long job. [Continue reading]
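    To illustrate the shell-based interface, here is a minimal sketch of a test program; the function names and the atf_check call follow the interface ATF eventually shipped, so the 2007 prototype's spelling may have differed:

        # Minimal sketch of a shell test program; names follow the
        # released ATF interface and may differ from the 2007 prototype.
        atf_test_case echo_output
        echo_output_head() {
            atf_set "descr" "Checks that echo prints its arguments"
        }
        echo_output_body() {
            atf_check -s exit:0 -o inline:"hello\n" -e empty echo hello
        }
        atf_init_test_cases() {
            atf_add_test_case echo_output
        }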

  • New Processor preferences panel in Mac OS X

    Some days ago I updated my system to the latest version of Mac OS X Tiger, 10.4.10. It wasn't until today that I realized that there is a cool new preferences panel called Processor. It gives information about each processor in the machine and also lets you disable any processor you want.

    There is also another "hidden" window, accessible from the menu bar control once you have enabled it, called the Processor palette. I already monitor processor activity through the Activity Monitor's dock icon, which is much more compact, but this one is nice :-)

    Edit (16:22): Rui Paulo writes in a comment that this is available if you install Xcode. It turns out I have had Xcode installed for ages, but my installation did not contain the CHUD tools. I recently added them to the system, which must be the reason behind this new item in the system preferences. So... this is not related to the 10.4.10 update I mentioned at first. [Continue reading]