• Multiboot support for review

    During the past few days I've continued to work on adding Multiboot support to NetBSD. It has been a hard task due to the lack of documentation — I had to reverse-engineer all the i386 boot code — but also a very interesting one: I've had to deal with low-level code (recovering some of my ASM skills) and learn some details about ELF (take a look at the copy_syms function in multiboot.c and you'll see why).

    You can now review the preliminary patch and read the public request for review. [Continue reading]
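
    In case you're curious about those ELF details: a Multiboot-compliant boot loader hands the kernel the location of the kernel's own ELF section header table, and the kernel must walk that table to find the symbol and string tables before the memory holding them is reused. Below is a minimal sketch of that lookup in C — it is not the actual copy_syms code, and mb_elf_shdr_info is just a hypothetical struct covering the relevant slice of the Multiboot information structure:

        #include <elf.h>     /* Elf32_Shdr, SHT_SYMTAB; <sys/exec_elf.h> on NetBSD. */
        #include <stddef.h>
        #include <stdint.h>

        /* ELF section header table info from the Multiboot information
         * structure (valid when flags bit 5 is set). */
        struct mb_elf_shdr_info {
                uint32_t num;   /* Number of section headers. */
                uint32_t size;  /* Size of each section header entry. */
                uint32_t addr;  /* Address of the section header table. */
                uint32_t shndx; /* Index of the section name string table. */
        };

        /* Locate the symbol table and the string table it refers to. */
        static void
        find_symtab(const struct mb_elf_shdr_info *info,
            const Elf32_Shdr **symtab, const Elf32_Shdr **strtab)
        {
                const char *base = (const char *)(uintptr_t)info->addr;

                *symtab = *strtab = NULL;
                for (uint32_t i = 0; i < info->num; i++) {
                        const Elf32_Shdr *sh =
                            (const Elf32_Shdr *)(base + i * info->size);
                        if (sh->sh_type == SHT_SYMTAB) {
                                *symtab = sh;
                                /* sh_link is the index of the matching
                                 * string table section. */
                                *strtab = (const Elf32_Shdr *)
                                    (base + sh->sh_link * info->size);
                                break;
                        }
                }
        }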

  • File systems documentation uploaded

    The file systems documentation I described yesterday has been uploaded to NetBSD's website, alongside all the other documentation. You can read the official announcement or go straight to the book! You'll notice that it is now prettier than the version posted yesterday because it properly uses NetBSD's stylesheet. [Continue reading]

  • File systems documentation for review

    My Summer of Code project, tmpfs, promised that I would write documentation describing how file systems work in NetBSD (and, frankly, I think this point had a lot to do with my proposal being picked up). I wrote that documentation during August but failed to make it public — my mentor and I first thought about turning it into an article (which would have delayed it anyway), but it soon became apparent that that structure was inappropriate.

    Anyway, I promised myself to deal with the documentation whenever I had enough free time to rewrite most of it and restructure its sections into something decent. And guess what: that is what I started to do some days (a week?) ago. So... here is the long-promised documentation!

    Be aware that this is still just for review. The documentation will end up either being part of The NetBSD Guide or becoming a "design and implementation" guide on its own.

    Also note that there is still much work to do; many sections are not yet written. In fact, I started with the general ideas needed to get into file system development because, once you know the basics, learning the rest is relatively easy by looking at existing manual pages and code. Of course, the document should eventually be completed, especially to avoid having to reverse-engineer code.

    I'll seize this post to state something: the lack of documentation is a serious problem in most open source projects, especially those that have some kind of extensibility. Many developers don't like to write documentation and, what is worse, they think it's useless — that other developers will happily read the existing code to understand how things work. Wrong! If you want to extend a program, you want its interface to be clearly defined and documented; if there is no need to look at the code at all (except for carefully written examples), even better. For what it's worth, reading the program's code can be dangerous because you may get things wrong and end up relying on implementation details. So, write documentation, even if it is tough and difficult (I know, it can be very difficult, but don't you like challenges? ;-). [Continue reading]

  • Buying a trackball: the odyssey

    Yesterday morning, I sold my good old Logitech Marble Mouse trackball. I had the first version, which came with a PS/2 connector and only two buttons. I wanted to replace it with a new one, mostly because I need a USB-enabled model, but also because I want a scrolling wheel (to me it's extremely useful).

    Since then, I've gone to nearly all the local PC shops (if you live in Barcelona, you know what this means: going to Ronda de Sant Antoni and its surroundings) asking for trackballs, but none of them carries a single one. More specifically, I've been looking for the same model I had (I loved it, even when playing FPS games) but in its new version, which comes with a USB connection and two additional buttons that simulate the scrolling wheel. Are trackballs obsolete or what?

    I think I'll order it directly from Logitech's online shop, because I refuse to buy a regular mouse. Nice as they are, I don't have much space to put one. However, Apple's Mighty Mouse makes me hesitate... basically because of its four-direction scrolling ;-)

    Oh, and I also want to avoid wireless ones (well, this one might be nice, as it'd be like a remote control... but it's rather expensive). Why would I want it to be wireless, wasting batteries, when I can simply connect it to my keyboard's USB hub? :-) [Continue reading]

  • Desktop screenshot

    Now that I know how to post images here, I think I'll be posting screenshots once in a while ;-) Here comes my current desktop (the iBook attached to the 20" flat panel): [Continue reading]

  • How not to close a bug

    A couple of weeks ago, I received a notification from GNOME's Bugzilla telling me that one of my bug reports had been closed, marked as incomplete. That bothered me a lot, because it was closed without applying any fix to the code — after all, NetBSD is dead, right?

    What is worse: the maintainers acknowledged that the bug report was correct, so there is a real bug in glib. So why the hell was it closed without intervention? Fixing it is a matter of 5 minutes or less for an experienced glib developer.

    OK, fine, you could try to blame me: I failed to provide a patch in a timely fashion. But that's not a good reason to close a bug report without a fix. At the very least, they could have tried to contact me or the other developer who spoke up — had they done so, I'd have written the patch, because I had simply forgotten about the report.

    So... do not close a bug if it is still there and you are aware of it! [Continue reading]

  • GNOME 2.12.2 in pkgsrc

    It has taken a while but, finally, GNOME 2.12.2 is in pkgsrc. As always, this new version comes with multiple bug fixes and some miscellaneous new stuff. Enjoy it. [Continue reading]

  • NetBSD slides for PartyZIP@ 2005 available

    The slides I used for the NetBSD conference at PartyZIP@ 2005 are now available on the NetBSD website. They should have been uploaded to PartyZIP@'s site as well, along with some recordings of the talks, but that hasn't been done yet — and the event took place in July. Hope you find them of some interest ;-) Note that they are in Spanish. [Continue reading]

  • GStreamer 0.10 in pkgsrc

    I've uploaded GStreamer 0.10, the base plugins set and the good plugins set to pkgsrc. This new major version is parallel-installable with the prior 0.8 series, which is why all the old packages have been renamed to make things clearer.

    Fortunately, this version was easier to package because it does not need a "plugin registry" as the previous one did (i.e., no need for install-time scripts). Even so, the split of the plugins into different distfiles (base, good, bad, et al.) makes it a bit more complex.

    Let's now wait until someone packages Beep Media Player's development version, and we'll be able to enjoy its features (assuming it works fine, of course...).

    I have to confess that I'm losing interest in maintaining the GNOME-related packages in pkgsrc. They take up too much of my time, time that I'd rather spend on other NetBSD-related tasks. This is why this update has been delayed so much... [Continue reading]

  • Google Talk talks to other servers

    Yes, it is true! (Although I haven't seen any announcement yet; thanks to a friend for telling me.) Google's messaging service, Google Talk, can now communicate with other Jabber servers such as the popular jabber.org. It has always been a Jabber-based system, but its server didn't allow communication with third-party servers before.

    I think I'll migrate my account if this one proves to be stable; I'll be trying it for some days first to make sure :-) [Continue reading]

  • Some pictures of my rig

    I've just decided to learn how to publish images in Blogger. In fact, it's damn easy, but for some reason I thought it was not — that is, I believed I had to register with some other service, and I was too lazy to do it... Here come some pictures of my rig:

    The picture above shows my desktop. You can see the iBook G4 working in clamshell mode (well, sort of). It's connected to the Apple keyboard, to the mini mouse (I ought to replace it), to the BenQ FP202W flat panel and to the stereo system. If you notice the free USB cable next to the laptop... that's my manual switch for the keyboard and mouse! ;-) On the right side, below the table, is my PC, the Athlon XP 2600+ machine. Oh, and there are also the Palm m515, this month's DDJ issue and an 802.11 book I got recently.

    This other picture shows the machine I used during tmpfs development to run all the necessary tests. It's a Pentium II at 233MHz with 128MB of RAM and is connected to the PC with a serial line (plus to the Ethernet, of course). (I've been playing with qemu recently and I might get rid of it.) Below the speaker (on the floor) is my old Viewsonic monitor that I'm trying to sell.

    Finally, this other picture shows the Apple Macintosh Performa 630 I found a year and a half ago (no, the monitor is not currently connected). It's waiting for a reinstall of NetBSD/mac68k. Ah, and there is also a VHS deck on the left side of the table, which is connected to the PC.

    Next time I shall publish pictures of the SoC and Monotone T-shirts ;-) [Continue reading]

  • Routing protocols

    IP networks communicate with each other using L3 devices named routers. A router has a table that tells it how to reach a given network; i.e., whether it has to go through another router or whether it is directly attached to that network. Obviously, these tables need to be filled with accurate information, which can be done in two ways:

    - Static routing: The network administrator manually fills in the required data in each router and/or host. This is tedious to do but is enough on (very) little networks. For bigger networks, it does not scale.

    - Dynamic routing: The routing tables are automatically filled in by routing protocols, which are executed by the routers themselves. These are what concern us in this post, which aims to provide a little overview of their classification.

    A routing protocol defines a set of rules and messages — in the typical sense of a network protocol — that allow routers to communicate with each other and make them adapt automatically to topology changes in the network. For example, when a new network is added, all routers are notified so that they add an appropriate entry to their routing tables and can reach it; imagine something similar when a network becomes unavailable.

    We can classify these routing protocols in two major groups:

    - Distance vector protocols: As the name implies, these protocols calculate a distance between two given routers. The metric used depends on the protocol; for example, RIP counts the number of hops between nodes. (This contrasts with OSPF, which uses a per-link cost that might be related to, e.g., the link's speed. Please note that OSPF is not a distance vector protocol; I'm mentioning it here to show that this difference in metrics can cause problems if you reinject routes between networks that use different protocols.) Another thing that differentiates these protocols is that the routers periodically send status messages to their neighbours; in some special cases, they may send a message as the result of an event, without waiting for the specified interval to pass. See the sketch below for a toy update step.

    - Link state protocols: These protocols monitor the status of each link attached to the router and send messages triggered by these events, flooding the whole network. Another difference is that each router keeps a database that represents the whole network; from this database, it is then able to generate the routing table using a shortest-path algorithm (typically Dijkstra's) to determine the best path. These include OSPF and IS-IS.

    At last, we can also classify these protocols in two more groups:

    - Interior routing protocols: These are used inside an autonomous system (AS) to maintain the tables of its internal routers. These protocols include OSPF and RIP.

    - Exterior routing protocols: Contrary to the previous ones, these are used to communicate routing rules between different ASs. And why are they not the same as the previous ones? Because they use very different metrics to construct the routing table: they rely on contracts between the ASs to decide where to send packets, something that cannot be taken into account with interior protocols. We can include BGP (strictly speaking, a path vector protocol) in this group.

    Edit (23:47): As Pavel Cahyna kindly points out, OSPF is not a distance vector protocol. The error slipped in because I wanted to make a point about metrics and, in the confusion, put it in the wrong group. The paragraph has been reworded. [Continue reading]
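
    To make the distance vector idea concrete, here is the toy, RIP-flavoured update step mentioned above, written in C. Everything in it — the names, the table size, the fixed link cost — is invented for the illustration; a real implementation would also need timeouts, split horizon and the like:

        #include <stdbool.h>

        #define NNETS       8   /* Destination networks we know about. */
        #define UNREACHABLE 16  /* RIP treats 16 hops as infinity. */

        struct route {
                int metric;     /* Hops to the destination network. */
                int next_hop;   /* Neighbor to forward through. */
        };

        /* Merge a neighbor's advertised distance vector into our table.
         * Returns true if anything changed, so we know to re-advertise. */
        static bool
        dv_update(struct route table[NNETS], int neighbor,
            const int advertised[NNETS], int link_cost)
        {
                bool changed = false;

                for (int net = 0; net < NNETS; net++) {
                        int metric = advertised[net] + link_cost;
                        if (metric > UNREACHABLE)
                                metric = UNREACHABLE;
                        /* Adopt a strictly better path, and always accept
                         * updates from the neighbor we already route
                         * through (its cost to the destination changed). */
                        if (metric < table[net].metric ||
                            table[net].next_hop == neighbor) {
                                if (table[net].metric != metric)
                                        changed = true;
                                table[net].metric = metric;
                                table[net].next_hop = neighbor;
                        }
                }
                return changed;
        }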

  • Taking backups of your data

    I've been playing with Apple's Backup utility (in trial mode) for a couple of days, and it seems ideal for people like me: those who know that backups must be done but never spend the time to do them.

    After opening it, you get a dialog that lets you configure a backup plan. A plan specifies the list of items to back up, the backup interval and the destination for the copy (be it a remote server, your iDisk, a local volume or a CD/DVD). Setting up the items to copy is trivial because the program offers a set of predefined plan templates: copy personal settings, iLife data, purchased music or the whole home directory. Of course, you can configure these in a more fine-grained fashion, specifying whether your keychain, your bookmarks, your calendars, your photos, etc. should be copied or not.

    A few clicks later, the plan is created and the program automatically takes care of issuing the backups at the predefined intervals. Related to this, here is one thing that is very useful: if the computer was off at the exact time a backup should have run, it does the copy when the machine comes back on. I wish cron(8) could do something similar, because desktop PCs are not up the whole day (I know there is anacron, but it'd be nice if the regular utility supported this; the basic idea is sketched below).

    Unfortunately, the Backup utility is tied to .Mac. If you do not have a full account, you are limited to 100MB per copy. And while the iDisk and this backup facility are very nice, I don't find them worth the money.

    What I'm thinking now is that a similar free utility could be very nice. It'd perfectly fit GNOME in the name of usability! It'd be even nicer if it could run in the background, detached from any graphical interface (so that you could set it up and forget about it on a dedicated server). Hmm... looks like an interesting project; pity I don't have time for it anytime soon (too much long-overdue stuff to do). I wonder if something like this already exists... [Continue reading]
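
    For the record, the "catch up on missed runs" behavior is not hard to approximate. Here is a minimal sketch in C that compares the age of a timestamp file against the desired interval and runs the job if it is overdue — the stamp path and the run-backup command are made up for the example:

        #include <sys/types.h>
        #include <sys/stat.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define STAMP    "/var/db/backup.stamp"  /* Hypothetical path. */
        #define INTERVAL (24 * 60 * 60)          /* One day, in seconds. */

        int
        main(void)
        {
                struct stat st;
                FILE *f;

                /* Run the job if the stamp is missing or older than
                 * INTERVAL; this covers the case in which the machine
                 * was off at the scheduled time. */
                if (stat(STAMP, &st) == 0 &&
                    time(NULL) - st.st_mtime < INTERVAL)
                        return EXIT_SUCCESS;  /* Nothing to do yet. */

                if (system("/usr/local/bin/run-backup") != 0)
                        return EXIT_FAILURE;

                /* Touch the stamp to record the successful run. */
                if ((f = fopen(STAMP, "w")) != NULL)
                        fclose(f);
                return EXIT_SUCCESS;
        }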

  • Apple unleashes Intel-based systems

    You probably know it by now: Apple unleashed its first Intel-based computers yesterday during Steve Jobs' keynote. The PowerBook has been replaced by a completely new model, named MacBook Pro. It features a Yonah dual-core processor plus a lot of other hardware updates and additions (some removals, too!); it won't be ready until February, but it can already be ordered. As regards desktop machines, the iMac has been updated; contrary to the laptop, this machine has received fewer updates, which include the processor (also dual-core) and the graphics card. Otherwise, it's similar to the PPC model; externally, both seem to be the same.

    I've been looking forward to this announcement for a while — this is why I'm posting about it here. I had expected (as many other people did) that the first models to be converted would be the iBooks. Frankly, I'm glad they didn't do it, because I won't feel bad for having bought this G4 just a month and a half ago. Don't get me wrong; it's working great ;-)

    Other news includes the update of Mac OS X to 10.4.4 (which I'm already running :-) and the release of iLife '06 and iWork '06.

    I hope that NetBSD gets ported to these new machines and that it works as well as the i386 port. If that's the case, I'll seriously consider a Mac as my next desktop machine (which is still some years away... and who knows what will happen in that time frame).

    Edit (17:07): I'm now wondering why on earth many people seem to assume that Windows will run on these machines... All Apple has done is change the microprocessor (OK, plus other things), but this does not mean the result becomes an IBM-compatible system — just consider Amiga-based and Mac-based 68k systems. I'm sure somebody will get it working, but it won't necessarily be easy.

    Edit (20:14, 12th January): I stand corrected. These Intel-based machines come with regular hardware as found on PC systems; i.e., they use standard Intel microprocessors and chipsets. The major difference is that they use EFI (the Extensible Firmware Interface) instead of the obsolete BIOS. It looks like Windows Vista will run on them directly (and Apple won't forbid it :-). As regards NetBSD, support for EFI will be needed, as well as (possibly) some new drivers. [Continue reading]

  • Applications vs. windows

    One of the things that I've come to love about Mac OS X is the way it handles active applications. Let's first see what other systems do, so we can talk about the advantages of this approach.

    All other desktop environments — strictly speaking, the ones I've used, which include GNOME, KDE and Windows — seem to treat a single window as the most basic object. For example: the task manager shows all visible windows; the key bindings switch between individual windows; the menu bar belongs to a single window, and an application can have multiple menu bars; etc.

    If you think about it, this behavior doesn't make much sense, and it may be one of the reasons why Windows and KDE offer MDI interfaces. (Remember that an MDI interface is generally a big window that includes several other windows within it; the application is then represented by a single window, thus removing auxiliary windows — e.g., a tool palette — from the task switcher, etc.) Unfortunately, other systems such as GTK/GNOME do not have this kind of interface, and an application's windows are always treated individually (just think how annoying it is to manage The GIMP).

    So, why is (IMHO) Mac OS X better in this respect? Its interface always groups windows by the application they belong to. The dock (which is similar to a task bar, but better ;-) shows application icons; the task switcher (the thing that appears in almost any environment when you press Alt+Tab) lets you switch between applications, not windows; there is a single menu bar per application; etc. Whenever you select an application, all of its windows become active and are brought to the front layer automatically. In general, you have many different windows visible at a time, but they all belong to a rather small subset of applications. Therefore, this makes sense.

    At last, let me talk about the menu-bar-at-top-of-screen thing I mentioned above, which is what drove me to write this post in the first place. Before using a Mac, I always thought that having the menu bar detached from the window didn't make any sense, especially because different windows from a single application often have different menus. I had even tried enabling a setting in KDE that simulates this behavior, but it didn't convince me at all, because the desktop as a whole doesn't follow the concept of treating applications as a unit, as described above (plus the applications are not designed to work with such an interface).

    However, after using Mac OS X for a while, I'm hooked on this "feature". Accessing the menu bar is a lot easier than when it is inside the window. And being able to act on the application as a whole, no matter which of its windows is visible, is nice. You must try it to understand my comments ;-) [Continue reading]

  • Updating the pkgsrc GNOME packages

    A while ago, somebody called John asked me to explain the process I follow when I update the GNOME packages in pkgsrc. As I'll be doing this again in a few days (to bring 2.12.2 into the tree), this seems a good moment for the essay. Here I go:

    The first thing I do is fetch the whole distribution from the FTP site; i.e., the platform and desktop directories located under the latest stable version. I have a little script that does this for me, skipping the distfiles that haven't changed since the previous version.

    Once this is done, I generate a list of all the downloaded distfiles and adjust the package dependencies in meta-pkgs/gnome-devel/Makefile, meta-pkgs/gnome-base/Makefile and meta-pkgs/gnome/Makefile to require the new versions; this includes adding any new components. I do this manually, which is quite a boring task; however, writing a script would take much more time.

    Afterward, I use cvs diff -u meta-pkgs/gnome* | grep '^+' over the modified files to get a list of the packages that need to be updated. As the list of dependencies in the meta-packages is sorted in reverse order — i.e., a package in the n-th position uses packages in the 1..n-1 positions but none in the n+1..N ones — this command creates a useful "step by step" guide of what needs to be done.

    Then it is a matter of updating all the packages that need it, which is, by far, the longest part of the process. I go one by one, bumping their versions, building them, installing the results, ensuring the PLIST is correct, and generating a log file in the same directory that will serve me during the commit part. I also have a little script that automates most of this for a given package — and, thanks to verifypc, this is relatively quick :-)

    In the part just described, there are some packages that always scare me due to the portability problems that plague them over and over again. These include the ones that try to access the hardware to get information, to control multimedia peripherals, to manage network stuff, etc. I don't know why some packages break in similar places in newer versions even when the mainstream developers have been sent patches that fix the same issues in previous versions. </rant>

    Once I've got the updated gnome-base package installed, I zap all my GNOME configuration — ~/.gconf*, ~/.nautilus*, ~/.metacity*, ~/.gnome* and a lot more garbage — and try to start it. At this point, it is mostly useless, but I can see whether there are serious problems in any of the most basic libraries. If so, this is the time to go for some bug hunting!

    When the gnome-base package works, I continue updating all the other pending packages and try to package the new ones, if any. Adding new stuff is not easy in general (portability bugs again), and this is why some of the dependencies are still commented out in the meta-packages. Anyway, with this done, I finally start the complete desktop and check whether there are any major problems in the most "important" applications. Again, if there are any, it is a good time to solve them.

    So all the updates are done, but nobody guarantees that they work, especially because I do all the work under NetBSD-current. So I generally (but not always... shame on me!) use pkg_comp to check that they work, at the very least, under the latest stable release (3.0 at this point).

    The last part is to commit all the stuff in a short time window to minimize pain for end users. Even though I have left log files all around and prepared the correct entries for the CHANGES file, this often takes more than an hour.

    Unfortunately, end users always suffer, either because the packages break on their platform, because our package updating tools suck, or simply because I made some mistake (e.g., forgot to commit something). But this is what pkgsrc-current is for, isn't it? ;-)

    And, of course, another thing to do is to review all the newly added local patches, clean them up, adapt them to the development versions of their corresponding packages and submit them to GNOME's Bugzilla. This is gratifying, but also a big, big pain.

    Well, well... this has been quite long, but I think I haven't left anything out. [Continue reading]

  • File previews in Nautilus

    Yesterday evening, I was organizing some of my pictures when I noticed that Nautilus wasn't generating previews for any of them. This annoyed me so much that I decided to track down the issue, which I've done today.

    The first thing I did was to attach a gdb session to the running Nautilus process, adding some breakpoints to the functions that seemed to generate preview images; I located them using grep(1) and some obvious keywords. This wasn't very helpful, but at least I could see that the thumbnail generation routine checks whether the file is local or not (because Nautilus lets you set different preferences for each case).

    With this in mind, I resorted to the traditional and not-so-elegant printf debugging technique (which, with pkgsrc in the middle, is quite annoying). Sure enough, a call to gnome_vfs_uri_is_local was always returning false, no matter which file it was passed. It was clear that the problem was in gnome-vfs itself.

    So I switched to the gnome-vfs code, located that function and saw that it was very short: basically, all it does is delegate its task to another function that depends on the method associated with the file — you can think of the method as the protocol that manages it. As I was accessing local files, the method in use was the file method (implemented in modules/file-method.c), so the affected function was, of course, its do_is_local.

    As that function is a bit complex (due to caching for efficiency), I added some more printfs to it to narrow down the problem, which led me to the filesystem_type function implemented in modules/fstype.c. And man, when looking at this function I quickly understood why it was the focus of the problem. It is so clumsy — portability-wise — that it is very easy for it to break under untested circumstances: it is composed of a set of mutually exclusive ifdefs (one for each possible situation) to determine the name of the file system in which a file lives.

    And you guessed right: NetBSD 3.x didn't match any of them, so the code fell back to a default entry (returning an unknown file system type) that is always treated as remote by the file method code.

    The solution has been to work around the detection of one of these special cases in the configure script so that statvfs(2) is properly detected under NetBSD. This is just a hack that adds to the ugliness of the original code; fixing the issue properly will require a lot of work and the willingness of the gnome-vfs maintainers to accept the changes.

    I will ask for a pull-up to the pkgsrc-2005Q4 branch when I've been able to confirm that this does not break on other NetBSD versions. Now have fun watching your images again! ;-) [Continue reading]
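
    As an aside, the check itself need not be complicated on a system with statvfs(2). Here is a minimal sketch of the kind of test involved — it is an illustration, not gnome-vfs's actual do_is_local code, and it assumes NetBSD's MNT_LOCAL mount flag is visible through the f_flag field:

        #include <sys/types.h>
        #include <sys/statvfs.h>
        #include <sys/mount.h>  /* MNT_LOCAL, on NetBSD. */
        #include <stdbool.h>

        /* Return true only if we positively know that the file system
         * holding 'path' is local. */
        static bool
        path_is_local(const char *path)
        {
                struct statvfs svfs;

                if (statvfs(path, &svfs) == -1)
                        return false;   /* Be conservative on errors. */
                return (svfs.f_flag & MNT_LOCAL) != 0;
        }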

  • Got a BenQ FP202W flat panel

    As a present for my saint's day (I don't know if this is the proper expression in English) — which I celebrate today thanks to my second name (Manuel) — I got a BenQ FP202W flat panel. It's a 20" wide screen, providing a 16:10 aspect ratio at its native resolution of 1680x1050 pixels. This contrasts heavily with my now-old Viewsonic E70f, a 17" CRT doing only 1024x768@85Hz.

    And man, this new monitor is truly amazing. Maximizing windows is a thing of the past! (Except for the media player, of course ;) Being able to have two documents open side by side is great: for example, you can keep your editor on one side while reading a reference manual on the other, without having to constantly switch between windows; and even then, there is still room for other things. Moreover, playing Half-Life 2 in widescreen mode is... great :)

    I've also tried plugging it into the iBook G4; it works fine for most things but feels a bit sluggish when some effects come into play (e.g., Exposé or Dashboard). It looks like its video card is not powerful enough to handle it flawlessly (something I can understand). I ought to try its clamshell mode, though, as disabling the built-in screen may make it faster; but first I need to get a USB keyboard...

    Concluding: while the old CRT monitor was still in perfect working condition, the switch has been great. I don't know why I hesitated to do it ;) If you have the money to buy such a display, I certainly recommend it.

    Happy new year 2006! [Continue reading]