• Doesn't 'ls f*' do what you expect?

    If you have ever run ls on a directory whose contents don't fit on screen, you may have tried to list only a part of it by passing a wildcard to the command. For example, if you were only interested in all directory entries starting with an f, you might have tried ls f*. But did that do what you expected? Most likely not if any of those matching entries was a directory. In that case, you might have thought that ls was actually recursing into those directories.

    Let's consider a directory with two entries: a file and a directory. It may look like:

        $ ls -l
        total 12K
        drwxr-xr-x 2 jmmv jmmv 4096 Dec 19 15:18 foodir
        -rw-r--r-- 1 jmmv jmmv    0 Dec 19 15:18 foofile

    The ls command above was executed inside our directory, without arguments, hence it listed the current directory's contents. However, if we pass a wildcard we get more results than expected:

        $ ls -l *
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:18 foofile

        foodir:
        total 4K
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:19 anotherfile

    What happened in the previous command is that the shell expanded the wildcard; that is, ls never saw the special character itself. In fact, the above was internally converted to ls -l foofile foodir, and this is what was actually passed to the ls utility during its execution. With this in mind, it is easy to see why you got the contents of the sample directory too: you explicitly (although somewhat "hidden") asked ls to show them.

    How do you avoid that? Use ls's -d option, which tells it to list the directory entries themselves, not their contents:

        $ ls -l -d *
        drwxr-xr-x 2 jmmv jmmv 4096 Dec 19 15:19 foodir
        -rw-r--r-- 1 jmmv jmmv    0 Dec 19 15:18 foofile

    Update (21st Dec): Fixed the first command shown as noted by Hubert Feyrer. [Continue reading]
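    A quick way to convince yourself of what the shell is doing is to let echo print the expansion before ls ever runs; in the same sample directory as above, this minimal check would show:

        $ echo f*
        foodir foofile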

  • A subject for my undergraduate thesis

    The time has finally come for me to choose a subject for my undergraduate thesis, on which I'll be working full time next semester. My first idea was to make a contribution to NetBSD by developing an automated testing framework. I have been interested in this for a long while (I even proposed it as part of this year's SoC), and there is a lot of interest in it within the project too.

    However, this specific project does not fit into the current research groups at my faculty. This wouldn't be a problem if I weren't thinking of pursuing a CS Master's or Ph.D. later on. But as I'm seriously considering that possibility, it'd be better if I worked on a project that lets me integrate into an existing research group as early as possible. This could also teach me new things that I'd not learn otherwise: if you look at the paper linked above, you can see I already have several ideas for the testing framework. That is, I already know how I'd address most of it, hence there'd not be a lot of "research". Furthermore, the teacher I talked to about this project felt that the core of the project might not be long enough to cover a full semester.

    So what are the other possible ideas? I went to talk to a teacher who currently directs some of the research groups and he proposed several ideas, organized in three areas:

    Code analysis and optimization: Here I'd work on tools to analyze existing code and binaries to understand how they work internally; this way one could later generate a better binary by reorganizing related code and/or removing dead bits. They have already done a lot of work on this subject, so I'd be working on a tiny part of it. In any case, dealing with the compiler/linker and the resulting binaries sounds quite appealing.

    Improve heterogeneous multiprocessor support: This group contains ideas to improve the management of heterogeneous systems such as those based on the Cell processor. I'm "afraid" any project here would be completely Linux-based, but the underlying idea also feels interesting. I haven't got many details yet, though.

    Distributed systems: This doesn't interest me as much as the other two, but that may be because there was not enough time during the meeting to learn about this group. However, next week we are taking a guided visit to the BSC which will hopefully clear some of my doubts and let me decide if I'm really interested in this area.

    I shall make a decision as soon as possible, but this is hard!

    Oh, and don't worry about the testing framework project. I'll try to work on it in my spare time because I feel it's something NetBSD really needs and I'm sure I'll enjoy coding it. Not to mention that nowadays, whenever I try to apply any fix to the tree, I feel I should be adding some regression test for it! Plus... I already have a tiny, tiny bit of code :-) [Continue reading]

  • Software bloat

    A bit more than three years ago, I renewed my main machine and bought an Athlon XP 2600+ with 512MB of RAM and an 80GB hard disk. The speed boost I noticed in games, builds and overall system usage was incredible — I was coming from a Pentium II 233 with 384MB of RAM.

    With the change, I was finally able to switch from plain window managers to desktop environments (alternating KDE and GNOME from time to time) and still keep a usable machine. I was also able to play the games of that era at high resolutions. And, what benefited me most, the build times of packages and NetBSD itself were cut by more than half. For example, it previously took between 6 and 7 hours to do a full NetBSD release build and, after the switch, it barely took 2. On the pkgsrc side, building some packages was almost instantaneous because the machine processed both the infrastructure and the source builds like crazy.

    But time passes and nowadays the machine feels extremely sluggish. And you know that hardware does not degrade like this, so it's easy to conclude it's software's fault. (Thank God I've done some upgrades on the hardware, like doubling the memory, replacing the video card and adding a faster hard disk.)

    I'm currently running Kubuntu 6.10 and KDE is desperately slow in some situations; of course GNOME has its critical scenarios too. (Well... it is not that slow, but responsiveness is, and that makes up a big part of the final experience.) The problem is they behaved much better in the past, yet I, as a desktop user, haven't noticed any great usability improvement that is worth such speed differences. As a side note: I know the developers of both projects try their best to optimize the code — kudos to them! — but this is how I see it on my machine.

    Another data point, this time more objective than the previous one. Remember I mentioned NetBSD took less than 2 hours to build? Guess what. It now takes 5 to 6 hours to build a full release; it's as if I went back in time 3 years! Or take pkgsrc: the infrastructure is now very, very slow; in some packages, it takes more time than the program's build itself.

    I could continue this rant but... it'd lead nowhere. Please do not take it as something against NetBSD, pkgsrc and KDE in particular. I've taken these three projects to illustrate the issue because they are the ones I can compare to the software I used when I bought the machine. I'm sure all other software suffers from similar slowdowns.

    Anyway, three years seem to be too much for a machine. Sometimes I think developers should be banned from using fast machines because, usually, they are the ones with the fastest machines. This makes them not notice the slowdowns as much as end users do. Kind of joking. [Continue reading]

  • Hard disks and S.M.A.R.T.

    Old hard disks exposed a lot of their internals to the operating system: in order to request a data block from the drive, the system had to specify the exact cylinder, head and sector (CHS) where it was located (as happens with floppy disks). This structure became unsustainable as drives got larger (due to some limits in the BIOS calls) and more intelligent.

    Current hard disks are little (and complex) specific-purpose machines that work in LBA mode (not CHS). Oversimplifying, when presented with a sector number and an operation, they read or write the corresponding block wherever it physically is — i.e. the operating system needn't care any more about the physical location of that sector on the disk. (They do provide CHS values to the BIOS, but they are fake and do not cover the whole disk size.) This is very interesting because the drive can automatically remap a failing sector to a different position if needed, thus correcting some serious errors in a transparent fashion (more on this below).

    Furthermore, "new" disks also have a very interesting diagnostic feature known as S.M.A.R.T. This interface keeps track of internal disk status information, which can be queried by the user, and also provides a way to ask the drive to run some self-tests.

    If you are wondering how I discovered this, it is because I recently had two hard disks fail (one in my desktop PC and the one in the iBook) reporting physical read errors. I thought I had to replace them but using smartmontools and dd(1) I was able to resolve the problems. Just try a smartctl -a /dev/disk0 on your system and be impressed by the amount of detailed information it prints! (This should be harmless but I take no responsibility if it fails for you in some way.)

    First of all I started by running an exhaustive surface test on the drive by using smartctl -t long /dev/disk0. It is interesting to note that the test is performed by the drive itself, without interaction with the operating system; if you try it you will see that not even the hard disk LED blinks, which means that the test does not "emit" any data through the ATA bus. Anyway. The test ended prematurely due to the read errors and reported the first failing sector; this can be seen by using smartctl -l selftest /dev/disk0.

    With the failing sector at hand (which was also reported in dmesg when it was first encountered by the operating system), I wrote some data over it with dd(1) hoping that the drive could remap it to a new place. This should have worked according to the instructions at smartmontools' web site, but it didn't. The sector kept failing and the disk kept reporting that it still had some sectors pending to be remapped (the Reallocated_Sector_Ct attribute). (I now think this was because I didn't use a big-enough block size to do the write, so at some point dd(1) tried to read some data and failed.)

    After a lot of testing, I decided to wipe the whole disk (also using dd(1)) hoping that at some point the writes could force the disk to remap a sector. And it worked! After a full pass, S.M.A.R.T. reported that there were no more sectors to be remapped and that several ones had been moved. Let's now hope that no more bad sectors appear; the desktop disk has been working fine since the "fixes" for over a month now and has not developed any more problems.

    All in all a very handy tool for checking your computer's health. It is recommended that you read the full smartctl(8) manual page before trying it; it contains important information, especially if you are new to S.M.A.R.T. as I was. [Continue reading]
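    For reference, the sequence I followed boils down to something like the following sketch. The device name /dev/disk0 and the sector number 123456 are placeholders for my particular setup, and the dd commands overwrite data on the raw disk, so be absolutely sure of the target device before trying anything similar:

        # Ask the drive to run its own surface scan, then check the results.
        $ smartctl -t long /dev/disk0
        $ smartctl -l selftest /dev/disk0

        # Try to force a remap by overwriting the reported bad sector...
        $ dd if=/dev/zero of=/dev/disk0 bs=512 seek=123456 count=1

        # ...or, as a last resort, overwrite the whole disk.
        $ dd if=/dev/zero of=/dev/disk0 bs=64k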

  • tmpfs marked non-experimental

    The implementation of an efficient memory-based file system (tmpfs) for NetBSD was my Google Summer of Code 2005 project. After the program was over, the code was committed to the repository and some other developers (especially YAMAMOTO Takashi) did several fixes and improvements to it. However, several problems remained that prevented it from being tagged release quality (see this thread).

    Finally I found some time to deal with most of them, something that has kept me busy for around three weeks (and which I should have done much, much earlier). All the issues that were resolved are detailed in this other post.

    There still are some problems in the code (which code doesn't have any?) but these do not prevent tmpfs from working fine. Of course they should be addressed in the future, but people are already enjoying tmpfs in their installations and have been requesting its activation by default for a long time.

    Hence, after core@'s blessing, I'm proud to announce that tmpfs has been marked non-experimental and is now enabled by default in the GENERIC kernels of amd64, i386, macppc and sparc64. Other platforms will probably follow soon.

    The next logical step is to replace mfs with tmpfs wherever the former is used (e.g. in sysinst) but more testing is required before this happens. And this is what 4.0_BETA will allow users to do :-) Enjoy! [Continue reading]
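    If you want to give it a try on a kernel that has the file system enabled, mounting a small tmpfs by hand looks roughly like this (a sketch only; the size and mount point are arbitrary, and mount_tmpfs(8) documents the full set of options):

        # Mount a memory-backed file system, limited to 32 MB, on /mnt.
        $ mount_tmpfs -s 32m tmpfs /mnt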

  • Making vnd(4) work with tmpfs

    vnd(4) is the virtual disk driver found in NetBSD. It provides a disk-like interface to files, which allows you to treat them as if they were disks. This is useful, for example, when a file holds a file system image (e.g. the typical ISO-9660 files) and you want to inspect its contents.

    Up until now vnd(4) used the vnode's bmap and strategy operations to access the backing file. These operate at the block level and therefore do not involve any system-wide caches; this is why they were used (see below). Unfortunately, some file systems (e.g. tmpfs and smbfs) do not implement these operations, so vnd could not work with files stored inside them.

    One of the possible fixes to resolve this problem was to make vnd(4) use the regular read and write operations; these act at a higher (byte) level and are so fundamental that they must be implemented by all file systems. The disadvantage is that all data that flows through these two methods ends up in the buffer cache. (If I understand it correctly, this is problematic because vnd itself will also push a copy of the same data into the cache, thus ending up with duplicates in there.)

    Despite that minor problem, I believe it is better to have vnd(4) working in all cases even if that involves some performance penalty in some situations (which can be fixed anyway by implementing the missing operations later on). So this is what I have done: vnd(4) will now use read and write for those files stored in file systems where bmap and strategy are not available, and continue to use the latter two if they are present (as it has always done).

    Some more information can be found in the CVS commit and its corresponding bug report. [Continue reading]
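    With this change the usual vnd(4) workflow should also work for images kept on tmpfs; a rough example follows (the image path is made up and the partition letter may differ depending on your platform):

        # Attach the image file, which may now live on a tmpfs mount, to a virtual disk.
        $ vnconfig vnd0 /tmp/netbsd.iso
        # Mount the ISO-9660 file system it contains and look around.
        $ mount -t cd9660 /dev/vnd0a /mnt
        $ ls /mnt
        # Clean up afterwards.
        $ umount /mnt
        $ vnconfig -u vnd0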

  • A couple of Ext2/Ext3 project proposals

    I've just added a couple of project proposals related to improving Ext2/Ext3 file system support in the NetBSD Operating System. These are:

    - Implement Ext3 file system support
    - Improve support for Ext2 root file system

    If you are interested in getting into file system development — a very interesting research area, believe me! ;-) — this is probably a safe bet. These two projects are not very complex but can quickly benefit NetBSD for different reasons (not only better Linux compatibility). Check out their descriptions for more details! [Continue reading]

  • Improved Multiboot support in NetBSD/i386

    Back in February this year I added Multiboot support to NetBSD/i386. Unfortunately, the implementation was quite hackish because it required the application of a patch to GRUB-Legacy: the code used the "a.out kludge" present in the Multiboot specification, which this boot loader incorrectly omitted for ELF kernels; the patch fixed this issue. However, this prevented booting NetBSD with mainstream GRUB builds (those used by all Linux distributions), thus making this feature mostly useless.

    The need for the "a.out kludge" came from two different problems:

    1. The kernel's ELF image was incorrectly linked because it did not set the correct physical load addresses for the segments it contained, thus GRUB could not load it because it thought there was not enough memory. The "a.out kludge" was used here to tell the boot loader which was the correct address to load the binary image into. Pavel Cahyna fixed this issue back in May, removing the need for the hack in this specific case.

    2. The native boot loader constructs a minimal ELF image that contains the kernel symbol table (ksyms) and sticks it just after the BSS space in memory. GRUB did not do this, so the NetBSD kernel resorted to manually creating this image itself based on the data passed in by GRUB. In order to be successful, some space was reserved after the BSS section by using the "a.out kludge" (tricking the boot loader into thinking that this section was larger than it actually was) so that the kernel's bootstrapping process could freely access it. Pavel's fix did not address this problem so, when booting a NetBSD Multiboot kernel with an unpatched GRUB, ksyms did not work.

    I've now finally fixed this long-standing issue properly. All the code to create the minimal ELF image is gone and instead the kernel simply moves the data passed in by GRUB to a memory region that is available after bootstrapping. Then, it uses a custom function (ksyms_init_explicit instead of ksyms_init) which does not need any ELF headers to initialize the ksyms.

    The results are much clearer and less error-prone code as well as the ability to boot NetBSD straight from a stock GRUB installation! Keep in mind that this will go into 4.0, so setting up dual-boot machines will be easier than ever :-)

    I've prepared a couple of screenshots for your pleasure. First, a look at the configuration used to boot a Multiboot-enabled NetBSD kernel. And then a look at the messages printed by the kernel as well as a demonstration that ksyms work by invoking a backtrace in the debugger (ddb). [Continue reading]
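    For the curious, a GRUB-Legacy menu.lst entry for booting a Multiboot-enabled NetBSD kernel can be as simple as the following sketch; it assumes the kernel lives as /netbsd on the first partition of the first BIOS disk, so adjust the root line and the path to your layout:

        title  NetBSD (Multiboot)
        root   (hd0,0)
        kernel /netbsd
        boot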

  • Mac OS X vs. Ubuntu: Summary

    I think I've already covered all the areas I had in mind about these two operating systems. And as the thread has lasted for too long, I'm concluding it now. Here is a summary of all the items described:

    - Introduction
    - Hardware support
    - The environment
    - Software installation
    - Automatic updates
    - Freedom
    - Commercial software
    - Development platform

    After all these notes I still can't decide which operating system I'd prefer based on quality, features and cost. Nowadays I'm quite happy with Kubuntu (installed it to see how it works after breaking Ubuntu and it seems good so far) and I'll possibly stick to it for some more months.

    This will last until I feel the need to buy a Mac again (or simply renew my desktop), at which point I might buy one with Mac OS X or wait until the desire passes away ;-) [Continue reading]

  • Mac OS X vs. Ubuntu: Development platform

    First of all, sorry for not completing the comparison between systems earlier. I had to work on some university assignments and started to play a bit with Haskell, which made me start a rewrite of a utility (more on this soon, I hope!).

    Let's now compare the development platform provided by these operating systems. This is something most end users will not ever care about, but it certainly affects the availability of some applications (especially commercial ones), their future evolution and how the applications work, e.g. during installation.

    As you may already know, both systems are Unix-like. First of all, they provide a comfortable command line interface with the usual utilities and development tools to get started very easily. They also come with the common POSIX interfaces to manage files, sockets, devices, etc., which allow a great deal of compatibility among operating systems that support them. The problem they have is that they are too low level, are C-specific and are "console-based"; there is no way to develop visual applications with them. This is why almost all programs use some sort of abstraction layer over these interfaces plus some library that provides a graphical toolkit; otherwise development times could be extremely long and there could be lots of portability problems. These extra libraries bring us to the biggest difference between the two OSes.

    When you are coding for Linux, the de facto standard graphical interface is the X Window System, which comes with its own set of graphical libraries (Xlib) to program applications. The problem is that these are, again, too low level for general usage, so developers have come up with some nice abstractions that provide widgets, layouts, etc. Among them are the well-known Qt and GTK+ toolkits. These, on their own, also lack functionality to build complete desktop environments (DE), so KDE and GNOME were born on top of them. They not only provide a consistent graphical interface but also a development platform on which to build applications: each DE has a set of services and components that make the implementation of shiny tools a breeze.

    However, application developers are faced with the difficult task of choosing the adequate subset of libraries for their application, which at its root means choosing one of the two major development platforms (KDE and GNOME) — if they don't implement their own, something not that uncommon. For tiny programs this may not be an issue (as can be seen with the duality of tools available), but it certainly has issues for big applications (you certainly do not want to rewrite, e.g., The GIMP for KDE) and commercial ones. In some way you can think of it as if you were coding for KDE or GNOME, not Linux. You may argue that competition is good but, in my opinion, not at this level.

    On the other hand, Mac OS X has three frameworks: Cocoa, Carbon and Cocoa on Java (I'm not sure this last name is correct, but you get the idea). Carbon is from the Mac OS 9 days and Cocoa on Java is not recommended for anything other than learning. Even if you chose to use Cocoa on Java, in the end, you would be using plain Cocoa, so you needn't consider it in the equation. In other words, the only reasonable choice when developing an application for Mac OS X is to approach Cocoa. This brings a lot of consistency between applications, keeps a single set of services available for all programs to use and allows easy interoperability with each component. (Not to mention that you either use Cocoa or you don't; you cannot do strange mixes... or I haven't seen them.)

    Oh, and before you tell me that Qt is also available for Mac OS X... yes, it is, but it is built on top of Cocoa. So there is a common, high-level layer beneath all APIs that provides consistency among them.

    As a side effect we have the problem of application redistribution. End users do not want to deal with source code, so you have to provide them with binaries. But how do you do that on Linux to ensure that they will work on any system? Keep in mind that "any system" does not mean any version of a specific distribution; it means any distribution! Well, the thing is... it is almost impossible: there are problems everywhere that prevent binary applications from being transported between systems. I'm not going to discuss this here because it is a rather long topic; check out the linked article for more details (and I think they are missing some).

    Conversely, Mac OS X is simpler in this aspect. There is just one operating system with a consistent set of libraries, so you build software for those explicitly. You only need to care about compatibility of some APIs between versions. And if your application uses any non-standard library, you can bundle it in the final binaries for easy redistribution (OK, OK, you could also use static binaries on Linux). This of course also has its own drawbacks, but in general it is nicer in the developer's eyes.

    There are other differences, but the point I want to make (and which is entirely my own view) is that the diversity in Linux hurts development. Different distributions make it hard to package software for each of them (can you conceive the amount of time wasted by the package maintainers of every single distribution out there?) and bring many binary compatibility issues. Because, you know, Linux is just the kernel. Aside from that, different desktop environments pose some hard decisions for developers and there is a lot of duplicate code in them to manage common stuff; fortunately Freedesktop.org is solving some of these points.

    Systems such as Mac OS X (or the BSDs, or Solaris, etc.) are better in this regard because the system is a single unit distributed by a single group of people. So, whenever I say I use "Mac OS X Tiger", developers know exactly what my system has available for them.

    Yeah, this is a rather generic rant against Linux and is possibly not that important in our comparison, but I had to mention it because I've faced the above issues multiple times. [Continue reading]

  • Ubuntu vs. Mac OS X: Commercial software

    As much as we may like free software, there are a lot of interesting commercial applications out there (be they free as in free beer or not). Given the origins and spirit of each OS, the amount of commercial applications available for them is vastly different.

    Let's start with Ubuntu (strictly speaking, Linux). Although trends are slowly changing, the number of commercial programs that target Linux systems is really small. I have reasons to believe that this is because Linux, as a platform on which to provide such applications, is awful. We already saw an example of this in the software installation comparison, because third-party applications have a hard path to distribute their software in the Linux world. We will see more examples of this soon in another post.

    In my opinion, this is a disadvantage because, although there are free replacements for almost any utility you can imagine, they are not necessarily better yet. Similarly, there are tools for which no replacement exists yet. Or simply put, the user may want to use such a commercial tool because he prefers it over any of the other alternatives.

    On the other side of things, a typical user will generally be satisfied with all the free tools included in the Ubuntu repositories. If not, sites such as Sourceforge or Freshmeat are full of Unix-based free applications. Generally they won't ever have the need to consider commercial applications, so they won't have to spend any money to use their software nor to keep it up to date.

    Mac OS X is a different world; commercial software (shareware, freeware, etc.) is still extremely abundant on it. This is probably, in part, because the platform is also commercial: developers won't feel "strange" providing applications following the same model, and there are chances that their applications will succeed. Fortunately, there is also a growing number of free applications that compete with these commercial ones, and they do a great job (to mention a few: Camino, Adium X, Colloquy, Smultron, etc.).

    Even more, given that Mac OS X is based on Unix and that it provides an X Window System server, it is possible to run most of the free applications available under Linux on this operating system. Just check out, for example, The GIMP, or fetch pkgsrc and start building your own favourite programs!

    Aside from that, there are also very popular commercial applications available for this OS. These include the popular Apple and Adobe applications (iWork, Photoshop, Premiere, etc.) and others such as Microsoft Office, Parallels or Skype (I know, the latter is also available for Linux). It is a fact that nowadays some of these programs are superior to their free alternatives and some people will want to use them. But, ultimately, they have the freedom to make that decision.

    In this area I think that Mac OS X is more versatile because it can take advantage of both free applications and some interesting commercial ones. Only time will tell if those will be natively ported to Linux some day or not, but if/when that happens, it will be as versatile as Mac OS X with the advantage of a predominant feeling of developing free software. [Continue reading]

  • Mac OS X vs. Ubuntu: Freedom

    Ubuntu is based on Debian GNU/Linux, a free (as in free beer and free speech) Linux-based distribution, and the free GNOME desktop environment. Therefore it keeps the philosophy of the two, being itself also free. Summarizing, this means that the user can legally modify and copy the system at will, without having to pay anyone for doing so. When things break, it is great to be able to look at the source code, find the problem and fix it yourself; of course, this is not something that end users will ever do, but I have found this ability valuable many times (not under Ubuntu though).

    Mac OS X, on the other hand, is a proprietary OS with the exception of the core kernel, whose source code is published as free software (I don't know the license details though). This means that you must pay for a license in order to use it, and even then you cannot mess with its internals — its source code — nor redistribute it. Given that Mac OS X comes prebundled with new Apple machines, this is not so important because you'll rarely feel the need to look at its code (I certainly don't care as long as it works). However, if you want to jump to a new major version, you must pay for it. For example, if I got an iMac now, I'd have to pay around 200€ in mid-2007 to get the Mac OS X 10.5 family pack (5 licenses); I'm not implying that it's not worth it though.

    I know the free software ideals very well and like them but, sincerely, freedom is something that end users do not perceive in general. And I won't base the decision on which OS to run on my computer on this criterion alone; that's why the iBook is stuck with Mac OS X ;-) Really, I've lately come to think that what really matters are free and open standards (i.e. communication protocols, document formats, etc.), not the software packages themselves. [Continue reading]

  • Mac OS X vs. Ubuntu: Automatic updates

    Security and/or bug fixes, new features... all these are very common in newer versions of applications — and this obviously includes the operating system itself. A desktop OS should provide a way to painlessly update your system (and possibly your applications) to the latest available versions; the main reason is to be safe from exploits that could damage your software and/or data.

    Both Mac OS X and Ubuntu provide tools to keep themselves updated and, to some extent, their applications too. These utilities include an automated way to schedule updates, which is important to avoid leaving a system unpatched against important security fixes. Let's now drill down into the two OSes a bit more.

    Ubuntu shines in this aspect thanks to the centralized packaging of software. Given that all available applications are packaged by the developers and put on a common server, the apt package manager is able to automatically update all of your installed packages to the latest available versions. This also includes keeping track of added dependencies so that an update will not (generally) break any of the existing stuff. In some sense, you can consider that there is no "core OS": once a new program is installed from the repository, it is integrated into the OS in such a way that it is indistinguishable.

    Unfortunately, if an application was not explicitly packaged for Ubuntu, it is not possible to use apt (I mean Synaptic, the tool you'll always work with) to keep it up to date. In that case either the program provides its own updating method or it simply provides none at all, leaving users on their own to update it whenever they want/remember. We saw some examples of applications not made for Ubuntu in the previous post, which basically includes commercial software.

    Mac OS X is slightly different. Similarly to Ubuntu, it has a tool that can update your system as well as applications, but these are restricted to Apple ones such as iLife. Third-party applications need to provide their own updating method, and most of them actually do (*). For example, taking Adium X again: this program checks at startup whether any newer version is available and, if so, offers to download and install it. This is completely decoupled from the system, which makes it suboptimal. It'd be great if the OS could keep everything up to date as long as the applications provided the required information to the update manager.

    So... it is clear that Ubuntu wins this specific comparison as long as you always use prepackaged software. Mac OS X, while not as clean, is good enough because the OS is able to "fix" itself and most third-party applications already provide custom update methods. In the end, the user will not notice the difference.

    (*) I don't know if it is possible for such programs to "hook" into the system's update manager. This sounds reasonable and, if indeed supported, could make this point moot. Don't hesitate to correct me in that case! [Continue reading]
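    For what it's worth, the routine update cycle that Synaptic and the update notifier perform on Ubuntu boils down to the equivalent of two apt commands; shown here only to illustrate how centralized the model is:

        # Refresh the package lists from the configured repositories...
        $ sudo apt-get update
        # ...and then upgrade every installed package to the newest available version.
        $ sudo apt-get upgrade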

  • Mac OS X vs. Ubuntu: Software installation

    Installing software under a desktop OS should be a quick and easy task. Although systems such as pkgsrc — which build software from its source code — are very convenient at times, they get really annoying on desktops because problems pop up more often than desired and builds take hours to complete. In my opinion, a desktop end user must not ever need to build software by himself; if he needs to, someone in the development chain failed. Fortunately the two systems I'm comparing seem to have resolved this issue: all general software is available in binary form.

    Ubuntu, as you may already know, is based on Debian GNU/Linux, which means that it uses dpkg and apt to manage installed software. Their developers do a great job of providing binary packages for almost every program out there. These packages can be installed really quickly (if you have a broadband Internet connection) and they automatically configure themselves to work flawlessly on your system, including any dependencies they may need.

    On the easiness side, Ubuntu provides the Add/Remove Applications utility and the Synaptic package manager, both of which are great interfaces to the apt packaging system. The former shows a simple list of programs that can be installed while the latter lets you manage your software on a package basis. After enabling the Universe and Multiverse repositories from Synaptic, you can quickly search for and install any piece of software you can imagine, including a few commercial applications. Installation is then trivial because apt takes care of downloading and installing the software.

    Given that the software is packaged explicitly for Ubuntu (or Debian), each package morphs into the system seamlessly, placing each file (binaries, documentation, libraries, etc.) where it belongs. On a somewhat related note, the problem of rebuilding kernels and/or drivers is mostly gone: the default kernel comes very modularized and some proprietary drivers are ready to be installed from the repository (such as the NVIDIA one).

    Unfortunately, you are screwed if some application you want to install is not explicitly packaged for the system (not only does it need to be compiled for Linux; it needs to be "Ubuntu-aware"). These applications are on their own in providing an installer and instructions on how to use them, not to mention that they may not work at all on the system due to ABI problems. I installed the Linux version of Doom 3 yesterday and I can't conceive of an end user following the process. The same goes for, e.g., JRE/JDK versions prior to 1.5, which are not packaged due to license restrictions (as far as I know). We will talk some more about this in a future post when we compare the development platform of each system.

    Mac OS X has a radically different approach to software distribution and installation. An application is presented to the user as a single object that can be moved around the system and work from anywhere. (These objects are really directories with the application files in them, but the user will not be aware of this.) Therefore the most common way to distribute software is through disk images that contain these objects. To install the application you just drag it to the Applications folder; that's all. (Some people argue that this is counterintuitive but it's very convenient.) These bundles often include all required dependencies too, which saves trouble for the end user.

    Other applications may include custom (graphical!) installers, although all of them behave similarly (much like what happens under Windows). Finally, some other programs may be distributed in the form of "mpkg"s, which can be processed through a common installer built into the system; all the user has to do is double-click on them. No matter what method is used by a specific program, its installation is often trivial: no need to resort to the command line nor make any specific changes by hand.

    As you can see, both systems are very different when it comes to software installation. If all the software you need is in the Ubuntu repositories, Ubuntu probably beats Mac OS X in this area. But this won't always be the case, especially for commercial software, and in those situations it can be much worse than any other system. I'm not sure which method I like most; each one has its own pros and cons as described above. [Continue reading]
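    To make the Ubuntu side concrete, this is roughly all it takes from the command line once the repositories are enabled (Synaptic is just a graphical front-end for the same machinery; the package name is only an example):

        # Download and install a program together with every dependency it needs.
        $ sudo apt-get install gimp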

  • Mac OS X vs. Ubuntu: The environment

    I'm sure you are already familiar with the desktop environments of both operating systems, so I'm just going to outline here the most interesting aspects of each one. Some details will be left for further posts as they are interesting enough on their own. Here we go:

    Ubuntu, being yet another GNU/Linux distribution, uses one of the desktop environments available for this operating system, namely GNOME. GNOME aims to be an environment that is easy to use and doesn't get in your way; they are achieving it. There are several details that remind us of Windows more than Mac OS X: for example, we have got a task bar on the bottom panel (that is, pardon me, a real mess due to its annoying behavior) and a menu bar that is tied to the window it belongs to.

    It is interesting to note that the panel is highly configurable and lets you reorganize its items, add new ones and even create new panels to keep things grouped. The same goes for the desktop look, which can be configured to your liking through themes that affect the window borders, widgets and icons.

    A feature I miss is the ability to configure screen hot corners to trigger some actions, but I guess that some third party application could allow me to do that. Oh, also note that the desktop does not have any fancy 3D effects by default such as drop shadows, transparencies or anything like that. People are working on these features lately (Xgl), but they are not yet ready for the end user.

    Mac OS X, on the other hand, is quite unique in its interface. Applications have a single menu bar that sits at the top of the screen; this makes it very easy to reach and also groups all windows that belong to a single application (you know how annoying The GIMP is in this aspect, don't you?). On the other hand, the Dock replaces the typical task bar and also implements the ability to launch applications. Some may not like this but I love the merging of the two concepts: it doesn't matter if a program is currently running or not; you simply click its Dock icon (or even drag a file over it!) and it pops up to the front.

    This desktop is also famous for its graphical effects. As a curiosity, the active window has a drop shadow that makes it stand out from all the others (quite handy). But more importantly, these effects make it possible to implement things such as Exposé, which is a great task switcher. This makes me think of the hot corners I mentioned above: it is very useful to be able to configure actions for each corner so that they are triggered when you move your mouse over them.

    At last, and as far as I know, the Mac OS X interface is not themeable by default. You can change some of its colours (from Aqua to Graphite), but that's it. Oh, but I forgot to say that there are at least three different themes applications can use, and the one they show depends on them alone. This means that there are some inconsistencies all around as some applications use the brushed metal theme while others don't. Not a big problem for me, and there are rumors that this will be resolved in Leopard.

    Well... I guess this is not very enlightening, but the two interfaces have been compared countless times. I even wrote some articles in the past about it: see this or that. So that's all for now. (And if you want me to choose one, I go for the Mac OS X interface.) [Continue reading]

  • Mac OS X vs. Ubuntu: Hardware support

    Let's start our comparison by analyzing the quality of hardware support under each OS. In order to be efficient, a desktop OS needs to handle most of the machine's hardware out of the box with no user intervention. It also has to deal with hotplug events transparently so that pen drives, cameras, MP3 players, etc. can be connected and start to work magically. We can't forget power management, which is getting more and more important lately even on desktop systems: being able to suspend the machine during short breaks instead of powering it down is extremely convenient.

    So far Ubuntu has behaved very well on all the machines I've installed it on. As regards my desktop machine, it did the job just fine except for some minor glitches; even with those, the machine was still perfectly usable. For example, the TV card does not work — but really no other OS is able to automatically configure it due to deficiencies in the hardware itself — nor does suspension. This last item is worrisome because it did work in the past, but I haven't found a solution for it yet.

    On the other hand, the GeForce 6600GT video card works flawlessly after manually installing the NVIDIA video drivers, which is a simple matter of installing the nvidia-glx package and running the nvidia-glx-config enable command as root. I can't say this is ready for the end user — a tiny GUI for the overall process wouldn't hurt — but I'm happy with it. Anyway, if I hadn't done this, the desktop would still be usable with the free nv driver, but it does not perform as well.

    Somewhat related to the video card, there were problems with the resolution configuration. For some reason the screen was configured to 1024x768 and there was no way to go higher from the Screen Resolution control panel. If I recall correctly this was possible under a previous Ubuntu version (before it had a graphical installer). To solve this I had to resort to dpkg-reconfigure xserver-xorg, go through all the annoying questions, select the appropriate resolution (1680x1050) and reenable the NVIDIA driver. This is definitely not for the end user.

    At last, hardware hotplugging works fine as far as I can tell. There are a lot of people working on HAL, the GNOME Volume Manager, the kernel and all other related components, so this feature works as expected. Even the photo camera is instantly recognized when plugged in, and a window pops up asking whether all the photos should be transferred to the computer or not.

    Mac OS X, on the other hand, behaves much better with the hardware provided with the machine: everything works as intended. Of course this is because the people developing it are the same people that build the hardware, so they know exactly how to write the drivers. Simply put, it would not be acceptable if some pieces were not supported. I personally like the Apple hardware, so I don't mind getting their hardware if I feel the need to run this OS (as I did with the iBook G4!); many people won't agree here, though.

    Hotplugging also works in a similar way to every other desktop OS. I don't have much external hardware to try it with, though, but the basic things simply behave correctly. Now, conjecturing a bit: I bet Mac OS X will behave better than Ubuntu with more advanced multimedia hardware (MIDI keyboards, video cameras, webcams, etc.) but I don't have such things to try them with.

    Summarizing: Mac OS X has got it right and Ubuntu is on the right track. For my purposes, both of them are on par. [Continue reading]
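    For reference, the NVIDIA driver setup mentioned above came down to commands along these lines on Ubuntu 6.06 (package and tool names may differ in other releases, so treat this as a sketch of what I ran rather than a universal recipe):

        # Install the proprietary driver and enable it in the X server configuration.
        $ sudo apt-get install nvidia-glx
        $ sudo nvidia-glx-config enable
        # If the available resolutions are still wrong, reconfigure the X server by hand.
        $ sudo dpkg-reconfigure xserver-xorg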

  • Mac OS X vs. Ubuntu: Introduction

    About a week ago, my desktop machine was driving me crazy because I couldn't comfortably work on anything other than NetBSD and pkgsrc themselves. By "other work" I'm referring to Boost.Process and, most importantly, university assignments. Given my painless experience with the iBook G4 laptop I've had for around a year, I had decided to replace the desktop machine with a Mac — most likely a brand-new iMac 20" — to run Mac OS X on top of it exclusively — OK, OK, alongside Windows XP to satisfy the eventual willingness to play some games.

    Before making a final decision, I installed Ubuntu 6.06 LTS a few days ago on the desktop machine — an Athlon XP 2600+ with 1 GB of RAM, a couple of hard disks (80 and 120 GB) and a GeForce 6600GT — so that I could focus on my tasks in the meantime. I had already used Ubuntu in the past but not as my primary system, and I must confess it has impressed me a lot. Given this, the fact that my machine can't be considered old yet, that I already own a 20" screen and, of course, the price, I'm reconsidering the decision to buy the iMac.

    Canonical is doing a good job at developing a desktop operating system that just works for most, if not all, of the common end-user needs; that should be the definition of a desktop OS, shouldn't it? Sure, they are using components available everywhere else (modulo some custom changes) such as Debian GNU/Linux, the GNOME Desktop and an assorted set of related utilities. This, alongside the huge package repositories of Debian, provides a system with a lot of ready-to-install software that is useful to lots of people. Even more, the system itself requires little maintenance as most of it is completely automated.

    That's enough for the introduction. I would now like to compare Mac OS X Tiger against Ubuntu 6.06 LTS to see how they stack up against each other. Given that there are several aspects I want to cover in detail, I'll be unleashing them over the following days instead of posting them all in a single message; that'll hopefully be easier to digest. Just keep in mind that I'm speaking based on my own experiences, so some comments may be subjective. I also hope that these essays will help me take a final decision on what to do with the machine I currently have! ;-)

    Stay tuned! [Continue reading]

  • A letter to NetBSD

    Dear NetBSD,

    It is almost five years since we first met and I still remember how much I liked you at that time. Even though your 1.5 release had slow disk performance when compared to the other BSDs, I found in you an operating system that just felt right. You focused on clean and well designed code among many other goals; sincerely, I didn't come to you looking for portability because I never had anything other than i386 machines. All these feelings turned into love after installing and experimenting with you: the system was minimal, well documented and made sense. As you know, I soon left FreeBSD and migrated my machines to you.

    pkgsrc, one of your child projects, was also nice when compared to FreeBSD's ports. The buildlink concept was very interesting and in general it looked less clumsy than FreeBSD's ports: few build-time options resulting in consistent binary packages, no interactive installs, etc. It lacked many packages but that was a plus for me: I could easily get involved in your development. I set myself the goal of porting GNOME 2.x to you and achieved it some years ago.

    At that time I had never contributed to any big free software project, so you can imagine how excited I was some months later when one of your developers invited me to join you. That boosted my pkgsrc contributions and I started looking at contributing more stuff to the base system. If I can tell you a secret, I had always wanted to write a simple operating system on my own but, given the size of that task, I preferred to join the development of an existing free one and help as much as possible. To some extent I achieved it and learned a lot along the way.

    As I said above, I love(d) you, but keep in mind that love and hate are not opposite feelings.

    As another developer recently reminded me, one of my first changes to the system after becoming a developer was the addition of the "beep on halt" feature to the i386 platform. I quickly wrote that feature and was eager to add it to the system, so I presented it to tech-kern. My excitement went away quickly. That tiny change generated a lengthy discussion where everybody exposed his preferred way to achieve that feature and bashed all other possibilities. Eventually, it was all bikeshed. However, as I was new I tried to please everybody and in the end committed the feature.

    If that had happened only once, you'd say I had had a bad day with you. But it turns out that this same situation was, and still is, too common on your mailing lists. Indeed, there are a lot of competent people on them but consensus can never be reached.

    The problem is that this situation scares off many new potential contributors. They come excited to you with new patches and functionality that could be valuable to the OS, but you turn them down because you have no clear long-term goal, because the changes are not perfect and you insist on perfectly designed stuff in the first proposal, or because there is no consensus. Oh, and that is without counting all those cases where you do not provide a single reply to proposals. Eventually the contributor gets tired of the discussion (or lack thereof) and runs away.

    I want to make clear that some developer groups within you are still kind to deal with, such as the pkgsrc team or the www maintainers.

    And no, your "portability" goal is not interesting any more (and as I said above, it never was to me) to the general public. Your brother Linux runs on as many systems as you, if not more; it was just amazing to see it working on a Linksys router some days ago, while you cannot get close to that. NetBSD, you need to review your goals, make them clear and, if needed, generate some hype about them. Yes, even if you don't like it, hype will attract many new users. Some of them — possibly a tiny fraction — will be potential new developers. And you need as many of them as you can get in order to evolve, or otherwise you'll be stalled in the past because you won't be able to keep up with new hardware.

    While I've had a very good time hacking your code and working on pkgsrc, it's not fun for me any more. It is frustrating to spend lots and lots of hours of my free time only to later see that it was in vain. Furthermore, I'm extremely tired of having an unusable desktop system because of broken stuff — I know, not completely your fault — and the need to be constantly maintaining it. I need and want to be productive in other projects but you do not let me.

    You know, contributing as a volunteer to free software projects is about having fun. I'm not having fun any more, so I'll most likely be deinstalling you from my desktop machine and permanently shutting down my home server, both of which are running 4.0_BETA. This is still not a sure thing but, if things continue as usual, I surely will. And soon.

    This is not a good-bye though. I still believe in your hidden goals: good design, standards compliance, etc. — these should replace the "portability" buzzword — so I'll install you on a spare machine and/or under a virtual machine to be able to do occasional hacking and maybe GNOME maintenance. Oh, and of course I'll keep using pkgsrc under other systems.

    Yours sincerely,
    jmmv [Continue reading]

  • Trying out DD-WRT

    Last Christmas I bought a Linksys WRT54GSv4 router to improve wireless access to my home network. Of course, I could have just bought an access point, but I also wanted to replace most of my home server/router functionality with this little device so that I could eventually remove the server box. Therefore it had to provide:

    - NAT.
    - Firewalling.
    - Port redirection.
    - A dynamic DHCP server.
    - The ability to configure static DHCP entries (used for servers within the network).
    - A local DNS server to resolve names for local machines.

    The first four items were all provided by the official firmware but unfortunately not the last two. However, if I chose that specific model it was because it could easily be flashed with OpenWRT or DD-WRT, both of which allow you to set up those services by using, e.g., Dnsmasq.

    However, due to laziness and fear of breaking it, I didn't flash the router with these non-official firmwares until a couple of days ago. I finally decided to give DD-WRT a try given that it includes the same (but improved) nice web interface that came with the official software. I certainly did not want to configure it through the command line.

    For some reason the first flashing attempt failed in a strange way: I could access the router through telnet but not through the web. Trying to fix that through the command line produced much worse results, rendering the router useless: it wouldn't boot at all because the firmware was corrupt. Panic!

    Fortunately, after playing a lot with TFTP I finally managed to flash it again with the correct firmware. After that, I was greeted by the web interface waiting for me to configure everything again. At first everything looked as before (except for the visual theme change) with some minor additions, but then I discovered the "Services" tab, which accomplished my initial goal: setting up static DHCP entries for hosts and configuring local DNS names. Simply wonderful.

    So now that the router provides DHCP and DNS as I needed, the only remaining tasks that the server does are file sharing, Monotone database serving (for the local network only), P2P management and daily NetBSD rebuilds. Note that in the past it also ran Apache, Courier IMAP, Spamassassin, FTP and probably something else. I'm now thinking about how best to get rid of them all and shut down that annoying machine that continuously pollutes the living room with noise and wastes power unnecessarily. [Continue reading]
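    Since these services are provided by Dnsmasq underneath, a hand-written configuration achieving the same thing as the "Services" tab would look roughly like the snippet below; the domain name, MAC address and IP addresses are made-up examples:

        # Hand out dynamic leases on the LAN...
        dhcp-range=192.168.1.100,192.168.1.199,24h
        # ...but pin servers to fixed addresses via static DHCP entries.
        dhcp-host=00:11:22:33:44:55,fileserver,192.168.1.10
        # Resolve unqualified local host names within a private domain.
        domain=home.lan
        expand-hosts
        local=/home.lan/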

  • GNOME on NetBSD needs YOU!

    A few pkgsrc developers and I have been working hard for years to bring the GNOME Desktop to this packaging system and make it work under NetBSD. We are quite happy with the current results because the packages are updated very frequently and everything works. Well, almost. There are still several missing details that really hurt the end user experience and need fixing.

    If things continue as they have gone until now, we will always be one step (or more!) behind other operating systems such as Linux and FreeBSD. Linux, of course, gets full support from the GNOME developers because they do their daily work on it. FreeBSD, on the other hand, has more people working on the port and therefore more manpower to resolve all the portability problems; they are doing nice work.

    In our case, it is clear that we do not have enough manpower to keep up with the huge task that porting GNOME is — we are very few people working on it. Just consider that GNOME is composed of around 100 packages and that there are new major releases every 6 months (minor ones coming much faster). This updating task imposes a lot of stress on us and prevents us from working on the remaining pending items.

    If we were able to work on all these issues, we could have a fully functional GNOME Desktop on top of NetBSD. I believe this is a key area in which to improve NetBSD's visibility: if we had a complete desktop environment, more users could come and use NetBSD for their daily tasks. Eventually, this could attract more developers who would start contributing to the system itself.

    So... I've prepared a list of GNOME-related projects that details the major items that need to be addressed to have a complete GNOME Desktop installation on top of NetBSD and pkgsrc. I've tried to detail each project as much as possible, explaining the current problem, why solving it could benefit NetBSD and how to get started.

    Should you need more information, I've also written some generic guidelines about GNOME packaging and porting. And, of course, you can contact me to get more details and take one of the projects! I'm willing to mentor you to make them a success. You can certainly make a difference to the current status of things.

    Let me add that I have learned a lot about many different areas from my contributions to pkgsrc and NetBSD. You can seize the opportunity to learn new exciting stuff too; don't be shy! Oh... and if GNOME on NetBSD is not to your liking, please see our complete list of proposed projects; I bet you will find something of interest! [Continue reading]

  • Recent GNOME fixes

    A week has almost passed since someone told me that D-Bus' session daemon was broken on NetBSD. I curse that day! ;-) I've been investigating that problem since then and (very) belatedly fixing some issues in other GNOME programs during the process.

    D-Bus' session daemon did not work under NetBSD because it couldn't authenticate incoming connections; that was due to the lack of socket credentials. After some days of investigation — which included discovering that NetBSD does indeed support socket credentials through LOCAL_CREDS — and multiple attempts to implement them, I finally got the D-Bus session daemon to authenticate appropriately.

    This allowed me to fix gnome-keyring too, which was broken for the exact same reason, as well as gnome-keyring-manager, the application I was using to check whether gnome-keyring worked or not.

    At last I also finally sat down and solved an annoying problem in the gnome-applets package that caused the Sticky Notes applet to crash when adding a new note; this had been happening since 2.12.0 if I recall correctly. I am sure that the root of this problem was also producing incorrect behavior in other panel applets.

    For more details check these out:

    - dbus: #7798 - Generalize kqueue support
    - dbus: #8037 - Improve debugging messages in exchange_credentials
    - dbus: #8041 - Add LOCAL_CREDS socket credentials support
    - gnome-keyring: #353105 - Implement LOCAL_CREDS socket credentials
    - gnome-applets: #353239 - Get rid of AC_DEFINE_DIR
    - gnome-keyring-manager: #353251 - Better handling of null paths

    Ouch... and GNOME 2.16 is around the corner... I'm afraid of all the new problems to come! [Continue reading]

  • More on LOCAL_CREDS

    One of the problems of learning new stuff based on trial-and-error iterations is that it is very easy to miss important details... but that's the price to pay when there is no decent documentation available for a given feature. We saw yesterday multiple details about LOCAL_CREDS socket credentials and, as you may deduce, I missed some.

    First of all, I assumed that setting the LOCAL_CREDS option only affected the next received message (I didn't mention this explicitly in the post though). It turns out that this is incorrect: enabling this option makes the socket transmit credentials information on each message until the option is disabled again.

    Secondly, setting the LOCAL_CREDS option on a server socket (one configured with the listen(2) call) results in all sockets created from it through accept(2) also carrying the flag enabled. In other words, it is inherited.

    These features are interesting because, when combined, they avoid the need for the synchronization protocol outlined in the previous post — in some cases only. If the credentials are to be transmitted at the very beginning of the connection, the server can follow these steps:

    1. Create the server socket and configure it with bind(2) and listen(2).
    2. Before entering the accept(2) loop, set the LOCAL_CREDS option on the server socket.
    3. Enter the accept(2) loop and start accepting clients.
    4. For each new client: receive its first message, get the credentials from it, and disable the LOCAL_CREDS option on the socket used to communicate with that specific client.

    It couldn't be easier! This is still different from all other socket credentials methods I know of but can be easily adapted to protocols that were not designed to support LOCAL_CREDS (i.e. that do not implement the synchronization explained in the previous post). [Continue reading]
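
    The following is a minimal sketch of those server-side steps, assuming NetBSD's LOCAL_CREDS semantics as described above. It is not code from any real program: the socket path /tmp/creds.sock is a placeholder, no client is included (so recvmsg(2) will block until one connects), and error handling is kept to the bare minimum.

        #include <sys/param.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            /* 1. Create the server socket and set it up with bind(2) and listen(2). */
            int srv = socket(AF_UNIX, SOCK_STREAM, 0);
            if (srv == -1)
                err(EXIT_FAILURE, "socket");

            struct sockaddr_un addr;
            memset(&addr, 0, sizeof(addr));
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, "/tmp/creds.sock", sizeof(addr.sun_path) - 1);
            (void)unlink(addr.sun_path);
            if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) == -1 ||
                listen(srv, 5) == -1)
                err(EXIT_FAILURE, "bind/listen");

            /*
             * 2. Enable LOCAL_CREDS on the server socket before accepting anybody;
             *    accepted sockets inherit the option.
             */
            int on = 1;
            if (setsockopt(srv, 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "setsockopt");

            /* 3. Accept a client; its first message carries the SCM_CREDS data. */
            int cli = accept(srv, NULL, NULL);
            if (cli == -1)
                err(EXIT_FAILURE, "accept");

            char buf[128];
            struct iovec iov = { buf, sizeof(buf) };
            char control[CMSG_SPACE(SOCKCREDSIZE(NGROUPS))];
            struct msghdr msg;
            memset(&msg, 0, sizeof(msg));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = control;
            msg.msg_controllen = sizeof(control);
            if (recvmsg(cli, &msg, 0) == -1)
                err(EXIT_FAILURE, "recvmsg");

            struct cmsghdr *hdr = CMSG_FIRSTHDR(&msg);
            if (hdr != NULL && hdr->cmsg_type == SCM_CREDS) {
                struct sockcred *sc = (struct sockcred *)CMSG_DATA(hdr);
                printf("Remote UID: %lu\n", (unsigned long)sc->sc_uid);
            }

            /* 4. Credentials received; stop asking for them on this connection. */
            on = 0;
            if (setsockopt(cli, 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "setsockopt");

            close(cli);
            close(srv);
            return EXIT_SUCCESS;
        }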

  • LOCAL_CREDS socket credentials

    Socket credentials are a feature that allows a user process to receive the credentials (UID, GID, etc.) of the process at the other end of a communication socket in a safe way. The operating system is in charge of managing this information, which is sent separately from the data flow, so that user processes cannot fake it. There are many different implementations of this concept out there, as you can imagine.

    For some reason I assumed for a long time that NetBSD didn't support any kind of socket credentials. However, I recently discovered that it does indeed support them through the LOCAL_CREDS socket option. Unfortunately, it behaves quite differently from other methods. This poses some annoying portability problems in applications not designed in the first place to support it (e.g. D-Bus, the specific program I'm fighting right now).

    LOCAL_CREDS works as follows:

    1. The receiver interested in remote credentials uses setsockopt(2) to enable the LOCAL_CREDS option on the socket.
    2. The sender sends a message through the channel either with write(2) or sendmsg(2). It needn't do anything special other than ensuring that the message is sent after the receiver has enabled the LOCAL_CREDS option.
    3. The receiver gets the message using recvmsg(2) and parses the out-of-band data stored in the control buffer: a struct sockcred message that contains the remote credentials (UID, GID, etc.). This does not provide the PID of the remote process, though, as other implementations do.

    The tricky part here is to ensure that the sender writes the message after the receiver has enabled the LOCAL_CREDS option. If this is not guaranteed, a race condition appears and the behavior becomes random: sometimes the receiver will get socket credentials, sometimes it will not.

    To ensure this restriction there needs to be some kind of synchronization protocol between the two peers. This is illustrated in the following example, which assumes a client/server model and a "go on" message used to synchronize. The server could do:

    1. Wait for a client connection.
    2. Set the LOCAL_CREDS option on the remote socket.
    3. Send a "go on" message to the client.
    4. Wait for a response, which carries the credentials.
    5. Parse the credentials.

    And the client could do:

    1. Connect to the server.
    2. Wait for the "go on" message.
    3. Send any message to the server.

    To conclude, here is a sample program that shows how to manage the LOCAL_CREDS option. socketpair(2) is used for simplicity, but this can easily be extrapolated to two independent programs.

        #include <sys/param.h>
        #include <sys/types.h>
        #include <sys/inttypes.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            int sv[2];
            int on = 1;
            ssize_t len;
            struct iovec iov;
            struct msghdr msg;
            struct {
                struct cmsghdr hdr;
                struct sockcred cred;
                gid_t groups[NGROUPS - 1];
            } cmsg;

            /*
             * Create a pair of interconnected sockets for simplicity:
             * sv[0] - Receive end (this program).
             * sv[1] - Write end (the remote program, theoretically).
             */
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
                err(EXIT_FAILURE, "socketpair");

            /*
             * Enable the LOCAL_CREDS option on the reception socket.
             */
            if (setsockopt(sv[0], 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "setsockopt");

            /*
             * The remote application writes the message AFTER setsockopt
             * has been used by the receiver.  If you move this above the
             * setsockopt call, you will see how it does not work as
             * expected.
             */
            if (write(sv[1], &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "write");

            /*
             * Prepare space to receive the credentials message.
             */
            iov.iov_base = &on;
            iov.iov_len = 1;

            memset(&msg, 0, sizeof(msg));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = &cmsg;
            msg.msg_controllen = sizeof(struct cmsghdr) + SOCKCREDSIZE(NGROUPS);

            memset(&cmsg, 0, sizeof(cmsg));

            /*
             * Receive the message.
             */
            len = recvmsg(sv[0], &msg, 0);
            if (len == -1)
                err(EXIT_FAILURE, "recvmsg");
            printf("Got %zu bytes\n", len);

            /*
             * Print out credentials information, if received
             * appropriately.
             */
            if (cmsg.hdr.cmsg_type == SCM_CREDS) {
                printf("UID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_uid);
                printf("EUID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_euid);
                printf("GID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_gid);
                printf("EGID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_egid);

                if (cmsg.cred.sc_ngroups > 0) {
                    int i;
                    printf("Supplementary groups:");
                    for (i = 0; i < cmsg.cred.sc_ngroups; i++)
                        printf(" %" PRIdMAX,
                               (intmax_t)cmsg.cred.sc_groups[i]);
                    printf("\n");
                }
            } else
                errx(EXIT_FAILURE, "Message did not include credentials");

            close(sv[0]);
            close(sv[1]);

            return EXIT_SUCCESS;
        }

    [Continue reading]

  • A split function in Haskell

    Splitting a string into parts based on a token delimiter is a very common operation in some problem domains. Languages such as Perl or Java provide a split function in their standard library to execute this algorithm, yet I'm often surprised to see how many languages do not have one. As far as I can tell neither C++ nor Haskell have it, so I have coded such a function in the past multiple times in both languages. (This is not exactly true: Haskell has the words function, which splits a string by whitespace characters. Nevertheless I didn't know this when I wrote my custom implementation.)

    When I implemented a custom split function in Haskell I was really amazed to see how easy and clean the resulting code was. I'm sure there is some better and even cleaner way to write it because I'm still a Haskell newbie! Here it is:

        split :: String -> Char -> [String]
        split [] delim = [""]
        split (c:cs) delim
            | c == delim = "" : rest
            | otherwise  = (c : head rest) : tail rest
            where rest = split cs delim

    The above code starts by declaring the function's type; this is optional because Haskell's type system is able to automatically deduce it. It then uses pattern matching to specify the algorithm's base and recursive cases. At last, the recursive case is defined piecewise, just as you do in mathematics. Oh, and why recursion? Because iteration does not exist in functional programming in the well-known sense of imperative languages. Also note the lack of variables (except for the input ones) and that everything is an evaluable expression.

    Let's now compare the above code with two implementations in C++. A first approach to the problem following common imperative programming thinking results in an iterative algorithm:

        std::deque< std::string >
        split_iterative(const std::string& str, char delim)
        {
            std::deque< std::string > parts;
            std::string word;
            for (std::string::const_iterator iter = str.begin();
                 iter != str.end(); iter++) {
                if (*iter == delim) {
                    parts.push_back(word);
                    word.clear();
                } else
                    word += *iter;
            }
            parts.push_back(word);
            return parts;
        }

    This is certainly uglier and much more difficult to prove right; iteration is a complex concept in that sense. In this code we have variables that act as accumulators, temporary objects, commands, etc. Be glad that I used C++ and not C to take advantage of STL containers.

    OK, to be fair the code should be implemented in a recursive way to be really comparable to the Haskell sample function. Let's attempt it:

        std::deque< std::string >
        split_recursive(const std::string& str, char delim)
        {
            std::deque< std::string > parts;
            if (!str.empty()) {
                std::string str2 = str;
                parts = split_recursive(str2.erase(0, 1), delim);
                if (str[0] == delim)
                    parts.push_front("");
                else
                    parts[0] = str[0] + parts[0];
            } else
                parts.push_front("");
            return parts;
        }

    This split_recursive function follows the same algorithm as the split written in Haskell. I find that it is still harder to read and more delicate (I had some segmentation faults until I got it right).

    Of course Haskell is not appropriate for everything (which is true for every language out there). I have yet to write a big and useful program in Haskell to really see its power and to be able to really compare it to other languages. All I can do at the moment is to compare trivial stuff like the above. [Continue reading]
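
    To make the expected behavior concrete, here is a small, hypothetical driver (not part of the original post) that reuses split_iterative exactly as defined above, copied in so the snippet builds on its own; the comment shows the output I would expect, including the empty part produced between two consecutive delimiters.

        #include <deque>
        #include <iostream>
        #include <string>

        // Same function as in the post, repeated here only so that this
        // snippet compiles standalone.
        std::deque< std::string >
        split_iterative(const std::string& str, char delim)
        {
            std::deque< std::string > parts;
            std::string word;
            for (std::string::const_iterator iter = str.begin();
                 iter != str.end(); iter++) {
                if (*iter == delim) {
                    parts.push_back(word);
                    word.clear();
                } else
                    word += *iter;
            }
            parts.push_back(word);
            return parts;
        }

        int
        main(void)
        {
            // Expected output: [one][two][][three]
            const std::deque< std::string > parts =
                split_iterative("one:two::three", ':');
            for (std::deque< std::string >::const_iterator iter = parts.begin();
                 iter != parts.end(); iter++)
                std::cout << "[" << *iter << "]";
            std::cout << std::endl;
            return 0;
        }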

  • What have I learned during SoC?

    One of SoC's most important goals is the introduction of students to the free software world; this way there are high chances that they will keep contributing even when SoC is over. Students already familiar with FOSS (as was my case both years) are also allowed to participate because they can seize the Summer to learn new stuff and improve their skills.

    As I expected, the development of Boost.Process has taught me multiple new things. First of all, I wanted to get familiar with the Win32 API because I knew nothing about it. I have achieved this objective by learning the details of process and file management and making Boost.Process work under this platform. Honestly, Win32 is overly complex, but it has some interesting features.

    Secondly, I have become a lot more fluent with C++ templates and have learned some curious coding techniques that I never thought about in the past. The most impressive one, in my opinion, is that templates can be used to achieve build time specialization, avoiding expensive virtual tables at run time and inheritance when these are not really needed. (I only considered them for polymorphic containers before.)

    Lastly, I have also gotten familiar with several utilities used for Boost development. Among them are Quickbook for easy document writing, Boost.Build v2 for portable software building and the Boost Unit Test library for painlessly creating automated test suites.

    All in all I'm happy with the outcome of the project and the new knowledge. If SoC happens again, you should really consider joining if you have the chance! [Continue reading]
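
    To illustrate the build time specialization idea, here is a small, hypothetical C++ sketch (not code from Boost.Process; the class names are invented): the operation is parameterized on a policy type, so the call is resolved when the template is instantiated instead of going through a virtual table at run time.

        #include <iostream>
        #include <string>

        // Two interchangeable "policies"; note that they do not derive from a
        // common base class.
        struct unix_launcher {
            void start(const std::string& cmd) const
            { std::cout << "fork/exec: " << cmd << std::endl; }
        };

        struct win32_launcher {
            void start(const std::string& cmd) const
            { std::cout << "CreateProcess: " << cmd << std::endl; }
        };

        // The behavior is chosen at compile time, when the template is
        // instantiated, so the compiler can even inline the call; no virtual
        // dispatch is involved.
        template< class Launcher >
        void
        run(const Launcher& l, const std::string& cmd)
        {
            l.start(cmd);
        }

        int
        main(void)
        {
            run(unix_launcher(), "ls");
            run(win32_launcher(), "dir");
            return 0;
        }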

  • Boost.Process 0.1 published

    SoC 2006 is officially over — at least for me in my timezone. Given that the Subversion repository has some problems with public access, I've tagged the current sources as the first public version and uploaded a couple of tarballs to the Boost Vault. Both the tag and the tarballs will also serve historical purposes, especially when newer ones come ;-)

    You can download the archives from the Process directory in tar.gz and ZIP formats. Enjoy! [Continue reading]

  • Boost.Process tarballs posted

    As not everybody is comfortable accessing Subversion repositories to download source code, I've posted two tarballs with Boost.Process' sources. They include an exported copy of the repository contents as well as prebuilt documentation in the libs/process/doc/html subdirectory.

    You can download the compressed archive either in tar.gz format or in ZIP. Keep in mind that these will be updated very frequently, so please do not use them to prepackage the library.

    Changes from yesterday's announcement are minor at this point. For the curious ones: there is now a list of pending work and the Last revised item in the main page has been fixed. As a side effect of this last change, BoostBook will support SVN's $Date$ tags if my patch is integrated :-) [Continue reading]

  • Blog migrated to new Blogger beta

    Blogger announced yesterday multiple improvements to their service. These are still in beta — as almost all other Google stuff, you know ;-) — and are being offered to existing users progressively. To my surprise, the option to migrate was available on my dashboard today, so I applied for it; I was very interested in the post labelling feature.

    The migration process has been flawless and trivial. Afterwards, nothing seemed to have changed except for some minor nits in the UI. I looked around for the labels feature but discovered that it is only available once you migrate to the new "layouts system", an easier way to design your blog's look.

    The switch to layouts scared me a bit because I was afraid of not being able to integrate the Statcounter code back again. But after verifying that the change was reversible, I tried it. I can confirm that the new customization page is much, much easier to use than before, although still too limited (direct HTML editing is not available yet). Oh, and I seized the opportunity to switch to a slightly different theme (yes, it was available before).

    Aside from that there are some nice new features such as RSS feeds (weren't they there before?), a better archive navigation (see the right bar), integration with Google accounts and many other things I'm surely missing.

    Summarizing: it has taken a long while for the Google people to upgrade Blogger's service, but the wait has been worth it. Now more than ever, I don't regret migrating from Livejournal to this site almost a year ago. [Continue reading]

  • SoC: Boost.Process published

    In a rush to publish Boost.Process before the SoC deadline arrives, I've been working intensively during the past two days to polish some issues raised by my mentor. First of all I've added some Win32-specific classes so that the library does not seem Unix-only. These new classes provide functionality only available under Windows and, on the documentation side, they come with a couple of extra examples to demonstrate their functionality.

    Speaking of documentation, it has been improved a lot. The usage chapter has been rewritten almost completely; it has gained a couple of tutorials and all the platform-specific details in it have been moved to two new chapters. One of them focuses on explaining those features available only under a specific operating system while the other summarizes multiple portability issues that may arise when using the generic classes. Additionally, a chapter about supported systems and compilers has been added.

    There are still two big things missing that shall be worked on in the (very) short term: add a design decisions chapter to the documentation and incorporate asynchronous functionality into the library by using Boost.Asio. This last thing is needed to keep things simple from the user's point of view (i.e. no threads in his code).

    Check out the official announcement for more details.

    I guess that this closes SoC for me this year. There are still going to be some changes before Monday but don't expect anything spectacular (I'll be away during the weekend... hopefully ;-). But don't be afraid! Work on this project will continue afterwards! [Continue reading]

  • SoC: Status report 3

    Only 8 more days and SoC will be officially over... Time has passed very fast and my project required much more work than I initially thought. It certainly cannot be completed before the deadline but I assure you that it will not fall into oblivion afterwards; I have spent too much time on it to forget ;-)

    There have been many changes in Boost.Process' code base since the previous status report; let's see a brief summary:

    - The library has been simplified, removing all those bits that were aimed at "generic process management". Now it is focused on child process management only, although extending it to support other process-related functionality is still possible (preserving compatibility with the current API). It'll be better to design and implement these features when really needed because they will require a lot of work and cannot be planned right now; doing so might result in an incomplete and clumsy design. Yup... my mentor (Jeff Garland) was right when he suggested to go this simplified route at the very beginning!
    - Due to the above simplifications, some classes are not templated any more (the stuff that depended on the template parameters is now gone). I bet some of them could still be, but this can be easily changed later on.
    - There is now a specialized launcher in the library to painlessly start command pipelines. This also comes with a helper process group class to treat the set of processes as a unique entity.
    - The user now has much more flexibility to specify how a child process' channels behave. While documenting the previous API it became clear that it was incomplete and hard to understand.
    - Code from all launchers has been unified in a private base class to avoid duplication and ensure consistency across those classes. Similar changes have occurred in the test suite, which helped in catching some obscure problems.
    - Related to the previous point, much of the code used to do the actual task of spawning a process has been moved out of the individual launcher classes into some generic private functions. This was done to share more code and improve cohesion and readability.
    - The documentation is now much better, although it still lacks a chapter about design issues. See the online snapshot for more details.
    - And, of course, multiple bug fixes and cleanups.

    Still, I haven't had a chance to ask for a public review in Boost's developers mailing list. The problem is that I continuously find things to improve or to complete and prefer to do them before asking for the review. However, as time is running out I'll be forced to do this in the forthcoming week to get some more feedback in time. [Continue reading]

  • IMAP gateway to GMail

    Update (Oct 24, 2007): OK, this is one of the most visited posts in my blog. Don't bother reading this. As of today, GMail supports IMAP without the need for external hacks! Just go to your settings tab, enable it, configure your mailer and that's it! More information is on their help page.

    Wouldn't it be great if you could access your GMail account using your favourite email client from multiple computers, yet keep all of them synchronized? That's what you could do if they provided support for the IMAP protocol, but unfortunately they currently don't.

    So yesterday I was wondering... would it be difficult to write an IMAP gateway for GMail? Sure it would but... guess what? It already exists! The GMail::IMAPD Perl module implements this functionality in a ready-to-use service. All you need to do is copy/paste the sample program in the manual page, execute it and you've got the gateway running.

    Unfortunately, it's still quite incomplete as it only supports some mail clients and lacks some features — the documentation gives more details on this. I could get it to work with Apple Mail but it was very slow overall (maybe because I have a lot of mail in my account) and had random problems. You might get better results though.

    For your pleasure, it is now in pkgsrc as mail/p5-GMail-IMAPD alongside a patch to accommodate a change in GMail's login protocol. There is also the programmatic interface to the web service used by the former in mail/p5-Mail-Webmail-Gmail, but be aware that the former includes a somewhat obsolete copy of the latter due to non-official modifications.

    Update (August 26th): I am not the author of the above mentioned Perl module and therefore I cannot provide support for it. Please read the manual page and, if it is not clear enough or if it does not work as you expect, ask the real author (Kurt Schellpeper) for further details. Anyway, to answer some of the questions posted:

    To get this module to work, install it using CPAN or pkgsrc (recommended). Using the latter has the advantage that the module receives a fix for the login procedure. If you install it manually be sure to apply the required patch!

    Then open up an editor and paste the example code from the module's manual synopsis section:

        # Start an IMAP-to-Gmail daemon on port 1143
        use GMail::IMAPD;
        my $daemon=GMail::IMAPD->new(LocalPort=>1143,
                                     LogFile=>'gmail_imapd.log',
                                     Debug=>1);
        $daemon->run();

    Save the file as, e.g., gmail-imap.pl and execute it from a terminal using: perl gmail-imap.pl (/usr/pkg/bin/perl gmail-imap.pl if you are using pkgsrc). Once running, configure your mail client to connect to localhost:1143 using IMAP v4. If it does not work, I'm sorry but you are on your own. (Again, contact the module's author.)

    Hope this helps. [Continue reading]

  • GNOME 2.14.3 hits pkgsrc

    Last night I finished updating the GNOME meta packages in pkgsrc to the latest stable version, 2.14.3. Yes, I had to take a break from Boost.Process coding (which is progressing nicely by the way; check the docs).

    The meta packages had been stalled at 2.14.0 since the big update back in April, which shows how little time I've had to do any pkgsrc work — well, you can also blame the iBook with its Mac OS X, if you want to ;-) Luckily the packages are now up to date, but I hope they'll not get stalled at this version for too long: 2.16.0 is around the corner (due in one to two months!).

    I must thank Matthias Drochner and Thomas Klausner for all their work on the GNOME packages during this period of time. Although they did not touch the meta packages, almost all of the components were brought up to date very promptly after each stable release; in fact, I just had to update a dozen packages on my own to get a complete 2.14.3 installation, aside from tweaking the meta packages.

    Let me finish with a call for help: the biggest thing missing (in my opinion, that is) in GNOME under NetBSD right now is HAL. It shouldn't be too difficult to get it to work but it will certainly require several days of discussion and coding. Should you want to help here (which basically boils down to adding a kernel driver and porting the userland utilities), feel free to contact me for more details. [Continue reading]

  • X11 and the Win keys

    For quite some time I've been having issues with the Windows keys in my Spanish keyboard under X11. I like to use these as an extra modifier (Mod4) instead of a regular key (Super_L), because that is very handy when defining keybindings. The X11 default seems to treat them as Super_L only. For example, trying to attach Win+N as a keybinding to one of the actions in the GNOME Keyboard Shortcuts panel resulted in the Super_L combination instead of Mod4+N, hence not working at all.

    Fortunately, I found how to fix that within GNOME a while ago. It is simply a matter of enabling the "Meta is mapped to the left Win-key" option in the Keyboard configuration panel. But... I was now forced to use Fluxbox while I rebuilt some parts of GNOME, and the modifier was not working because the system was using the X11 defaults again.

    After inspecting /etc/X11/XF86Config and some of the files in /usr/X11R6 I found how to enable this behavior in the regular X11 configuration files, bypassing GNOME. It is a matter of adding the following line to the keyboard section of XF86Config:

        Option "XkbOptions" "altwin:left_meta_win"

    I guess this works the same for X.Org. [Continue reading]

  • SoC: Playing with Doxygen

    My Boost.Process prototype is almost feature complete; the major thing that is still not included is the ability to create pipelines. I should address that as soon as possible because I'm afraid it will have some impact on the existing classes, but for now I wanted to start documenting some code. There are already 21 header files to document and doing so is by no means an easy task.

    In order to document the library's API I've decided to use Doxygen, a documentation system for multiple languages including, obviously, C++. Doxygen scans your source files looking for special comments that document classes, methods and any other part of the code. Then, the comments are extracted alongside the code structure and are used to automatically generate reference documentation in whichever format you want (HTML, LaTeX, XML, etc.).

    Doxygen is widely used and nicely integrated with Boost.Build. Boost's build system automatically generates the required configuration file for Doxygen (the Doxyfile) and lets you merge the resulting files with your other BoostBook (or QuickBook) documents painlessly.

    So far I like this tool very much. Keeping the documentation alongside the source code helps in keeping it consistent and makes it immediately available to the curious developer reading the code. Furthermore, it provides tags to format anything you can imagine: preconditions, postconditions, thrown exceptions, results, etc.

    The results? Take a look at the Reference section in the Boost.Process manual ;-) At the moment of this writing only the classes in the detail subdirectory are documented, which correspond to sections 5.10 through 5.13. [Continue reading]
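
    For illustration, a Doxygen comment on a hypothetical C++ declaration (invented for this example, not taken from Boost.Process) could look like the following; Doxygen picks up the /** ... */ block and the \brief, \pre, \param, \return and \throw tags when generating the reference:

        #include <stdexcept>
        #include <string>

        /**
         * \brief Launches the given command in a new child process.
         *
         * \pre cmd is not empty.
         * \param cmd The command line to execute.
         * \return The identifier of the spawned child process.
         * \throw std::runtime_error If the process cannot be started.
         */
        int launch(const std::string& cmd);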

  • SoC: Status report 2

    Another week has passed and I'm happy to announce that the Boost.Process prototype is now completely ported to the Win32 API. In other words, a program can use the current library to transparently manage child processes both from Windows and Unix systems.

    There are still several rough edges and incomplete classes but the code passes the test suite on both systems :-) OK, you know that passing a test suite does not mean that the code is correct; it only means that it complies with the existing tests. So... more tests are needed to spot the existing failures.

    I'm now going to clean up some parts of the code that make little sense after the huge rototill to port the code to Win32; basically, the internal pipe class and its usage. Then, I'll try to complete the missing Unix-specific bits.

    Why did I say a "huge rototill"? After starting to port some code to Windows, I discovered the CRT library. For a moment, I thought that the porting could be easy, given that this supports the standard POSIX file descriptors and calls (open(2), read(2), etc.). Unfortunately, I quickly realized that using the CRT would not integrate well with the native Win32 API; and worse, I discovered that Windows only supports communicating with child processes through the three standard channels (stdin, stdout and stderr). This restriction has forced me to redo most of the existing design and code to offer a clean and common interface on both platforms; file descriptors are now hidden all around unless you explicitly want to see them.

    Of course this means that the classes used to launch child processes now only accept these three channels, something that is not powerful enough on a Unix system. In these OSes, processes may need to set up extra communication pipes with their children to retrieve additional information (dbus and GPG come to my mind), so there shall be POSIX-specific classes that allow this.

    I would like to finish the clean up and the addition of POSIX-specific code by the end of the month alongside some simple documentation (formal code examples). The idea is to be able to publish it for informal review soon afterwards (beginning of August). [Continue reading]

  • SoC: Status report

    Mmm... SoC. Multiple things have been going on lately in my SoC project, yet I've kept you uninformed. As I already told you, my project aims to develop a C++ library for the Boost project to manage child processes; it's named Boost.Process.

    During June I discussed with Jeff Garland — my mentor — the general design of the library. The design is surely not final but it is a lot better than its first sketches were. For example: it makes use of templates where appropriate to let the user replace any part of the library with his own code (more or less). I must say he has been very patient with all my questions and has provided extremely valuable information.

    I also seized that month to investigate the Win32 API a bit because the library must work on Windows too. I couldn't do much more during that time because I was busy with the semester's final exams. All passed, by the way :-)

    And now to the interesting thing. I've spent the past week (almost two) implementing a preliminary prototype. It is still incomplete but has already raised many issues; you know, it is hard to get into the details (that affect the design) without coding a bit. The prototype also includes multiple unit tests to ensure that everything works as it should; Boost's Unit Test Framework is a really nice tool to implement them.

    Browse the source code for more details. [Continue reading]

  • Fixing suspension problems

    Once upon a time I could put my desktop machine to sleep either from Windows XP or Linux. When I replaced both with Vista Beta 2, I tried to suspend the machine and saw it fail miserably; I quickly (and incorrectly!) blamed the OS and forgot about the issue. But a couple of days ago I installed Ubuntu 6.06 on the same machine and it exposed the same problem: after asking the OS to suspend the machine, everything powered down as expected, but in less than a second the machine resumed operation.

    What had changed between the last time it worked and now? The only thing I could find was the keyboard and the mouse, both of which are now USB and were PS/2 before. Mmm... as a test, I pressed the suspend button and immediately afterwards unplugged both peripherals. Guess what? The machine got into sleep mode properly!

    So, I opened the case and looked for the two jumpers on the motherboard (an Asus A7V8X-X) that tell it which power line to use for the USB ports: +5V or +5VSB. Changing them from the former to the latter fixed the problem and suspension works fine now.

    Maybe it's time to try NetBSD-current to see if this feature also works on my machine... [Continue reading]

  • Disabling bitmapped fonts

    A week ago or so I reinstalled NetBSD — 3.0_STABLE, not current — on my machine, finally replacing the previous unstable and out-of-control system. I had to do it to get some work done more easily than on Windows and to be able to keep up with my developer duties.

    After a successful and painless installation, I built and installed Firefox and Windowmaker, both of which come in handy from time to time (especially while rebuilding the entire GNOME Desktop). However, launching Firefox under a plain Windowmaker session greeted me with extremely ugly fonts. The GTK interface was OK, but web pages were rendered horribly in general. It was simply unusable.

    At first I thought it had to do with the anti-aliasing configuration, but several attempts to change its details only resulted in worse fonts. The same happened when dealing with DPI settings. So what was happening? It turns out that Firefox was using bitmapped fonts instead of vector ones — and you know how ugly those look if they are not rendered at their native size.

    Firefox was asking Fontconfig (X's font configuration and access library) to provide a font from the Serif family, without caring about which one it could be. Fontconfig then provided it with a bitmapped font. (I still don't know why it preferred those over any other.)

    The nice thing is that you can tell Fontconfig how to match generic font names to the fonts you really have. And this effectively means that you can force it to never use bitmapped fonts. All I had to do was to create a custom local.conf file in Fontconfig's configuration directory (/etc/pkg/fontconfig in my case):

        <?xml version="1.0"?>
        <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
        <fontconfig>
            <include>conf.d/no-bitmaps.conf</include>
        </fontconfig>

    The curious thing is that GNOME seems to take care of this on its own, because Firefox uses nice fonts under it even if you do not touch the Fontconfig files.

    Oh! Be aware that NetBSD has two Fontconfig installations: one from the native XFree86 installation and one from pkgsrc. These are configured in different directories: /etc/fonts and PKG_SYSCONFDIR/fontconfig respectively.

    Don't know if this issue also happens in other operating systems if Fontconfig is not manually configured... [Continue reading]

  • Windows: Remapping keyboard keys

    Some time ago, I bought an Apple Keyboard to use with both my PC and iBook. Compared to a traditional PC keyboard, this one includes extra function keys (from F13 to F16) and some multimedia ones, but it does not provide those that are PC-specific and which are very rarely used even in, for example, Windows. One such key is Print Screen; I needed it to take a screenshot of my desktop a couple of days ago and couldn't find a quick way to do it otherwise.

    In order to solve the problem, I searched for information on how to remap keys in Windows. It turns out that doing so manually is quite complex because you need to fiddle with a cryptic key in the registry (of course!). Fortunately, there is a little free utility that simplifies this process: SharpKeys.

    This tool lets you remap any physical key on your keyboard to any other one you can imagine. To my surprise, you can even remap your keys to those multimedia functions found in new keyboards, which is great for controlling the media player. For example: F14 to go to the previous track, F15 to toggle between play and pause and F16 to go to the next track. I have been using a similar setup to control iTunes with SizzlingKeys and it is certainly a comfortable configuration.

    Think about it: your keyboard surely has some keys that you do not ever use and which could be remapped to become useful! (And this applies to any OS.)

    Hmm... too many Windows-related posts in a row. The next one will be different; promise! [Continue reading]

  • Windows: Where is deltree?

    If you were an MSDOS user as I was, you may remember one of the useful novelties in the 5.0 version: a utility to delete whole directory hierarchies. It was known as deltree. That tool came in really handy to avoid launching third-party applications such as PC-Tools or PC-Shell (if I remember their names correctly).

    What a surprise when I needed it a year ago while working in Windows XP and couldn't find it anywhere; it certainly existed in Windows 9x! Where is it? Well, it simply does not exist any more.

    So, how can you delete whole directory trees from the Windows XP command line? Just use the rmdir command. Its /s option can be specified to ask it to recursively delete directories, and the /q flag comes in handy to avoid the multiple and annoying "Are you sure?" questions. For example, something like rmdir /s /q olddir removes the whole olddir tree without prompting.

    As far as I can tell, deltree ceased to exist in Windows 2000. And yes: I have been meaning to post this message for almost a year! [Continue reading]

  • Happy second birthday!

    Today makes the second year since this blog was born; its birth happened in the middle of final exams, just like the period I'm in now. As you may remember, it was first hosted on Livejournal but was later migrated to Blogger on October 22nd, 2005. During the switch, it was renamed from "jmmv's weblog" to "The Julipedia".

    This makes the 272nd post, which gives an average of 0.37 posts a day. My initial intention was to reach a constant rate of a post per day, but I have been unable to keep up with that goal. I have so many other things to do... but now that the semester is almost over, I hope to be able to post more often.

    During the whole lifetime of this blog, there was a single month without posts (August 2004) because I was on holidays. According to the statistics, there is an average of 70 unique visits per day, with some random days going over 120. Overall, a total of almost 15,400 unique visits since the migration to Blogger. I think it's in quite good health.

    Happy birthday and thank you all for reading :-) [Continue reading]

  • Windows Vista Beta 2 review

    It has already been a week since I downloaded and installed Windows Vista Beta 2 on my workstation. I was curious to see what the much delayed final version will finally bring us and how well it would perform on my three-year-old machine.

    Just after the installation process, I was greeted by the Aero-based desktop; it turns out Windows decided that my machine was powerful enough to run it. It is true that the new interface has many visually pleasing effects (window shadows, fade ins and outs, a full screen task switcher...) but it is not as similar to Mac OS X as some seem to say.

    Things like the control panel or the file browser have changed almost completely, to the point that it is trivial to spot the dialogs that remain unchanged since XP. I like the new structure but you will need some time to get used to it; some things — especially configuration panels — have changed too much to be remotely familiar.

    Then there is this new annoying security feature that warns you about every single administrative task you perform and asks for your permission to continue. It seems like a good feature to prevent intrusions from malicious software, but it asks so many questions that the regular user will end up answering without reading (if they are not scared enough to answer, that is). I bet they could have organized this whole thing in some other way to minimize the number of questions raised without removing them.

    Overall, I would like to mention that it behaves quite well and, in my opinion, it will be a good desktop OS. Unfortunately, some things are desperately slow like, for example, navigating your hard disk while there are other Explorer windows open. And, well... it is not Unix-based, something that could be really helpful for us programmers. I'll stick to NetBSD for that :-)

    Oh, and given that this is a free beta I have already filed some bug reports in exchange. Now... it's time for me to leave for this semester's first final exam... [Continue reading]

  • Setting up BoostBook under Windows

    In order to have a complete development environment for my SoC project under Windows, I still had to install and configure BoostBook. Why I want this is beyond the aim of this post, but for the curious ones: my NetBSD setup is severely broken and I want to be able to work on documentation while I am doing the Win32 part.

    I have spent a lot of time getting BoostBook properly configured, although now that I know the appropriate path it is quite simple. Let's see how.

    First of all, you need to have xsltproc in Windows. "Easy", I thought; "I can install it through Cygwin". It wasn't that nice when I started to see bash crashing during installation, probably due to some Vista-related issue. I discarded Cygwin and soon after discovered some prebuilt, standalone binaries that made the task a lot easier.

    So, the required steps are:

    1. Get the iconv, zlib, libxml2 and libxslt binary packages made by Igor Zlatkovic.
    2. Unpack all these packages in the same directory so that you get unique bin, include and lib directories within the hierarchy. I used C:\Users\jmmv\Documents\boost\xml as the root for all files.
    3. Go to the bin directory and launch xsltproc.exe. It should just work.
    4. Download DocBook XML 4.2 and unpack it; for example, in the same directory as above. In my case I used C:\Users\jmmv\Documents\boost\xml\docbook-xml.
    5. Download the latest DocBook XSL version and unpack it; you can use the same root directory used previously. To make things easier, rename the directory created during the extraction to docbook-xsl (dropping the version name). Here I have: C:\Users\jmmv\Documents\boost\xml\docbook-xsl.
    6. Add the following to your user-config.jam file, which probably lives in your home directory (%HOMEDRIVE%%HOMEPATH%). You must already have it, or otherwise you could not be building Boost:

           using xsltproc : "C:/Users/jmmv/Documents/boost/xml/bin/xsltproc.exe" ;
           using boostbook : "C:/Users/jmmv/Documents/boost/xml/docbook-xsl"
                           : "C:/Users/jmmv/Documents/boost/xml/docbook-xml" ;

       Adjust the paths as appropriate.

    As you can see, it is quite simple. Just keep one thing in mind: if you try to build some documents and the process breaks due to misconfiguration, be sure to delete any bin and bin.v2 directories generated by the build before you try again. Otherwise your configuration changes will not take any effect. This is what made me lose a lot of time because, although I had already fixed the configuration problems, they were not being honored!

    For more information, check out the official documentation about manual setup. [Continue reading]

  • Win32: Mappings for Unicode support

    I have spent some time during the past few days playing with the Win32 API again, a year after first looking at it. I must learn how to manage processes under Windows as part of my SoC project, Boost.Process, and this involves native Windows programming with the Win32 API.

    After creating a fresh C++ Win32 console application project from Visual Studio 2005, I noticed that the template code had a _tmain function rather than a main one. I did not pay much attention to it until I looked at some code examples that deal with the CreateProcess call: they use weirdly named functions such as _tcsdup and types such as _TCHAR instead of the traditional strdup and char * respectively. I could not resist learning why they did this.

    Spending some time searching and reading the MSDN documentation answered my question. These functions and types are wrappers around the standard objects: the functions and types they really point to depend on whether you define the _UNICODE macro during the build or not.

    As you can easily guess, defining _UNICODE maps those routines and types to entities that can handle Unicode strings, effectively making your application Unicode-aware. Similarly, if you do not define the symbol, the application remains SBCS/MBCS compatible (the distinction between these two also depends on another macro, as far as I can tell). And because all these redirections are handled by the preprocessor, there is no run-time overhead.

    For example: the _tmain function is mapped to the traditional main subroutine if and only if _UNICODE is undefined, while it is mapped to wmain otherwise. The latter takes wide-character argv and envp pointers in contrast to the former.

    I do not know to what extent this macro is supported by the standard libraries, although I bet almost everything supports it; I have seen many other functions taking advantage of this redirection. In the specific domain I am analyzing, there are two implementations for CreateProcess: CreateProcessW, the Unicode version; and CreateProcessA, the ANSI one.

    OK, my knowledge about internationalization is very limited, and I do not know if this feature is very useful or not, but it seems quite interesting at the very least.

    See Routine Mappings (CRT) and main: Program Startup (C++) for more details.

    Edit (17:24): Changed MFC references to Win32. Thanks to Jason for pointing out the difference between the two in one of the comments. I am in fact investigating the latter. [Continue reading]
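
    As a small, hypothetical illustration (not tied to any real project): the same source builds either as an ANSI or as a Unicode program depending only on whether the _UNICODE macro is defined, because <tchar.h> redirects the names at preprocessing time.

        #include <stdio.h>
        #include <stdlib.h>
        #include <tchar.h>

        // _tmain expands to main or wmain; _TCHAR expands to char or wchar_t.
        int _tmain(int argc, _TCHAR* argv[])
        {
            const _TCHAR* text = (argc > 1) ? argv[1] : _T("hello");

            // _tcsdup expands to _strdup or _wcsdup, depending on _UNICODE.
            _TCHAR* copy = _tcsdup(text);

            // _tprintf expands to printf or wprintf; the _T() macro makes the
            // literals wide strings in the Unicode build.
            _tprintf(_T("Argument was: %s\n"), copy);

            free(copy);
            return 0;
        }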

  • Burning ISO images from Windows

    While I was doing some work under Windows this evening, I needed to burn some ISO files I had downloaded from the MSDNAA. I had just reinstalled the whole system this morning (hint hint, Vista Beta 2) and I did not have any CD recording utility installed.

    I noticed the "Burn" button when exploring the folder containing the ISOs and I thought that it could allow me to burn them to pristine CDs... but it turns out I was wrong. If you have ever used this feature under Windows XP, you know that it only lets you create new data CDs and burn them, but not burn prerecorded ISO images. It remains the same in Vista.

    As I was too lazy to look for the CD containing the recording utility that came with my DVD unit — and, sincerely, I do not like it that much —, I looked for and tried several free tools that were supposed to let me do this. Guess what? None of them worked: they would simply refuse to install or fail to detect my drive.

    Fortunately, this search let me discover the Windows Server 2003 Resource Kit Tools, a set of utilities to ease administration of Windows boxes. These are free and work on several Windows versions, including Vista. Among all the included goodies, there is one that solved my problem: cdburn.exe. OK, fine, it is command-line based but it does the job it is supposed to do.

    Just as an example, this could burn some_image.iso:

        cdburn -sao -speed max d: some_image.iso

    More on the beta experience in a future post ;-) [Continue reading]

  • Little application with Qt and OpenGL

    As part of the VIG (Graphical Visualization and Interaction) course I took this semester, we had to develop an application using Qt and OpenGL to practice the concepts learned through the semester. The application loads several 3D Studio models using lib3ds, renders them using OpenGL and lets the user control its behavior through a Qt interface. We handed it in a few minutes ago; finally! :-)

    The goal of the application is to show a scenario with up to ten cars moving on top of its surface in circles. These cars may optionally have a driver that automatically looks at the closest car in front of them. You can freely inspect the rendered scene (zoom, pan and free movement) using the mouse. You can also adjust several lights and other parameters.

    Overall, doing this little application has been quite interesting, especially because I was OpenGL-clueless before starting the course and the results are impressive (from my point of view, that is). As regards the code, I mainly focused on the Qt interface while my partner worked more on the rendering code.

    After having spent many hours in Qt Designer and dealing with the library, I can say that it is very complete and lets you write code extremely quickly. The signals and slots mechanism is very powerful and it becomes especially useful when working in the IDE. If you spend some time extending the predefined widgets to suit your needs, creating a new interface is really easy: just put the pieces where you want them, lay them out and connect the appropriate signals!

    It is also true that it is very easy to create crappy code. I have to admit that our application is very hackish in some places... but there was no time to do better. This is mostly because it was not designed beforehand (we did not know all the requirements until a late date) and because of the learning-as-you-go with Qt Designer. It'd certainly be much better if we started it all over again.

    Now some screenshots for your pleasure :-)

    - A simple scenario with a single light.
    - The same scenario with a darker light but with a car's lights turned on.
    - The previous scenario, tracking a car's movement from its inside.
    - The car configuration dialog.
    - The lights configuration dialog.

    There are some more dialogs and features, but oh well, the application itself is useless ;-) [Continue reading]

  • SoC: First commit

    Due to all the university tasks I had to finish, I could not start work on my SoC project earlier than this weekend. Final exams are now around the corner (first one on the 21st, last one on the 30th), but I will have to balance study time and SoC work if I want to make progress in both directions... and I have to!

    Honestly, I was reluctant to start working on my project because I had to investigate how Boost's build infrastructure works and how to integrate a new library into it. However, as this had to be done, I sat down today and, after some hours, passed that barrier. It has not been easy, though: everything is fairly well documented, but it is not organized in a way that lets a novice see the overall picture. Now that the basic project files are up, I am really eager to continue working on the project.

    What I have done today has been to understand how to set up a standalone project using Boost.Build v2 (that is, outside Boost's source code) and how to set up the basic documentation using QuickBook. The following logical step is to integrate all the notes I took a year ago and complete them with other requirements and useful information.

    The Boost people kindly set up a Trac system and a Subversion repository for us to work on our SoC projects. So... the above has made its way into my first commit to the tree!

    And before I forget: this past weekend I started to investigate MFC to learn how to manage processes under Windows, a requirement for this project. More on this in future posts. [Continue reading]

  • Become productive with Quicksilver

    Quicksilver is, at first glance, an application launcher for OS X. It lets you search for and quickly launch your applications using predictive lookups. You invoke the application with a keybinding of your choice, type in the first letters of the program you want to launch (as much as it takes to locate it) and hit return. If you have ever used Spotlight, you know what I mean.

    Even so, Quicksilver is much more than a program launcher. It lets you control the active application or look for files on your machine, making it possible to apply many actions to them with a few key presses (again, predictive lookups). For example, you could copy the Application form.txt file stored in your Documents folder straight to the pen drive you just connected by using as few as 7 key presses. It may sound like a lot, but if you get used to it, it is a really quick way to work.

    But wait! There is much more to it. Quicksilver can be extended by means of plugins. They integrate with other applications such as iTunes, bringing you the ability to "remotely" control them from the keyboard. (For this specific case, I still prefer SizzlingKeys.)

    This utility is especially useful when you are working at the keyboard and do not want to move your hands away to reach the mouse. Of course, you first need to get efficient with it to really appreciate its power. I have only been using it for a few days, but it certainly improves productivity in some situations.

    You may want to read this introduction or this intermediate guide by Dan Dickinson in case you try it out.

    Pity it takes so long to start on my iBook... [Continue reading]

  • Games: Half-Life 2: Episode One

    So... past Saturday (June 2nd), I ran to the closest store and bought Half-Life 2: Episode One, the continuation of Half-Life 2. As you may already know from my Half-Life 2 "review", I am a big fan of this title, so I was dying to play EP1. (I know, I know... I still have a lot of work to do — e.g. SoC — but I had to relax a bit.)

    A lot has been said about this game already, but I wanted to give my own opinion. And what better place than this one. Overall, the game deserves a good score. If you liked HL2, you will certainly enjoy this one because it is more of the same old and well-known things. At first, it feels a bit slow because there are lots of dialogues, not to mention that you go unarmed (except for the gravity gun). This is because the game starts exactly where HL2 ended. Afterward, you get your weapons again and it becomes as active as you'd expect.

    If you are wondering why I am posting this now, it is because I just finished the game today. If I have to say something negative about it, it is that EP1 feels too short. Its play time is estimated to be around five hours; I surely spent some more, but it still ended too quickly. HL2 was a bit more than twice the price, but it was much, much longer. If EP2 and EP3 are as "short" as this one, they'd benefit from a cheaper price.

    And until EP2 comes out (it will be a long wait...), I will go through EP1 again with the developers' comments enabled to see what I missed :-) Pity my machine is not powerful enough to enable HDR and other quality effects in Source. [Continue reading]

  • Functional programming and Haskell

    This semester, I was assigned a task that was meant to practice the knowledge gathered about the functional programming paradigm during a CS course. The task was to develop an abstract data type for a processes network and a way to automatically convert it to Miranda executable code.

    A processes network is an idea based on the data flow paradigm: a network is a set of processes that cooperate to reach a result. A process is an entity that produces a single output based on a set of inputs, which can be other processes or parameters fed in by the user (think of a shell pipeline). The process itself is modeled by a function, either defined in another network or provided by the language natively. A program is then composed of a set of these networks. Functional languages are ideal to implement a program modeled using this paradigm.

    Our program could be written either in Miranda (the language taught in class) or Haskell. My partner and I decided to go for the latter because it is a live language (Miranda seems dead), it is free and there is much more information for it than for the former. The drawback was that we were on our own learning it and it was really tough to get started, but it has been a very rewarding experience.

    But why is functional programming so interesting? It is a paradigm where the sole basis of execution is the evaluation of expressions. These expressions are composed of functions (either native ones or defined by the user), so the programmer's task is to write new functions and compose them to achieve the correct data transformation. There is no notion of memory or variables as in imperative languages. And in Haskell's case, everything just feels correct; if something is not well understood or does not fit the paradigm, it is not included in the language.

    Here are some of the key items of functional programming; some of them may be Haskell-specific:

    - Functions have no side effects: there are no global variables.
    - There are no loops: algorithms have to be thought of in recursive terms. Although this may seem difficult at first, it is not so much thanks to the built-in functions.
    - Everything has a mathematical background, which makes formal proofs easier than with imperative languages. For example, one can define an abstract data type based exclusively on its formal construction equations: in the stack's case, it'd be defined by means of an empty constructor and a "push" one.
    - Higher-order functions: ever thought of passing a function as a parameter to another one? C++ has some kind of support for this through predicates, but in a functional language this idea is the key to success: it is used extensively and is much more powerful. (See the C++ sketch right after this post.)
    - Lazy evaluation: a neat idea. It makes the treatment of infinite or undefined objects possible. This concept is not specific to the paradigm but is commonly associated with it.
    - Strongly typed: Haskell is very strongly typed, but it does not require you to manually specify types (although you can); isn't it strange? The compiler/interpreter comes with a powerful type deduction system that does the work for you and ensures everything makes sense, type-wise speaking.

    So far I like the language and the paradigm very much, and I have only grasped their surface. There are tons of things I still do not know how to do, not to mention that it is quite hard to avoid thinking in imperative terms. I'm thinking of using Haskell to write at least one of my personal projects to really experience its power.

    If you have some spare time and are willing to learn something new, give functional programming languages — hmm, Haskell — a try. Just get the Hugs interpreter, the Haskell tutorial and get started! I bet you'll like it ;-) [Continue reading]
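
    To make the higher-order-functions point above concrete on the C++ side, here is a small, hypothetical sketch (the names are invented for illustration): a function template that receives the predicate to apply as a parameter, which is roughly what the STL does with algorithms such as std::find_if.

        #include <deque>
        #include <iostream>

        // A "higher-order" function in C++ clothing: count_if_matching receives
        // the predicate to evaluate as one of its arguments.
        template< class Container, class Predicate >
        int
        count_if_matching(const Container& c, Predicate pred)
        {
            int count = 0;
            for (typename Container::const_iterator iter = c.begin();
                 iter != c.end(); iter++)
                if (pred(*iter))
                    count++;
            return count;
        }

        // A plain function used as the predicate.
        bool
        is_even(int n)
        {
            return n % 2 == 0;
        }

        int
        main(void)
        {
            std::deque< int > numbers;
            for (int i = 1; i <= 10; i++)
                numbers.push_back(i);

            // Pass the function itself as a value, much like a functional
            // language would do.
            std::cout << count_if_matching(numbers, is_even)
                      << std::endl;  // Prints 5.
            return 0;
        }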

  • SoC: List of accepted projects

    After some days of delay, Google has published the final list of chosen projects for this year's Summer of Code; it consists of 630 funded projects. If you browse through them, you will find lots of interesting things. The good thing is that most of them will be worked on seriously, so there will be great contributions by the end of the summer :-)

    As I told you, my project is listed under the Boost page alongside other interesting projects. It looks like Boost will get a lot of new work — and contributors — this summer. At least, this is the case for me: I am not very involved with Boost development yet.

    Also take a look at the NetBSD-SoC page, which lists all projects chosen by NetBSD. At first I was sorry to abandon my slot in favour of the Boost one. But now that I have seen the project that took it, I am happy; it is something really needed. I won't tell you which it is because all of them are equally interesting! ;-) [Continue reading]

  • SoC: Accepted, again!

    I am very proud to announce that I have been accepted into Google's Summer of Code program — again! During Summer 2005 I developed an efficient, memory-based file system for the NetBSD operating system, baptized tmpfs. I must confess that I enjoyed hacking the NetBSD kernel very much and also learned a lot about file systems. So this year I was eager to repeat the experience by taking part in SoC again. In order to ensure my participation, I thoroughly prepared three applications for three different projects. I had a hard time making the choices because there were tons of interesting projects (which could have taught me very different skills), but at last decided on the following. Application 1: Add complete NetBSD support to GRUB 2. I chose this project because I knew I could do it, but mostly because I wanted to ensure that GRUB 2 had first-class support for BSD operating systems. GRUB currently lacks features to correctly boot these, which is a nuisance. After sending the application, I was quickly contacted by a GRUB developer telling me that there was another student willing to work on this project, and that he did not know what to do. I offered to leave my spot to the other student, hoping to capture another potential NetBSD hacker. Application 2: Develop a process management library for Boost. This is something I have had in mind since February 2005, when I first discovered Boost. I was planning to do this as my final computer science degree project next spring, but applied for it now so that I could free myself from this idea. (I have other projects in mind that are currently blocked by the lack of Boost.Process.) Application 3: Improve NetBSD's regression testing framework. After looking at the list of suggested projects for NetBSD and evaluating them, I felt that working on this area could be very useful for the project, improving its overall quality; I like and enjoy writing software that is able to test itself as much as possible. The decision between the Boost and NetBSD projects was quite hard to make, so I opted to send both in and let luck decide. Strictly speaking, I stated in my applications that I preferred to do the Boost project for several reasons and I guess Google made their choice based on that. But do not get me wrong: I enjoyed the time I spent hacking NetBSD very much, and I hope to continue doing so in the near future. Summarizing: I will be developing the Boost.Process library this summer! See the links above for some more information: the wiki page holds some ideas about its design and the application lists my reasons for wanting to work on this. I now feel sorry for not "being able" to work on NetBSD's regression testing framework. I do not know if anybody else has been picked to work on it, but if not, this project seems doomed... It was chosen last year but the student abandoned it halfway. This year it was also chosen by NetBSD but Google preferred me to work on Boost. However... while writing the application, my interest in this project grew, which means that I may take it up again in the future if nobody else does; maybe as my CS final degree project? :-) Now... stay tuned for further news on Boost.Process! [Continue reading]

  • Analyzing security with Nessus

    A bit more than a week ago we had to experiment with Nessus as part of a class assignment. Nessus is a very complete vulnerability scanner that runs on top of Unix-based operating systems. To keep it from becoming obsolete too quickly, the set of checks it runs can be updated from a database maintained by the product's company, Tenable (much like what happens with an antivirus utility). It is important to note that this list is always seven days behind the up-to-date one unless you are a paid subscriber, which is very reasonable. I liked how it worked and decided to try it at home to analyze my machines, so I went and downloaded the beta version for Mac OS X (I didn't want to fiddle with manual setup in other OSes...). After installation, it asked me for my activation code (sent by mail) and proceeded to download the most up-to-date vulnerability list (free version). At that point, it was possible to start the server part. When launching the client I was presented with a neat, native Mac OS X interface. Analyzing the whole home network was trivial and the results were impressive. Although it raised some false positives (depending on the configured paranoia level), it reported several sensible findings and listed pointers to external information (CVE entries, knowledge base articles, etc.) that were helpful in solving them. If you are a network administrator, I bet this utility was already known to you, but it was new to me until very recently and I liked it. [Continue reading]

  • Parallelizing command execution with vxargs

    If you need to maintain multiple hosts, you know how boring it is to repeat the exact same task on all of them. I'm currently using PlanetLab as part of a class assignment and I'm facing this problem because I need to set up around 10 machines and execute the same commands on all of them. vxargs is a nice Python script that eases this task. It lets you run a command, parameterizing it with a given set of strings (e.g. host names), similar to what find's -exec flag does. First of all, you construct a file with the list of host names you need to control and then feed it to the script alongside the command you need to execute. For example, to upload a dist.tgz file to all the servers:
        vxargs -a hosts.list -o /tmp/result scp dist.tgz {}:
    The utility will replace the {} substring with each line in hosts.list and will execute the command. The nice thing is that vxargs runs all tasks in parallel, maximizing efficiency. During execution, its curses-based interface shows the progress of each command. And when all jobs are over, you will find their output (stdout and stderr) as well as their exit code in the /tmp/result directory. Fairly useful. Although manually installing vxargs is easy, there is now a vxargs package in pkgsrc. [Continue reading]

  • What is keeping me busy

    I am sorry for the small number of posts lately but I think this is the busiest semester I have had since I started my undergraduate degree four years ago. Here is what I have to do, sorted from most to least interesting. VIG (in pairs): Develop an application with Qt and OpenGL that shows a scenario and a set of cars moving around it. The interface has to allow the user to inspect the view, manage the cars (number, position, etc.), configure the drivers (little models on top of the cars), set up the lighting and some other things. SODX (in pairs): Prepare a presentation on multicast in P2P networks (SplitStream, Overcast...) after studying these systems and experimenting with them. Also write reports for 6 assignments we have had during the course (three remaining). LP (in pairs): Write an abstract data type in Haskell that represents a network of processes (similar to the data flow programming paradigm) and add the ability to automatically convert this representation to Miranda executable code. PESBD (in groups of 5): Analyze and design a computer-based system to aid auction companies. This has to be done following the (Rational) Unified Process, going through the Inception and part of the Elaboration phases. SSI (in groups of 4): Analyze the risks and attacks that a given company can suffer (focusing on computer-based attacks, but not exclusively) and write a report describing possible ways to mitigate the problems. The worst thing is that all these tasks are due by the beginning of June and all of them are still half done. Add to these the need to attend lectures, to keep up with email and other daily stuff and you can imagine how the next three weeks will be. Now I'm looking forward to the 23rd quite impatiently because that is when Google will publish the projects chosen for this year's Summer of Code. I will tell you more when the results are public. [Continue reading]

  • Article: Smart Pointers in C++

    A bit more than a year ago I discovered what smart pointers are, thanks to the Boost Smart Pointers library. Since then I can no longer think of a C++ program that handles dynamic memory without them, because such code is highly prone to programming mistakes and hard to write. (Of course, there are exceptions.) This is why I decided to write the Smart Pointers in C++ article, which just got published on ONLamp.com. It contains an introduction to the idea behind smart pointers and describes those included in the C++ standard library and Boost. Hope you find it interesting! [Continue reading]
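    As a small taste of what the article covers, here is a hedged example of my own (not taken from the article) using boost::shared_ptr, one of the pointers discussed: the object is deleted automatically when its last owner goes away.
        #include <boost/shared_ptr.hpp>
        #include <iostream>

        int main() {
            boost::shared_ptr<int> outer;
            {
                boost::shared_ptr<int> inner(new int(5));
                outer = inner;                                // Two owners now.
                std::cout << outer.use_count() << std::endl;  // Prints 2.
            }   // 'inner' goes away; the object survives through 'outer'.
            std::cout << *outer << std::endl;                 // Prints 5.
            return 0;
        }   // 'outer' goes away here and the int is deleted; no explicit delete.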

  • iBook and the clamshell mode

    A reader named Richard saw some of my old posts and seems to be confused about how I got the iBook G4 I own (one of the latest available models) to work in clamshell mode. The thing is that I didn't get it to work as expected. After buying the BenQ FP202W flat panel I replaced my old PS/2 keyboard and mouse with USB peripherals, hoping that they'd let me use my iBook in clamshell mode. Of course, I had to apply the Screen Spanning Doctor hack, which allows me to use a desktop that spans onto the external flat panel (not in cloning mode). Note that extending the desktop works perfectly at a resolution of 1680x1050, albeit some effects are a bit slow (e.g. Exposé). Unfortunately, the "unofficial" clamshell mode enabled by SSD does not work properly with this specific iBook G4 model. No matter what I try, the external monitor always gets an incorrect, non-native resolution that is either cropped or expanded in ugly ways. But, as I had already bought the other peripherals (and I do not regret it, because they are much better than the previous ones), I did the following: I connected the external monitor, mouse and keyboard and powered up the iBook regularly. When it got to the login screen, I turned the internal screen's brightness to its minimum so that it didn't consume more power than really needed. As regards the desktop setup, I stuck the internal monitor to the right of the external one and vertically centered it; this way I could use the hot corner functionality on the latter. I also moved the menu bar to the external one, making it the primary monitor. This setup works fairly well and "simulates" a real clamshell mode. However, I'm not using this setup any more because it's not very comfortable. First of all, it's quite annoying to have to connect/disconnect everything over and over again (which is a mess on the table) and the image on the flat panel appears somewhat blurry due to the regular D-Sub connection (it worked fine with a CRT monitor). I guess things could be much better if I had a KVM... And secondly, every time you switch between the external and internal monitor, you "lose" your settings. For example, your preferred applications do not appear where you left them, your terminal settings (font size, etc.) are not appropriate for both monitors, and a bunch of other little details make this setup a bit uncomfortable. [Continue reading]

  • iParty 8 slides available

    I've given my NetBSD talk today at the iParty 8. The slides are now available in the advertisement material section of the NetBSD web site; note that they are in Spanish. If the talk video record is made public, I'll publish a link. Enjoy! [Continue reading]

  • NetBSD's KNF: Prefixes for struct members

    The NetBSD coding style guide, also known as Kernel Normal Form (KNF), suggests prefixing a struct's members with a string that represents the structure they belong to. For example: all struct tmpfs_node members are prefixed by tn_ and all struct wsdisplay_softc members start with sc_. But why is there such a rule? After all, the style guide does not mention the reasons behind it. The first reason is clarity. When accessing a structure instance, whose name may be anything, seeing a known prefix in the attribute helps in determining the variable's type. For example, if I see foo->sc_flags, I know that foo is an instance of some softc structure. As happens with all clarity guidelines, this is subjective. But there is another reason, which is more technical. Using unprefixed names pollutes the global namespace, an especially dangerous situation if the structure belongs to a public header. Why? Because of the preprocessor — that thing that should have never existed — or, more specifically, the macros provided by it. Let's see an example: consider a foo.h file that does this:
        #ifndef _FOO_H_
        #define _FOO_H_
        struct foo {
            int locked;
        };
        #endif
    And now take the following innocent piece of code:
        #define locked 1
        #include <foo.h>
    Any attempt to access struct foo's locked member will fail later on because of the macro definition. Prefixing the member mitigates this problem. [Continue reading]
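    For completeness, this is how the same header looks once the member is prefixed; the macro no longer interferes (foo_ is just an illustrative prefix, not something mandated by KNF for this particular name):
        #ifndef _FOO_H_
        #define _FOO_H_

        struct foo {
            int foo_locked;   /* Unaffected by a '#define locked 1' elsewhere. */
        };

        #endif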

  • NetBSD talk at iParty 8

    I've been invited to give a talk about NetBSD on Saturday 22nd at the eighth iParty. iParty is a set of FOSS-related activities held in Castellón de la Plana, Spain, which includes conferences, workshops and competitions. I'll start with a general introduction to NetBSD and later focus on development issues: how to contribute to the project and what needs to be done; I expect it to be somewhat technical. The reason behind this choice is that Google is preparing the Summer of Code 2006 and it would be really nice to have more students than last year working for NetBSD. The more students available, the higher the chances that NetBSD gets more work done! [Continue reading]

  • tmpfs on FreeBSD

    It has just been brought to my attention that tmpfs is being ported to FreeBSD by Rohit Jalan. This is good news: more eyes looking at the code (even if it has been modified to work on another OS) means that more bugs can be caught. [Continue reading]

  • NetBSD/i386 development under OS X

    Mac OS X is the only operating system on my iBook and I have no plans to change this in the near future (installing NetBSD was frustrating and I do not want Linux). However, I want to be able to do NetBSD development on it should the need arise. And it has to be easy. My idea was to have a disk image with NetBSD/i386 on it and to be able to mount it from OS X to manage its contents (e.g. to update its kernel or to install new userland). Therefore, installing on an Apple UFS file system was a must. NetBSD can read such file systems but unfortunately it cannot boot from them (as far as I know); hence the need for a little boot FAT partition (see below). As a result I have got a NetBSD/i386 installation under Q, completely manageable from OS X. The overall setup makes for a very nice development environment. Let's see how (not exactly trivial). Get the NetBSD sources on your OS X machine. Build a cross-compiler toolchain for i386, which basically means ./build.sh -m i386 tools. Doesn't build.sh rock? Prepare a custom i386 kernel configuration and build it. You can simply take GENERIC, but you must enable options APPLE_UFS as well as options FFS_EI (thanks to Rudi Ludwig for pointing the latter out). Do not disable file-system MSDOS. Create a new PC under Q and specify a new 512MB raw disk image (qcow will not work). Use the command line fdisk utility to partition the new image: create a 50MB FAT16 partition (type 6) and leave the rest for Apple UFS (type 168). You may have geometry problems here as fdisk will think that the disk is extremely large; for the 512MB disk we just created, use 1040 cylinders, 16 heads and 63 sectors per track (you can give these to fdisk as parameters). The disk created by Q is stored in ~/Documents/QEMU/pcname.qvm/Harddisk_1.raw; I will refer to it as disk.img for simplicity. Tell OS X to attach it: hdiutil attach -nomount disk.img. Format the new partitions. WARNING: OS X attaches this as disk1 for me; check the device name on your machine before proceeding and adjust it accordingly. Simply do: newfs_msdos /dev/disk1s1 and newfs /dev/disk1s2. Detach the image and reattach it, but this time let OS X mount the file systems within it: hdiutil detach disk1 ; hdiutil attach disk.img. Use the Finder to rename the partitions to more representative names, e.g. NB-Boot and NB-Root. Now it's time to install NetBSD onto the new disk. We need to do this manually. Assume that all of the following commands are run from within /Volumes/NB-Root. Unpack the NetBSD sets by doing: for f in ~/NetBSD/i386/binary/sets/[bcegmt]*.tgz; do sudo tar xzpf $f; done. Create the device files in dev by executing the following: cd dev && sudo ./MAKEDEV -m ${TOOLDIR}/bin/nbmknod all. TOOLDIR points to the directory holding the cross tools you built before. Create a minimal etc/fstab. E.g.:
        /dev/wd0f / ffs rw 1 2
        /dev/wd0e /boot msdos ro,-l 0 0
    Create the boot directory. This clashes with the standard /boot file in a NetBSD system, but we will not have that one; feel free to use a different name. Edit etc/gettytab and add al=root to the default entry. This will automatically log into root's session, so we do not need to set a password for it. Edit rc.conf and set rc_configured=YES. Disable all the services you will not use for better boot-up speed: cron=NO inetd=NO sendmail=NO virecover=NO wscons=YES. Do not forget to set the host name: hostname=devel. Edit wscons.conf and disable all extra screens. Create links that will point to the kernels: ln -s boot/netbsd boot and ln -s boot/netbsd.gdb. This is the reason why my boot partition is 50MB; the debugging kernels take a lot of space. At last, we need to install the boot loader in the FAT partition. I've chosen GRUB, so I needed a GRUB boot floppy (an image) at hand to finish the setup. Mount the GRUB image and copy its files to the boot partition. E.g.: cp -rf /Volumes/GRUB/boot /Volumes/NB-Boot. You need the fat_stage1_5 file for this to work. Create a minimal /Volumes/GRUB/boot/grub/menu.lst file for automatic booting:
        default 0
        timeout 5
        title NetBSD - Development kernel
        kernel (hd0,0)/netbsd root=wd0f
    Copy the kernel you built at the beginning to /Volumes/GRUB/netbsd. Detach all the disk images: hdiutil detach disk1 and hdiutil detach disk2. Configure your virtual PC in Q to boot from this GRUB floppy disk image and embed GRUB into your virtual hard disk: root (hd0,0) and setup (hd0). Change Q's configuration to boot straight from the hard disk image. And at last... boot NetBSD! With this setup you can mount your development image with hdiutil attach disk.img, mess as you want with its contents, detach it and relaunch your Q session. Pretty neat :-) [Continue reading]

  • Apple's customer service

    Two months ago I bought an Apple USB Keyboard directly from my nearest Apple Center, MicroGestió. Unfortunately, the Enter key started to behave incorrectly some days ago; its movement wasn't as smooth as that of other keys and in some cases it simply didn't move. For example, it was really hard to press it when pushed from the up or left sides.In my case, I've noticed that I tend to press the Enter key towards the top, with the force going "upwards". This made it fail many times, requiring me to hit it again, harder and in the middle. Very annoying.So... I went to the store this morning to ask if this was covered by the warranty; I had nothing to lose. And yes it was; they replaced the entire keyboard with a shiny new one without questions. Great!Of course, I'm talking about this specific Apple Center. I do not know about the customer service in the others, but there surely are better and worse ones. (Would like to be proved wrong ;-) [Continue reading]

  • Mac OS X: Boot Camp

    Apple has just published Boot Camp, a utility to install Windows XP on a Mac. As I understand it, this provides an EFI module for legacy BIOS compatibility, a set of Windows drivers for the Apple hardware and the required tools to ease Windows' installation. In other words: it lets you flawlessly install Windows XP SP2 (be it the Home or Professional edition) on one of the new Intel-based Macs. Beta versions are now available for download and the final version will be included in Mac OS X 10.5 Leopard. Great, Apple. I no longer have an excuse not to replace my desktop machine with a Macintosh. In fact, I'm dying to switch but I'll resist for some more time ;-) Edit (April 7th, 20:57): Dirk Olbertz has installed Ubuntu on his iMac using Boot Camp (the link is in German). So this seems not to be restricted to Windows. [Continue reading]

  • GNOME 2.14.0 hits pkgsrc

    After three weeks of intense work, I am pleased to announce that GNOME 2.14 is now available in pkgsrc, the NetBSD Packages Collection. As happens with all other major GNOME releases, this one provides a set of refinements, cleanups and several new features over the 2.12 series. You can find more information on the official release page (linked above). I am happy to say that this release works fairly well under NetBSD. There are still some rough edges (that is, programs that crash on startup or do not work as expected) but there shouldn't be many regressions from the previous version. I think I will prepare a list of known broken stuff in case people want to help, because fixing everything requires a lot of manpower. To give you some approximate numbers, the process consisted of 80 package updates, 8 new packages and 16 revisions of the new and updated packages. Some more may come in the following days. As usual, updating from older versions is not exactly easy using pkgsrc. I suggest you either build the whole new release in a sandbox using pkg_comp or zap all your installed packages and reinstall from scratch. How to install it? Just use the meta-pkgs/gnome package to install everything that is part of the official GNOME distribution or choose meta-pkgs/gnome-base if all you want is a minimal desktop. Please report any problems you find using send-pr(1). You can also check out the official announcement. Have fun! [Continue reading]

  • Fixing GNOME's trash under NetBSD

    For a very, very long time (probably since forever), the trash icon in GNOME has not worked in NetBSD. You'd drag files onto it and they were appropriately deleted but, unfortunately, the trash did not update its status to reflect the removed files. If you opened the folder, it appeared empty even though ~/.Trash contained the deleted files. As you can imagine this was very annoying, as it made the trash nearly useless. However, and by pure luck, some days ago I noticed that the trash icon showed some files on my machine. For a moment I thought that the problem had been fixed with GNOME 2.14.0. But I was wrong: ~/.Trash didn't contain the files shown in the trash window; the files were really stored in /vol/data/.Trash-jmmv. So why was it picking up the files from one directory but not from the other one? I started by looking for the .Trash string in gnome-vfs, which led me to a piece of code that returns the trash directory for a given volume. I first thought that there could be a problem detecting the home directory, so I added some debugging messages there; everything looked correct. After digging some more, and thanks to the test/test-volume test utility, I ended up in the libgnomevfs/gnome-vfs-filesystem-type.c file. This contains a table called fs_data that maps each file system name to a description and to a boolean. The boolean indicates whether the file system is supposed to hold a trash or not. As you can imagine, ffs was not part of this list, so the code fell back to the default values, which specify that there is no trash. Solving the issue was trivial. I just had to add the appropriate file system names to the table, rebuild gnome-vfs and enjoy the trash icon to its full power :-) The issue is reported in bug #336533 and the fix is already committed to pkgsrc. Therefore, it will be part of the forthcoming pkgsrc-2006Q1 stable branch. [Continue reading]
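    To give an idea of what such a table looks like, here is a hypothetical sketch (the field names and entries are made up; the real structure lives in libgnomevfs/gnome-vfs-filesystem-type.c): each entry maps a file system name to a description and to the flag that says whether a trash directory should be used on it.
        struct fs_info {
            const char *name;         // File system name, e.g. "ffs".
            const char *description;  // Human-readable description.
            bool        use_trash;    // Should a trash directory be used here?
        };

        static const fs_info fs_data[] = {
            { "ext2", "Linux second extended file system", true  },
            { "ffs",  "BSD Fast File System",              true  },  // The missing entry.
            { "nfs",  "Network File System",               false },
        };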

  • GNOME and the dbus daemon

    It is a fact that dbus is becoming popular and that many parts of GNOME are starting to use it. Nowadays, some applications even fail to start if dbus is not present (e.g. epiphany). Unfortunately, things do not work out of the box when installing GNOME from sources — or when using an "uncustomized" OS; see below — because there is nothing in GNOME that launches a dbus-daemon session at startup. Therefore, the user is forced to either change his ~/.xinitrc to do exec dbus-launch gnome-session instead of the traditional exec gnome-session, or edit his gdm configuration files to launch dbus-daemon before gnome-session. As you can see, both "solutions" are cumbersome because they break the previous behavior and because they require the user to take extra steps to get things working. Of course, in the disordered Linux world, distributions such as Ubuntu or Fedora Core include customized and rather complex X startup scripts that launch dbus-daemon during session setup. Non-dbus-enabled systems (such as NetBSD) could ship modified gdm packages to avoid this problem, but users could still hit it when using the traditional startx or other session managers such as xdm or kdm. I do not need to mention that some systems (e.g. NetBSD again) will never include — unless things change dramatically — a call to dbus-daemon in their standard X11 scripts. A possible solution is to modify the gnome-session utility to spawn a dbus-daemon process on its own, just as it does with gconf or gnome-keyring. This way the user needn't remember to start dbus on his own, as the GNOME session manager will do it automatically. With this in place, GNOME magically works again on these dbus-agnostic systems. And yes, this solution is already implemented. It has kept me busy for two days, but you can find the code in bug 336327. I hope to get some positive feedback and integrate it into pkgsrc, at least until an official gnome-session release includes it (if that ever happens). If it is not integrated... well, I guess I'll have to go the gdm-patching route. By the way, I have to say this: more than 300 lines of C code to do something that can be achieved in less than 10 lines of shell script... people seem to like to make their lives more complex than need be ;-) [Continue reading]
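    For the curious, this is roughly what the whole thing boils down to, written as a hedged C++ sketch rather than as the actual gnome-session patch; it assumes dbus-launch prints VAR=value lines on stdout, which is its default behavior.
        #include <stdio.h>
        #include <stdlib.h>

        #include <string>

        int main() {
            // Nothing to do if a session bus is already available.
            if (getenv("DBUS_SESSION_BUS_ADDRESS") != NULL)
                return EXIT_SUCCESS;

            // Start a new session bus and import the variables it reports.
            FILE *out = popen("dbus-launch", "r");
            if (out == NULL)
                return EXIT_FAILURE;

            char line[1024];
            while (fgets(line, sizeof(line), out) != NULL) {
                std::string s(line);
                const std::string::size_type eq = s.find('=');
                if (eq == std::string::npos)
                    continue;
                std::string value = s.substr(eq + 1);
                while (!value.empty() && value[value.size() - 1] == '\n')
                    value.erase(value.size() - 1);
                // Exports e.g. DBUS_SESSION_BUS_ADDRESS and DBUS_SESSION_BUS_PID.
                setenv(s.substr(0, eq).c_str(), value.c_str(), 1);
            }
            pclose(out);

            // ... continue with the real session startup (e.g. exec gnome-session).
            return EXIT_SUCCESS;
        }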

  • GNOME 2.14.0 released

    OK, I know this comes late but I had to publish it. GNOME 2.14.0 was released a few days ago. As happens with all other major releases in the 2.x series, this one comes with several improvements and tons of bug fixes. Note that these are not "very big" changes; they can be seen as minor refinements over the previous version, aiming for a better user experience. You can check this review for more information. I have already played with this version to its full extent: I installed Ubuntu Dapper to get it up and running in a few hours, resulting in a fully working desktop environment. (I think I'm leaving KDE again ;-) Now I'm dealing with the update in pkgsrc; I've almost got the gnome-base package up to date, so I hope to be able to boot into this new version tomorrow or so. Thanks to the currently running package freeze, I can work on the update without interference for a period of two weeks, in which I hope to get the new version running, shake out some bugs and feed some patches back. I hate to see packages such as gnome-vfs2 or libgtop2 with lots of local patches, as they are almost unmaintainable. (I know, I know... the freeze is aimed at solving bugs. But I'm doing this big update now that I have some-but-not-much free time, or I won't be able to do it at all.) [Continue reading]

  • Bikeshed

    Don't know what a "bikeshedded discussion" is? This FAQ from FreeBSD explains it well enough. I especially like this sentence: "the amount of noise generated by a change is inversely proportional to the complexity of the change". It's a pity it happens so often on (Net)BSD mailing lists. [Continue reading]

  • Linux problems: binary redistribution

    thomasvs (who appears in Planet GNOME) has published a post titled How not to solve a problem on his blog. He talks about the aggressive tone used in a page from Autopackage's wiki and how it can give a bad impression of that project. (I was going to reply to his post on his blog but comments are not supported... so I am posting this here.) That "Linux problems" page talks about many issues that arise when trying to redistribute software in binary form for the Linux platform. It outlines many real problems that users face when using binaries not specifically built for their installation and how these prevent developers from creating binary-only versions of their programs that will work anywhere. That page is certainly a good read. It contains a lot of interesting technical details about how often ABI compatibility is broken. (But you know, Linux is just a kernel, so some of the issues may be unsolvable, unfortunately.) You need to have suffered these problems to understand the tone of the page (which might be improved to be a bit more polite). It is frustrating to see how people continue to do "incorrect" things that cause pain to third parties. OK, OK, this is because those people are not aware of the issues... but hey, that's what that page is for, to inform them! I already expressed my concerns here and here. They are not about binary portability, but I feel they are somewhat related. [Continue reading]

  • Fixing xv problems

    As you may already know, I bought a BenQ FP202W flat panel two months ago, which made me switch from a rather small resolution (1024x768) to a much bigger one: 1680x1050 at 24bpp, running on a NVIDIA GeForce 6600GT with the free nv driver. As I did the switch, I lost the ability to play videos in X11 full screen mode because the Xvideo extension refused to work. As you can imagine, this was extremely annoying. For example, mplayer spat out the following:
        X11 error: BadAlloc (insufficient resources for operation)?,?% 0 0
    Similarly, xawtv couldn't set up an overlay window due to the lack of the xv extension, thus becoming completely unusable. Based on a suggestion from someone I don't remember, I decided yesterday to replace my XFree86 4.5.0 installation with X.Org 6.9, hoping that its nv driver could work better. Guess what, it didn't. But after the installation, and just out of curiosity, I started looking at nv's sources to see if I could discover the problem. I had little hope of finding a solution, but I had to try. First of all, I grepped for BadAlloc in the nv directory, something that quickly led me to the NVAllocateOverlayMemory function. Based on the name, this seemed like a logical place for the failure. I added some debugging messages, reinstalled the driver, and indeed this function was being called and was returning a null pointer. Some more printf's told me that the problem was being raised by xf86AllocateOffscreenLinear. "Ouch... looks as if the driver cannot allocate enough video memory for such a big resolution... may be difficult to fix", I thought. Nevertheless, I continued my quest, inspecting this other function and much other code in xf86fbman.c. Along the way, I discovered a minor, unrelated bug probably caused by a pasto. Overall, the code is quite confusing if you are X-clueless, just as I am. After a couple of hours or so I was looking at the localAllocateOffscreenLinear function. I saw that the first call to AllocateLinear was returning a null pointer, which made me think that the problem was there, continuing the inspection in that direction. That led nowhere. At last, and tired of trial & error cycles, I returned to the NVAllocateOverlayMemory function. I saw there that the calls to the xf86AllocateOffscreenLinear function were passing a value of 32, which seemed like a bpp. "Hmm... if I decrease it to, say, 24, it may need less memory and it might work." And indeed it did! That little change enabled xv again on my machine. But the rationale behind the change was wrong. It happens that that parameter is not a bpp; it is a granularity. Therefore I assume that the smaller its value, the better. Some other greps "confirmed" this, as other drivers such as radeon use a value of 16 (or even less) for that call. Some time later, and with xv working properly, I came back to the localAllocateOffscreenLinear function. This made me see that it was always falling back to the second case (below the "NOPE, ALLOCATING AREA" message). "Stupid me; I should have looked there before." This is part of the function:
        if (gran && ((gran > pitch) || (pitch % gran))) {
            /* we can't match the specified alignment with XY allocations */
            xfree(link);
            return NULL;
        }
    gran is the granularity passed in by the driver and pitch seems to be the screen's horizontal resolution. So, doing some numbers, we see that pitch % gran = 1680 % 32 = 16 != 0. Voila! There it is, the real problem. Well, I think this is the problem, but I may be wrong. After all, it looks as if the problem was a simple bug rather than something related to the card's memory. You can follow this bug for more information as well as to retrieve the proposed patch. Edit (March 6, 21:38): It turns out my patch was not really correct. For more information just check out the bug's audit trail. [Continue reading]
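    A trivial program to double-check the arithmetic above (my own sanity check, not code from the driver): 1680 is evenly divisible by 16 and 24 but not by 32, which is exactly what makes that check fail.
        #include <iostream>

        int main() {
            const int pitch = 1680;              // Horizontal resolution in use.
            const int grans[] = { 16, 24, 32 };  // Granularities seen in drivers.
            for (int i = 0; i < 3; i++)
                std::cout << pitch << " % " << grans[i] << " = "
                          << pitch % grans[i] << std::endl;
            // Prints: 1680 % 16 = 0, 1680 % 24 = 0, 1680 % 32 = 16.
            return 0;
        }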

  • VigiPac: dead and reborn

    A bit more than a year ago, jvprat (a classmate) and I started to develop VigiPac, a three-dimensional Pacman clone with multiplayer support. It was registered on Sourceforge.net soon after to make it available to the public. Since then the project had been completely inactive because we had no time to spend on it. So... this weekend, while I was doing some cleanup, I talked to him and decided to "officially" shut the project down. I put an announcement in the site's news telling that the project was discontinued and made the sources easily available (i.e., bypassing Monotone). However, the day after we put up the announcement he regained interest in the project and started hacking on the sources again. So far he has uploaded them to the Subversion repository and made some new changes. I don't know if he will have much time to work on it nowadays because the new semester at university has just started. But, if he does have time, we will probably see good results. I hope he'll keep us informed either through his blog or through the game's news page. From my side, I do not have much interest in continuing to develop VigiPac (the lack of DRI under NetBSD is one of the most important reasons) but I'll look closely at any changes. In the meanwhile, I'll continue working on my "secrets manager" and will possibly publish it in the following days. It already has a name (which I reserve for the announcement), a test suite, a manual page and useful functionality (to me, that is). Stay tuned! [Continue reading]

  • Managing passwords and keys

    Once upon a time I used a single password everywhere except in a very few cases (my system account or SSH key, for example). After some time I realized that that wasn't very clever because a break-in into any of my online accounts could open them all to attack. Not to mention that this was also problematic due to different sites having different password policies and different trust levels: you surely do not want to share the same password between your mailing list subscriptions — which very often travels in plain text form — and your GPG passphrase! Since then I have been using a unique complex password for each account... which has turned out to be a more-or-less unmanageable approach given the number of accounts I have. To make this approach less painful, I wrote all the passwords in a GPG-ciphered text file. I then created a pair of dirty scripts to view and edit that safe file, but I have to confess that they are very ugly and are currently broken for a number of reasons. Also, keeping that file on the hard disk was not something I was very keen on; yes, I have a backup, but it is sooo outdated... However, using such a simple ciphered file has its advantages. I can trivially access it from any OS, I do not rely on any password manager utility and I do not need to trust its code not to disclose information. So what have I done? I've created a little shell script that allows me to consult and modify the passwords database easily; yes, simply put, it is "yet another password manager". However, as I wouldn't like to lose my private SSH/GPG keys at all, the "secret" database also serves as a repository for these keys. The idea is to keep all this critical, non-recoverable data in a central place, making backups trivial. For example, I'm planning to put the script alongside this sensitive data on a little pen drive (or floppy disk) so that it can be stored in a safe place. This way, I will not have that data on the hard disk: it will only be available when I really need it, by plugging in the pen drive and simply executing the script from it. Consider the following:
        $ mount /safestore
        $ /safestore/safestore query some-site
        ... enter your GPG passphrase ...
        ... your user-name/password is shown ...
        $ umount /safestore
    The above commands could be used to request the user-name and password for some-site. Or this (assuming the disk is already mounted):
        $ /safestore/safestore sync
    Which could synchronize the GPG database in the home directory with the one on the external drive. And what about creating an SSH key and installing it in your home directory?
        $ /safestore/safestore ssh-keygen
        ... answer some questions ...
        $ /safestore/safestore ssh-keyinstall key-name
    Of course, losing that pen drive could be a very serious issue... but you already have a backup copy of your keys somewhere, right? Also, if the GPG key has a strong passphrase, then even if someone had an interest in cracking it, you'd have enough time to regenerate your keys, revoke the old ones and update your passwords before he'd get any data out of the ciphered drive. I'm curious to know how people manage this stuff themselves. At the moment I am not planning to publish the script because it is very customized to my needs, but I may easily change my mind if there is interest in it. [Continue reading]

  • C++ code in the kernel

    I have always been fond of the idea of an operating system kernel written entirely in C++, so that it had a clean class hierarchy, exception-based error control, etc. I know this is very difficult to achieve due to the inherent complexity of C++ and its runtime, but not impossible if appropriate care is taken. C++ for Kernel Mode Drivers is an article I just found that talks about using C++ in driver code for the Windows OS. It explains in great detail why using C++ is difficult and discouraged in such a scenario. Without paying much attention to Windows-specific details, it is a good read no matter which OS you develop for. [Continue reading]

  • FAT versions

    FAT12, FAT16 and FAT32 are all different versions of the FAT file system. As far as I know, FAT was originally designed to work on floppy disks, on which it does a decent job. Soon after, it was adapted to work on hard disks (bigger capacity) and hence FAT16 was born. Much later, and due to the introduction of bigger disks — which meant something over 500MB — FAT32 was created. In order to understand what the number attached to the name means, we first need to outline how FAT works internally. FAT divides the disk area targeted at user data into clusters; you can imagine this area as a linear succession of sectors. A cluster is then defined as the grouping of a set of consecutive sectors and is the minimum block addressable by FAT. Each of these clusters is identified by a number; let's call this number the CID (Cluster IDentifier) for the scope of this post. This obviously leads us to the question: how big can the CID be? Well, it depends on the FAT version. This is exactly what the number after the name specifies: the number of bits reserved to address the CIDs. If we do some numbers, FAT12 can have a maximum of 2^12 = 4096 clusters, more than enough for a floppy disk on which a cluster is a single sector. Now consider a 100MB disk; if we used FAT12 on such a disk, each cluster would be approximately 100MB/4096 = 25KB long, which introduces a lot of fragmentation for small files (think about the average file size when 100MB disks were considered big). But not only that: 12 bits, or 1.5 bytes, is an odd size for arithmetic manipulation on a 16-bit computer (the 8086), introducing a computation penalty on every I/O operation. Hence, FAT16 was born, providing 16 bits to address CIDs. The birth of FAT32 is similar to FAT16's. With bigger disks appearing between 1995 and 2000, 2^16 = 65536 clusters were not enough to address them in a fine-grained fashion: few clusters introduced way too much fragmentation. For example, a 4GB disk could give 4GB/65536 = 64KB long clusters, a number that leads to a lot of wasted space. FAT32 was therefore created, increasing the CID address space to 32 bits. FAT32 also introduced some improvements to the file system, such as the ability to place the FATs and the root directory anywhere on the file system (not restricted to the beginning). Finally, there is the Long File Name (LFN) extension. This was introduced with Windows 95 and extends the traditional FAT file system to allow 255-character long file names while preserving compatibility with the 8.3 naming scheme. For more information check out the Wikipedia's FAT article. [Continue reading]
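    The cluster-size arithmetic from the post, spelled out in a tiny program (a rough sketch that ignores reserved areas, FAT copies and real formatting constraints):
        #include <iostream>

        // Minimum cluster size needed to cover a disk with a given number of
        // addressable clusters (2^12 for FAT12, 2^16 for FAT16, ...).
        static unsigned long long cluster_size(unsigned long long disk_bytes,
                                               unsigned long long clusters) {
            return disk_bytes / clusters;
        }

        int main() {
            const unsigned long long MB = 1024ULL * 1024ULL;
            std::cout << "FAT12 on a 100MB disk: "
                      << cluster_size(100 * MB, 1ULL << 12) / 1024
                      << "KB clusters" << std::endl;   // ~25KB, as in the post.
            std::cout << "FAT16 on a 4GB disk: "
                      << cluster_size(4096 * MB, 1ULL << 16) / 1024
                      << "KB clusters" << std::endl;   // ~64KB, as in the post.
            return 0;
        }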

  • Samba performance under Mac OS X

    I have a NetBSD/i386 3.0 file server at home running Samba 3.x. Read and write access from NetBSD and Windows XP clients is fast (although, under the NetBSD clients, NFS performance wins).Unfortunately, reading large files from Mac OS X is incredibly slow. Adding the following to /etc/sysctl.conf solves this annoying problem:net.inet.tcp.delayed_ack=0This configuration file does not exist in a default installation so simply create it from scratch. rc(8) takes care of it automatically. [Continue reading]

  • Calling the BIOS from within the kernel

    NetBSD/i386's bioscall(9) trampoline is one of those interesting and tricky things you come across from time to time when reading kernel code. This apparently simple kernel function lets the caller execute BIOS functions and retrieve their results. The BIOS (Basic Input/Output System) is the PC's "firmware". It initializes the hardware, starts the boot process (in a primitive way) and provides a set of utility functions in the form of software interrupts. These interrupts can be used by applications to do tasks such as reading disk sectors, gathering information about the hardware or switching the video mode, among many other things. This "library" is what concerns us now. x86 processors start their operation in real mode (MMU-less, 16-bit addressing) for compatibility with the 8086. As the BIOS contains the very first code executed by the system, it has to be callable from real mode. The BIOS cannot switch to protected mode either, again for compatibility reasons: all the boot code is real mode code (not to mention some legacy OSes such as DOS). This means that all the functions it offers to applications were designed to be called from real mode and with 16-bit addressing. (N.B. I'm not sure if this is completely true; the BIOS could provide 32-bit functions to be executed in protected mode, but anyway this is not what we are interested in now.) Sometimes, these functions can be a very useful resource for the operating system, especially before it has set up its own device drivers. But... virtually all OS kernels now work in protected mode with 32-bit addressing. In such a configuration, it's impossible to call real mode code because of its 16-bit addressing and its direct access to the hardware. So is it impossible to use those functions once the kernel has started? Of course not. But only if the kernel has not trashed important memory regions nor put the hardware in a state unknown to the BIOS. (That is, the subset of available operations once the switch has happened is quite limited.) Assuming everything is OK, the kernel can switch back to real mode (non-trivial), issue the call to the BIOS function, grab the results, return to protected mode and feed those results (if any) to the caller. And you guessed right, this is what bioscall(9) does in an automated way. The manual page contains some more details and sample code. By the way, the EFI (Extensible Firmware Interface) is Intel's replacement for the BIOS. With it, all these compatibility issues should be gone, something that is certainly good for making the i386 platform better and freeing it from obsolete design issues. [Continue reading]

  • Toying with KDE

    Some days ago I was "forced" to remove all the packages on my workstation due to massive revision bumps in pkgsrc. Since I had to install an environment for X, I decided to give KDE 3.5.1 a try. The thing is that I hadn't used KDE seriously since I switched to GNOME 2.6 (ouch, that was two years ago... time passes really fast). Of course I installed it several times in this period and tried to use it, but I stopped the evaluation after a 5-minute ride. I didn't feel comfortable because I wasn't used to it and, to make things "worse", I could quickly escape to GNOME as it was installed alongside it, ready to be used again. But... you know what? I'm discovering some impressive things in KDE. Overall, I like it very much, up to the point of maybe not switching back (with the exception of Mac OS X on the laptop, that is). Let's see some of the things I've "discovered". The audiocd kioslave: kioslaves are extensions for KDE's IO library, much like the methods for gnome-vfs. KDE has kioslaves for everything, but the audiocd one is impressive: it represents an audio CD as a virtual set of files in different formats, OGG and MP3 among them. These files either represent a single track or the whole CD. Their names are guessed using the CDDB database. And you guessed right: ripping and encoding a CD is as easy as dragging and dropping those virtual files wherever you want! File properties: right-clicking on a file and choosing properties shows the standard settings tabs, but also some that are specific to that file type. I found this useful for correcting the tags in the OGG files I had ripped. (I don't know if GNOME currently has this; couldn't tell for sure.) Consistency: many typical options are in common places, no matter the application you are in. All common keybindings can be configured in a central place so that they affect all programs. OK, I know many GNOME utilities also have this integration, but this had to be mentioned. Konqueror: I wish it could use Gecko as its rendering widget (I think it can, but I don't know how) because, unfortunately, KHTML fails to correctly render some pages I visit often. Anyway, it makes an excellent browser and a handy file manager. Its address bar supports many shortcuts to quickly access several search systems. K3B: at last, a decent CD and DVD burner for Unix systems. I know it has been around for a long while, but it didn't work under NetBSD until very recently, so I couldn't use it. Amarok: neat music player. It's visually attractive and easy to manage. I like the way it can be globally controlled with keybindings and how simple it is to fetch lyrics. Digikam: good picture manager. Although I tried an outdated version (0.7.0), it is still quite nice. Haven't had time to analyze it in detail yet, though. Almost everything works. Kudos to Mark Davies (markd@) for his excellent work in porting KDE to NetBSD. As a drawback, I still find KDE's interface too cluttered for my tastes. But I think I can live with this, especially because the interface can be simplified with some tuning effort. Not to mention that KDE 4 promises to be focused on usability... really looking forward to it! [Continue reading]

  • The RAII model

    Resource Acquisition Is Initialization (RAII) is a programming idiom that attempts to prevent the risk of leaking memory and other resources. It is only applicable to languages that have predictable object destruction, and among these is C++ (an object's destructor is always called when it goes out of scope). The basic idea behind RAII is to wrap any resource subject to being leaked in a thin class. This class acquires the resource during its construction and releases it upon destruction, thus hiding all the internal details from the user. This way, these resources are always released, no matter whether the function has multiple exit points or throws an exception, because the destructor will be automatically called for existing objects. If you have ever written C++ code, you have probably already used RAII without knowing it. In fact, I just learned today that this technique has a name, while I've been using it for a long time :-) I suggest you read this article by Jon Hanna; it discusses the topic in great detail. [Continue reading]
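    Here is a minimal sketch of the idiom, assuming nothing beyond the standard C and C++ libraries (the class and its names are mine): the FILE* is acquired in the constructor and released in the destructor, so every exit path closes it.
        #include <cstdio>
        #include <stdexcept>

        // Thin RAII wrapper around a C-style file handle.
        class file {
            std::FILE *m_handle;

            // Not copyable: a copy would lead to a double fclose.
            file(const file &);
            file &operator=(const file &);

        public:
            explicit file(const char *path) : m_handle(std::fopen(path, "r")) {
                if (m_handle == NULL)
                    throw std::runtime_error("cannot open file");
            }

            ~file() { std::fclose(m_handle); }

            std::FILE *get() const { return m_handle; }
        };

        int main() {
            try {
                file f("/etc/fstab");   // Resource acquired here.
                // ... read from f.get(); early returns and exceptions are safe ...
            } catch (const std::runtime_error &) {
                return 1;
            }
            return 0;                   // Resource released automatically.
        }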

  • SoC: Introductory article to tmpfs

    Dr. Dobb's Journal is running a set of mini-articles promoting Summer of Code projects. Next month's issue includes the introductory article about tmpfs, written by me and William Studenmund, the project's mentor. Looks like you have to register to access the full article; previous issues used to have them publicly available. Personally, I'm going to wait for the printed version :-) [Continue reading]

  • Rewriting from scratch

    Let's face it. If you are a software developer, you have certainly felt at some point that code developed by others was a real mess and that you could do a much better job rewriting it from scratch (especially without actually understanding the "messy" code in detail). Big mistake. I'm not going to explain why here, because the Things You Should Never Do, Part I article from Joel on Software talks about this in great detail. Certainly worth reading. [Continue reading]

  • Automatic mouse button repeating

    The trackball I bought has two little buttons that replace the typical (and useful!) wheel. One (button 3) is used to scroll down and the other (button 4) to scroll up. These are the events that a wheel generally generates every time you move it in either direction. Unfortunately, if you press and hold one of these special buttons, the trackball only sends a single press event. So, if you want to scroll a document up or down, you have to repeatedly click buttons 4 and 3 respectively, which is highly annoying: you end up going to the scroll bar and using it rather than the two buttons. The Windows and Mac OS X drivers solve this by periodically generating press and release events for those buttons while they are held down. This way, the application sees it as if you had clicked them multiple times. Very comfortable. I didn't mention NetBSD in the previous paragraph because it doesn't support this feature. That is, it handles those buttons as two extra regular buttons (in fact, that is what they are from the hardware's point of view). And no, neither Linux, XFree86 nor X.Org provide options to simulate the expected behavior as far as I can tell. So, what did I do? Add support to NetBSD's mouse driver (wsmouse) to simulate automatic button repeating. This way, I can use the trackball to its full power — hey, I got that model precisely because of those two buttons! This new feature is customizable through a set of variables exposed by wsconsctl(8), as seen below:
        # wsconsctl -m -a | grep repeat
        repeat.buttons=3 4
        repeat.delay.first=200
        repeat.delay.decrement=25
        repeat.delay.minimum=50
    repeat.buttons indicates which buttons are subject to automatic event repeating. The other three variables indicate the delays used to control how often events are sent. Three are needed because the feature supports acceleration. That is, the first time you click a button, it will take 200ms until the first repeated event is sent. The second event will be sent after 175ms, the third after 150ms, and so on until the events are 50ms apart (the minimum). Useful for scrolling large documents. [Continue reading]
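    The acceleration described above, worked out as a quick sketch (the numbers are the defaults shown by wsconsctl; the loop itself is just my illustration):
        #include <iostream>

        int main() {
            const int first = 200, decrement = 25, minimum = 50;

            int delay = first;
            for (int event = 1; event <= 8; event++) {
                std::cout << "event " << event << ": after " << delay << "ms"
                          << std::endl;
                delay -= decrement;
                if (delay < minimum)
                    delay = minimum;   // Never repeat faster than the minimum.
            }
            // Prints delays of 200, 175, 150, 125, 100, 75, 50 and 50 ms.
            return 0;
        }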

  • NetBSD is now Multiboot-compliant

    Since there were no comments after my request for review of the patch that makes NetBSD Multiboot-compliant, I have committed it to the tree. This feature is enabled by default in i386's GENERIC and GENERIC_LAPTOP kernels; for others, you need to add options MULTIBOOT to their configuration file. Please note that regular GRUB builds will not boot these new kernels properly. This is because of a bug in GRUB Legacy. You either need to install grub-0.97nb4 from pkgsrc or manually apply this patch to the GRUB sources. See multiboot(8) for more information. Edit (Feb 6th, 21:06): Some typos fixed based on comments from Reinhard von der Lippe. Does anyone remember seeing this (the post update) happening a couple of days ago? If you use an aggregator, you should have seen an update to the post that looked very similar to this. (More details in the comments.) [Continue reading]

  • Got the trackball

    A couple of days ago I received the Logitech Marble Mouse I had ordered, which means that I've now got the perfect input devices :-) [Continue reading]

  • Multiboot support for review

    During the past few days I've continued to work on adding Multiboot support to NetBSD. It has been a hard task due to the lack of documentation — I had to reverse-engineer all the i386 boot code — but also very interesting: I've had to deal with low-level code (somewhat recovering my ASM skills) and learn some details about ELF (see the copy_syms function in multiboot.c and you'll see why). You can now review the preliminary patch and read the public request for review. [Continue reading]

  • File systems documentation uploaded

    The file systems documentation I described yesterday has been uploaded to NetBSD's website alongside all the other documentation. You can read the official announcement or go straight to the book! You'll notice that it is now prettier than the version posted yesterday because it properly uses NetBSD's stylesheet. [Continue reading]

  • File systems documentation for review

    My Summer of Code project, tmpfs, promised that I would write documentation describing how file systems work in NetBSD (and frankly, I think this point had a lot to do with my proposal being picked). I wrote such documentation during August but I failed to make it public — my mentor and I first thought about turning it into an article (which would have delayed it anyway) but soon after it became apparent that that structure was inappropriate. Anyway, I promised myself I would deal with the documentation whenever I had enough free time to rewrite most of it and restructure its sections to make it somewhat decent. And guess what, this is what I started to do some days (a week?) ago. So... here is the long-promised documentation! Be aware that this is still just for review. The documentation will end up either being part of The NetBSD Guide or being a "design and implementation" guide on its own. Also note that there is still much work to do. Many sections are not yet written. In fact, I started by writing the general ideas needed to get into file system development because, once you know the basics, learning other stuff is relatively easy by looking at existing manual pages and code. Of course, the document should eventually be completed, especially to avoid having to reverse-engineer code. I'll seize this post to state something: the lack of documentation is a serious problem in most open source projects, especially those that have some kind of extensibility. Many developers don't like to write documentation and, what is worse, they think it's useless, that other developers will happily read the existing code to understand how things work. Wrong! If you want to extend some program, you want its interface to be clearly defined and documented; if there is no need to look at the code (except for carefully done examples), even better. FWIW, reading the program's code can be dangerous because you may get things wrong and end up relying on implementation details. So, write documentation, even if it is tough and difficult (I know, it can be very difficult, but don't you like challenges? ;-). [Continue reading]

  • Buying a trackball: the odyssey

    Yesterday morning, I sold my good old Logitech Marble Mouse trackball. I had the first version, which came with a PS/2 connector and only two buttons. I wanted to change it for a new one mostly because I need a USB-enabled one, but also because I want to have a scrolling wheel (to me it's extremely useful). Since then until now, I've gone to ~all local PC shops (if you live in Barcelona, you know what this means when going to Ronda de Sant Antoni and nearby streets) asking for trackballs, but none of them has a single one. More specifically, I've been looking for the same model I had (I loved it, even when playing FPS games) but in its new version, which comes with a USB connection and two additional buttons that simulate the scrolling wheel. Are trackballs obsolete or what? I think I'll order it directly from Logitech's online shop because I refuse to buy a regular mouse. While regular mice are very nice, I don't have much space for one. However, Apple's Mighty Mouse makes me doubt... basically because of its four-direction scrolling ;-) Oh, and I also want to avoid wireless ones (well, this one might be nice, as it'd be like a remote control... but it's rather expensive). Why would I want it to be wireless, wasting batteries, when I can simply connect it to my keyboard's USB hub? :-) [Continue reading]

  • Desktop screenshot

    Now that I know how to post images here, I think I'll be posting screenshots once in a while ;-) Here comes my current desktop (the iBook attached to the 20" flat panel): [Continue reading]

  • How not to close a bug

    A couple of weeks ago, I received a notification from GNOME's Bugzilla telling me that one of my bug reports had been closed, marked as incomplete. That really bothered me because it was closed without applying any fix to the code — after all, NetBSD is dead, right?

    What is worse: the maintainers acknowledged that the bug report was correct, so there is a real bug in glib. So why the hell was it closed without intervention? Fixing it is a matter of 5 minutes or less for an experienced glib developer.

    OK, fine, you could try to blame me: I failed to provide a patch in a timely fashion. But that's not a good reason to close a bug report without a fix. At the very least, they could have tried to contact me or the other developer who spoke up — if they had, I'd have written the patch, because I had simply forgotten about the report.

    So... do not close a bug if it is still there and you are aware of it! [Continue reading]

  • GNOME 2.12.2 in pkgsrc

    It has taken a while but, finally, GNOME 2.12.2 is in pkgsrc. As always, this new version comes with multiple bug fixes and some miscellaneous new stuff. Enjoy it. [Continue reading]

  • NetBSD slides for PartyZIP@ 2005 available

    The slides I used for the NetBSD conference at PartyZIP@ 2005 are now available at the NetBSD website. These should have been uploaded to PartyZIP@'s site as well, along with some recordings of the talks, but that hasn't been done yet — and the event took place in July. Hope you find them of some interest ;-) Note that they are in Spanish. [Continue reading]

  • GStreamer 0.10 in pkgsrc

    I've uploaded GStreamer 0.10, the base plugins set and the good plugins set to pkgsrc. This new major version is parallel-installable with the prior 0.8 series; this is why all the old packages have been renamed to make things clearer.

    Fortunately, this version was easier to package because it does not need a "plugin registry" as the previous one did (i.e., no need for install-time scripts). Even so, the split of the plugins into different distfiles (base, good, bad, et al.) makes things a bit more complex. (A tiny example of why you end up needing several of these sets at once is shown below.)

    Let's now wait until someone packages Beep Media Player's development version and we'll be able to enjoy its features (assuming it works fine, of course...).

    I have to confess that I'm losing interest in maintaining the GNOME-related packages in pkgsrc. They take up too much of my time, time that I'd rather spend on other NetBSD-related tasks. This is why this update has been delayed so much... [Continue reading]
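    Just to illustrate how the plugin split shows up at runtime, here is a minimal sketch in C using the GStreamer 0.10 API, assuming both the base and good plugin sets are installed: audiotestsrc comes from the base set and autoaudiosink from the good set (if I recall the split correctly), so even a trivial test pipeline already pulls in elements from two distfiles. This is not part of the pkgsrc packages themselves, just an illustration; compile it against gstreamer-0.10 (e.g., via pkg-config).

        /*
         * Minimal GStreamer 0.10 example: play a test tone for three seconds.
         * audiotestsrc is shipped in the base plugins set, autoaudiosink in
         * the good plugins set, so both packages are needed at runtime.
         */
        #include <gst/gst.h>

        int
        main(int argc, char *argv[])
        {
            GError *error = NULL;
            GstElement *pipeline;

            gst_init(&argc, &argv);

            pipeline = gst_parse_launch("audiotestsrc ! autoaudiosink", &error);
            if (pipeline == NULL) {
                g_printerr("failed to build pipeline: %s\n", error->message);
                g_error_free(error);
                return 1;
            }

            gst_element_set_state(pipeline, GST_STATE_PLAYING);
            g_usleep(3 * G_USEC_PER_SEC);   /* let the tone play for a while */
            gst_element_set_state(pipeline, GST_STATE_NULL);
            gst_object_unref(pipeline);
            return 0;
        }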

  • Google Talk talks to other servers

    Yes, it is true! (Although I haven't seen any announcement yet; thanks to a friend for telling me.) Google's messaging service, Google Talk, can now communicate with other Jabber servers such as the popular jabber.org. It has always been a Jabber-based system, but its server didn't allow communications with third-party servers before.

    I think I'll migrate my account if this proves to be stable; I'll try it for some days first to make sure :-) [Continue reading]

  • Some pictures of my rig

    I've just decided to learn how to publish images in Blogger. And in fact, it's damn easy, but for some reason I thought it was not — that is, I believed I had to register myself with some other service and was too lazy to do it...

    Here come some pictures of my rig:

    The picture above shows my desktop. You can see the iBook G4 working in clamshell mode (well, sort of). It's connected to the Apple keyboard, to the mini mouse (I ought to replace it), to the BenQ FP202W flat panel and to the stereo system. If you notice the free USB cable next to the laptop... that's my manual switch for the keyboard and mouse! ;-) On the right side below the table is my PC, the Athlon XP 2600+ machine. Oh, and there is the Palm m515, this month's DDJ issue and an 802.11 book I got recently.

    This other picture shows the machine I used during tmpfs development to do all the necessary tests. It's a Pentium II at 233MHz with 128MB of RAM and is connected to the PC with a serial line (plus to the Ethernet, of course). (I've been playing with qemu recently and I might get rid of it.) Below the speaker (on the floor) is my old Viewsonic monitor that I'm trying to sell.

    At last, this other picture shows the Apple Macintosh Performa 630 I found a year and a half ago (no, the monitor is not currently connected). It's waiting for a reinstall of NetBSD/mac68k. Ah, and there is also a VHS video player on the left side of the table which is connected to the PC.

    Next time I shall publish pictures of the SoC and Monotone T-shirts ;-) [Continue reading]

  • Routing protocols

    IP networks communicate with each other using L3 devices named routers. A router has a table that tells it how to reach a given network; i.e., whether it has to forward traffic through another router or whether it is directly attached to that network. Obviously, these tables need to be filled with accurate information, something that can be done in two ways:

    - Static routing: The network administrator manually fills in the required data in each router and/or host. This is tedious but is enough for (very) small networks. For bigger networks, it does not scale.

    - Dynamic routing: The routing tables are automatically filled in by routing protocols, which are executed by the routers themselves. These are what concern us in this post, which aims to provide a little overview of their classification.

    A routing protocol defines a set of rules and messages — in the typical sense of a network protocol — that allow routers to communicate with each other and make them automatically adapt to topology changes in the network. For example, when a new network is added, all routers are notified so they can add an appropriate entry to their routing table and reach it; imagine something similar when a network becomes unavailable.

    We can classify these routing protocols in two major groups:

    - Distance vector protocols: As the name implies, these protocols calculate a distance between two given routers. The metric used in this measure depends on the protocol. For example, RIP counts the number of hops between nodes. (This contrasts with OSPF, which uses a per-link cost that might be related to, e.g., its speed. Please note that OSPF is not a distance vector protocol; I'm mentioning it here to show that this difference in metrics can cause problems if you reinject routes between networks that use different protocols.) Another thing that characterizes these protocols is that the routers periodically send status messages to their neighbours; in some special cases, they may send a message as the result of an event without waiting for the specified interval to pass. (A small code sketch of this idea appears below.)

    - Link state protocols: These protocols monitor the status of each link attached to the router and send messages triggered by link state changes, flooding them through the whole network. Another difference is that each router keeps a database that represents the whole network; from this database, it is then able to generate the routing table using a shortest-path algorithm (typically Dijkstra's) to determine the best path. These include OSPF and IS-IS.

    At last, we can also classify these protocols in two more groups:

    - Interior routing protocols: These are used inside an autonomous system (AS) to maintain the tables of its internal routers. These protocols include OSPF and RIP.

    - Exterior routing protocols: Contrary to the previous ones, these are used to communicate routing information between different ASs. And why are they not the same as the previous ones? Because they use very different metrics to construct the routing table: they rely on contracts between the ASs to decide where to send packets, something that cannot be taken into account by interior protocols. BGP, a path vector protocol, falls into this group.

    Edit (23:47): As Pavel Cahyna kindly points out, OSPF is not a distance vector protocol. The error slipped in because I wanted to state something but got confused and put it in the wrong group. The paragraph has been reworded. [Continue reading]
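    To make the distance vector idea a bit more concrete, here is a minimal, self-contained sketch in C of how a router could merge a neighbour's advertisement into its own table (the classic distributed Bellman-Ford update). The structure names, the fixed-size table and the RIP-style infinity value are all illustrative; they are not taken from any real routing daemon.

        /*
         * Minimal sketch of a distance vector update (distributed Bellman-Ford).
         * All names and sizes are illustrative; a real daemon also needs
         * timers, split horizon, route expiration and much more.
         */
        #include <stdio.h>

        #define MAX_NETS        8
        #define INFINITY_METRIC 16      /* RIP-style "unreachable" value */

        struct route_entry {
            int dest;      /* destination network id */
            int metric;    /* hops to reach it */
            int next_hop;  /* neighbour to send packets through */
        };

        static struct route_entry table[MAX_NETS];

        /*
         * Merge an advertisement from 'neighbour', which claims it reaches
         * 'dest' with 'metric' hops.  Accept it if it is better than what we
         * know, or if it comes from the neighbour we already route through
         * (so that worsening routes are noticed too).
         */
        static void
        update_route(int neighbour, int dest, int metric)
        {
            int cost = metric + 1;      /* one extra hop to reach the neighbour */

            if (cost > INFINITY_METRIC)
                cost = INFINITY_METRIC;

            struct route_entry *re = &table[dest];
            if (cost < re->metric || re->next_hop == neighbour) {
                re->metric = cost;
                re->next_hop = neighbour;
                printf("route to net %d: %d hops via neighbour %d\n",
                    dest, cost, neighbour);
            }
        }

        int
        main(void)
        {
            /* Start with every destination unreachable. */
            for (int i = 0; i < MAX_NETS; i++) {
                table[i].dest = i;
                table[i].metric = INFINITY_METRIC;
                table[i].next_hop = -1;
            }

            /* Neighbour 1 advertises it reaches network 3 in 2 hops... */
            update_route(1, 3, 2);
            /* ...and later neighbour 2 advertises a shorter path. */
            update_route(2, 3, 1);
            return 0;
        }

    A real implementation spends most of its effort precisely on what this sketch omits: aging out stale routes, avoiding routing loops and deciding when to send triggered updates.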

  • Taking backups of your data

    I've been playing with Apple's Backup utility (in trial mode) for a couple of days and it seems to be ideal for people like me: those who know that backups must be done but who never spend the time to do them.

    After opening it you get a dialog that lets you configure a backup plan. A plan specifies the list of items to back up, the backup interval and the destination for the copy (be it a remote server, your iDisk, a local volume or a CD/DVD). Setting up the items to copy is trivial because the program offers you a set of predefined plan templates: copy personal settings, iLife data, purchased music or the whole home directory. Of course, you can configure these in a more fine-grained fashion, specifying whether your keychain, your bookmarks, your calendars, your photos, etc. should be copied or not.

    A few clicks later, the plan is created and the program will automatically take care of running the backups at the predefined intervals. Related to this, here is one thing that is very useful: if the computer was off at the exact time a backup should have run, it will do the copy when the machine is turned on again. I wish cron(8) could do something similar, because desktop PCs are not up the whole day (I know there is anacron, but it'd be nice if the regular utility supported something like this). A rough sketch of how such a catch-up check could work is shown below.

    Unfortunately, the Backup utility is tied to .Mac. If you do not have a full account you are limited to 100MB per copy. And while the iDisk and this backup facility are very nice, I don't find them worth the money.

    What I'm now thinking is that having a similar free utility could be very nice. It'd perfectly fit GNOME in the name of usability! It'd be even nicer if it could run in the background, detached from any graphical interface (so that you could set it up and forget about it on a dedicated server). Hmm... looks like an interesting project; pity I don't have time for it anytime soon (too much long-overdue stuff to do). Wondering if something like this already exists... [Continue reading]
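    For the cron(8) wish above, here is a rough, hypothetical sketch in C of the anacron-like behavior: keep a stamp file recording the last successful run and, whenever the check executes (say, at boot or from a periodic job), run the backup if the configured interval has already elapsed. The stamp path, the interval and the backup command are made-up placeholders, not anything a real tool uses.

        /*
         * Sketch of an anacron-like catch-up check: run the backup if more
         * than INTERVAL seconds have passed since the last recorded run.
         * STAMP_FILE and the backup command are placeholders.
         */
        #include <sys/stat.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define STAMP_FILE "/var/db/mybackup.stamp"   /* hypothetical path */
        #define INTERVAL   (7 * 24 * 60 * 60)         /* one week, in seconds */

        int
        main(void)
        {
            struct stat st;
            time_t last = 0;

            /* A missing stamp file simply means "never backed up". */
            if (stat(STAMP_FILE, &st) == 0)
                last = st.st_mtime;

            if (time(NULL) - last < INTERVAL) {
                printf("backup not due yet\n");
                return EXIT_SUCCESS;
            }

            /* Placeholder for the real backup command. */
            if (system("echo running backup...") != 0) {
                fprintf(stderr, "backup failed; stamp not updated\n");
                return EXIT_FAILURE;
            }

            /* Recreate the stamp file so its mtime records this run. */
            FILE *f = fopen(STAMP_FILE, "w");
            if (f != NULL)
                fclose(f);

            return EXIT_SUCCESS;
        }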

  • Apple unleashes Intel-based systems

    You probably know it by now: Apple unleashed its first Intel-based computers yesterday during Steve Jobs' keynote. The PowerBook has been replaced by a completely new model, named MacBook Pro. It features a Yonah dual-core processor plus a lot of other hardware updates and additions (some removals, too!); it won't be ready until February, but it can already be ordered. As regards desktop machines, the iMac has been updated; contrary to the laptop, this machine has received "fewer" updates, which include the processor (also dual-core) and the graphics card. Otherwise, it's similar to the PPC model; externally, both seem to be the same.

    I've been looking forward to this announcement for a while — this is why I'm posting it here. I had expected (as many other people did) that the first models to be converted would be the iBooks. Frankly, I'm glad they didn't do it, because I won't feel bad for having bought this G4 just a month and a half ago. Don't get me wrong; it's working great ;-)

    Other news includes the update of Mac OS X to 10.4.4 (which I'm already running :-) and the release of iLife '06 and iWork '06.

    I hope that NetBSD gets ported to these new machines and that it works as well as the i386 port. If that's the case, I'll seriously consider a Mac as my next desktop machine (which is still some years away... and who knows what will happen in this time frame).

    Edit (17:07): I'm now wondering why on earth many people seem to assume that Windows will run on these machines... All they have done is change the microprocessor (OK, plus other things), but this does not mean it becomes an IBM-compatible system — just consider Amiga-based and Mac-based 68k systems. I'm sure somebody will get it working, but it won't necessarily be easy.

    Edit (20:14, 12th January): I stand corrected. These Intel-based machines come with regular hardware as found on PC systems; i.e., they use standard Intel microprocessors and chipsets. The major difference is that they use EFI (the Extensible Firmware Interface) instead of the obsolete BIOS. Looks like Windows Vista will run on them directly (and Apple won't forbid it :-). As regards NetBSD, support for EFI will be needed as well as (possibly) some new drivers. [Continue reading]

  • Applications vs. windows

    One of the things that I've come to love about Mac OS X is the way it handles active applications. Let's first see what other systems do in order to talk about the advantages of this approach.

    All other desktop environments — strictly speaking, the ones I've used, which include GNOME, KDE and Windows — seem to treat a single window as the most basic object. For example: the task manager shows all visible windows; the key bindings switch between individual windows; the menu bar belongs to a single window and an application can have multiple menu bars; etc.

    If you think about it, this behavior doesn't make much sense, and this may be one of the reasons why Windows and KDE offer MDI interfaces. (Remember that an MDI interface is generally a big window that includes several others within it; the application is then represented by a single window, thus removing auxiliary windows — e.g., a tool palette — from the task switcher, etc.) Unfortunately, other systems such as GTK/GNOME do not have this kind of interface and the application's windows are always treated individually (just think how annoying it is to manage The GIMP).

    So, why is (IMHO) Mac OS X better in this respect? This OS's interface always groups windows by the application they belong to. The dock (which is similar to a task bar, but better ;-) shows application icons; the task switcher (the thing that appears in almost any environment when you press Alt+Tab) lets you switch between applications, not windows; there is a single menu bar per application; etc. Whenever you select an application, all of its windows become active and are brought to the front layer automatically. In general, you have many different windows visible at a time, but they all belong to a rather small subset of applications. Therefore, this makes sense.

    At last, let me talk about the menu-bar-at-top-of-screen thing I mentioned above, which is what drove me to write this post in the first place. Before using a Mac, I always thought that having the menu bar detached from the window didn't make any sense, especially because different windows from a single application often have different menus. I had even tried enabling a setting in KDE that simulates this behavior, but it didn't convince me at all because the desktop as a whole doesn't follow the concept of treating applications as a unit, as described above (plus the applications are not designed to work in such an interface).

    However, after using Mac OS X for a while, I'm hooked on this "feature". Accessing the menu bar is a lot easier than when it is inside the window. And being able to act on the application as a whole, no matter which of its windows is visible, is nice. You must try it to understand my comments ;-) [Continue reading]

  • Updating the pkgsrc GNOME packages

    A while ago, somebody called John asked me to explain the process I follow when I update the GNOME packages in pkgsrc. As I'll be doing this again in a few days (to bring 2.12.2 into the tree), this seems a good moment for the essay. Here I go:

    The first thing I do is fetch the whole distribution from the FTP site; i.e., the platform and desktop directories located under the latest stable version. I have a little script that does this for me, skipping the distfiles that haven't changed since the previous version.

    Once this is done I generate a list of all the downloaded distfiles and adjust the package dependencies in meta-pkgs/gnome-devel/Makefile, meta-pkgs/gnome-base/Makefile and meta-pkgs/gnome/Makefile to require the new versions; this includes adding any new packages. I do this manually, which is quite a boring task; however, writing a script for it could take much more time.

    Afterward, I use cvs diff -u meta-pkgs/gnome* | grep '^+' over the modified files to get a list of the packages that need to be updated. As the list of dependencies in the meta-packages is sorted in reverse order — i.e., a package in the n-th position uses packages in the 1..n-1 positions but not any in the n+1..N ones — this command creates a useful "step by step" guide of what needs to be done.

    Then, it is a matter of updating all the packages that need it, which is, by far, the longest part of the process. I go one by one, bumping their version, building them, installing the results, ensuring the PLIST is correct, and generating a log file in the same directory that will serve me during the commit part. I also have a little script that automates most of this for a given package — and, thanks to verifypc, this is relatively quick :-)

    In the part just described, there are some packages that always scare me due to the portability problems that plague them over and over again. These include the ones that try to access the hardware to get information, to control multimedia peripherals, to manage network stuff, etc. I don't know why some packages keep breaking in the same places in newer versions even when the mainstream developers have been sent patches fixing those issues in previous versions. </rant>

    Once I've got the updated gnome-base package installed, I zap all my GNOME configuration — ~/.gconf*, ~/.nautilus*, ~/.metacity*, ~/.gnome* and a lot more garbage — and try to start it. At this point, it is mostly useless, but I can see if there are serious problems in any of the most basic libraries. If so, this is the time to go for some bug hunting!

    When the gnome-base package works, I continue to update all the other missing packages and try to package the new ones, if any. Adding new stuff is not easy in general (portability bugs again) and this is why some of the dependencies are still commented out in the meta-packages. Anyway, with this done, I finally start the complete desktop and check if there are any major problems with the most "important" applications. Again, if there are any, it is a good time to solve them.

    So all the updates are done, but nobody guarantees that they work, especially because I do all the work under NetBSD-current. So I generally (but not always... shame on me!) use pkg_comp to check that they work, at the very least, under the latest stable release (3.0 at this point).

    The last part is to commit all the stuff in a short time window to minimize pain to end users. Even though I have left log files all around and prepared the correct entries for the CHANGES file, this often takes more than an hour.

    Unfortunately, end users always suffer, either because the packages break on their platform, because our package updating tools suck or simply because I made some mistake (e.g., forgot to commit something). But this is what pkgsrc-current is for, isn't it? ;-)

    And, of course, another thing to do is to review all the newly added local patches, clean them up, adapt them to the development versions of their corresponding packages and submit them to GNOME's Bugzilla. This is gratifying, but also a big, big pain.

    Well, well... this has been quite long, but I think I haven't left anything out. [Continue reading]

  • File previews in Nautilus

    Yesterday evening, I was organizing some of my pictures when I noticed that Nautilus wasn't generating previews for any of them. This annoyed me so much that I decided to track down the issue, which I've done today.

    The first thing I did was to attach a gdb session to the running Nautilus process, adding some breakpoints to the functions that seemed to generate preview images; I located them using grep(1) and some obvious keywords. This wasn't very helpful, but at least I could see that the thumbnail generation routine checks whether the file is local or not (because Nautilus lets you set different preferences for each case).

    With this in mind, I resorted to the traditional and not-so-elegant printf debugging technique (which, with pkgsrc in the middle, is quite annoying). Sure enough, a call to gnome_vfs_uri_is_local was always returning false, no matter which file it was passed. It was clear that the problem was in gnome-vfs itself.

    So I switched to the gnome-vfs code, located that function and saw that it was very short: basically, all it does is delegate its task to another function that depends on the method associated with the file — you can think of the method as the protocol that manages it. As I was accessing local files, the method in use was the file method (implemented in modules/file-method.c), so the affected function was, of course, its do_is_local.

    As it is a bit complex (due to caching done for efficiency), I added some more printfs to it to narrow down the problem, which led me to the filesystem_type function implemented in modules/fstype.c. And man, when looking at this function I quickly understood why it was the source of the problem. It is so clumsy — portability-wise — that it is very easy for it to break under untested circumstances: it is composed of a set of mutually exclusive ifdefs (one for each possible situation) to determine the name of the file system a file lives on.

    And you guessed right: NetBSD 3.x didn't match any of them, so it fell through to a default entry (returning an unknown file system type) that is always treated as remote by the file method code.

    The solution has been to work around the detection of one of these special cases in the configure script so that statvfs(2) is properly detected under NetBSD. This is just a hack that adds to the ugliness of the original code; fixing the issue properly will require a lot of work and the willingness of the gnome-vfs maintainers to accept the changes. (A tiny sketch of the kind of statvfs(2)-based check involved is shown below.)

    I will ask for a pull-up to the pkgsrc-2005Q4 branch once I've been able to confirm that this does not break on other NetBSD versions. Now have fun watching your images again! ;-) [Continue reading]
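    For the curious, here is a tiny, illustrative C program showing the kind of statvfs(2)-based check this is all about on NetBSD: it looks at f_fstypename and treats a couple of well-known network file systems as remote and everything else as local. This is not the actual gnome-vfs patch — the real code has to work across many platforms, handle many more file system types and cache its results — just a sketch of the underlying idea.

        /*
         * Illustrative only: decide whether a path lives on a "local" file
         * system under NetBSD by inspecting statvfs(2)'s f_fstypename.
         */
        #include <sys/statvfs.h>
        #include <stdio.h>
        #include <string.h>

        static int
        is_local_path(const char *path)
        {
            struct statvfs vfs;

            if (statvfs(path, &vfs) == -1)
                return -1;      /* unknown; let the caller decide */

            /* Treat well-known network file systems as remote... */
            if (strcmp(vfs.f_fstypename, "nfs") == 0 ||
                strcmp(vfs.f_fstypename, "smbfs") == 0)
                return 0;

            /* ...and everything else (ffs, tmpfs, ...) as local. */
            return 1;
        }

        int
        main(int argc, char *argv[])
        {
            const char *path = argc > 1 ? argv[1] : ".";
            int local = is_local_path(path);

            if (local == -1)
                perror("statvfs");
            else
                printf("%s is %s\n", path, local ? "local" : "remote");
            return 0;
        }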

  • Got a BenQ FP202W flat panel

    As a present for my saint's day (I don't know if this is the proper expression in English) — which I celebrate today because of my second name (Manuel) — I got a BenQ FP202W flat panel. It's a 20" wide screen, providing a 16:10 aspect ratio at its native resolution of 1680x1050 pixels. This contrasts heavily with my now-old Viewsonic E70f, a 17" CRT doing only 1024x768@85Hz.

    And man, this new monitor is truly amazing. Maximizing windows is a thing of the past! (Except for the media player, of course ;) Being able to have two documents open side by side is great: for example, you can keep your editor on one side while reading a reference manual on the other without having to constantly switch between windows; and even then, there is still room for other things. Moreover, playing Half-Life 2 in widescreen mode is... great :)

    I've also tried to plug it into the iBook G4; it works fine for most things but feels a bit sluggish when some effects come into play (namely Exposé or Dashboard). Looks like its video card is not powerful enough to handle it flawlessly (something I can understand). I ought to try its clamshell mode, though, as disabling the built-in screen may make it faster; but first I need to get a USB keyboard...

    Concluding: while the old CRT monitor was still in perfect working condition, the switch has been great. I don't know why I hesitated to do it ;) If you have the money to buy such a display, I certainly recommend it.

    Happy new year 2006! [Continue reading]