  • Doesn't 'ls f*' do what you expect?

    If you have ever run ls on a directory whose contents don't fit on screen, you may have tried to list only a part of it by passing a wildcard to the command. For example, if you were only interested in the directory entries starting with an f, you might have tried ls f*. But did that do what you expected? Most likely not, if any of the matching entries was a directory. In that case, you might have thought that ls was actually recursing into those directories.

    Let's consider a directory with two entries: a file and a directory. It may look like:

        $ ls -l
        total 12K
        drwxr-xr-x 2 jmmv jmmv 4096 Dec 19 15:18 foodir
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:18 foofile

    The ls command above was executed inside our sample directory, without arguments, hence it listed the current directory's contents. However, if we pass a wildcard we get more results than expected:

        $ ls -l *
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:18 foofile

        foodir:
        total 4K
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:19 anotherfile

    What happened in the previous command is that the shell expanded the wildcard; that is, ls never saw the special character itself. In fact, the command above was internally converted to ls -l foodir foofile, and this is what was actually passed to the ls utility when it was executed. With this in mind, it is easy to see why you got the contents of the sample directory too: you explicitly (although somewhat "hidden") asked ls to show them.

    How can you avoid that? Use ls's -d option, which tells it to list the directory entries themselves, not their contents:

        $ ls -l -d *
        drwxr-xr-x 2 jmmv jmmv 4096 Dec 19 15:19 foodir
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:18 foofile

    Update (21st Dec): Fixed the first command shown as noted by Hubert Feyrer.
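    To convince yourself of what the shell is doing, you can expand the pattern with echo before running ls. The following is a minimal illustration, assuming the same sample directory as above and a POSIX-like shell:

        $ echo f*
        foodir foofile

        $ ls -ld f*
        drwxr-xr-x 2 jmmv jmmv 4096 Dec 19 15:19 foodir
        -rw-r--r-- 1 jmmv jmmv 0 Dec 19 15:18 foofile

    Since echo simply prints its arguments, its output is exactly the argument list that ls would have received; combining the pattern with -d then keeps the listing restricted to the matching entries themselves.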

  • A subject for my undergraduate thesis

    The time has finally come to choose a subject for my undergraduate thesis, which I'll be working on full time next semester. My first idea was to make a contribution to NetBSD by developing an automated testing framework. I have been interested in this for a long while (I even proposed it as part of this year's SoC), and there is a lot of interest in it within the project too.

    However, this specific project does not fit well into the current research groups at my faculty. This wouldn't be a problem if I weren't thinking of pursuing a CS Master's or Ph.D. later on. But as I'm seriously considering that possibility, it'd be better if I worked on a project that lets me integrate into an existing research group as early as possible. This could also teach me many new things that I'd not learn otherwise: if you look at the paper linked above, you can see I already have several ideas for the testing framework. That is, I already know how I'd address most of it, so there wouldn't be a lot of "research". Furthermore, the teacher I talked to about this project felt its core might not be substantial enough to cover a full semester.

    So what are the other possible ideas? I went to talk to a teacher who currently directs some of the research groups, and he proposed several ideas, organized in three areas:

      • Code analysis and optimization: Here I'd work on tools to analyze existing code and binaries to understand how they work internally; this way one could later generate a better binary by reorganizing related code and/or removing dead bits. They have already done a lot of work on this subject, so I'd be working on a tiny part of it. No matter what, dealing with the compiler/linker and the resulting binaries sounds quite appealing.

      • Improve heterogeneous multiprocessor support: This group covers ideas to improve the management of heterogeneous systems such as those based on the Cell processor. I'm "afraid" any project here would be completely Linux-based, but the underlying idea also feels interesting. I haven't got many details yet, though.

      • Distributed systems: This doesn't interest me as much as the other two, but that may be because there was not enough time during the meeting to learn about this group. However, next week we are taking a guided visit to the BSC, which will hopefully clear some of my doubts and let me decide whether I'm really interested in this area.

    I shall make a decision as soon as possible, but this is hard!

    Oh, and don't worry about the testing framework project. I'll try to work on it in my spare time because I feel it's something NetBSD really needs, and I'm sure I'll enjoy coding it. Not to mention that nowadays, whenever I try to apply any fix to the tree, I feel I should be adding some regression test for it! Plus... I already have a tiny, tiny bit of code :-)

  • Software bloat

    A bit more than three years ago, I renewed my main machine and bought an Athlon XP 2600+ with 512MB of RAM and an 80GB hard disk. The speed boost I noticed in games, builds and overall system usage was incredible; I was coming from a Pentium II 233 with 384MB of RAM.

    With the change, I was finally able to switch from plain window managers to desktop environments (alternating between KDE and GNOME from time to time) and still keep a usable machine. I was also able to play the games of that era at high resolutions. And, what benefited me most, the build times of packages and NetBSD itself were cut by more than half. For example, it previously took between 6 and 7 hours to do a full NetBSD release build and, after the switch, it barely took 2. On the pkgsrc side, building some packages was almost instantaneous because the machine churned through both the infrastructure and the source builds like crazy.

    But time passes and nowadays the machine feels extremely sluggish. And you know that hardware does not degrade like this, so it's easy to conclude it's software's fault. (Thank God I've done some upgrades on the hardware, like doubling the memory, replacing the video card and adding a faster hard disk.)

    I'm currently running Kubuntu 6.10 and KDE is desperately slow in some situations; of course GNOME has its critical scenarios too. (Well... it is not that slow, but responsiveness is, and that accounts for a big part of the overall experience.) The problem is they behaved much better in the past, yet I, as a desktop user, haven't noticed any great usability improvement that is worth such speed differences. As a side note: I know the developers of both projects try their best to optimize the code (kudos to them!), but this is how I see it on my machine.

    Another data point, this time more objective than the previous one. Remember I mentioned NetBSD took less than 2 hours to build? Guess what. It now takes 5 to 6 hours to build a full release; it's as if I went back in time 3 years! Or take pkgsrc: the infrastructure is now very, very slow; for some packages, it takes more time than the program's build itself.

    I could continue this rant but... it'd lead nowhere. Please do not take it as something against NetBSD, pkgsrc and KDE in particular. I've picked these three projects to illustrate the issue because they are the ones I can compare against the software I used when I bought the machine. I'm sure all other software suffers from slowdowns too.

    Anyway, three years seems to be too much for a machine. Sometimes I think developers should be banned from having fast machines because, usually, they are the ones with the fastest machines. This makes them not notice the slowdowns as much as end users do. Kind of joking.

  • Hard disks and S.M.A.R.T.

    Old hard disks exposed a lot of their internals to the operating system: in order to request a data block from the drive, the system had to specify the exact cylinder, head and sector (CHS) where it was located (as happens with floppy disks). This structure became unsustainable as drives got larger (due to some limits in the BIOS calls) and more intelligent.

    Current hard disks are little (and complex) specific-purpose machines that work in LBA mode (not CHS). Oversimplifying, when presented with a sector number and an operation, they read or write the corresponding block wherever it physically is; that is, the operating system no longer needs to care about the physical location of that sector on the disk. (They do provide CHS values to the BIOS, but these are fake and do not cover the whole disk size.) This is very interesting because the drive can automatically remap a failing sector to a different position if needed, thus correcting some serious errors in a transparent fashion (more on this below).

    Furthermore, "new" disks also have a very interesting diagnostic feature known as S.M.A.R.T. This interface keeps track of internal disk status information, which can be queried by the user, and also provides a way to ask the drive to run some self-tests.

    If you are wondering how I discovered this, it is because I recently had two hard disks fail (one in my desktop PC and the one in the iBook), reporting physical read errors. I thought I had to replace them, but using smartmontools and dd(1) I was able to resolve the problems. Just try a smartctl -a /dev/disk0 on your system and be impressed by the amount of detailed information it prints! (This should be harmless, but I take no responsibility if it fails for you in some way.)

    First of all, I started by running an exhaustive surface test on the drive with smartctl -t long /dev/disk0. It is interesting to note that the test is performed by the drive itself, without interaction with the operating system; if you try it you will see that not even the hard disk LED blinks, which means that the test does not "emit" any data through the ATA bus. Anyway. The test ended prematurely due to the read errors and reported the first failing sector; this can be seen by using smartctl -l selftest /dev/disk0.

    With the failing sector at hand (which was also reported in dmesg when it was first encountered by the operating system), I wrote some data over it with dd(1), hoping that the drive would remap it to a new place. This should have worked according to the instructions on smartmontools' web site, but it didn't. The sector kept failing and the disk kept reporting that it still had some sectors pending to be remapped (the Reallocated_Sector_Ct attribute). (I now think this was because I didn't use a big enough block size for the write, so at some point dd(1) tried to read some data and failed.)

    After a lot of testing, I decided to wipe out the whole disk (also using dd(1)), hoping that at some point the writes would force the disk to remap the sector. And it worked! After a full pass, S.M.A.R.T. reported that there were no more sectors to be remapped and that several had been moved. Let's now hope that no more bad sectors appear; the desktop disk has been working fine since the "fixes" for over a month and has not developed any more problems.

    All in all, a very handy tool for checking your computer's health. It is recommended that you read the full smartctl(1) manual page before trying it; it contains important information, especially if you are new to S.M.A.R.T. as I was.
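    For reference, here is a rough sketch of the sequence described above, assuming a system with smartmontools installed. The device name /dev/disk0 is the one used in the post; the sector number (123456789) and the 512-byte sector size are placeholders you would replace with the LBA reported by the self-test log and your drive's real sector size. Note that the dd step destroys whatever data is stored in that sector.

        # Dump the drive's identification, S.M.A.R.T. attributes and self-test log.
        $ smartctl -a /dev/disk0

        # Ask the drive to start its extended (surface) self-test; it runs inside
        # the drive itself and can take a couple of hours.
        $ smartctl -t long /dev/disk0

        # After the test has finished (or aborted), check which LBA failed first.
        $ smartctl -l selftest /dev/disk0

        # Overwrite the failing sector so the drive gets a chance to remap it.
        # WARNING: this destroys the data stored in that sector.
        $ dd if=/dev/zero of=/dev/disk0 bs=512 count=1 seek=123456789

    If the pending-sector count does not drop after rewriting the reported sector, the heavier-handed fallback is the one the post ends up using: overwriting the whole disk with dd so that every weak sector gets rewritten and, if necessary, remapped.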