• Choosing a 24" widescreen monitor

    I'm currently using a 17" flat screen both with the PlayStation 3 and with my MacBook Pro in clamshell mode. For the laptop it is "reasonable", given that its resolution is similar to that of the built-in screen, but for the PlayStation 3 it simply sucks: I can only use it through the composite input, which results in very bad graphics quality. I also miss the 20" monitor I sold when I bought the MacBook Pro, which was very nice to watch videos on and had lots of screen real estate to work comfortably. So... I'm in the market for a widescreen computer monitor that natively supports Full HD (i.e. 1920x1080), and that brings me to the 24" world. These things are huge! Here are some things to take into account when looking for such a monitor:
    Resolution: All 24" computer monitors (not TVs) I've seen so far have a 16:10 aspect ratio with a resolution of 1920x1200. Some 23" ones also have this same resolution, and I assume that slightly bigger ones will too. To support Full HD they need a minimum of 1920x1080.
    Response time: This is the time it takes to change the color of a pixel. In general, the lower the better to prevent ghosting, but this is hard to quantify. Each vendor advertises it in a different way, because there are multiple measurements that define the response time: for example, you can measure the time it takes for a pixel to go from white to black, or from one tone of gray to another. Some vendors will only tell you the smallest number. The best specification is when the vendor provides all the different numbers.
    Brightness and contrast: All monitor specifications will give you some numbers about brightness and contrast ratio. Be aware that there are two measurements for the contrast ratio: dynamic and static.
    Video inputs: There is a wide variety of rather cheap 24" monitors, but most of them are very limited in the inputs they support. The cheapest one I've found so far only has an analog VGA connection; avoid those. If you are going all the way up to a big monitor, do it right and use a digital connection; otherwise the image may be blurry or unstable. Virtually all the other ones have a single digital DVI or HDMI input plus an analog one. The most advanced have additional inputs, such as two digital connections (one DVI, one HDMI), an analog VGA one, composite, component, S-Video, etc. In this area, just choose the one that suits your requirements, but keep in mind that the more connections they have, the more expensive they will be.
    HDCP support: If the monitor has digital inputs — DVI, but especially HDMI — make sure it supports HDCP. This is required to watch 1080p high-definition content; otherwise, players will only output 720p, a much lower resolution. So, to use the PlayStation 3 at its full power, you must have HDCP. And I assume that some computer media players also require this... or at least that's the idea I have about video playback in Windows Vista.
    1:1 pixel mapping: Widescreen computer monitors have an aspect ratio of 16:10, but game consoles and video players output video in 16:9. In terms of resolution, the monitors have 1920x1200 pixels but the video signal will only be 1920x1080. Many monitors out there will scale the 16:9 image to fill the whole 16:10 screen, which distorts it. If the monitor supports 1:1 pixel mapping, it will happily display the lower-resolution image on the screen without distortion, adding small black bars — (1200 - 1080) / 2 = 60 pixels each — at the top and bottom of it. Now, looking for this feature on the vendors' sites is hard... if not impossible. I have not found any list of specifications that mentions it, and I have only been able to guess whether a given monitor supports it or not by looking at random forums around the Internet. And even then the answers are not very clear.
    Picture-in-Picture (PiP): This feature allows you to display two different video inputs at the same time on the monitor. One covers the whole display and the other one is shown in a window on top of it. All of this is done internally by the monitor.
    Vertical orientation: Some monitors allow you to rotate them into a vertical (portrait) position, thus providing a resolution of 1200x1920. I can't say how useful this really is, but it might be nice to view photos that were taken in that orientation.
    Additional connectors: Look for extra USB or FireWire connections, ideally with extra power.
    Speakers/microphone: Some monitors include built-in speakers and/or a microphone. I personally do not care much about this because built-in speakers tend to produce low-quality sound. And with such a huge screen I assume you also want decent audio playback ;-)
    Built-in power supply: To avoid more clutter under your desk, you may want to check whether the monitor has a built-in power supply or an external one.
    So, what am I looking for? I want a monitor with: at least two digital connections, preferably one DVI and one HDMI; HDCP support; an analog input would be nice; 1:1 pixel mapping is a must; speakers are irrelevant. I've already made my choice and, hopefully, I'll get it today :-) In the next post I'll try to summarize some of the monitors I've analyzed. [Continue reading]

  • Ministry of silly walks

    I have never posted a video here, but this time I could not resist: The "Ministry of silly walks", by Monty Python. Hilarious. [Continue reading]

  • Past days' work

    Been tracking and resolving a bug in Linux's SPU scheduler for the last three days, and I fixed it just a moment ago! I'm happy and needed to mention this ;-)
    More specifically, tracking it down was fairly easy using SystemTap and Paraver (getting the two to play well together was another source of headaches), but fixing it was the hardest part, due to deadlocks popping up over and over again.
    Sorry, I can't disclose more information about it yet; I want to think a bit more about how to make this public and whether my fix is really OK or not. But rest assured I will! [Continue reading]

  • Thanks, SystemTap!

    I started this week's work with the idea of instrumenting the spufs module found in Linux/Cell to be able to take traces of the execution of Cell applications. At first, I modified that module to emit events at certain key points, which were then recorded in a circular queue. Then, I implemented a file in /proc so that a user-space application could read from it and free space in the queue, to prevent the loss of events when it was full.
    That first implementation never worked well but, as I liked how it was evolving, I thought it could be a neat idea to make this "framework" more generic so that other parts of the kernel could use it. I rewrote everything with this idea in mind and then also modified the regular scheduler and the process-management system calls to raise events for my trace. And I got it working.
    But then I was talking to Brainstorm about his new "Sun Campus Ambassador" position at the University, and during the conversation he mentioned DTrace. So I asked... "Hmm, that tool could probably simplify all my work; is there something similar for Linux?". And yes; yes there is! Its name: SystemTap.
    As the web page says, SystemTap "provides an infrastructure to simplify the gathering of information about the running Linux system". You do this by writing small scripts that hook into specific points of the kernel — at the function level, at specific marker points, etc. — and which get executed when the script is processed and installed into the live kernel as a loadable kernel module (a tiny sketch of such a script follows at the end of this entry).
    With this tool I can discard my several-hundred-line changes to gather traces and replace them with some very, very simple SystemTap scripts. No need to rebuild the kernel, no need to deal with custom changes to it, no need to rebuild every now and then... neat!
    Right now I'm having problems using the feature that instruments kernel markers, and I need them because otherwise some private functions cannot be instrumented due to compiler optimizations (I think). OK, I could expose those functions but, while I'm at it, I think it'd be a good idea to write a decent tapset for spufs that could later be published, and that prevents me from doing such hacks.
    But anyway, kudos to the SystemTap developers. I now understand why everybody is so excited about DTrace. [Continue reading]
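    The promised sketch: the probe point below is only illustrative and assumes the spufs code is built as a module named "spufs"; it is not the actual instrumentation I am using. It simply prints which spufs function is entered and by which process:
        # stap -e 'probe module("spufs").function("*") { printf("%d %s -> %s\n", tid(), execname(), probefunc()) }'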

  • Dying MacBook Pro battery

    I've had my MacBook Pro for a bit over 11 months. Not so long ago, I remember that the battery lasted for more than three and a half hours when using the machine very lightly (some web browsing or some e-mail reading, for example), and for a bit over an hour with (very) heavy usage. But recently I started to notice that its capacity had shrunk to alarming levels: it now only lasted for about an hour with the machine idle! That didn't feel right for a machine that is only a year old.
    After installing coconutBattery, I was scared to see that the battery only had 53% of its original capacity, and that was after a modest 80 full charge cycles. Compared to several other similar machines with much higher cycle counts and battery life, mine was in very bad condition.
    I don't know if that was due to a defective battery or misuse of it (like too much heat from the computer damaging it), but I'm inclined to think it's the former, especially because I've read about similar (well, worse) problems from people who bought this same machine around the same dates.
    Anyway, the thing is I went to an Apple Store on Monday to explain the situation: they just took the battery, noted down the machine's serial number (no need to show the invoice!) and told me that they'd send me an SMS when they had resolved the problem. And... today... I received that message. Shiny new battery and no complaints from them! Kudos to the service, again. [Continue reading]

  • Hello world in Linux/ppc64

    I've decided to improve my knowledge of the Cell platform, and the best way to get started seems to be to learn 64-bit PowerPC assembly, given that the PPU uses this instruction set. Learning this will open the door to some more interesting tricks with the architecture's low-level details.
    There are some excellent articles at IBM developerWorks dealing with this subject, and thanks to the first one in an introductory series to PPC64 I've been able to write the typical hello world program :-) Without further ado, here is the code!
        #
        # The program's static data
        #
        .data
        msg:    .string "Hello, world!\n"
        length = . - msg

        #
        # Special section needed by the linker due to the C calling
        # conventions in this platform.
        #
        .section ".opd", "aw"           # aw = allocatable/writable
        .global _start
        _start: .quad ._start, .TOC.@tocbase, 0

        #
        # The program's code
        #
        .text
        ._start:
                li      0, 4            # write(2)
                li      3, 1            # stdout file descriptor
                lis     4, msg@highest  # load 64-bit buffer address
                ori     4, 4, msg@higher
                rldicr  4, 4, 32, 31
                oris    4, 4, msg@h
                ori     4, 4, msg@l
                li      5, length       # buffer length
                sc

                li      0, 1            # _exit(2)
                li      3, 0            # return success
                sc
    You can build it with the following commands:
        $ as -a64 -o hello.o hello.s
        $ ld -melf64ppc -o hello hello.o
    I'm curious about as(1)'s -a option; its purpose is pretty obvious, but it is not documented anywhere, neither in the manual page nor in the info files.
    Anyway, back to coding! I guess I'll post more about this subject if I find interesting and/or non-obvious things that are not already documented clearly anywhere. But for beginner's stuff you already have the articles linked above. [Continue reading]

  • Mad at the Cell SDK

    I've been installing the Cell SDK 3.0 on two Fedora 8 systems at home — a PlayStation 3 and an old AMD box — and I cannot understand how someone (IBM and BSC) can publish such an utterly broken piece of crap and be proud of it. Sorry, had to say it. (If you are one of those who wrote the installer, please excuse me, but that's what I really think. Take this as constructive criticism.)
    Before saying that Fedora 8 is not supported and that I should only run this on Fedora 7, shut up. I am sure all the problems are there too, because none of them can be related to the system version.
    Strictly speaking, it is not that the installer does not work, because if you follow the instructions it does. But it is a very strange program that leaves garbage all around your system, produces warning messages during execution, and the garbage left around will keep producing warnings indefinitely. Plus, to make things worse, the network connection to the BSC — where the free software packages are downloaded from by yum — is extremely unreliable from outside the university's direct connection (that is, from home), which means that you will have to retry the installation lots of times until you are able to download all the huge packages. (In fact, that's what annoys me most.) And this is not a problem that happened today only; it also bit me half a year ago when installing the 2.1 version.
    Let's talk about the installer, that marvelous application.
    Starting with version 3, the SDK is composed of an RPM package called cell-install and two ISO images (Devel and Extras). When I saw that, I was pretty happy because I thought that, with the RPM package alone, I'd be able to do all the installation without having to deal with ISO images. It turns out that that is not true, as some components only seem to be available from within them (most likely the non-free ones, but I haven't paid attention).
    Ah, you want to know what the SDK contains. Basically, it is composed of a free GCC-based toolchain for both the PPU and SPUs, the free run-time environment (the libspe2), a proprietary toolchain, the proprietary IBM SystemSim for Cell simulator and some other tools (a mixture of free and non-free ones). So, as you can see, we have some free components and some proprietary ones. You can, in fact, develop for the Cell architecture by using the free components alone. So why on earth do you need the proprietary ones? Why can't you skip them? Why aren't they available in some nice repository that I can use without any external "installer" and avoid such crap? That's something I don't get. (Maybe it's possible with some extra effort, but not what the instructions tell you.)
    OK, back to the installer. You need to copy the two ISOs into a temporary directory, say /tmp/iso, and then run the installer by doing something like:
        # cd /opt/cell
        # ./cellsdk --iso /tmp/iso install
    This will first show you some license agreements. Here is one funny point: you must accept the GPL and LGPL terms. Come on! I am using Fedora, and I am already using lots of GPL'd components for which I did not see the license. Why do I have to do that? And why do I have to reaccept the IBM license terms when I already did that on the downloads page?
    After the license thing, it mounts the images under /tmp/sdk (keep this in mind because we'll get back to it later), probably does some black magic and at last launches yum groupinstall with multiple parameters to install all the SDK components.
    All right, you accept the installation details and it starts installing stuff. This would be OK if it wasn't for the network connection problems I mentioned earlier; I've had to restart this part dozens of times (literally) to be able to get all the packages. So, again, question: why couldn't you simply tell me what to put in yum's configuration, define some installation groups for the free components alone and let me use yum to install those without having to trust some foreign and crappy installer script? Why do you insist on using /opt for some components, and inconsistently between architectures?
    And why did I mention /tmp/sdk? Because the yum repositories registered by the installer have this location hardcoded in. Once you unmount the ISO images (that is, when the installation is done), yum will keep complaining about missing files in /tmp/sdk forever — unless you manually change yum's configuration, that is (see the footnote at the end of this entry). What is even nicer, though, is that yum always complains about one specific repository because it is only available online, yet it also looks for a corresponding image in /tmp/sdk.
    At last, there are also some random problems (probably caused by all the above inconsistencies). Once, the script finished successfully but the SDK was left half installed: some components were missing. Another time, the installer hung in the middle (no CPU consumption at all, no system activity) when it seemed it had finished, and I had to kill it manually. After restarting it, it turned out it had not actually finished, as it had to install some more stuff.
    Summarizing... all these problems may not be so important, but they make one feel that the whole SDK is a very clunky thing.
    I wish someone could create native packages for the free components of the SDK and import them into the official Fedora (or, please, please, please, Debian) repositories. After all, these are just a native compiler for the PPU, a cross-compiler for the SPUs, the libspe2 and the SPU's newlib. Note that the GCC backends for both the PPU and SPUs are already part of the FSF trees, so it shouldn't be too difficult to achieve by using official, nicer sources.
    Rant time over. [Continue reading]
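    As a footnote on the /tmp/sdk issue, the workaround sketched below is what I mean by changing yum's configuration by hand. It assumes — and this is a guess, not something documented — that the installer registers its repositories as plain files under /etc/yum.repos.d/ and that their names start with "CellSDK"; adjust to whatever you actually find there:
        # grep -l /tmp/sdk /etc/yum.repos.d/*.repo
        # sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/CellSDK-*.repo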

  • Fixing id's command line parsing

    Today's work: been fixing NetBSD's id(1) command line parsing to match the documented syntax. Let me explain.
    Yesterday, for some unknown reason, I ended up running id(1) with two different user names as its arguments. Mysteriously, I only got the details for the first user back and no error for the second one. After looking at the manual page and at what the GNU implementation does, I realized that the command is only supposed to take a single user, or none at all, as part of its arguments.
    OK, so "let's add a simple argc check to the code and raise the appropriate error when it is greater than 2". Yeah, right. If you look at id(1)'s main routine, you'll find an undecipherable piece of spaghetti code — have you ever thought about adding multiple ?flag variables and checking the result of the sum? — that comes from the fact that id(1)'s code is shared across three different programs: id(1), groups(1) and whoami(1).
    After spending some time trying to understand the rationale behind the code, I concluded that I could not safely fix the problem as easily as I first thought. Touching the logic in there would most likely result in a regression somewhere else, basically because id(1) has multiple primary, mutually exclusive options, and groups(1) and whoami(1) are supposed to have their own syntax. Refactoring it would be just as unsafe.
    So what did I do? Thanks to ATF already being in NetBSD, I spent the day writing tests for all possible usages of the three commands (which was not trivial at all) and, of course, added stronger tests to ensure that the documented command line syntax was enforced by the programs (the sketch at the end of this entry shows the kind of check I mean). After that, I was fairly confident that if I changed the code and all the new tests passed afterwards (especially those that already passed before), I had not broken it. So I made the change only after the tests were done.
    I know it will be hard to "impose" such a testing/bug-fixing procedure on other developers, but I would really like them to consider extensive testing... even for obvious changes or for trivial tools such as these. You never know when you break something until someone else complains later. [Continue reading]
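    For the curious, the sketch below shows roughly what one of those new checks looks like as an atf-sh test case. It is written against today's atf-sh interface, and the exact helper names and flags available in the tree back then may have differed; take it as an illustration, not as the actual committed test:
        atf_test_case too_many_args
        too_many_args_head() {
            atf_set "descr" "id(1) must reject more than one user operand"
        }
        too_many_args_body() {
            # Passing two user names must make id fail with a usage error.
            atf_check -s not-exit:0 -o empty -e ignore id root daemon
        }
        atf_init_test_cases() {
            atf_add_test_case too_many_args
        }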

  • ATF imported into NetBSD

    Finally! After more than five months of development (with varying intensity), I am very pleased to announce that ATF, my Google Summer of Code 2007 project, has been integrated into the NetBSD source tree.
    For more details see the official announcement in the tech-userlevel@ mailing list. [Continue reading]

  • ATF 0.3 released

    I've just published the 0.3 release of ATF. I could have kept delaying it indefinitely (basically because my time is limited now), so I decided it was time to do it even if it does not include some things I wanted.
    The important thing here is that this release will most likely be the one merged into the NetBSD source tree. If all goes well, this will happen during this week, which will finally give the project a lot of exposure :-) [Continue reading]

  • Games: Resistance: Fall of Man

    Last night I finished playing "Resistance: Fall of Man", a game that came with the PlayStation 3 Starter Pack I bought. It was not as long as I expected, but I found it to be a very good game. The storytelling, sound and gameplay were nice, but I cannot judge the graphics: I already showed you the crappy monitor used with the PS3... so I'll surely go through the whole game again when I get a nicer screen.
    One specific thing I liked, when comparing it to other FPSes such as Doom 3 or F.E.A.R., was that you barely have to use the flashlight, which in other games gets boring after a while. Well... I guess this is because Resistance was not meant to be a frightening game, as those other two are. Another point in its favor is that the game has a nice set of guns, some of which are pretty original and different from those in all the other games I've seen in this genre. And it's hard to run out of ammo, at least for the most basic (but useful) guns.
    Speaking of other games, there are some levels that will remind you a lot of Call of Duty, and some others of Half-Life 2. Not bad, but it seems like FPS developers are running out of ideas.
    To conclude, let me say that playing with the Sixaxis controller, as opposed to a keyboard plus mouse, was extra nice. It was very difficult to get used to it, but in the end it makes for a good gaming experience. I'm now willing to try Metroid Prime 3 or Red Steel on the Wii with its Wiimote, which surely are better in this regard.
    Recommended game :-) [Continue reading]

  • Got it! (The PS3)

    A bit more than a week ago, I posted about considering buying a PlayStation 3 and, finally, yesterday evening I took the plunge and bought a Starter Pack that comes with a PlayStation 3 (the 60GB model, about to be discontinued), two Sixaxis controllers (I know the DualShock 3 is about to be released) and two games (Resistance: Fall of Man and Motorstorm).
    I love the machine so far and think that the money was well spent, even though I haven't had a chance to install Linux yet. Honestly, I tried but failed: Fedora 8 (Test 3) doesn't seem to be supported by the installer, but I'll keep trying.
    My setup: But this screws everything up:
    Eww, ugly, isn't it? ;-) What's the point of a machine able to output Full HD when I only have this crappy monitor? Especially when the signal of the PS3 is going through an old Avermedia TV Phone card plugged into an old computer that is in turn connected to this flat panel through an analog VGA connection. Lots of image quality loss! (The Linux console looks even uglier. No joy in using it as a desktop system with this setup, as I wanted to do.)
    By the way, the monitor is showing the Folding@Home client bundled with the PS3 system. [Continue reading]

  • Considering to buy a PS3

    For the last couple of weeks, I've been considering getting a PlayStation 3. Not because of gaming, as I'm not a hardcore gamer, but because of the development platform it provides: a rather compact and cheap machine with a heterogeneous multiprocessor — the Cell Broadband Engine — that can easily run third-party OSes. My current research tasks focus on this area, so having a personal Cell machine at home to tinker with would be nice.
    Sure, we already have a PS3 at the department and access to the Cell blades at the BSC, but none of these are easy to access physically and they are used by other researchers to do work. Leaving them unusable or making drastic changes to the installed systems could annoy them. (OK, the PS3 is there to allow us to install custom kernels and make more bizarre changes, but still, people are working on it periodically.)
    But making the decision is hard.
    Pros:
    - A machine with a Cell processor. Ideal for my current work.
    - A Blu-ray player.
    - Cheap considering the above two points.
    - Additionally *grin*, it's a gaming machine. Never owned one myself.
    - Installing third-party OSes is supported by the official firmware.
    - Possibility to install NetBSD and help with the port.
    - Possibility to try distributions of Linux other than the ones officially supported by the Cell SDK. (We currently use Fedora 6 on the PS3 at the department, and I don't like it that much, but I can't easily convince my tutor to try something else.)
    - Easy access to the machine, with a monitor and keyboard.
    - It might provide a "decent" desktop computer for lightweight tasks.
    - I've heard rumors of a DVB kit for the PS3 coming at Christmas. This could be a great selling point for me.
    Cons:
    - Considering it's a gaming machine, it's too darn expensive.
    - In my eyes, it's the worst current-generation gaming machine. The Xbox 360 has a lot more games. The Wii has the Wiimote (and my brother already owns one, so I can play with it if I want to). And as I already said, I'm not a hardcore gamer (and don't want to be one). So hey, I'd be getting a gaming machine, but the worst option...
    - I'd also have to buy a monitor for it, and flat displays above 22" (the ones that support 1080 lines) are not cheap. Add this (even if I ended up with a 20" one) to the PS3's price plus a couple of games.
    - It's ugly, but oh well, you can surely have a different opinion.
    - I'm afraid I would not use it as much as I now think I would.
    Any other points? I see I listed more pros than cons, but still... I'm unsure.
    Update (Sep 6th): And now Sony has officially announced a price drop for the PS3... [Continue reading]

  • ATF meets XML

    During the last couple of days, I've been working on the major change planned for the upcoming ATF 0.3 release: the ability to generate XML reports with the test results so that they can later be converted to HTML.
    It goes like this: you first run a test suite by means of the atf-run tool, then use the atf-report utility to convert atf-run's output to the new XML format, and at last use a standard XSLT processor to transform the document into whichever other format you want (a rough sketch of the commands is included at the end of this entry). I'm including sample XSLT and CSS style-sheets in the package to ease this process.
    I've uploaded an example of what this currently looks like, but be aware that this is still a very preliminary mockup. For example, adding more details for failed stuff is needed to ease debugging later on. Comments welcome! [Continue reading]
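    In practice, the pipeline looks more or less like the sketch below. Take the option syntax and the style-sheet file name as approximations from memory; they may not exactly match what ends up shipping in 0.3:
        $ atf-run | atf-report -o xml:results.xml
        $ xsltproc tests-results.xsl results.xml > results.html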

  • ATF 0.2 released

    I am pleased to tell you that ATF 0.2 has just been released! This is the first non-SoC release, coming exactly one month after the 0.1 release, which means that the project is still alive :-)
    This is just a quick note. For more details please see the official announcement. Enjoy! [Continue reading]

  • Getting started with Cocoa

    I recently subscribed to the Planet Cocoa aggregator and it has already brought me some interesting articles. Today there was an excellent one titled Getting started with Cocoa: a friendlier approach, posted at Andy Matuschak's blog, Square Signals.
    This post guides you through your first steps with Cocoa. Its basic aim is to give you enough intuition to guide yourself through the Cocoa documentation in the future. If you have ever programmed in, e.g., Java, you know what this means: you first need some basic knowledge of the whole platform to get started and, at that point, you can do almost anything by diving into the API documentation and searching for what you need — even if you had no clue on how to accomplish your task before. Attacking the in-depth documentation directly is hard because it overwhelms you with details that are not important to the beginner. Plus it does not show you the big picture.
    I am by no means a Cocoa expert yet (in fact I'm very much a beginner), so this post will be extremely helpful to me, at least; thanks Andy! I hope it is helpful to you too, in case you want to begin programming for Mac OS X. [Continue reading]

  • TV series: Jericho

    Just finished watching Jericho's first season, aired during the summer on the Spanish Tele 5 channel. What an incredible drama show! It has all the ingredients to keep you hooked: well-balanced characters, good acting and scenery, lots of action and, of course, a good dose of suspense (OK, not as much as in Lost, but that one exaggerates it). It is a pity it was cancelled and, after looking at their site, I don't really understand whether there will be a second season or not; I hope there will be.
    And yeah... I think it's hard to avoid comparing this show with 24 (hmm, the sixth season starts on Thursday here), especially when Hawkins is in action. Just watch it and you'll probably understand this point ;-)
    Highly recommended. [Continue reading]

  • Me @ deviantART

    I have been a deviantART member for a bit more than three years already — wow, time passes fast... too fast — but I never submitted any stuff I was really proud of. In fact, I have just removed the three deviations I had in my account.
    Now that I own a decent digital camera, I think I'll start posting some of the photos I like most there after retouching them a bit. I don't know if I'll keep up with this, but I'm optimistic right now ;-) Feel free to dive into my member page and see the two photos I posted!
    "Why not Flickr?", I hear. At the moment I don't think I'll be posting lots of photos, nor personal ones; that is, no full photo rolls and no photos of me or any other person I know. Hence deviantART seems a better place to share my "selected" stuff. [Continue reading]

  • Serial console cable for an old Mac

    I'm currently working on the NetBSD/mac68k kernel to migrate it from the old rcons framebuffer driver to a more modern one that supports colors, virtual terminals, custom fonts and all the other assorted goodies that come with wscons. Unfortunately, I've hit a very mysterious system hang-up with my code that I cannot easily debug from the machine itself because the console does not work at all. Hence, I needed a serial console for this machine, a Performa 630.
    The problem is that old Macintoshes use a DIN-8 connector for their serial line, as opposed to the DE-9 (or DB-25) used in PCs. Fortunately it is possible to connect the two by properly wiring a conversion cable, and that's what I've done today. My first attempt failed because I built a DTE-DCE cable (used to connect to modems and other communications equipment), but in the end I got it right, which resulted in a "null-modem" cable to connect the two machines.
    Here is the scheme I used:
        DIN-8 (DTE)    DE-9 (DCE)    DE-9 (DTE)
        1              7             8
        2              8             7
        3              3             2
        4              5             5
        5              2             3
        6              N/C           N/C
        7              N/C           N/C
        8              5             5
    N/C stands for "Not connected". Use at your own risk. [Continue reading]

  • New camera (and current desktop)

    After some time complaining about the slowness, the size and the non-working zoom of my old camera, a Kodak EasyShare DX4530 that my father gave me when he replaced it, I am now the proud owner of a Canon PowerShot A570 IS. This is my first "decent" digital camera, and I'm looking forward to learning the basics of photography with it.
    This camera is bigger than I planned — it is not much smaller than the Kodak — but, on the other hand, it offers lots of features worth having. To mention some: it has good lens quality, complete manual controls, optical image stabilization, a standard optical zoom (4x) and a very intuitive and easy-to-manage interface (though, of course, this last point is my personal opinion).
    I thought I could share some pictures of the mess my desktop currently is — OK, OK, maybe it's not that messy; I have seen photos depicting much worse scenarios than this one. I wish mine could be similar to this one, powered by a huge Apple Cinema Display and a Mac Pro, but I'm not there (yet!) ;-) The laptop and the desire to hack NetBSD on old computers make keeping things tidy hard.
    So here we go: this is the main view, where you can easily see the MacBook Pro, an old Apple keyboard to the left and a relatively old PC in the bottom-right corner, connected to an old flat screen seen in the top-right corner. There is also too much garbage on top of the table that should not really be there: batteries, the speakers, papers... scissors?
    And here you can see the "vintage corner", currently composed of a couple of old Macintoshes (68040-powered) and a DNARD (also known as a "shark"). There are also a couple of USB hard disks, but those are not old ;-) [Continue reading]

  • Weird laptop keyboard

    My aunt asked me to do a full reinstall of the software on her laptop, a Compaq Presario 1200, because it was not working properly. This was horrible to do due to the speed of the machine, which feels incredibly slow nowadays. Plus there was a problem with the keyboard: it had never worked properly, as in "some keys were not mapped to the right place". The keyboard looks like this:
    After reinstalling the system and all the drivers from Compaq, I was disappointed to see that the keyboard still did not work properly. How could it be that it had never worked before (not even after buying it with the preinstalled system!) and that it would not work even after a clean install?
    I messed with the two Spanish keyboard mappings (traditional and international sorting) and tried to remap the incorrect keys using some free tools that I found, but neither of the two fixed the problem. After a while, though, I realized that the fact that the keyboard did not have the C cedilla (ç) key meant that it was not manufactured for Spain: that key is used in Catalan (among other languages, of course), and it has to be there on "Spain Spanish" keyboards.
    And you know what? That was the correct rationale. Switching the keyboard to a South American Spanish layout made it work as expected (I tried the one from Argentina). Now the generated characters match the letters printed on the physical keys, even though she won't be able to type a ç. (I will resolve this with some ugly hack or with an external USB keyboard.)
    What I'm wondering now is... how could a very popular reseller here in Spain sell a laptop with a non-native keyboard!? That's nonsense.
    For comparison with the above keyboard, this is how it should have looked: [Continue reading]

  • Random MacBook Pro notes

    Some things I have had in my mind for a while but which I'm too lazy to turn into full-blown posts:
    - Updated the internal Hitachi 160GB 5400RPM drive to a Seagate Momentus 7200.2. There are a lot of people who say that the difference between 5400RPM and 7200RPM is negligible in laptop disks. Screw that. For daily tasks (browsing the network, reading your mail, etc.) it may not be too noticeable, but for disk-intensive operations it really is. Some numbers: Half-Life 2 now takes 20 seconds less (from a total of 1'50") to start. A NetBSD "build.sh sets" now takes 1'52" compared to the 4'15" it took before. A NetBSD "build.sh release" with already-built sources (i.e. a release build that does no CPU-intensive operations) has been cut down to half the total time: 11'53" now as opposed to the old 23'10". And I didn't "benchmark" iPhoto, but it surely starts up much faster now. Quite a bit of a difference, I say!
    - The video card, an ATI Radeon Mobility X1600 128MB, is starting to show its limits. I've been trying some game demos and they are barely playable at the native resolution, 1440x900, which is the lowest I have found at the 16:10 aspect ratio. I refuse to play with an aspect ratio that does not match the physical screen... BioShock is usable but with game detail set to a minimum, and even then some scenarios feel slow; I feel that this game should look gorgeous with high detail settings. F.E.A.R. is certainly playable (I finished it) but with details set to a relatively low level. Lost Planet: Extreme Condition is simply unusable.
    - Silent computing... well, the machine is truly silent when doing light operations, but I hate doing heavyweight tasks on it such as building NetBSD, encoding video or playing games. Not because it is slow, but because the fans spin up to their maximum speed (around 6000RPM) and they make a damn lot of noise. Maybe this noise would not be that noticeable on a desktop computer, but as we are talking about a laptop, we are very close to the fans during usage.
    - I've been using the external Apple USB keyboard I have (not the new flat model, which by the way looks cool) with the laptop for a while. That keyboard is crap after some months of usage; it does not feel smooth to the touch any more. The MacBook Pro's built-in keyboard is much, much better.
    /me considering a Mac Pro in the not-so-distant future. [Continue reading]

  • SoC: Second preview of NetBSD with ATF

    Reposting from the original ATF news entry: [Continue reading]

  • SoC: Some statistics

    Here go some statistics about what has been done during the SoC 2007 program as regards ATF:
    - The repository weighs in at 293 revisions, 1,174 certificates (typically 4 per revision, but some revisions have more) and 221 files. This includes ATF, the patches to merge it into the NetBSD build tree and the website sources. (mtn db info will give you some more interesting details.)
    - The clean sources of ATF 0.1 (not counting the files generated by the GNU autotools) take 948KB and are 20,607 lines long (wow!). This includes the source code, the manual pages, the tests and all other files included in the distribution.
    - The patches to merge ATF into NetBSD, according to diffstat, change 209 files, adding 6,299 lines and deleting 4,583. Aside from merging ATF into NetBSD, these changes also convert multiple existing regression tests to the new framework.
    As regards the time I have spent on it... I don't know, but it has been a lot. It should have been more, as I had to postpone the start of coding for some weeks due to university work, but I think the results are quite successful and in line with expectations. I have been able to cover all the requirements listed in the NetBSD-SoC project page and have done some work on the would-be-nice ones.
    I am eager to see the results of the other NetBSD-SoC 2007 projects, as there was very interesting stuff in them :-) [Continue reading]

  • SoC: ATF 0.1 released

    To conclude the development of ATF as part of SoC, I've released a 0.1 version coinciding with the coding deadline (later today). This clearly draws a line between what has been done during the SoC program and what will be done afterwards.
    See the official announcement for more details! I hope you enjoy it as much as I enjoyed working on it. [Continue reading]

  • SoC: Status report

    SoC's deadline is just five days away! I'm quite happy with the status of my project, ATF, but it will require a lot more work to be in a decent shape — i.e. ready to be imported into NetBSD — and there really is no time to get that done in five days. Furthermore, it is still too unstable (in the sense that it changes a lot), so importing it right now could cause a lot of grief to end users. However, after a couple of important changes, it may be ready for a 0.1 release, and that's what I'm aiming for.
    I have to confess again that some parts of the code are horrible. That's basically because it has been gaining features in an iterative way, not all of which were planned beforehand... so it has ended up being hack upon hack. But don't worry: as long as there is good test coverage for all the expected features, this can easily be fixed. With a decent test suite, I'll be able to rewrite any piece of code later and be pretty sure that I have not broken anything important. (In fact, I've already been doing that for the innermost code, with nice results.)
    So what has changed since the preview?
    - All files read by ATF, as well as all data formats used for serialization, now have a header that specifies their format (a type and a version). This is very important to have from the very beginning so that the data formats can easily be changed in the future (which will certainly happen).
    - Rewrote how test programs and atf-run print their execution status. Now the two print a format that is machine-parseable and "sequential": reading the output from top to bottom, you can immediately know what the program is doing at the moment without having to wait for future data.
    - Added the atf-report tool, which gathers the output of atf-run and generates a user-friendly report. At the moment it outputs plain text only, but XML (and maybe HTML) are planned. The previous point was a prerequisite for this one.
    - Merged multiple implementation files into more generic modules.
    - Merged the libatf and libatfprivate libraries into a single one. The simpler, the better.
    - Added build-time tests for all public headers, to ensure that they can be included without errors.
    - Implemented run-time configuration variables for test programs, and configuration files (a quick sketch of how these are passed appears at the end of this entry).
    Wow, that's a lot of stuff :-)
    Talking with my mentor five days ago, we came up with the following list of pending work to get done before the deadline:
    - Configuration files. Already done as of an hour ago!
    - A plain text format that clearly describes the results of the test cases (similar to what src/regress/README explains). I haven't looked at that yet, but this will be trivial with the new atf-report tool.
    - Would be nice: HTML output. Rather easy. But I'm unsure about this point: it may be better to define an XML format only and then use xsltproc to transform it.
    - Manual pages: a must for 0.1 (even if they are not too detailed), but not really required for the evaluation.
    - Code cleanups: can be done after SoC, but I personally dislike showing ugly code. Unfortunately there is not enough time to spend on this. Cleaning up a module means rewriting most of it, documenting each function/class and adding exhaustive unit tests for it. It is painful, really, but the results are rewarding.
    - Keep the NetBSD patches in sync with development: I'm continuously doing that!
    Let's get back to work. [Continue reading]
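    As a taste of the configuration variables mentioned in the list above, they are meant to be passed on the command line roughly as sketched below. The variable and test program names are made up for illustration, and the exact flags may differ in the released version:
        $ atf-run -v fs_type=ffs                # affects a whole test suite run
        $ ./t_mount -v fs_type=ffs mount_basic  # or a single test program / test case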

  • SoC: First preview of NetBSD with ATF

    Reposting from the original ATF news entry: I have just uploaded some NetBSD-current release builds with ATF merged in. These will ease testing for the casual user who is interested in this project, because he will not need to mess with patches to the NetBSD source tree nor rebuild a full release, which is a delicate and slow process. For the best experience, these releases are meant to be installed from scratch, even though you can also upgrade an existing installation. They will give you a preview of what a NetBSD installation will look like once ATF 0.1 is made public, which should happen later this month.
    For more details see my post to NetBSD's current-users mailing list.
    Waiting for your feedback :-)
    Edit (Aug 20th): Fixed a link. [Continue reading]

  • Hibernating a Mac

    Mac OS X has supported putting Macs to sleep for a very long time. This is a must-have feature for laptops, but it is also convenient for desktop machines. However, it wasn't until the transition to Intel-based Macs that it also started to support hibernation, also called deep sleep. When entering hibernation, the system stores all memory contents to disk, as well as the status of the devices, and then powers off the machine completely. Later on, the on-disk copy is used to restore the machine to its previous state when it is powered on. It takes longer than resuming from sleep, but all your applications will be there as you left them.
    Now, every time you put your Intel Mac to sleep it is also preparing itself to hibernate. This is why Intel Macs take longer than PowerPC-based ones to enter sleep mode. This way, if the machine's battery drains completely in the case of notebooks, or the machine is unplugged in the case of desktops, the machine will be able to quickly recover itself to a safe state and you won't lose data.
    As I mentioned yesterday, I've been running my MacBook Pro for a while without the battery, so I had an easy chance to experiment with hibernation. And it's marvelous. No flaws so far.
    The thing is that I always powered down my Mac at night. The reason is that keeping it asleep during the whole night consumed little battery, but enough to require a recharge the next morning to bring it back to 100%, so I didn't do it. But now I usually put it into hibernation; this way, on the next boot, I can continue working straight from where I left off and I don't have to restart any applications.
    Now... putting a Mac notebook into this mode is painful if you have to remove the battery every time to force it to hibernate, and unfortunately Mac OS X does not have any "Hibernate" option. But... there is this sweet Dashboard widget called Deep Sleep that lets you do exactly that! No more boots from a cold state :-) (A command-line alternative is sketched below.) [Continue reading]
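    The command-line alternative I alluded to is pmset; the sketch below and the mode values in it are from memory, so check the pmset manual page before changing anything on your own machine:
        $ pmset -g | grep hibernatemode     # show the current sleep/hibernation mode
        $ sudo pmset -a hibernatemode 1     # always hibernate instead of just sleeping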

  • SoC: Status report

    It has already been a week since the last SoC-related post, so I owe you a status report.
    Development has continued at a constant rate and, even though I work a lot on the project, it may seem to advance slowly from an external point of view. The thing is that getting the ATF core components complete and right is a tough job! Just look at the current and incomplete TODO list to see what I mean.
    Some things worth noting:
    - The NetBSD cross-build tool-chain no longer requires a C++ compiler to build the atf-compile host tool. I wrote a simplified version in POSIX shell to be used as the host tool alone (not to be installed). This is also used by ATF's distfile to allow "cross-building" its own test programs.
    - Improved the cleanup procedure for the test cases' work directories by handling mount points in them. This is done through a new tool called atf-cleanup.
    - Added a property to let test cases specify whether they require root privileges or not.
    - Many bug fixes, cleanups and new test cases; these are driving development right now.
    On the NetBSD front, there have also been several cosmetic improvements and bug fixes but, most importantly, I've converted the tmpfs test suite to ATF. This conversion is what has spotted many bugs and missing features in ATF's code. The TODO file has grown basically due to this.
    So, at the moment, both the regress/bin and regress/sys/fs/tmpfs trees in NetBSD have been converted to ATF. I think that's enough for now and that I should focus on adding the necessary features to ATF to improve these tests. One of these is support for a configuration file to let the user specify how certain tests should behave; e.g. how to become root or which specific file system to use for certain tests.
    I also have a partial implementation of a "fork" property for test cases, to execute them in subprocesses. This way they will be able to mess all they want with the open file descriptors without disturbing the main test program. But to get there, I first need to clean up the reporting of test case results.
    On the other hand, I have also started preparing manual pages for the user tools, as some of them should remain fairly stable at this point. [Continue reading]

  • Processor speed and desktop usage

    Back on July 7th, I disassembled my MacBook Pro to see if I could easily replace its hard disk with a faster one. I hadn't bought the disk yet because I first wanted to check that the whole process was easy. The thing is that, after a couple of problems, I managed to disassemble it. So I then ran to the local store to buy the new drive. But oh! They didn't have it. I decided not to reassemble the computer, as one of the disassembly steps was quite scary and I didn't want to repeat it unless really necessary.
    Stupid me. It has already been three weeks and they have not yet received any units; I hate them at this point. And yes, I've spent all this time with the laptop partly disassembled, working with external peripherals and without the battery. Which is very annoying because, even though I didn't think I really needed mobility, it is important once you get used to it.
    Anyway. I have been using the machine as usual all these three weeks, and have kept working on my SoC project intensively. Lately, I noticed that my builds were running slower than I remembered: for example, I went away for two hours and when I came back a full NetBSD/i386 release build had not finished yet. That was strange, but I blamed the software: things keep growing continuously, and a change in, e.g., GCC could easily slow down everything.
    But yesterday, based on this thread, I installed CoreDuoTemp because I wanted to see how the processor's frequency throttling behaved. I panicked. The frequency meter was constantly at 1GHz (and the laptop carries a 2.16GHz processor) no matter what I did. Thinking that it'd be CoreDuoTemp's fault, I rebooted into Windows and installed CPU-Z. Same results. For a moment I was worried that the machine could be faulty or that I had broken it in the disassembly process. Fortunately, I later remembered another post that mentioned that MacBook Pros without a battery installed will run with the processor at the minimum speed; it seems to be a firmware bug. Sure enough: I reassembled the machine today — with the old, painful, slow, stupid, ugly, etc. disk! — installed the battery, and all is fine again.
    Why am I mentioning all this, though? Well, the thing is... if it wasn't for the software rebuilds, I wouldn't have noticed any slowdown in typical desktop usage tasks such as browsing the web, reading email, chatting, editing photos or watching videos. And the processor was running at half of its full power! In other words, it confirms to me that extra MHz are worthless for most people. It is "annoying" to see companies throwing away lots of perfectly capable desktop machines, replacing them with more powerful ones that won't be used to their full capacity. (OK, there are other reasons for the switch aside from the machine's speed.)
    Just some numbers. Building ATF inside a Parallels NetBSD/i386 virtual machine took "real 4m42.004s, user 1m20.466s, sys 3m16.839s" without the battery, and with it: "real 2m9.984s, user 0m22.725s, sys 1m39.053s". Here, the speed difference is noticeable :-)
    I will blog again when I have the replacement disk, and will possibly post some pictures of the whole procedure. [Continue reading]

  • Parallels Desktop and VMware Fusion

    Back in February, I bought a copy of Parallels Desktop 2 and have been a very happy user of it since then. However, when Parallels 3 appeared, I hesitated to pre-order it (even at a very low price) and I did well: after it was released, I tried it on my MacBook Pro and its 3D support is useless for me. I could not play either Half-Life 2 or Doom 3 at acceptable speeds, with the former being much worse than the latter in this regard.
    Now I'm evaluating VMware Fusion RC1, and I'm almost convinced to pre-order it. This product is very similar to Parallels and in fact several of its features seem "inspired" by it, such as Unity (Coherence in Parallels speak). But it has some important features that Parallels cannot currently match, namely: support for 64-bit guests, support for 2 virtual CPUs in guests and support for network-booting the virtual machine. All of these are cool from a development point of view. The first two allow one to run some more versions of specific operating systems, such as NetBSD/amd64, as well as enabling SMP support in them. The last one makes it easy to boot development kernels without modifying any virtual disk (I haven't tried this yet, though).
    All is not good, though. Fusion is also supposed to have experimental 3D support inside the guest machines (up to DirectX 8.1). However, when trying to launch Half-Life 2 inside a Windows XP SP2 virtual machine, Windows crashed with a BSOD. At least I could launch it in Parallels, albeit it was simply unplayable. But as neither of the two products makes for a good gaming experience, I personally don't care about this feature.
    Let's conclude with some numbers about the speed of each product. I installed Debian GNU/Linux lenny under Parallels and Fusion and built monotone-0.35 from scratch under them. The virtual machines were configured with 768MB of RAM out of a total of 2GB, and the machine was idle aside from the build jobs. Obviously, in the case of Parallels I could only run the test with the i386 port, but in Fusion I used both the i386 and amd64 ports with 1 and 2 virtual CPUs. I also ran the same tests on the native machine using Mac OS X 10.4.10. The timings only include the make command, not ./configure.
    - Parallels, 32-bit, 1 virtual CPU, 'make': real 17m33.048s, user 14m24.342s, sys 3m4.080s
    - Fusion, 32-bit, 1 virtual CPU, 'make': real 16m35.507s, user 14m57.016s, sys 1m29.134s
    - Fusion, 32-bit, 2 virtual CPUs, 'make -j2': real 10m0.341s, user 17m23.541s, sys 2m12.604s
    - Fusion, 64-bit, 2 virtual CPUs, 'make -j2': real 10m24.617s, user 18m26.985s, sys 1m26.133s
    - Native, Mac OS X, 'make': real 12m50.640s, user 11m12.997s, sys 1m20.344s
    - Native, Mac OS X, 'make -j2': real 7m3.536s, user 11m22.875s, sys 1m26.366s
    See this thread for other opinions. [Continue reading]

  • Death star!

    A cool photo I found today: I think I can say: don't be scared! That seems to be a power adapter so, supposedly, all those plugs are switched off when one of them is connected. If they weren't... this would not pass any quality assurance control... So, it is a pretty nice product :-)
    See the original post for more details.
    Edit (23rd July): Corrected the (invented) title. [Continue reading]

  • SoC: ATF self-testing

    ATF is a program and, as with any application, it must be (automatically) tested to ensure it works according to its specifications. But as you already know, ATF is a testing framework, so... is it possible to automatically test it? Can it test itself? Should it? The thing is: it can and it should, but things are not so simple.
    ATF can test itself because it is possible to define test programs through ATF that check the ATF tools and libraries. ATF should test itself because the resulting test suite will be a great source of example code and because its execution will, on its own, be a good stress test for the framework. See the tests/atf directory to check what I mean; especially, the unit tests for the fs module, which I've just committed, are quite nice :-) (For the record: there are currently 14 test programs in that directory, which account for a total of 60 test cases.)
    However, ATF should not be tested exclusively by means of itself. If it were, any failure (even the most trivial one) in ATF's code could result in false positives or false negatives during the execution of the test suite, leading to wrong results that are hard to discover and diagnose. Imagine, for example, that a subtle bug made test failures be reported as passes. All tests could start to succeed immediately and nobody would easily notice, surely leading to errors in further modifications.
    This is why a bootstrapping test suite is required: one that ensures that the most basic functionality of ATF works as expected, but which does not use ATF to run itself. This additional test suite is already present in the source tree and is written using GNU Autotest, given that I'm using the GNU Autotools as the build system. Check the tests/bootstrap directory to see what all this is about.
    ATF's self-testing is probably the hardest thing I've encountered in this project so far. It is quite tricky and complex to get right, but it's cool! Despite being hard, having a complete test suite for ATF is a must, so it cannot be left aside. Would you trust a testing framework if you could not quickly check that it worked as advertised? I couldn't ;-) [Continue reading]

  • Daggy fixes (in Monotone)

    If you inspect ATF's source code history, you'll see a lot of merges. But why is that, if I'm the only developer working on the project? Shouldn't the revision history be linear?
    Well, the thing is it needn't and it shouldn't; the subtle difference is important here :-) It needn't be linear because Monotone is a VCS that stores history in a DAG, so it is completely natural to have a non-linear history. In fact, distributed development requires such a model if you want to preserve the original history (instead of stacking changes on top of revisions different from the original ones).
    On the other hand, it shouldn't be linear because there are better ways to organize the history. As the DaggyFixes page in the Monotone Wiki mentions: "All software has bugs, and not all changes that you commit to a source tree are entirely good. Therefore, some commits can be considered "development" (new features), and others can be considered "bugfixes" (redoing or sometimes undoing previous changes). It can often be advantageous to separate the two: it is common practice to try and avoid mixing new code and bugfixes together in the same commit, often as a matter of project policy. This is because the fix can be important on its own, such as for applying critical bugfixes to stable releases without carrying along other unrelated changes."
    The key idea here is that you should group bug fixes alongside the original change that introduced them, if it is clear which commit that is and you can easily locate it. And if you do that, you end up with a non-linear history that requires a merge for each bug fix to resolve the divergence within a single branch (the sketch below shows what this looks like in practice).
    I certainly recommend reading the DaggyFixes page. One more reason to switch to Monotone (or any other DAG-based VCS, of course)? ;-) Oh, I now notice I once blogged about this same idea, but that page is far clearer than my explanation.
    That is why you'll notice lots of merges in the ATF source tree: I've started applying this methodology to see how well it behaves and I find it very interesting so far. I'd now hate switching to CVS and losing all the history for the project (because attempting to convert it to CVS's model could be painful), even if that history is not that interesting. [Continue reading]
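    The promised sketch of a daggy fix with Monotone follows; the revision id and the file name are obviously made up, and the commands are only an outline of the workflow, not a recipe:
        $ mtn update -r 1a2b3c4d       # go back to the revision that introduced the bug
        $ vi buggy-file.cc             # fix the bug right on top of that old revision
        $ mtn commit -m "Fix the bug introduced by revision 1a2b3c4d."
        $ mtn merge                    # merge the fix head and the development head
        $ mtn update                   # continue working from the merged revision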

  • Recovering two old Macs

    Wow, it has already been three years since a friend and I found a couple of old Macintoshes in a trash container1. Each of us picked one, and maybe a year ago or so I gave mine to him as I had no space at home to keep it. Given that he did not use them and that I enjoy playing with old hardware, I exchanged those two machines for an old Pentium 3 I had lying around :-) The plan is to install NetBSD-current on at least one of them and some other system (or NetBSD version) on the other one to let me ensure ATF is really portable to bizarre hardware (running sane systems, though).

    The machines are these:

    A Performa 475: Motorola 68040 LC, 4MB of RAM, 250MB SCSI hard disk, no CD-ROM, Ethernet card.

    A Performa 630: Motorola 68040 LC, 40MB of RAM, 500-something IDE hard disk (will replace it with something bigger), CD-ROM, Ethernet card.

    I originally kept the Performa 630 and already played with it when we found the machines. Among other things, I replaced the PRAM battery with a home-grown solution, added support to change NetBSD's console colors (because the black-on-white default on NetBSD/mac68k is annoying, to say the least) and imported the softfloat support for this platform.

    Then, the Performa 475's turn came last week. When I tried to boot it, it failed miserably. I could hear the typical Mac boot-time chime, but after that the screen was black and the machine was completely unresponsive. After Googling a bit, I found that the black screen could be caused by the dead PRAM battery, but I assumed that the machine could still work; the thing is I could not hear the hard disk at all, and therefore was reluctant to put a new battery in it. Anyway, I finally bought the battery (very expensive, around 7€!), put it in and the machine booted!

    Once it was up, I noticed that there was a huge amount of software installed: Microsoft Office, LaTeX tools, Internet utilities (including Netscape Navigator), etc. And then, when checking what hardware was in the machine, I was really, really surprised. All these programs were working with only 250MB of hard disk space and 4MB of RAM! Software bloat nowadays? Maybe...

    Well, if I want this second machine to be usable, I'll have to find some more RAM for it. But afterwards I hope it'll be able to run another version of NetBSD or maybe a Linux system.

    1 That also reminds me that this blog is three years old too! [Continue reading]

  • SoC: Web site for ATF

    While waiting for a NetBSD release build to finish, I've prepared the web site for ATF. It currently lacks information in a lot of areas, but the most important ones for now — the RSS feed for news and the Repository page — are quite complete.

    Hope you like it! Comments welcome, of course :-) [Continue reading]

  • SoC: Converting NetBSD 'regress' tests

    I've finally gotten to a point where I can start converting some of the current regression tests in the NetBSD tree to use the new ATF system. To prove this point, I've migrated all the tests that currently live in regress/bin to the new framework. They all now live in /usr/tests/util/. This has not been a trivial task — and it is not completely done yet, as there still are some rough edges — but I'm quite happy with the results. They show me that I'm on the right track :-) and, more importantly, they show outsiders how things are supposed to work.

    If you want more information on this specific change you can look at the revision that implements it or simply inspect the corresponding patch file. By the way, some of the tests already fail! That's because they were not run often enough in the past, something that ATF is supposed to resolve.

    While waiting for a NetBSD release build to complete, I have started working on a real web site for ATF. Don't know if I'll keep working on it now because it's a tough job and there is still a lot of coding to do. Really! But, on the other hand, having a nice project page is very good marketing. [Continue reading]

  • Book: Producing Open Source Software

    This year, Google sent all the Summer of Code students the Producing Open Source Software: How to run a successful free software project book by Karl Fogel (ISBN 0-596-00759-0) as a welcome present.

    I've just finished reading it and I can say that it was a very nice read. The book is very easy to follow and is very complete: it covers areas such as the project's start-up, how to set things up to promote it, how to behave on mailing lists, how to prepare releases, how to deal with volunteers or with paid developers, etc. Everything you need to drive your project correctly and without making many enemies.

    While many of the things stated in the book are obvious to anyone who has been in the open source world for a while (and has already started a project of their own or contributed to an existing one), it is still a worthwhile read. I wish all the people involved in NetBSD (some more than others) would read it and apply the suggestions given there. We'd certainly improve in many key areas and reduce pointless (or better said, unpleasant) discussions!

    Oh, and by the way: you can read the book online at its web page, as it is licensed under a Creative Commons Attribution-ShareAlike license. Kudos to Karl Fogel. [Continue reading]

  • SoC: Code is public now

    Just in time for the mid-term evaluation (well, with one day of delay), I've made atf's source code public. This is possible thanks to the public and free monotone server run by Timothy Brownawell. It's nice to stay away from CVS ;-)

    See the How to get it section on the atf page for more details on how to download the code from that server and how to trust it (it may take an hour or two for the section to appear). You can also go straight to the source code browser. If, by any chance, you decide to download the code, be sure to read the README files as they contain very important information.

    And... don't start nitpicking just yet! I haven't had a chance to clean up the code, and some parts of it really suck. Cleaning it up is the next thing I'll be doing, and I have already started with the shell library :-) [Continue reading]
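    For reference, fetching a project from a Monotone netsync server generally looks like the following; SERVER and BRANCH are placeholders here, not the real values (those are listed in the How to get it section):

        mtn --db=atf.mtn db init
        mtn --db=atf.mtn pull SERVER "BRANCH*"
        mtn --db=atf.mtn checkout --branch=BRANCH atf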

  • Degree completed

    After five years of intensive work, I've finally completed my degree in Informatics Engineering (I think Computer Science is a valid synonym for that) at the FIB Faculty. The process concluded today when I defended my PFC, the final project of the degree. So you can now call me an engineer :-) Yay!

    In other words: I'm free until October, when I'll start a Masters in Computer Architecture, Networks and Systems (CANS). Time to work intensively on SoC! [Continue reading]

  • SoC: Short-term planning

    SoC 2007's mid-term evaluation is around the corner. I have to present some code on June 9th. In fact, it would already be public if we used a decent VCS instead of CVS, but for now I'll keep the sources in a local monotone database. We'll see how they'll be made public later on.

    Summarizing what has been done so far: I've already got a working prototype of the atf core functionality. I have to confess that the code is still very ugly (and with that I mean you really don't want to see it!) and that it is incomplete in many areas, but it is good enough as a "proof of concept" and a base for further incremental improvement. What it currently allows me to do is:

    Write test programs (a collection of test cases) in C. In reality it is C++, as we already saw, but I've added several macros to hide the internals of the library and simplify the definition of test cases. So basically the test writer will not need to know that he's using C++ under the hood. (This is also to mimic the shell-based interface as much as possible.)

    Write test programs in POSIX shell. Similar to the above, but for tests written as shell scripts. (Which I think will be the majority.)

    Define a "tree of tests" in the file system and recursively run them all, collecting the results in a single log. This can be done without the source tree or the build tools (in particular, make), and by the end user.

    Write many tests for atf itself. More on this in tomorrow's post.

    What I'm planning to do now, before the mid-term evaluation deadline, is to integrate the current code into the NetBSD build tree (not talking about adding it to the official CVS yet, though) to show how all these ideas are applicable to NetBSD testing, and to ensure everything can work with build.sh and cross-compilation.

    Once this is done, which shouldn't take very long I hope, I will start polishing the current atf core implementation. This means rewriting several parts of the code (especially to make it more error-safe), adding more tests, adding manual pages for the tools and the interfaces, etc. This is something I'm willing to do, even though it'll be a hard and long job. [Continue reading]
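    To give an idea of what the shell-based interface is aiming for, here is a sketch written against the interface that later atf-sh releases ended up exposing; the exact function names in this early prototype may well have been different:

        atf_test_case echo_works
        echo_works_head() {
            atf_set "descr" "Checks that echo prints its argument"
        }
        echo_works_body() {
            atf_check -o match:"hello" echo hello
        }
        atf_init_test_cases() {
            atf_add_test_case echo_works
        }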

  • New Processor preferences panel in Mac OS X

    Some days ago I updated my system to the latest version of Mac OS X Tiger, 10.4.10. It wasn't until today that I realized that there is a cool new preferences panel called Processor. As you can see, it gives information about each processor in the machine and also lets you disable any processor you want.

    There is also another "hidden" window, accessible from the menu bar control after you have enabled it. It is called the Processor palette. I already monitor the processor activity by using the Activity Monitor's dock icon, which is much more compact, but this one is nice :-)

    Edit (16:22): Rui Paulo writes in a comment that this is available if you install Xcode. It turns out I have had Xcode installed for ages, but my installation did not contain the CHUD tools. I recently added them to the system, which must be the reason behind this new item in the system preferences. So... this is not related to the 10.4.10 update I mentioned at first. [Continue reading]

  • SoC: The atf-run tool

    One of the key goals of atf is to let the end user — not only the developer — easily run the tests after their corresponding application is installed. (In our case, application = NetBSD, but remember that atf also aims to be independent of NetBSD.) This also means, among other things, that the user must not need to have any development tools installed (the comp.tgz set) to run the tests, which rules out using make(1)... how glad I am of that! :-)

    Based on this idea, each application using atf will install its tests alongside its binaries, the current location being /usr/tests/<application>. These tests will be accompanied by a control file — an Atffile — that lists which tests have to be run and in which order. (In the future this may also include configuration or some other settings.) Later on, the user will be able to launch the atf-run tool inside any of these directories to automatically run all the provided tests, and the tool will generate a pretty report while the tests are run.

    Given that atf is an application, it has to be tested. After some work today, it is finally possible for atf to test itself! :-) Of course, it also comes with several bootstrap tests, written using GNU Autotest, to ensure that atf's core functionality works before one can run the tests written using atf itself. Otherwise one could get unexpected passes due to bugs in the atf code.

    This is what atf installs:

        $ find /tmp/local/tests
        /tmp/local/tests
        /tmp/local/tests/atf
        /tmp/local/tests/atf/Atffile
        /tmp/local/tests/atf/units
        /tmp/local/tests/atf/units/Atffile
        /tmp/local/tests/atf/units/t_file_handle
        /tmp/local/tests/atf/units/t_filesystem
        /tmp/local/tests/atf/units/t_pipe
        /tmp/local/tests/atf/units/t_pistream
        /tmp/local/tests/atf/units/t_postream
        /tmp/local/tests/atf/units/t_systembuf
        $

    All the t_* files are test programs written using the features provided by libatf. As you can see, each directory provides an Atffile which lists the tests to run and the directories to descend into.

    The atf-run tool already works (*cough* its code is ugly, really ugly) and returns an appropriate error code depending on the outcomes of the tests. However, the report it generates is completely unreadable. This will be the next thing to attack: I want to be able to generate plain-text reports that can be read as the tests progress, but also pretty HTML files. To do the latter, the plan is to use some intermediate format such as XML and have another tool do the formatting. [Continue reading]
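    With that layout in place, running the whole installed test suite is meant to boil down to something like this (remember that the report format is still in flux):

        cd /tmp/local/tests/atf && atf-run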

  • SoC: Prototypes for basename and dirname

    Today, I've attempted to build atf on a NetBSD 4.0_BETA2 system I've been setting up on a spare box I had around, as opposed to the Mac OS X system I'm using for daily development. The build failed due to some well-understood problems, but there was an annoying one with respect to some calls to the standard XPG basename(3) and dirname(3) functions.

    According to the Mac OS X manual pages for those functions, they are supposed to take a const char * argument. However, the NetBSD versions of these functions take a plain char * parameter instead — i.e., not a constant pointer. After Googling for some references and with advice from joerg@, I got the answer: it turns out that the XPG versions1 of basename and dirname can modify the input string by trimming trailing directory separators (even though the current implementation in NetBSD does not do that). This makes no sense to me, but it's what the XPG4.2 and POSIX.1 standards specify.

    I've resolved this issue by simply re-implementing basename and dirname (which I wanted to do anyway), making my own versions take and return constant strings. And to make things safer, I've added a check to the configure script that verifies whether the basename and dirname implementations take a constant pointer and, in that (incorrect) case, uses the standard functions to sanity-check the results of my own by means of an assertion.

    1 Of course, the GNU libc library provides its own variations of basename and dirname. However, including libgen.h forces the usage of the XPG versions. [Continue reading]
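    The rough idea behind that configure check is the following; this is only a sketch of the probe, not the actual code in atf's configure.ac:

        # Try to pass a 'const char *' to basename(3) using the C++ compiler,
        # which rejects the call outright if the prototype takes 'char *'.
        echo '#include <libgen.h>' > conftest.cpp
        echo 'int main(void) { const char *p = "/a/b"; return basename(p) == 0; }' >> conftest.cpp
        if ${CXX:-c++} -c conftest.cpp -o conftest.o >/dev/null 2>&1; then
            echo "basename takes const char * (safe to cross-check my version against)"
        else
            echo "basename takes char * (the XPG behavior)"
        fi
        rm -f conftest.cpp conftest.o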

  • SoC: Start of the atf tools

    Aside from the libraries I already mentioned in a past post, atf1 will also provide several tools to run the tests. An interesting part of the problem, though, is that many tests will be written in the POSIX shell language, as that will be much easier than writing them in C or C++: the ability to rapidly prototype tests is a fundamental design goal; otherwise nobody could write them!

    However, providing two interfaces to the same framework (one in POSIX shell and one in C++) means that there could be a lot of code duplication between the two if not done properly. Not to mention that sanely and safely implementing some of these features in shell scripting could be painful. In order to resolve this problem, atf will also provide several binary tools that will serve as helpers for the shell scripts. Most of these tools will be installed in the libexec directory as they should not be exposed to the user, yet the shell scripts will need to be able to reach them. The key idea will be to later build the shell interface on top of the binary one, reusing as much code as possible.

    So far I have the following tools:

    atf-config: Used to dynamically query information about atf's installation. This is needed to let the shell scripts locate where the tools in libexec can be found (because they are not in the path!).

    atf-format: Pretty-prints a message (single- or multi-paragraph), wrapping it on terminal boundaries.

    atf-identify: Calculates a test program's identifier based on where it is placed in the file system. Test programs will be organized in a directory hierarchy, and each of them has to have a unique identifier.

    The next one to write, hopefully, will be atf-run: the tool to effectively execute multiple test programs in a row and collect their results.

    Oh, and in case you are wondering: yes, I have decided to provide each tool as an independent binary instead of a big one that wraps them all (such as cvs(1)). This is to keep them as small as possible — so that shell scripts can load them quickly — and because this seems to be more in line with the traditional Unix philosophy of having tools that do very specific things :-)

    1 Should I spell it atf, ATF or Atf? [Continue reading]
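    Just to illustrate how a shell test is expected to reach these helpers; note that the variable queried from atf-config below, and the way atf-format takes its input, are guesses on my part rather than the definitive interfaces:

        libexecdir=$(atf-config atf_libexecdir)      # ask atf-config where the helpers live
        "${libexecdir}/atf-format" "A long message that will be wrapped to the terminal's width."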

  • SoC: A quote

    I've already spent a bunch of time working on the packaging (as in what will end up in the .tar.gz distribution files) of atf, even though it is still in very preliminary stages of development. This involved:

    Preparing a clean and nice build framework, which due to the tools I'm using meant writing the configure.ac and Makefile.am files. This involved adding some useful comments and, despite being familiar with these tools, re-reading part of the GNU Automake and GNU Autoconf manuals; this last step is something that many, many developers bypass, and therefore they end up with really messy scripts, as if those files weren't important. (Read: if the package used some other tools, there'd be no reason not to write pretty and clean build files.)

    Preparing the package's documentation, or at least placeholders for it: I'm referring to the typical AUTHORS, COPYING, NEWS, README and TODO documents that many developers seem to treat as garbage and fill up at the last minute before rushing out a release, ending up with badly formatted texts full of typos. Come on, that's the very first thing a user will see after unpacking a distribution file, so these ought to be pretty!

    Having spent a lot of time packaging software for pkgsrc and dealing with source code from other projects, I have to say that I've found dozens of packages that do not have the minimum quality one can expect in the above points. I don't like to point fingers, but I have to: this includes several GNOME packages and libspe. This last one is fairly "interesting" because the user documentation for it is high-quality, but all the credibility slips away when you look at the source code packaging...

    To all those authors: “Programs should be written and polished until they acquire publication quality.” — Niklaus Wirth [Continue reading]
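    None of this is atf-specific, but for reference, the standard autotools round-trip that keeps those build and documentation files honest looks like this:

        autoreconf -i -s     # regenerate configure and Makefile.in from configure.ac / Makefile.am
        ./configure
        make distcheck       # build, run the checks and verify that the .tar.gz is complete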

  • SoC: Project name

    The automated testing framework I'm working on is a project idea that has been around for a very long time. Back in SoC 2005, this project was selected but, unfortunately, it was not developed. At that time, the project was named regress, a name derived from the name currently used in the NetBSD source tree to group all available tests: the src/regress directory.

    In my opinion, the "regress" name was not very adequate because regression tests are just one kind among all possible tests: those that detect whether a feature that was supposed to be working has started to malfunction. There are other kinds of tests, such as unit tests, integration tests and stress tests, all of which seemed to be excluded from the project just because of its name.

    When I wrote my project proposal this year, I tried to avoid the "regression testing" name wherever possible and, instead, simply used the word "testing" to emphasize that the project was not focusing on any specific test type. Based on that, the NetBSD-SoC administrators chose the atf name for my project, which stands for Automated Testing Framework. This is a very simple name and, even though it cannot be easily pronounced, I don't dislike it: it is short, feels serious and clearly represents what the project is about.

    And for the sake of completeness, let me mention another idea I had for the project's name. Back when I proposed it, I thought it could be named "NetBSD Automated Testing Framework", which could then be shortened to nbatf or natf (very similar to the current name, eh?). Based on the latter name, I thought... the "f" makes it hard to pronounce, so it'd be reduced to "nat", and then, it could be translated to the obvious (to me) person name that derives from it: Natalie. That name stuck in my head for a short while, but it doesn't look too serious for a project name, I guess ;-) But now, as atf won't be tied to NetBSD, that doesn't make much sense anyway. [Continue reading]

  • SoC: Getting started

    This weekend I have finally been able to start coding for my SoC project: the Automated Testing Framework for NetBSD. To my regret, this has been delayed too much... but I was so busy with my PFC that I couldn't find any other chance to get my hands on it.

    I've started by working on two core components:

    libatf: The C/C++ library that will provide the interface to easily write test cases and test suites. Among its features will be the ability to report the results of each test, a way to define meta-data for each test, etc.

    libatfmain: A library that provides a default entry point for test programs that takes care of running the test cases in a controlled environment — e.g. it captures all signals and deals with them gracefully — and provides a standard command-line interface for them.

    Soon after I started coding, I realized that I might need to write my own C code to deal with data collections and safe strings. I hate that, because it is a very boring task — it is not related to the problem at hand at all — and because it involves reinventing the wheel: virtually all other languages provide these two features for free. But wait! NetBSD has a C++ compiler, and atf won't be a host tool1. So... I can take advantage of C++, and I'll try to. Calm down! I'll try to avoid some of the "complex" C++ features as much as possible to keep the executables' size small enough. You know how binaries' sizes blow up when using templates... Oh, and by the way: keep in mind that test cases will typically be written in POSIX shell script, so in general you won't need to deal with the C++ interface.

    Furthermore, I see no reason for atf to be tied to NetBSD. The test cases will surely be, but the framework needn't. Thus I'm thinking of creating a standalone package for atf itself and distributing it as a completely independent project (under the TNF2 umbrella), which will later be imported into the NetBSD source tree as we currently do for other third-party projects such as Postfix. In fact, I've already started working in this direction by creating the typical infrastructure to use the GNU autotools. Of course this separation could always be done at a later step in the development, but doing it from the very beginning ensures the code is free of NetBSD-isms, emphasizes the desire for portability and keeps the framework self-contained.

    I'd like to hear your comments about these "decisions" :-)

    1 A host tool is a utility that is built with the operating system's native tools instead of with the NetBSD tool-chain: i.e. host tools are what build.sh tools builds. Such tools need to be highly portable because they have to be accepted by old compilers and bizarre build environments.

    2 TNF = The NetBSD Foundation. [Continue reading]

  • Ohloh, an open source directory

    A friend has just told me about Ohloh, a web site that analyzes the activity of open source projects by scanning their source repositories. It is quite nice! It generates statistics about the recent activity of each registered project, the languages they use, the people working on them... And, for each developer, it accumulates statistics about their work on the different projects they have contributed to, automatically building a developer profile.

    You can add your own projects to the site, which is a very easy procedure, and create an account to have your own profile, which is useful to merge all your contributions to various projects under a single person. I.e. if you have contributed to one of the registered projects and search for yourself, the site will return some hits; if you have an account, you can claim that you are that person and link your contributions to multiple projects into a single page.

    Check out my account for an example :-)

    Edit (19:20): Added the detailed widget. [Continue reading]

  • tmpfs added to FreeBSD

    A bit more than a year ago, I reported that tmpfs was being ported to FreeBSD from NetBSD (remember that tmpfs was my Google SoC 2005 project and was integrated into NetBSD soon after the program ended). And Juan Romero Pardines has just brought to my attention that tmpfs is already part of FreeBSD-current! This is really cool :-)

    The code was imported into FreeBSD-current on the 16th as seen in the commit mail, so I suppose it will be part of the next major version (7.0). I have to thank Rohit Jalan, Howard Su and Glen Leeder for their efforts in this area. Some more details are given in their TMPFS wiki page.

    Edit (June 23): Mentioned where tmpfs is being ported from! [Continue reading]

  • Six months with the MacBook Pro

    If memory serves me well, today marks six months since I got my MacBook Pro and, during this period, I have been using it as my sole computer. I feel it is a good time for another mini-review.

    Well... to get started: this machine is great; I probably haven't been happier with any other computer before. I have been able to work on real stuff — instead of maintaining the machine — during these months without a hitch. Strictly speaking, I've had a couple of problems... but those were "my fault" for installing experimental kernel drivers.

    As regards the machine's speed, which I think is the main reason why I wanted to write this post: it is pretty impressive considering it is a laptop. With a good amount of RAM, programs behave correctly and games can be played at high quality settings with a decent FPS rate. But, and everything has a "but": I really, really, really hate its hard disk (a 160 GB, 5400 RPM drive). I cannot stress that enough. It's slow. Painfully slow under medium load. Seek times are horrible. That alone makes me feel I'm using a 10-year-old machine. I'm waiting for the shiny new, big 7200 RPM drives to become a bit easier to purchase and will make the switch, even if that means my battery life will be a bit shorter.

    About Mac OS X... what can I say that you don't already know. It is very comfortable for daily use — although that's very subjective, of course; it's quite "funny" to read some reviews that blame OS X for not behaving exactly like Windows — and, being based on Unix, allows me to do serious development with a sane command-line environment and related tools. Parallels Desktop for Mac is my preferred tool so far, as I can seamlessly work with Windows-only programs and do Linux/NetBSD development, but other free applications are equally great; some worth mentioning: Adium X, Camino or QuickSilver.

    At last, sometimes I miss having a desktop computer at home because watching TV series/movies on the laptop is rather annoying — I have to keep adjusting the screen's position so it's properly visible when lying in bed. I can imagine that an iMac with the included remote control and Front Row could be awesome for this specific use.

    All in all, don't hesitate to buy this machine if you are considering it as a laptop or desktop replacement. But be sure to pick the new 7200 RPM drive if you will be doing any slightly intensive disk operations. [Continue reading]

  • PFC report almost ready

    The deadline for my PFC (the project that will conclude my computer science degree) is approaching. I have to hand in the final report next week and present the project on July 6th. Its title is "Efficient resource management in heterogeneous multiprocessor systems" and its basic goal is to inspect the poor management of such machines in current operating systems and how this situation could be improved in the future.

    Our specific case study has been the Cell processor, the PlayStation 3 and Linux, as these form a clear example of a heterogeneous multiprocessor system that may become widespread due to its relatively cheap price and the attractive features (gaming, multimedia playback, etc.) it provides to a "home user".

    Most of the project has been an analysis of the current state of the art and the proposal of ideas at an abstract level. Due to timing constraints and the complexity of the subject (should I also mention bad planning?), I have been unable to implement most of them even though I wanted to do so at the very beginning. The code I've written is so crappy that I won't be sharing it anytime soon, but if there is interest I might clean it up (I mean, rewrite it from the ground up) and publish it to a wider audience.

    Anyway, to the real point of this post. I've published an almost definitive copy of the final report so that you can take a look at it if you want to. I will certainly welcome any comments you have, be it mentioning bugs, typos, wrong explanations or anything else! Feel free to post them as comments here or to send me a mail, but do so before next Monday as that's the deadline for printing. Many thanks in advance if you take the time to do a quick review!

    (And yes... this means I'll be completely free from now on to work on my SoC project, which has been delayed too much already...)

    Edit (Oct 17th): Moved the report on the server; fixed the link here. [Continue reading]

  • NetBSD's website redesign

    Even though I don't usually repost general NetBSD news, I would like to mention this one: the NetBSD web site has received a major facelift aimed at improving its usability and increasing the consistency among its pages. Many thanks to Daniel Sieger for his perseverance and precious work. This is something that had been attempted many times in the past but raised so many bikesheds that it was never accomplished.

    In case you would like to contribute to the project by doing something relatively easy, you can do so now. It could be interesting to revamp many of the existing pages to be more user friendly by reorganizing their contents (simplification is good sometimes!) and their explanations, and by making better use of our XML-based infrastructure. Keep in mind that the web site is the main "entry point" to a project and newcomers should feel very comfortable with it; otherwise they will go away in less than a minute!

    Furthermore, it'd be nice to see if there are any plain HTML pages left and convert them to XML. This could make all those pages automatically use the new look of the site and integrate better with it. (If you don't know what I mean, just click, for example, on the Report or query a bug link at the top of the front page. It looks ugly; very ugly. But unfortunately, this is not as simple as converting the page to XML because it is automatically generated by some script.)

    Send your feedback to www AT NetBSD.org or to the netbsd-docs AT NetBSD.org public list. [Continue reading]

  • Compiler-level parallelization and languages

    Some days ago, Intel announced a new version of their C++ and Fortran compilers. According to their announcement:

        Application performance is also accelerated by multi-core processors through the use of multiple threads.

    So... as far as I understand, and as some other news sites mention, this means that the compiler tries to automatically parallelize a program by creating multiple threads; the code executed on each thread is decided at build time through some algorithm that deduces which blocks of code can be executed at the same time.

    If this is true — I mean, if I understood it correctly — it sounds great, but I don't know to what level the compiler is able to extract useful information from code blocks in either C++ or Fortran. These two languages follow the imperative programming paradigm: a program written in them describes, step by step, how the machine must operate. In other words: the program specifies how a specific type of machine (a load/store one) must behave in order to compute a result, rather than describing the result itself.

    Extracting parallelization information from this type of language seems hard, if not impossible, except for very simple cases. What's more, most imperative languages are not protected against side effects: there is a global state that is accessible from any part of the program, which means that you cannot predict how a specific call will change this global state. In terms of functions: a function with a specific set of parameters can return different values on each call, because it can store auxiliary information in global variables.

    It seems to me that functional languages are much better suited to this kind of compiler-level parallelization than imperative ones. In a functional language, the program describes how to compute a result at an abstract level, not how to reach that result on a specific type of machine. The way the compiler arrives at that result is generally irrelevant. (If you know SQL, it has the same properties: you describe what you want to know through a query but you don't know how the database engine will handle it.) Furthermore, and this is important, purely functional languages such as Haskell do not have side effects as long as you don't use monads. So what does this mean? A function, when called with a specific set of parameters, will always return the same result. (So yes, the compiler could, and possibly does, trivially apply memoization.)

    With these characteristics, a compiler for a functional language could do much more to implement compiler-level parallelization. Each call to a function could be analyzed to see which other functions it calls, thus generating a call graph; later on, the compiler could decide which subset of this graph is sent to each computing core (i.e. placed on an independent thread) and merge the results between threads when it got to processing the top-level function it split. So if you had an expression such as:

        foo = (bar 3 4) + (baz 5 6)

    the compiler could prepare two threads, one to compute the result of bar 3 4 and one to calculate baz 5 6. At the end, after the two threads had finished, it could do the sum. Of course, bar and baz would have to be "large" enough to compensate for the time spent creating and managing the threads.

    Anyway, what I wanted to emphasize is that depending on the language you choose, doing specific types of code analysis and optimization can be much easier and, of course, much better.

    To conclude, and as I'm talking about Haskell, I'd like to suggest reading the article "An introduction to Haskell, part 1" recently published at ONLamp.com. It ends by talking about this idea a bit. [Continue reading]
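    The same idea can be mimicked by hand at the process level with plain shell, just to make the data flow explicit; bar and baz here stand for hypothetical expensive commands, not real programs:

        bar 3 4 > /tmp/bar.out &      # compute the two independent halves concurrently
        baz 5 6 > /tmp/baz.out &
        wait                          # join: wait for both computations to finish
        echo $(( $(cat /tmp/bar.out) + $(cat /tmp/baz.out) ))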

  • Flattening an array of arrays

    This evening a friend asked me if I knew how to easily flatten an array of arrays (AoA from now on) in Perl 5. What that means is, basically, to construct a single array that contains the concatenation of all the arrays inside the AoA.

    My first answer was: "foldr", but I knew beforehand that he wouldn't like it because... this is Haskell. After some time we came to the conclusion that there is no trivial way to flatten an AoA in Perl 5, even though Perl 6 includes a built-in function to do so. He ended up using this code to resolve the problem:

        my @ordered = map { @$_ } values %arches;

    Ew, how ugly. Anyway, as part of the discussion, I then continued on my first answer just to show him how versatile functional programming is. And I said, hey, look at this nice example:

        Hugs> foldr (++) [] [[1,2], [3,4]]
        [1,2,3,4]

    His answer: oh well, but it is easier in Ruby: just use the built-in ary.flatten function. Hmm... but why would I need a built-in function in Haskell when I can just redefine it in a trivial single line?

        flatten = foldr (++) []

    There you go, you can now flatten as many AoAs as you want! (Huh, no parameters? Well, you don't need to name them.) Isn't functional programming great? ;-)

    PS: I know nothing about Ruby, but I bet you can write a very similar definition using this or other non-functional languages. I remember someone explaining somewhere (yeah, that's very specific) that Ruby has some patterns that resemble functional programming. So yes, you can probably define a flatten function by using something that looks like foldr, but that might look out of place in an imperative language. (It would be great to know for sure!)

    Edit (June 9th): Added a link to my friend's blog. [Continue reading]

  • Is assembly code faster than C?

    I was reading an article the other day and found an assertion that bugged me. It reads:

        System 6.0.8 is not only a lot more compact since it has far fewer (mostly useless) features and therefore less code to process, but also because it was written in assembly code instead of the higher level language C. The lower the level of the code language, the less processing cycles are required to get something done.

    It is not the first time I've seen someone claim that writing programs in assembly by hand makes them faster, and I'm sure it won't be the last. This assertion is, simply put, wrong.

    Back in the (good?) old days, processors were very simple: they fetched an instruction from main memory, executed it and, once finished (and only then), fetched the next instruction and repeated the process. On the other hand, compilers were very primitive and their optimization engines were, I dare say, non-existent. In such a scenario, a good programmer could really optimize any program by writing it in assembly instead of in a high-level language: he was able to understand very well how the processor internally behaved and what the outcomes of each machine-level instruction were. Furthermore, he could get rid of all the "bloat" introduced by a compiler.

    Things have changed a lot since then. Today's processors are very complex devices: they have a very deep execution pipeline that, at a given time, can be executing dozens of instructions at once. They have powerful branch prediction units. They reorder instructions at run time and execute them in an out-of-order way (provided they respect the data dependencies among them). There are memory caches everywhere. So... it is, simply put, almost impossible for a programmer's brain to keep track of all these details and produce efficient code. (And even if he could, the efficiency would be so tied to a specific microprocessor version that it'd be useless in all other cases.)

    Furthermore, compilers now have much better optimization stages than before and are able to keep track of all these processor-specific details. For example, they can reorder instructions on their own or insert prefetching operations at key points to avoid cache misses. They can really do a much better job of converting code to assembly than a programmer would in most cases. But hey! Of course it is still possible and useful to manually write optimized routines in assembly language — to make use of SIMD extensions, for example — but these routines tend to be as short and as simple as possible.

    So, summarizing: it no longer makes sense to write big programs (such as a complete operating system) in assembly language. Doing that means you lose all the portability gains of a not-so-high-level language such as C, and you will probably do a worse optimization job than a compiler would. Plus well-written and optimized C code can be extremely efficient, as this language is just a very thin layer over assembly.

    Oh, and back to the original quote. It would have made sense to mention the fact that the Mac Plus was written in assembly if it had been compared with another system of its epoch written in C. In that case, the argument would have been valid because compilers were much worse than they are today and processors were simpler. Just remember that such an assertion is, in general, not true any more. [Continue reading]

  • Mac tutorials at ScreenCasts Online

    I've recently subscribed to (the free version of) ScreenCasts Online based on some comments I read somewhere. This is a video podcast that explains tips and tricks for the Mac, and presents third-party software — either commercial or free — in great detail, which is ideal if you are planning to purchase some specific commercial program.

    The typical show starts by presenting a problem to be resolved or by directly talking about the specific program to be presented. It is then followed by a detailed inspection of the user interface and some sections that exemplify common tasks. At the very end, it gives pointers to either fetch or buy the program. I have to confess that I find some of these shows to be excessively detailed, to the point of becoming boring. But they are still a good way to see all the possibilities a given program can offer.

    "Thanks" to them, I've fallen in love with OmniGraffle and OmniPlan ;-) Pity they are so expensive, because I won't be paying that amount of money for my extremely modest needs. [Continue reading]

  • Keeping pkgsrc packages up to date

    drio asks in the suggestion box what the best way is to keep all the packages installed from pkgsrc up to date. I must confess that pkgsrc is quite weak in the updating area when compared to systems such as apt-get or yum. The problem comes from the fact that pkgsrc is a source-based packaging system, meaning that the end user builds packages by himself most of the time. Doing updates in such a system is hard because rebuilds take a long time and have high chances of breaking, leaving your system in an unusable state. Of course there is support for binary packages in pkgsrc, but we are not doing a good job of providing good sets of prebuilt binaries. Furthermore, and as drio stated, there is little documentation on the subject.

    The thing is, there are several ways of updating all your installed packages. All of them are quite tedious and not "official", but with some work you can configure some scripts and cron jobs to automate the process as much as possible.

    Before doing an update, I usually start by running pkg_chk -u -n; this tells me which packages are out of date and what their new versions are. If the resulting list is short, I tend to follow the make replace procedure. This only works if the new versions of the packages are binary compatible with the old ones, something that you cannot guarantee by looking at the version numbers. For example, you can assume that if you have version X.Y.A of the libfoo library, the newer X.Y.B will be compatible with the old one. This is generally true, but not always. Plus you need to have some knowledge of the dependency graph of your installed packages. Anyway, if you want to take the risk, simply go to the pkgsrc directories of the outdated packages and run make replace in them. In most cases this works and is the fastest way to do minor updates.

    Things get worse when you have to update lots of stuff. The first and most obvious approach resorts to doing a clean reinstall: start by issuing pkg_delete -r "*" followed by wiping /usr/pkg and /var/db/pkg, then rebuild your packages. The problems with this approach are that it introduces a huge downtime in the system — until you have rebuilt everything (which can take a long time), the tools won't be available — and that any build failure can prevent you from reconfiguring your system soon enough.

    Another approach involves using different installation prefixes for the old and new installations. I used to do that when working on major GNOME updates. To do this, set LOCALBASE to something like /usr/pkg-YYYYMMDD (similarly for PKG_SYSCONFBASE, VARBASE and PKG_DBDIR), where YYYYMMDD is the date when you started the installation of that specific set of packages. Then install your packages as usual and finally create a /usr/pkg symlink pointing to the real directory. Do not change the date until you need to do major updates. When that time comes, change the date in your configuration to the current day; after that, pkgsrc will think that you don't have any packages installed, so you can cleanly reinstall everything. Once you have finished installing all your packages again, update the /usr/pkg symlink to point to the new directory and remove the old one. Voila: minimum downtime, and build failures cannot bother you. (However, you will need to migrate configuration files to the new tree, for example.)

    The last approach I can think of involves using pkg_comp. Use this tool to configure a sandbox in which you build binary packages for all the stuff you are interested in. You can even set up a cron job to do this rebuild weekly, for example, which is trivial using the tool's auto target. Once pkg_comp has generated a clean set of binary packages for you, you can proceed to update your real system with those packages. The way you proceed is up to you, though. You can remove everything and do a clean reinstall (which should be a quick process anyway because you needn't rebuild anything!) or use pkg_add -u for the outdated packages. I think this is the safest way to proceed.

    Oh, I now notice that there is a pkg_rolling-replace utility that can also be used for updates. Dunno how it works though. Hope this makes some sense!

    Edit (22:15): Peter Bex refers us to the How to upgrade packages page in the unofficial NetBSD Wiki. It contains all these tricks plus many more, so it is worth linking from here for completeness. [Continue reading]
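    To put the first recipe together, a minor update session typically looks something like this (libfoo is, of course, a made-up package, and /usr/pkgsrc is the default tree location):

        pkg_chk -u -n                   # see what is outdated and what the new versions are
        cd /usr/pkgsrc/devel/libfoo     # one of the outdated packages
        make replace                    # rebuild it and replace the installed version in place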

  • Piled Higher and Deeper

    I've been told today about the Piled Higher and Deeper website, also known as phdcomics (easier to remember). And so far I'm hooked; I love this comic strip! Could it be because I'm already involved in the research area due to my PFC and I know what they are talking about? Possibly. It also illustrates what I can "expect" if I finally enroll in a Ph.D. course. [Continue reading]

  • Monotone's help rewrite merged

    I have just merged my net.venge.monotone.help-rewrite branch into Monotone's mainline source code. I already explained its purpose in a past post, so please refer to it to see what has changed.

    There is still some work to do in the "help rewrite" area, but I won't have the time to do it in the near future. Hence I added some items to the ROADMAP file explaining what needs to be done, hoping that someone else can pick them up and do the work. They are not difficult, but they can introduce you to Monotone's development if you are interested! ;-) [Continue reading]

  • Talk about Git

    I've been using Git (or better said, Cogito) recently as part of my PFC and, although I don't like the way Git was started, I must confess I like it a lot. In some ways it is very similar to Monotone (the version control system I prefer now) but it has its own features that make it very interesting. One of these is the difference between local and remote branches, something I'll talk about in a future post.

    For now I would just like to point you to a talk about Git given by Linus at Google. He focuses more on general concepts of distributed version control systems than on Git itself, so most of the ideas given there apply to many other systems as well. If you still don't see the advantages of distributed VCSs over centralized ones, you must watch this. Really. Oh, and it is quite "funny" too ;-) [Continue reading]

  • pkgsrcCon 2007 report

    pkgsrcCon 2007 is over. The conference started around 1:00pm on the 27th and lasted until today^Wyesterday (the 29th) at around 7:00pm. There were 10 different talks as planned, although we weren't able to follow the proposed schedule. Most of the presentations were delayed and some were shifted because the speakers could not arrive on time. Not a big deal though.

    We were, more or less, around 20-25 people. There were 30 registered, but a couple had to withdraw and some others did not come (for reasons unknown to us). Maybe the talks were too technical or the schedule did not meet their expectations.

    The thing is, I originally intended to write a short report after each day, but I have had literally no life outside pkgsrcCon since Thursday night. Taking care of the organization has been time consuming (and I have done almost nothing!), not to mention that recording all the presentations was exhausting; more on this below.

    Anyway, it has been an excellent experience. Meeting other pkgsrc developers in person has been very nice, plus they are all very nice people too. Of course, their presentations were also interesting and showed extremely interesting ideas from their creators. I'm especially looking forward to seeing the preliminary results of one of the detailed projects (won't tell you which ;-).

    So, what did we do? We started with a pre-pkgsrcCon dinner on Thursday night, which was the first encounter among all developers. The talks started the following day in the afternoon, and when they were over we had some beers and dinner together. Saturday was more of the same, even though it had more talks. We then had dinner at a nicer restaurant; great, great time. The last talks happened on Sunday, after which we had some more beers and the last dinner all together.

    I now have to compress all the video recordings, and we have yet to decide if and how we will make them public. Stay tuned! (And don't hesitate to join us for pkgsrcCon 2008; I'm sure you will enjoy it, at the very least as much as I did!) [Continue reading]

  • Monotone's help rewrite

    A couple of weeks ago, I updated Monotone to 0.34 and noticed a small style problem in the help output: the line wrapping was not working properly, so some words got cut off at the terminal's boundary. After resolving this minor issue, I realized that I didn't know what most of the commands shown in the main help screen did. Virtually all other command-line utilities that have integrated help show some form of abstract description for each command, which allows the novice to quickly see what they are about. So why wouldn't Monotone?

    I started extending the internal commands interface to accept a little abstract for each command and command group, to be later shown in the help output. This was rather easy, and I posted some preliminary changes to the mailing list. But you know what happens when proposing trivial changes... People complained that the new output was too long to be useful, which I agreed with and fixed by only showing the commands of a given group at a time. But... there was also an interesting request: allow the documentation of subcommands (e.g. list keys) in a way consistent with how primary commands (e.g. checkout) are defined. There is even a bug (#18281) about this issue.

    And... that has kept me busy for way longer than I expected. I've ended up rewriting the way commands are defined internally by constructing a tree of commands instead of a plain list. This allows the generic command lookup algorithm to locate commands at any level in the tree, thus making it possible to standardize the way help and options are defined on them. The work is almost done and can be seen in the net.venge.monotone.help-rewrite branch.

    I've also been messing with Cogito recently and found some of its user interface features to be very convenient. These include automatic paging of long output and colored diffs straight on the console. Something to borrow from them if I ever have the time for it, I guess ;-) [Continue reading]

  • SoC: Selected again!

    Yes, Google Summer of Code (SoC) 2007 is back and I'm in once again! This means I'll be able to spend another summer working on free software and deliver some useful contributions by the end of it.

    This time I sent just one proposal, choosing NetBSD as the mentoring organization. The project is entitled Automated testing framework and is mentored by Martin Husemann. This framework is something I've had in mind for a long time already; in fact, I also applied for it in SoC 2006 and attempted to develop it as my undergraduate thesis. For more details on what the project is about, check out these notes.

    At last, take a look at the full list of accepted projects for NetBSD. It is rather short, unfortunately, but they all look very promising. It is a pity ext3 support is not among them, but getting ZFS instead will be good too.

    Edit (April 19th): Fixed a link. [Continue reading]

  • Problems with locales?

    Reviewing photos from my trip to Punta Cana, I found this one, taken at Punta Cana's airport. Looks like they had some problems with locales! The text should have read España (or Spain).

    On a related note, it was also curious to see that all the other monitors mentioned the flight's destination city whereas ours showed the whole country. Going by the other screens, it should really have said Madrid. [Continue reading]

  • Mounting volumes on Mac OS X's startup

    As I mentioned yesterday, I have a couple of disk images on my Mac OS X machine that hold NetBSD's and pkgsrc's source code. I also have some virtual machines in Parallels that need to use these projects' files.

    In order to keep disk usage to a minimum, I share the projects' disk images with the virtual machines by means of NFS. (See Mac OS X as an NFS Server for more details.) But in doing so, a problem appears: the NFS daemon is started as part of the system's boot process, long before I can manually mount the disk images. As a result, the NFS daemon — more specifically, mountd — cannot see the exported directories and assumes that their corresponding export entries are invalid. This effectively means that, after mounting the images, I have to manually send a HUP signal to mountd to refresh its export list.

    A little research will tell you that it is trivial to mount disk volumes on login by dragging their icons to the Login items section of the Accounts preference panel. But... that doesn't solve the problem. If you do that, the images will be mounted when you log in, and that happens long after the system has spawned the NFS daemons. Ideally, one should be able to list the disk images in /etc/fstab, just as is done with any other file system, but that does not work (or I don't know the appropriate syntax). So how do you resolve the problem? Or better said, how did I resolve it (because I doubt it's the only solution)?

    It turns out it was not trivial: you need to manually write a new startup script that mounts the images for you on system startup. In order to do that, start by creating the /Library/StartupItems/DiskImages directory; this will hold the startup script as well as the necessary meta-data to tell the system what to do with it. Then create the real script within that directory and name it DiskImages:

        #! /bin/sh
        #
        # DiskImages startup script
        #

        . /etc/rc.common

        basePath="/Library/StartupItems/DiskImages"

        StartService() {
            hdiutil attach -nobrowse /Users/jmmv/Projects/NetBSD.dmg
            hdiutil attach -nobrowse /Users/jmmv/Projects/pkgsrc.dmg
        }

        StopService() {
            true
        }

        RestartService() {
            true
        }

        RunService "$1"

    Don't forget to grant the executable permission to that script with chmod +x DiskImages. At last, create the StartupParameters.plist file, also in that directory, and put the following in it:

        {
            Description = "Automatic attachment of disk images";
            OrderPreference = "First";
            Uses = ("Disks");
        }

    And that's it! Reboot, and those exported directories contained within images will be properly recognized by the NFS daemon. I'm wondering if there is a better way to resolve the issue, but so far this seems to work. Now... mmm... a UI to create and manage this script could be sweet. [Continue reading]

  • How to disable journaling on a HFS+ volume

    Mac OS X's native file system (HFS+) supports journaling, a feature that is enabled by default on all new volumes. Journaling is a very nice feature as it allows a quick recovery of the file system's status should anything bad happen to the machine — e.g. a power failure or a crash. With a journaled file system, the operating system can easily undo or redo the last operations executed on the disk without losing meta-data, effectively avoiding a full file system check.

    However, journaling introduces a performance penalty for write operations. Every time the operating system has to modify the file system, it must first update the journal, then execute the real operation and finally mark the operation as completed in the journal. In most situations, this penalty is worth it for the reasons stated above. (Note: I haven't benchmarked this penalty; it may be unnoticeable!)

    There are some scenarios in which it can be avoided, though. For example: I keep several disk images on my machine that hold the source code of some projects — NetBSD, pkgsrc — because these need to be placed in case-sensitive file systems. Up until now I had these configured as journaled HFS+ file systems, but I just figured out that I could gain some performance by disabling this feature, at the risk of losing the robustness introduced by journaling. After all, crashes are rare and power failures are non-existent on a laptop; plus the data stored in the images can be easily refetched at will in case of a disaster.

    It turns out that Disk Utility allows you to easily enable journaling for a volume (just check out the big icon in the toolbar), but the interface provides no way to disable it. Or at least I haven't found that option. According to multiple articles I found, it was possible in older OS X versions. So I realized that the feature had to be available somewhere in recent versions. And that's right: the command-line diskutil program is able to disable journaling for a given mounted volume. Just run it as:

        # diskutil disableJournal /Volumes/TheVolumeName

    And voila! Journaling disabled. For more details, check out the #107248 Knowledge Base Article. [Continue reading]
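    And if you ever change your mind, the same tool can turn journaling back on; as far as I can tell, the counterpart verb is:

        # diskutil enableJournal /Volumes/TheVolumeName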

  • Games: Doom 3: Resurrection of Evil

    As I told you a couple of weeks ago, I bought Doom 3: Resurrection of Evil, and I have just finished it. As other reviewers have mentioned, this is more of the same as Doom 3: dark and scary scenarios, lots of monsters and the same game-play.

    However, I find it better balanced than the original Doom 3. Overall, the game is shorter, and so is each of its chapters. I started playing a chapter a day, then stopped for several days while I was busy with other stuff, and today I played the last four chapters in two runs. This makes it more fun to play because, as you notice you are reaching the end, you want to see it as soon as possible (as happened to me).

    Then there are some annoying new monsters, but they are not strong ones. There are also some new guns, such as the double shotgun from the original Doom, a thing similar to Half-Life 2's gravity gun, and The Artifact, which replaces the Soul Cube; mmm, I think I preferred the latter.

    Finally, the storyline is so thin that you don't get involved, but it doesn't matter because the game is very linear and thus you can't get stuck or lost. Just keep going and kill everything in your way.

    Summarizing, you'll certainly enjoy it if you liked the original Doom 3. I'm wondering which game I can get now, given that Half-Life 2: Episode 2 is delayed! [Continue reading]

  • Fixing EXIF date information

    Back in February, some friends and I spent a week visiting Punta Cana. We took a lot of photos there but, when we came back, we realized that most of us (not all!) had forgotten to adjust our cameras' date to the foreign timezone, which was five hours behind ours. I imported all our photos into iPhoto, but the difference in their timestamps made the images appear unsorted in the main view, which was very annoying. So I wondered if the EXIF information embedded in them could be easily fixed. It turns out it is possible, but the tools to do so — or at least the one I used — leave a lot to be desired: it was not exactly trivial to resolve the problem.

    In order to modify the EXIF information, I installed the free libexif library alongside a very simple frontend to it named exif, distributed from the same page. To painlessly get this in Mac OS X, just use the graphics/exif package in pkgsrc.

    Then I had to see which fields I wanted to modify in each photo. This can be accomplished with the exif command (exif photo-name.jpg shows all tags currently attached to the photo and their corresponding values). Skimming through the resulting list, it is easy to spot which fields correspond to date information. I specifically picked 0x0132, 0x9003 and 0x9004, but some of them may not exist in your images.

    With this information at hand, I wrote a little script that loops over all affected photos and, for each tag, adjusts it to the new value. It goes as follows:

      #! /bin/sh

      while [ $# -gt 0 ]; do
          file=$1; shift

          for tag in 0x0132 0x9003 0x9004; do
              date=$(exif --tag=${tag} $file | grep Value | cut -d ' ' -f 4-)
              date=$(perl adjust.pl "${date}")
              exif --tag=${tag} --ifd=0 --set-value="${date}" ${file}
              mv ${file}.modified.jpeg ${file}
          done
      done

    The magic to subtract five hours from each date is hidden in the adjust.pl script, which looks like this:

      ($date, $time) = split / /, $ARGV[0];
      ($year, $month, $day) = split /:/, $date;
      ($hour, $min, $sec) = split /:/, $time;

      $amount = 5;
      if ($hour >= $amount) {
          $hour -= $amount;
      } else {
          $hour = $hour - $amount + 24;
          if ($day == 1) {
              $day = 31;
              $month -= 1;
          } else {
              $day -= 1;
          }
      }

      printf "%04d:%02d:%02d %02d:%02d:%02d\n",
          $year, $month, $day, $hour, $min, $sec;

    I know the code is really crappy, but it did the job just fine! [Continue reading]

  • Cross-platform development with Parallels

    These days I'm seizing some of my free time to continue what I did as my SoC 2006 project: the Boost.Process library. There is still a lot of work to be done, but some items are annoying enough to require early attention (well, I can't really speak of "early" because I hadn't touched the code for months).

    Boost.Process aims to be a cross-platform library and currently works under POSIX-based systems (such as Linux, NetBSD or Mac OS X) as well as under Win32 systems. However, developing such a thing is not easy if you don't have concurrent access to both systems to test your code as you go. Last summer I didn't, so Win32 support was "second class": I first coded everything under NetBSD and, eventually, I fired up my Windows XP installation and fixed any problems that arose due to the new code. This was suboptimal and really slowed down the development of the library.

    Now, with a MacBook Pro and Parallels Desktop for Mac, these issues have gone away. I can code under whichever system I want and immediately test my changes on the other system without having to reboot! It's so convenient... And, with Coherence mode, everything is so transparent... just check out the following screenshot.

    To make things better I could share the project's code over the virtual network, to avoid having to commit changes to the public repository before having tested them on both systems. If you inspect the logs, you'll see many "Add feature X" commits followed by a "Fix previous under Win32". But that is a minor issue right now.

    Kudos to the Parallels developers, who made this possible and painless. I now understand the "computer as a tool" paradigm, as opposed to "computer as a hobby". [Continue reading]

  • Cursa Bombers 2007

    The Cursa Bombers is a 10km-long race that takes place in Barcelona. Today it was held for the 9th consecutive year, with a record number of participants: almost 13,000.

    As I told you a long time ago, I enjoy running, so I decided to take part in this race. The event started at 10:00 in the morning and lasted a maximum of an hour and 20 minutes. It took me 46'39" to finish, and I ended up in 4156th position; not that bad, considering that I haven't trained running for a while. The breakdown is:

    - Absolute time of arrival: 48'49"
    - Real time to completion: 46'39"
    - Absolute position: 4156th (considering the absolute time)
    - Corrected position: 3857th (considering the real time)
    - Category and position within it: Male senior, 1917th
    - Average time per km: 4'40"
    - Position and time at first checkpoint (2.5km): 5531st, 14'34"
    - Position and time at second checkpoint (5km): 4973rd, 27'02"
    - Position and time at third checkpoint (7.5km): 4456th, 34'46"

    It is the first race I have run, but it was very enjoyable and exciting. I'm already looking forward to the next one! [Continue reading]

  • Games: Doom 3

    It has taken me around two years but, a week ago, I finally completed Doom 3. The game is good, although not excellent. Let's see why.

    First of all, the game is frightening. Yes, it is. I didn't believe it before playing, but once you get immersed in the game, it really becomes so. For the best experience, turn off your lights, raise the volume and, of course, play alone. If you like horror games or movies, you'll certainly enjoy this one.

    The atmosphere at the beginning of the game is neat. You have little ammo, you discover new monsters, you hear screams through radio transmissions... It is easy to get "involved". As you progress, you'll learn about the game's plot basically through other people's PDAs, which hold mails and voice messages. These are also recorded in a way that increases tension, as they somehow outline what will happen soon after you move on.

    In the graphics area it is very good, even though it requires a good amount of resources. I finished it at high quality and there certainly are a lot of details in the images, plus a lot of neat effects. As regards sound, it is very well placed to create a horror atmosphere. I think some background voices were much better in the English version (the one I started playing) than in the Spanish one, but it has been so long that I cannot tell for sure.

    Let's now see the negative side: the game is too repetitive and too long. That's why it has taken me so long to finish it: I've played it in four runs, because after each one I was too bored to continue playing. Well, being too long wouldn't be bad if there was more variety in the levels... in fact, I'm not sure it's that long, but it seemed so. For example, the Delta Labs seem infinite. Also, once you get to the Hell level and onwards, you find so many monsters that it becomes less frightening. It is more of a "shoot everything that moves and run" than the "go slowly to see what may happen" feeling of the first half of the game.

    Summarizing, it has more pros than cons, so it's worth it. If not, I wouldn't have just got the expansion pack, Doom 3: Resurrection of Evil! [Continue reading]

  • NTFS read/write support for Mac OS X

    It is a fact that hard disk drives are very, very large nowadays. Formatting them as FAT (in any of its versions) is suboptimal due to the deficiencies of that file system: big clusters, lack of journaling support, etc. But, like it or not, FAT is the most compatible file system out there: virtually any OS and device supports it in read/write mode.

    Today, I had to reinstall Windows XP on my Mac (I won't bother you with the reasons). In the past, I had used FAT32 for its 30GB partition so I could access it from Mac OS X. But recently, some guys at Google ported Linux's FUSE to Mac OS X, effectively allowing anyone to use FUSE modules under this operating system. And you guessed right: there is a module that provides stable, full read/write support for NTFS file systems; its name is ntfs-3g. So I installed Windows XP on an NTFS partition and gave these two a try.

    MacFUSE, as said above, is a port of Linux's FUSE kernel-level interface to Mac OS X. For those who don't know it, FUSE is a kernel facility that allows file system drivers to be run as user-space applications; this speeds up the development of these components and also prevents some common programming mistakes from taking the whole system down. Having such a compatible interface means that you can run almost any FUSE module under Mac OS X without changes to its code.

    Installing MacFUSE is trivial, but I was afraid that using ntfs-3g could require messing with the command line — which would be soooo Mac-unlike — and feared it could not integrate well with the system (i.e. no automation and no replacement of the standard read-only driver).

    It turns out I was wrong. There is a very nice NTFS-3G for Mac OS X project that provides you with the typical disk image with a couple of installers to properly merge ntfs-3g support into your system. Once done, just reboot and your NTFS partition will automatically come up in the Finder as a read/write volume! Sweet. Kudos to the developers who made this work.

    Oh, and by the way: we have got FUSE support in NetBSD too! [Continue reading]
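    For illustration only (my own sketch, not from the post): with MacFUSE and ntfs-3g installed, a manual mount looks roughly like the following, where the device node and mount point are made-up examples:

      $ sudo mkdir /Volumes/WindowsXP
      $ sudo ntfs-3g /dev/disk0s3 /Volumes/WindowsXP

    The NTFS-3G for Mac OS X package mentioned above makes this unnecessary, since it hooks the driver into the system's automatic mounting.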

  • Building an updated kernel for the PS3

    The mainstream Linux sources have some support for the PlayStation 3, but it is marked as incomplete. Trying to boot such a kernel results in a stalled machine, as the kernel configuration option itself warns:

      CONFIG_PPC_PS3: This option enables support for the Sony PS3 game
      console and other platforms using the PS3 hypervisor.  Support for
      this platform is not yet complete, so enabling this will not result
      in a bootable kernel on a PS3 system.

    To make things easier, I could simply have used the Linux sources provided by YellowDog Linux 5 (YDL5), which correspond to a modified 2.6.16 kernel. However, as I have to do some kernel development on this platform, I objected to using such old sources: when developing for an open source project, it is much better to use the development branch of the code — if available — because custom changes will remain synchronized with mainstream changes. This means that, if those changes are accepted by the maintainers, it will be a lot easier to later merge them with the upstream code.

    So, after a bit of fiddling, I found the public kernel branch used to develop for the PS3. It is named ps3-linux, is maintained by Geoff Levand and can be found in the kernel's git repository under the project linux/kernel/git/geoff/ps3-linux.git. Fetching the code was "interesting": I was (and still am) a novice to git, but fortunately my prior experience with CVS, Subversion and especially Monotone helped me understand what was going on.

    Let's now see how to fetch the code, cross-build a custom kernel and install it on the PS3 under YDL5. To check out the latest code, which at this moment corresponds to patched Linux 2.6.21-rc3 sources, do this:

      $ git clone git://git.kernel.org/pub/scm/linux/kernel/git/geoff/ps3-linux.git ps3-linux

    This will clone the ps3-linux project from the main repository and leave it in a directory with the same name. You can keep it up to date by running git pull within that directory, but I'm not going to talk about git any more today.

    As I cross-compile the PS3 kernel from an Intel-based FC6 machine with the Cell SDK 2.0, I need to tell the build which target platform and cross-compiler to use before being able to build or even configure a kernel. I manually add these lines to the top-level Makefile, but setting them in the environment should work too:

      ARCH=powerpc
      CROSS_COMPILE=ppu-

    Now you can create a sample configuration file by executing the following command inside the tree:

      $ make ps3_defconfig

    Then proceed to modify the default configuration to your liking. To ease development, I want my kernels to be as small and easy to install as possible; this reduces the test-build-install-reboot cycle to the minimum (well, not exactly; see below). Therefore I disable all the stuff I do not need, which includes module support. Why? Because keeping all the code in a single image makes the later installation a lot easier.

    Once the kernel is configured, it is time to build it. But before doing so you need to install a helper utility used by the PS3 build code: the Device Tree Compiler (or dtc). Fetch its sources from the git repository that appears on that page, run make to build it and manually install the dtc binary into /usr/local/bin.

    With the above done, just run make and wait until your kernel is built. Then copy the resulting vmlinux file to your PS3; I put mine in /boot/vmlinux-jmerino to keep its name version-agnostic and specific to my user account.
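    As a side note of mine (not from the original text): instead of editing the top-level Makefile, the same settings can be passed on the make command line, since command-line variables override the ones defined in the Makefile. The configure-and-build cycle then looks like this:

      $ make ARCH=powerpc CROSS_COMPILE=ppu- ps3_defconfig
      $ make ARCH=powerpc CROSS_COMPILE=ppu-

    The ppu- prefix is the one provided by the Cell SDK 2.0 cross-toolchain mentioned above.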
    Note that I do not have to mess with modules, as I disabled them; otherwise I'd have to copy them all to the machine — or alternatively set up an NFS root for simplicity, as described in Geoff Levand's HOWTO.

    To boot the kernel, you should know that the PS3 uses the kboot boot loader, a minimal Linux system that chainloads another Linux system by means of the kexec functionality. It is very powerful, but the documentation is scarce. Your best bet is to mimic the entries already present in its configuration file. With this in mind, I added the following line to /etc/kboot.conf:

      jmerino='/dev/sda1:/vmlinux-jmerino root=/dev/sda2 init=/sbin/init 4'

    I'd much rather fetch the kernel from a TFTP server, but I have not got this to work yet. Anyway, note that the above line does not specify an initrd image, although all the other entries in the file do. I did this on purpose: the less magic in the boot, the better. However, bypassing the initrd results in a failed boot with:

      Warning: Unable to open an initial console.

    This is because the /dev directory on the root partition is unpopulated, as YDL5 uses udev; hence the need for an initrd image. Getting a workaround for this is trivial though: just create the minimum necessary devices on the disk — "below udev" — as shown here:

      # mount --bind / /mnt
      # MAKEDEV -d /mnt/dev console zero null
      # umount /mnt

    And that's it! Your new, fresh and custom kernel is ready to be executed. Reboot the PS3, wait for the kboot prompt and type your configuration name (jmerino in my case). If all goes fine, the kernel should boot and then start userland initialization.

    Thanks go to the guys on the cbe-oss-dev mailing list for helping me build the kernel and solve the missing console problem.

    Update (23:01): Added a link to an NFS-root tutorial. [Continue reading]

  • X11 mode-line generator

    I recently installed NetBSD-current (4.99.12 at the time I did this) inside Parallels Desktop for Mac. Everything went fine except for the configuration of the XFree86 shipped with the base system: I was unable to get high resolutions to work (over 1024x768, if I recall correctly), and I wanted to configure a full-screen desktop. In my specific case that means 1440x900, the MacBook Pro's native resolution.

    It turns out I had to manually add a mode line to the XF86Config file to get that resolution detected. I bet recent X.Org versions do not need this; e.g. Fedora Core 6 works fine without manual fiddling.

    Writing mode lines is not fun, but fortunately I came across an automated generator. In fact, this seems to be just a web-based frontend to the gtf tool provided by NVIDIA. So I entered the appropriate details (x = 1440, y = 900, refresh = 60), hit the Generate modeline button and got:

      # 1440x900 @ 60.00 Hz (GTF) hsync: 55.92 kHz; pclk: 106.47 MHz
      Modeline "1440x900_60.00"  106.47  1440 1520 1672 1904  900 901 904 932  -HSync +Vsync

    After that I had to make the HorizSync and VertRefresh ranges in my configuration file a bit wider to satisfy this mode's requirements, and everything worked fine. Be extremely careful if you mess with synchronization values though; incorrect ones can physically damage a monitor, although I think this is not a problem for LCDs. [Continue reading]
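    As a side note of mine (not in the original post): if the gtf utility is available locally, the same numbers can be fed to it directly and it should print an equivalent mode line, since the web page appears to be a mere frontend to it:

      $ gtf 1440 900 60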

  • Building the libspe2 on the PS3

    The Linux kernel, when built for a Cell-based platform, provides the spufs pseudo-file system, which allows userland applications to interact with the Synergistic Processing Engines (SPEs). However, this interface is too low-level to be useful for application-level programs, hence another level of abstraction is provided over it through the libspe library.

    There are two versions of libspe:

    - 1.x: Distributed as part of the Cell SDK 2.0; it is the version most widely used nowadays by applications designed to run on the Cell architecture.
    - 2.x: A rewrite of the library that provides a better and cleaner interface — e.g. fewer black boxes — but which is currently distributed for evaluation and testing purposes. Further development will happen on this version, so I needed to have it available.

    The YellowDog Linux 5.0 (YDL5) distribution for the PlayStation 3 only provides an SRPM package for the 1.x version; there is no support for 2.x. Fortunately, installing libspe2 is trivial if you use the appropriate binary packages provided by BSC, but things get interesting if you try to build it from sources. As I need to inspect the code and make some changes to it, I have to be able to rebuild it, so I had to go with the latter option. Let's see how to build and install libspe2 from sources on a PS3 running YDL5.

    The first step is to download the most up-to-date SRPM package for libspe2, which at the time of this writing was libspe2-2.0.1-1.src.rpm. Once downloaded, install it on the system:

      # rpm -i libspe2-2.0.1-1.src.rpm

    The above command leaves the original source tarball, any necessary patches and the spec file properly laid out inside the /usr/src/yellowdog hierarchy.

    Now, before we can build the libspe2 package, we need to fulfill two prerequisites. The first is the installation of quilt (for which no binary package exists in the YDL5 repositories), a tool required by libspe2's build process. The second is the update of bash to a newer version, as the one distributed with YDL5 has a quoting bug that prevents quilt from being built properly.

    The easiest way to solve these problems is to look for the corresponding SRPM packages for quilt and an updated bash. As YDL5 is based on Fedora Core, a safe bet is to fetch the necessary files from the Fedora Core 6 (FC6) repositories; these were quilt-0.46-1.fc6.src.rpm and bash-3.1-16.1.src.rpm. After that, proceed with their installation as shown above for libspe2 (using rpm -i).

    With all the sources in place, it is time to build and install them in the right order. Luckily the FC6 SRPMs we need work fine in YDL5, but this might not be true for other packages. Here is what to do:

      # cd /usr/src/yellowdog/SPECS
      # rpmbuild -ba --target=ppc bash.spec
      # rpm -U ../RPMS/ppc/bash-3.1-16.1.ppc.rpm
      # rpmbuild -ba --target=ppc quilt.spec
      # rpm -i ../RPMS/ppc/quilt-0.46-1.ppc.rpm
      # rpmbuild -ba libspe2.spec
      # rpm -i ../RPMS/ppc64/libspe2-2.0.1-1.ppc64.rpm
      # rpm -i ../RPMS/ppc64/libspe2-devel-2.0.1-1.ppc64.rpm

    And that's it! libspe2 is now installed and ready to be used. Of course, with the build prerequisites in place, you can also compile libspe2 in your home directory for testing purposes by using the tar.gz package instead of the SRPM.

    Finally, complete the installation by adding the elfspe2-2.0.1-1.ppc.rpm package to the mix. [Continue reading]

  • NetBSD and SoC 2007

    Yes, ladies and gentlemen: Google Summer of Code 2007 is here, and NetBSD is going to be a mentoring organization again (unless Google rejects the application, that is)!

    We are preparing a list of projects suitable for SoC; spend some time looking for one that interests you (I'm sure there is something) and get ready to send your proposal between the 14th and the 24th of this month. I've already made my choice :-)

    See the official announcement for more details. [Continue reading]

  • Article on Multiboot and NetBSD

    A bit more than a year ago I started working on Multiboot support for NetBSD. This work was completed by the end of last year and integrated into the main source tree, allowing any user to boot their NetBSD installation using GRUB alone, without having to chainload a different boot loader.

    I've written an introductory article on Multiboot and how NetBSD was converted to support it, and it has just been published at ONLamp. Enjoy reading! [Continue reading]
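    To give a rough idea of what this enables, here is a sketch of mine (not taken from the article) of a GRUB legacy menu.lst entry that loads a Multiboot-capable NetBSD kernel directly; the partition and root device names are invented for the example, and the article covers the real syntax and caveats:

      title NetBSD
      root (hd0,0)
      kernel /netbsd root=wd0a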

  • ICB support added to Colloquy

    Internet Citizen's Band (or ICB) is an ancient chat protocol, most likely a precursor of IRC. It is very limited — for example, you can only be logged into a single room — but I need to use it to communicate with a group of developers. It has a nice feature, though: bricks! Those who have used it know what I mean ;-)

    Up until recently, I used the ircII console client to access this chat network. Honestly, I never liked it much, as it is quite spartan. With the complete switch to Mac OS X, I started to look for a nicer alternative and found Colloquy, a very nice graphical IRC and SILC client, but it lacked ICB support.

    I was previously hooked on xchat and thought it'd be difficult for me to switch, but Colloquy quickly changed my mind. It has a very clean interface and does not get in your way: the notification system is well thought out, so you can be sure to get an unobtrusive notice whenever someone requires your attention. Furthermore, its development is split into two parts: Chat Core, a generic framework to interact with chat protocols, and Colloquy itself, the UI built on top of Chat Core. And best of all, Colloquy is free! Thanks, Timothy, for such a great gift.

    So... last Christmas, I spent some time learning how to deal with Xcode, some fundamentals of the Cocoa API and a few notions of the Objective-C language. Combined with this brief description of the ICB protocol, I worked on adding support for ICB to Chat Core and Colloquy. And the good news is... the code is already integrated into the main tree, as can be seen in changeset 3582!

    It is still buggy and lacks some features — basically because of the lack of protocol documentation — but I already use it daily. (If you use a beta version of Leopard you may find more serious issues, though.) I will be fixing problems in the following days, such as the one resolved yesterday in changeset 3585; yep, that is my first commit to the tree :-)

    Until the tarball on the website is rebuilt, I can provide you with binary builds that include ICB support. Or, alternatively, you can simply download the source code and compile it yourself, which is very easy with Xcode. Have fun! [Continue reading]

  • PFC subject chosen

    A while ago, I was doubtful about the subject of my undergraduate thesis (or PFC, as we call it). At first, I wanted to work on a regression testing framework for NetBSD. This is something I really want to see done and I'd work on it if I had enough free time now... Unfortunately, it didn't quite fit my expectations for the PFC: it was a project not related at all to the current research subjects in my faculty, hence not appropriate to integrate into one of its work groups.

    So, after investigating some of the projects proposed by these research groups, I've finally settled on one that revolves around heterogeneous multiprocessor systems such as the Cell Broadband Engine. The resulting code will be based on Linux, as it is the main (only?) platform for Cell development, but the concepts should still be applicable to other systems. Who knows, maybe I'll end up trying to port NetBSD to a Cell machine — shouldn't be too hard if that G5 support is integrated ;-)

    The preliminary title: Efficient resource management in heterogeneous multiprocessor systems. For more details, check out the project proposal (still not very concrete, as you can see). [Continue reading]

  • Mac OS X aliases and symbolic links

    Even though aliases and symbolic links may seem to be the same thing in Mac OS X, this is not completely true. One could think they are effectively the same because, with the switch to a Unix base in version 10.0, symbolic links became a "normal thing" in the system. However, for better or worse, differences still remain; let's see them.

    Symbolic links can only be created from the command line by using the ln(1) utility. Once created, the Finder will represent them as aliases (with a little arrow in their icon's lower-left corner) and treat them as such. A symbolic link is stored on disk as a non-empty file which contains the full or relative path to the file it points to; then, the link's inode is marked as special by activating its symbolic link flag. (This is how things work in UFS; I suspect HFS+ is similar, but I cannot confirm it.)

    Aliases, on the other hand, are created from the Finder (and possibly with SetFile(1), but I don't know how) and are stored as regular, empty files if inspected from the command line. However, they have an extended attribute set on them, which marks them as special files, and the necessary information is stored (I think) in their resource fork.

    The interesting thing about aliases is that they are more versatile than symbolic links. For example, an alias can point to a file that is stored inside a disk image. When accessing such an alias, the system will automatically mount the corresponding disk image if it is not already mounted and then redirect you to the file. This can be interesting to transparently access files saved in an encrypted disk image: you store the files in the image, create aliases to them on your desktop and, when one is opened by any alias-aware application, the system will ask you for the image's password, mount it and hand you the file. But, unfortunately, aliases do not work from the command line, so this benefit is not as impressive as it could be (at least for me). [Continue reading]
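    A quick illustration of the symbolic link side of things (my own example; the paths are invented):

      $ ln -s /Users/jmmv/Documents/report.pdf ~/Desktop/report
      $ ls -l ~/Desktop/report

    ls -l shows the usual "link -> target" arrow and the Finder badges the item as if it were an alias, while an alias created from the Finder shows up in ls -l as a plain, empty regular file.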

  • Hide a volume in Mac OS X

    Yesterday, we saw how to install Mac OS X over multiple volumes, but there is a minor glitch in doing so: all "extra" volumes will appear as different drives in the Finder, which means additional icons on the desktop and in its windows' sidebars. I find these items useless: why should I care about a part of the file system being stored in a different partition? (Note that this has nothing to do with the icons for removable media and external drives, as those really are useful.)

    Removing the extra volumes from the sidebars is trivial: just right-click (or Control+click) on the drive entry and select the Remove from Sidebar option.

    But how to deal with the icons on the desktop? One possibility is to open the Finder's preferences and tell it not to show entries for hard disks. The downside is that all direct accesses to the file system will disappear, including those that represent external disks.

    A slightly better solution is to mark the volume's mount point as hidden, which will effectively make it invisible to the Finder. To do this you have to set the invisible flag on the folder by using the SetFile(1) utility (stored in /Developer/Tools, and thus included with Xcode). For example, to hide our example /Users mount point:

      # /Developer/Tools/SetFile -a V /Users

    You'll need to relaunch the Finder for the change to take effect.

    The above is not perfect, though: the mount point will be hidden from all Finder windows, not only from the desktop. I don't know if there is any better way to achieve this, but this one does the trick... [Continue reading]
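    Two related one-liners of my own (not from the post), assuming only the tools already mentioned: the Finder can be relaunched from a terminal, and the flag is cleared again with the lowercase form of the same attribute letter:

      $ killall Finder
      # /Developer/Tools/SetFile -a v /Users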

  • Install Mac OS X over multiple volumes

    As you may already know, Mac OS X is a Unix-like system based on BSD and Mach. Among other things, this means that there is a single virtual file system onto which you can attach new volumes by means of mount points and the mount(8) utility. One could consider partitioning a disk to place specific system areas in different partitions and prevent the degradation of each file system, but the installer does not let you do this (I suspect the one for Mac OS X Server might have this feature, but that is just a guess). This being Unix, though, it certainly is possible!

    As a demonstration, I explain here how to install Mac OS X so that the system files are placed in one partition and the users' home directories in another. This setup keeps mostly-static system data self-contained in a specific area of the disk and allows you to do a clean system reinstall without losing your data or settings.

    First of all, boot the installer from the first DVD and execute Disk Utility from the Utilities menu. There you can partition your disk as you want, so create a partition for the system and another one for the users; let's call them System and Users respectively, living on the disk0s2 and disk0s3 devices. Both should be HFS+, but you can choose whether you want journaling and/or case sensitivity independently. Exit the tool and go back to the installer.

    Now do a regular install on the System volume, ignoring the existence of Users. To make things simple, go through the whole installation, including the welcome wizard. Once you are at the default desktop, get ready for the tricky stuff.

    Reboot your machine and enter single-user mode by pressing and holding Command+S just after the initial chime sound. At the command line, follow the instructions to remount the root volume as read/write. If I recall correctly, it tells you to do:

      # fsck -fy /
      # mount -uw /

    Mount the Users volume in a temporary location, copy the current contents of /Users into it and remove the original files. For example:

      # mkdir /Users2
      # mount -t hfs /dev/disk0s3 /Users2
      # rsync -avE /Users/ /Users2
      ... ensure that /Users2 matches /Users ...
      # rm -rf /Users/.[a-zA-Z_]* /Users/*
      # umount /Users2
      # rmdir /Users2

    I used rsync(1) instead of cp(1) because it preserves the files' resource forks, if present (provided you give it the -E option).

    Once the data migration is done, you can proceed to tell the system to mount the Users volume in the appropriate place on the next boot. Create the /etc/fstab file and add the following line to it:

      /dev/disk0s3 /Users hfs rw

    Ensure it works by mounting it by hand:

      # mount /Users

    If no problems arise, you're done! Reboot and enjoy your new system.

    The only problem with the above strategy is that your root volume must be big enough to hold the whole installation before you can reorganize the partitions. I haven't tried it but maybe, just maybe, you could do some manual mounts from within the Terminal available in the installer. That way you'd set up the desired mount layout before any files are copied, delivering the appropriate results. Note that even if this worked, you'd still need to do the fstab trick, but you'd have to do it on the very first reboot, even before the installation is complete! [Continue reading]

  • MacBook Pro review

    Since the Intel Macs were released, I had been planning to get one of them; I had settled on getting an iMac 20" by next summer (so that it'd carry Leopard "for free"). But last December I found a great offer on the MacBook Pro 15.4", with a total price similar to what I was planning to spend. Furthermore, going for the MacBook Pro instead of the iMac let me get rid of both my iBook G4 and my desktop PC.

    It has now been a little over two weeks since I received the MacBook Pro 15.4", equipped with a Core 2 Duo at 2.16GHz plus the 2GB of RAM and 160GB hard disk upgrades. That has been enough time to get a decent impression of the machine, so let me post a little review.

    The laptop is great overall. It is fast, full of features and tiny details, and has an excellent look (highly subjective ;-). Compared to the iBook G4, which had a 12" 4:3 screen, this one is noticeably bigger (15.4" 16:10) but thinner, and it weighs almost the same. Honestly, I don't mind too much, because it also replaced the desktop PC I had, so I really wanted a large resolution to work comfortably (plus a decent video card, only available in the Pro model).

    As regards performance, the Core 2 Duo is certainly faster than the processors in the other machines. For example, the old PC needed between 5 and 6 hours to build a full NetBSD release, while the C2D takes less than 2 (1.45, if I recall correctly). Games also behave appropriately, even at the highest available resolution (1440x900). Unfortunately, the hard disk (which does 5400RPM "only") is a bottleneck for my typical development (or gaming) tasks, as I outlined in a previous post.

    Somewhat related to that post, the hardware virtualization available in these new processors is awesome. Anyone who deals with cross-development should consider getting one of them: it's impressive to see two (or more!) different operating systems working at the same time at native speeds.

    Aside from that, the machine is full of tiny details. You probably know most of them: the MagSafe connector, the keyboard's backlight, the integrated webcam and microphone, the Apple Remote. I kind of like this last item, although it does not shine as much as it would with an iMac.

    However, it has its problems too. When the fans spin up, it becomes very noisy... and this happens as soon as you start building any piece of software or launch a game. On another note, I've been attempting to install Windows XP on a partition that is not at the end of the disk and haven't been successful, which means it is restricted to the slower part of the drive (a pity, especially for games). But well, I can't really blame Apple for that, because Boot Camp is still in beta.

    There is not much more I can say; these machines have been reviewed in depth all around already. And to conclude, a shot of my current desktop :-) [Continue reading]

  • CVS and fragmentation

    First of all, happy new year to everybody!

    I've recently got a MacBook Pro and, while this little machine is great overall, the 5400 RPM hard disk is a noticeable performance bottleneck. Many people I've talked to say that the difference from 5400 to 7200 RPM should not be noticeable because:

    - These 2.5-inch drives use perpendicular recording, hence storing data with a higher bit density. This means that, theoretically, they can read/write data more quickly, achieving speeds similar to those of 7200 RPM drives.
    - Modern file systems prevent fragmentation, as described here for HFS+.

    To me, these two reasons are valid as long as you manage large files: the file system will try to keep them physically close and the disk will be able to transfer sequential data fairly quickly. But unfortunately, these ideas break down when you have to deal with thousands of tiny files (or when you flood the drive with requests from different applications, but that is not what I want to talk about today). The easiest way to demonstrate this is to use CVS to manage a copy of pkgsrc on such a drive.

    Let's start by checking out a fresh copy of pkgsrc from the CVS repository. As long as the file system has a lot of free space (and has not been "polluted" by erased files), this will run quite fast because all new files will be stored physically close to each other (theoretically in consecutive cylinders). Hence, we take advantage of the higher bit density and the file system's allocation policy. Just after the checkout (or after unpacking a tarball of the tree), run an update (cvs -z3 -q update -dP) and write down the amount of time it takes. In my tests, the update took around 5 minutes, which is a good measure; in fact, it is almost the same I got on my desktop machine with a 7200 RPM disk.

    Now start using pkgsrc by building a "big" package; I've been doing tests with mencoder, which has a bunch of dependencies, and boost, which installs a ton of files. The object files generated during the builds, as well as the resulting files, will be physically stored "after" pkgsrc. It is likely that there will be "holes" in the disk, because you'll be removing the work directories but not the installed files, which will result in a lot of files stored non-contiguously. To make things worse, keep using your machine for a couple of days.

    Then do another update of the whole tree. In my tests, the process now takes around 10 minutes. Yes, double the original measure. This problem was also present with faster disks, just not as noticeable. But do we have to blame the drive for such a slowdown, or maybe, just maybe, is it CVS's fault?

    The pkgsrc repository contains lots of empty directories that were once populated, and CVS does not handle such entries very well. During an update, CVS recreates these empty directories locally and, at the end of the process, erases them provided that you passed the -P (prune) option. Furthermore, every such directory ends up consuming at least 5 inodes on the local disk, because it will contain a CVS control directory (which typically stores 3 tiny files). This continuous creation and deletion of directories and files fragments the original tree by spreading the updated files all around.

    Honestly, I don't know why CVS works like this (anyone?), but I bet that switching to a superior VCS could mitigate this problem. A temporary solution can be the use of disk images, holding each source tree individually and keeping its total size as tight as possible (a sketch of how to create such an image follows at the end of this entry).
    This way one can expect the image to be permanently stored in a contiguous disk area.

    Oh, and by the way: Boot Camp really suffers from the slow drive, because it creates the Windows partition at the end of the disk; that is, its inner part, which typically has slower access times. (Well, I'm not sure it'd make any difference if the partition was created at the beginning.) Launching a game such as Half-Life 2 takes forever; fortunately, once it is up it is fast enough.

    Update (January 9th): As "r." kindly points out, the slower part of the disk is the inner one, not the outer one as I had previously written (a lapse on my side, because CDs are written the other way around). And the reason is this: current disks use Zone Bit Recording (ZBR), a technique that fits a different number of sectors depending on the track's length. Hence, outer (longer) tracks have more sectors allocated to them and can transfer more data in a single disk rotation. [Continue reading]
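    As for the disk-image workaround suggested above, here is a sketch of mine (not from the original post) for creating and attaching such an image; the size, volume name and file system are only examples, and a case-sensitive HFS+ variant can be requested instead if the tree needs it:

      $ hdiutil create -size 2g -fs HFS+ -volname pkgsrc pkgsrc.dmg
      $ hdiutil attach -nobrowse pkgsrc.dmg

    Keeping the image barely larger than the tree it holds is what makes it likely to stay in a contiguous area of the disk.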