• Live@NYC: Not any more

    That's it. Two days ago I landed in Barcelona and my stay in NYC came to an end. I was happy to see family and friends again, but I'm missing NYC and, in particular, the people there a lot. I'm actually considering moving to NYC as soon as I find a decent job there and postponing my Ph.D. Many people say I should just do that, but some friends say that my feelings will eventually fade and I'll feel comfortable here again. I know that this second thing is true, but why should I accept it if there is a chance for a change? Why can't I try to go and live there? Why do I have to hurt the people I met there? I can think of some reasons to stay in Barcelona, but I can also think of more interesting reasons to leave. Anyway... don't worry, this is not the kind of post you'll usually find in this blog! I think I'll just go back to blogging about technical stuff. I know I still have to prepare a summary post about the stay and probably a small selection of photos, but that will come later. If you ever have the chance to go to NYC, don't hesitate: just do it. [Continue reading]

  • Vacations@SF: 3rd day

    I couldn't keep up with my plans yesterday. I started by walking across the Golden Gate Bridge (both ways) and then, on my way back, I ended up visiting Japantown and the area around City Hall. I mean, I didn't visit Golden Gate Park itself... but that's not a big deal since, according to the maps, it seems similar to Central Park. At night, I met just one NetBSD developer at the 21st Amendment bar. 244 more photos to process. Today, time to visit the Berkeley campus. [Continue reading]

  • Vacations@SF: 2nd day

    Yesterday was my 2nd day of vacation in San Francisco, but I ended up so tired that I couldn't sit down to write this post. What did I do? Basically, yesterday was a walking day. I got out of the hotel early in the morning (around 9am) and came back at 4pm, just to leave again at 5.30pm and return at 8.30pm. That's a lot of hours of wandering around! Basically, my path started at the hotel and I headed to the North Beach neighborhood through Downtown. From there, I climbed up to Coit Tower; those hills are painful! Then I took the Powell steps down to the Piers and walked north-west along the coast, ending at the Aquatic Park. After that, a walk back to the hotel to rest a little bit. Afterwards, I headed Downtown again to have dinner at a place called Bocadillos, hoping that I'd find something similar to the Spanish "bocadillos"; no luck, but the food was OK. At last, while returning home, I took pictures of some cool buildings decorated for Christmas. As a result, 202 more photos to process... I'll eventually post some, I guess. And the plan for today is to cross the Golden Gate, visit Golden Gate Park and, in the evening, meet some local NetBSDers. We'll see how that goes. [Continue reading]

  • Vacations@SF: 1st day

    That's it! I'm in SF after my first "vacation" day. The trip, door to door, took around 12 hours; I got up at 4.45am in NYC, took the flight at 8.30am from JFK, arrived at SFO at 12.00pm and got to the hotel at around 2.00pm (all of these are local times in their respective places; there is a three-hour difference between the east and west coasts). No cabs, which is what lengthened the total time by a noticeable amount. The city seems pretty nice so far. More laid back than Manhattan... but still overpriced. And I miss some things from NYC such as the 24/7 availability of virtually everything (note, it is Sunday today!). Even so, I have only been able to explore the surroundings of the hotel today — that is, Union Square and a bit of SoMA — so this opinion may change in the following days. Dunno what I'll do on those other days yet, though. I've taken lots of pictures (103) but most of them are worthless due to poor lighting :( Anyway, I guess I'll eventually share some of them. Now, almost time to go to sleep. Jet lag is kicking in. I don't want to think how bad this will be when I go back to Spain! [Continue reading]

  • Live@NYC: Coming to an end

    That's it. My internship at Google finished this past Thursday (that is, a day and a half ago) and I'm going back to Spain on December 2nd. A week and a half to go and my time in NYC is over. Quick summary: the internship has been great, working at Google is amazing and my project was more or less finished. I'll provide more details about the whole experience later, when I'm back in my country, but now it's vacation time. I really need to relax a bit; I haven't had vacations for more than a year and a half! And, when I get back, I'll have to start working on my Ph.D. immediately. So what am I going to do? I'm leaving for San Francisco tomorrow morning and I'll spend four days there by myself, exploring that other nice city. I'll be back in NYC for Thanksgiving and then use the rest of the days to visit a couple of museums, buy presents and, if time permits, visit Washington DC. Next post will probably be from San Francisco ;) C ya. [Continue reading]

  • SoC 2008 Mentor Summit

    The Google SoC 2008 Mentor Summit is now officially over. The summit took place over the whole weekend and was pretty intensive. The organization of the whole event was excellent thanks to the hard work of Leslie Hawthorn among others; sorry, I can't remember your names... I'm very bad at this. We had multiple sessions, ranging from technical ones such as distributed version control systems to more political ones such as how to deal with assholes in open source projects. There were lots of passionate people in these talks, and it was quite interesting to see it all. The cool thing, though, as opposed to other conferences, is that everyone here comes from a different project and background, so you get to see lots of different opinions and points of view on each topic. As regards the Google HQ campus, it is great. I thought the NYC offices were good, but these are spectacular. Unfortunately, there is not much to do outside of them... so I'm not sure it would be so good to work here for a long time. Now, I'm sitting in the San Jose airport (SJC) waiting for the flight back to NYC. Amazingly, there is free wireless internet and there are electrical outlets! A very, very nice detail. And there are few people around, which makes it very quiet and relaxed. Oh, and to those who can decipher this: some more mini golf training during the summit :P [Continue reading]

  • Live@NYC: ... or not; now in MTV!

    I landed this morning in San Francisco at 9.00am (which means I left NYC at 6.00am!) and went straight to the Google headquarters in Mountain View. No sleep at all except for a little bit of pseudo-sleep on the plane. The Google campus is really nice; it puts the NYC offices on a lower level than I thought :P The only problem is that the area surrounding the campus is basically empty. Very small houses and lots of space between them, which is not bad per se... but it means that there really is not much to do. Anyway. What am I doing here? I am attending the Google Summer of Code 2008 Mentor Summit this weekend, but came a bit earlier to be able to hold a couple of meetings with coworkers in the Mountain View office. Pretty exhausting day, and it is not close to over yet! Just enjoy the few photos I've taken so far. PS: Been playing mini-golf on board until I got an unasked-for segmentation fault. [Continue reading]

  • C++ teaser on templates

    A rather long while ago, I published a little teaser on std::set and people seemed to like it quite a bit. So here goes another one, based on a problem a friend found at work today. I hope to reproduce the main idea behind the problem correctly, but my memory is a bit fuzzy. Can you guess why the following program fails to compile due to an error in the call to equals from within main? Bonus points if you don't build it.

        struct data {
            int field;
        };

        template< class Data >
        class base {
        public:
            virtual ~base(void) { }

            virtual bool equals(const Data& a, const Data& b) const
            {
                return a == b;
            }
        };

        class child : public base< data > {
        public:
            bool equals(const data& a, const data& b) const
            {
                return a.field == b.field;
            }
        };

        int
        main(void)
        {
            data d1, d2;

            base< data >* c = new child();
            (void)c->equals(d1, d2);
            delete c;

            return 0;
        }

    Tip: If you make base::equals a pure abstract method, the code builds fine. [Continue reading]

  • Live@NYC: Prospect Park

    Yesterday night, we went to a techno club — Webster Hall — which had Carl Cox as a guest DJ. Some of my friends around here enjoy this music and said this was a great DJ, so we couldn't miss it. He was indeed good. Today, after a few hours of sleep, I have been doing quite a bit of housework: basically, a huge cleaning, reordering and some DIY — I had to fix some drawers. Fixing the extremely loud and annoying hissing caused by the heating system will have to wait, though. What I have enjoyed today, though, is spending the evening walking around Brooklyn and, more specifically, in Prospect Park. Central Park is very nice, you know, but this other park is too! Not to mention that the borough (Brooklyn) seems awesome... or at least the few neighborhoods I've visited so far. Much more relaxed than Manhattan and, I've been told, cheaper to live in. Anyway, enjoy the photos :-) I guess tomorrow will be, finally, a museum day. Not going out tonight (I'm extremely tired and can't meet my friends) so I guess I'll be able to wake up early and seize the day. We'll see. Note to self: never, never, never again buy batteries from a crappy deli. They only last for a few photos! Who knows how long they had been sitting on the shop shelves. [Continue reading]

  • Boost.Process and SIGCHLD

    For some unknown reason, I'm regaining interest in Boost.Process lately. I guess many of the people who have written to me in the past asking about the status of the library will be happy to hear this, but I can't promise I will stick to coding it for long. I have to say that I have received compliments from quite a few people... thanks if you are reading, and sorry if I did not reply to you at all.

    Anyway. So I downloaded my code and ran the unit tests under Mac OS X to make sure that everything still worked before attempting any further coding. Oops, lots of failures! All tests spawning a child process broke due to an EINTR received by waitpid(2). That doesn't look good; it certainly didn't happen before.

    After these failures, I tried the same thing under Linux to make sure that the failures were not caused by some compatibility issue with Mac OS X. Oops, failures again! Worrisome. The curious thing is that the tests do work on Win32 — but that can be somewhat expected because all the internal code that does the real work is platform-specific.

    Curiously, though, running the examples (not the tests, but the sample little programs distributed as part of the library documentation) did not raise any errors. Hence, I tried to run gdb on the actual tests to see if the debugger could shed any light on the failures. No way. Debugging the unit tests this way is not easy because Boost.Test does a lot of bookkeeping itself — yeah, newer versions of the library have cool features for debugging, but they don't work on OS X. Hmm, so what if I run gdb on the examples? Oh! The problem magically appears again.

    It has taken me a long while to figure out the problem. Along the way, I have gone through thoughts of memory corruption issues and race conditions. In the end, the answer was much simpler: it all comes down to SIGCHLD (as the error code returned by waitpid(2) well hinted).

    SIGCHLD is delivered to a process whenever any of its children changes status (e.g. terminates execution). The default behavior for SIGCHLD is to discard the signal. Therefore, when this signal is received, no system calls are aborted because it is effectively discarded. However, it turns out that newer versions of Boost.Test install signal handlers for a lot of signals (all?) to allow the test monitor to capture unmanaged signals and report them as errors. Similarly, gdb also installs a signal handler for SIGCHLD. As a result, Boost.Process does not work when run under gdb or Boost.Test because the blocking system calls in the library do not deal with EINTR, but it does work for non-test programs run outside the debugger.

    The first solution I tried was to simply retry the waitpid(2) whenever an EINTR error was received. This fixes the problem when running the tests under gdb. Unfortunately, the test cases are still flagged as failed because the test monitor receives SIGCHLD and considers it a failure.

    The second solution I have implemented consists of resetting the SIGCHLD handler to its default behavior when Boost.Process spawns a new child and restoring the old SIGCHLD handler when the last child managed by Boost.Process is awaited for. Eventually, the library could do something useful with the signal, but discarding it seems to be good enough for now.

    This second solution is the one that is going to stay, probably, unless you have any other suggestion. I still feel it is a bit fragile, but I can't think of anything better. For example: what if the user of Boost.Process had already installed a handler for SIGCHLD? I just think that such a case shouldn't be considered because, after all, if you are using Boost.Process to manage child processes, you shouldn't have to deal with SIGCHLD on your own as long as the library provides a correct abstraction for it. [Continue reading]
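
    To make the two approaches above more concrete, here is a minimal sketch; this is my own illustration, not the actual Boost.Process code, and all names in it are made up for the example.

        // sigchld_sketch.cpp: save the current SIGCHLD disposition before
        // spawning the first managed child, install the default (discarding)
        // handler, and restore the old disposition once the last child has
        // been awaited.  safe_waitpid also shows the retry-on-EINTR loop
        // mentioned as the first solution.
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <errno.h>
        #include <signal.h>
        #include <stdlib.h>
        #include <unistd.h>

        static struct sigaction old_sigchld;

        // Before spawning the first child: remember whatever handler is
        // currently installed (e.g. by Boost.Test or gdb) and reset SIGCHLD
        // to its default behavior, which simply discards the signal.
        static void push_default_sigchld(void)
        {
            struct sigaction sa;
            sa.sa_handler = SIG_DFL;
            sigemptyset(&sa.sa_mask);
            sa.sa_flags = 0;
            (void)sigaction(SIGCHLD, &sa, &old_sigchld);
        }

        // After the last managed child has been awaited: restore the
        // previously installed handler.
        static void pop_default_sigchld(void)
        {
            (void)sigaction(SIGCHLD, &old_sigchld, NULL);
        }

        // Wait for a child, retrying if the call is interrupted by a signal.
        static pid_t safe_waitpid(pid_t pid, int* status)
        {
            pid_t ret;
            do {
                ret = waitpid(pid, status, 0);
            } while (ret == -1 && errno == EINTR);
            return ret;
        }

        int main(void)
        {
            push_default_sigchld();

            pid_t pid = fork();
            if (pid == 0)
                _exit(EXIT_SUCCESS);   // Trivial child that exits right away.

            int status;
            (void)safe_waitpid(pid, &status);

            pop_default_sigchld();
            return EXIT_SUCCESS;
        }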

  • Live@NYC: 2 months and a half

    Wow, I haven't blogged for a long time (a month since the last post) and I'm already counting down my days in NYC... less than a month and a half left here :-( I certainly miss some things from home, such as the nice little (non-fancy) bars, my bike, the gym, a nice house and, of course, friends and family, but I'm not in a real hurry to go back. So what have the recent happenings been? There are lots of things to tell, I guess.

    First of all, almost all the interns are gone. Of all I knew, probably 15-20, only 2 are left, 3 if you count me. This probably means we'll have to look for new friends to grow our partying group a little bit.

    On another topic, I don't have roommates any more. The ones I had moved to a bigger place and I haven't yet found anyone who wants to share this apartment. The major problem is that the room to be rented is, actually, the living room, so it is hard to convince someone to live here. Lack of privacy is an issue for most people even if they don't say so right away.

    As regards working out, I'm getting way too lazy. It has already been more than a week since I last went running. I will try to force myself to go tomorrow morning again; the food is already starting to accumulate in the wrong places. Somewhat related to this, I've got an iPod Nano 4th generation which, when put together with the Nike+ kit, is amazing.

    And we have got a "new" addiction at work: Guitar Hero. This game is really nice and we are already playing at the expert level. However, I wonder if it is any good to spend time learning how to play a fake guitar. I'd rather go and buy a real one, and instead spend my time playing real songs. Let's hope Guitar Rising lives up to expectations and is published soon; whenever that happens, I'll definitely get it.

    About more recent happenings, I was at NYCBSDCon 2008 this weekend giving a presentation on ATF. The video recordings will be posted soon, so be sure to check them out to learn some interesting new stuff. It was also a good way to meet well-known people in the BSD world such as, for example, Matt Dillon and Dru Lavigne.

    As regards work at Google, my project is starting to take decent shape, which means that it actually works! During the next month, I'll have to extend it to some new areas, and that's a bit scary because they'll involve processing vast amounts of data in an efficient manner. (At the moment, the data set is not big enough to really require tuning the code.)

    Oh, and one last thing about mini-golf. I have played, I think, 4 times already and lost all of them. This game sucks. Will keep trying, though.

    I'm missing lots of stuff here, but I don't know how to add more random notes and not end up with a completely disconnected post. All the paragraphs are already too independent as they are. So just wait for the next one to know more :P [Continue reading]

  • Live@NYC: Almost 2 months

    I just realized that today makes two months since I left my home in Barcelona and headed first to Italy and then to NYC. This means I only have two months and a week left at Google. Ew, time flies so fast... But so far, things are going great. They could certainly be a little bit better, but not by much! Today, two friends from Spain just left my apartment. They had been here for 10 days doing tourism around the city and visiting lots of stuff. I wish I could also take some vacations like that. And tomorrow it's time for another race: Susan G. Komen Race for the Cure. It is just 5Km in Central Park, so it will be pretty easy but, hopefully, fun enough. The only "problem" is that it is at 9am, and today is Saturday... so I need to go out tonight (with some restrictions)! [Continue reading]

  • Live@NYC: Month 1

    Wow, I realized yesterday that I have already been in NYC for a full month! That means that I only have three left before leaving... time flies :-( I also apologize (to those who have noticed!) for not writing for the past week, but I have lost "interest". All posts were starting to be similar to each other because there aren't that many new things to explain every day. Or, to put it another way: I now usually have better things to do in the evenings rather than blogging :-P As regards work, I have spent the past two weeks trying to code something, but all my attempts were worthless. Yesterday, though, my manager and I found a trivial way to resolve the problem at hand. It is not the nicest solution, but it does the trick for now. Ew, two weeks of "wasted" coding efforts! But, as he put it, these efforts have been a good way to introduce myself to big projects within Google. Now I have been assigned another coding task and it seems pretty darn cool to me. This is not related to my real project, though, but it should be possible to finish it in one week and it will be useful to give me more exposure to other Google technologies. In particular, MapReduce. Yeah, I can say that; after all, what MapReduce is has been disclosed ;-) Before finishing, let me point out something else that has caught my attention here in the city. There are lots of places to get your nails done, and all the people in these places are visible from the street. Curious, to say the least. Oh! Be sure to listen to the "Gettysburg trilogy" (the three songs on the second disc) of The Glorious Burden album by Iced Earth. I have been quite addicted to it for the last few days. [Continue reading]

  • Live@NYC: Days 21, 22 and 23

    Day 21 (August 8th): Some work at Google and then went out with a friend from work and the friends who are visiting from Barcelona. We went to a bar called Spice Market: quite fancy, but also expensive and not that fun.

    Day 22 (August 9th): Stayed at home for most of the day, which was pretty nice because I had not done that since I left Barcelona four weeks ago. In the morning, I went to a barber though. At night, went out with my roommate and some of her friends to a party. This was also an excuse to celebrate my birthday, which happened today (on day 23, I mean).

    Day 23 (August 10th): Turned 24. Went shopping for a fast external hard disk and ended up buying a Lacie D2 Quadra 500GB. Did some tests using the FW800 connection and the results are quite impressive. I spent part of the evening at Google doing some personal work, then went running and then drank cava at home to "celebrate" my birthday.

    OK, and now that I remember, here is another thing that has caught my attention in NYC. There are a lot of places to get your nails cut... and what's "interesting" is that, from the street, you can see all the people sitting there while they are being served. [Continue reading]

  • Live@NYC: Day 20

    Another regular day at work, except that I have finally been assigned some coding work. Yay! I haven't coded for a rather long while, and I need to do something. Aside from that, I got some Google gear that I was supposed to get on the first day but didn't. This includes a water bottle, a towel, yet another t-shirt and a notebook. Somewhat related: I also ate way too much, which makes my stomach hurt and makes me feel incredibly tired. Or maybe I am tired due to the run this evening, in which I pushed myself too hard. But I must run the Human Race (10Km) in less than 45 minutes! Yes, a personal goal. [Continue reading]

  • Live@NYC: Day 17, 18 and 19

    I should have posted this yesterday at the latest... but anyway, here it comes so that I do not forget about recent happenings. What I mean is that day 19 was yesterday (August 6th), not today!

    Day 17: Some more learning work at Google. The interesting thing about that day was that a friend from Barcelona arrived in the city at approximately 10pm, so we went to have a beer at a place by the lower East River. Very touristy but pretty nice. Also, he was driving a hired car, so I was able to see the city from another "perspective".

    Day 18: Again, some more learning. At midday, I registered for the Nike Human Race, a 10Km race that will take place on August 31st in 25 cities around the world. The nice thing is that for the low registration fee, I got a pair of shiny new sneakers and the Nike+ Sportsband. Now I have Nike shoes and the iPod+Nike kit at home (a present from another race)... so I am only missing a real iPod! At night I went out with a Spanish colleague from work and two other Spanish guys, friends of his, who are also visiting the city.

    Day 19: Tried the new sneakers and the Sportsband. Both are pretty amazing! Comfortable shoes, and the measurements taken by the sensors seem to be quite accurate. Plus you can get a nice history of your runs on the website. At work, I finally did some coding. And, in the evening, we had a Google Boat Cruise for interns (photos available) followed by beers at a couple of bars.

    Bah, I should get back to posting once per day to avoid such telegraphic descriptions... [Continue reading]

  • Live@NYC: Day 16

    Wow... two weeks have passed already. Time flies :-( Today I got up late after yesterday's night out and then went to have lunch with my new roommate and his brother. After lunch, my roommate had to go back to work, so his brother and I walked downtown to take a look at the World Trade Center. After seeing that, we walked along the southern shore and saw the Statue of Liberty in the distance. I guess I'll have to take the ferry one day and see it from a closer distance, but better to wait until there are not so many tourists and for colder weather. At last, we walked back toward Google all along Broadway, stopping at some shops on the way. I'm really exhausted. Google Earth says that we walked almost 11Km non-stop! Check out the photos! Edit (22:50): Just finished watching Season 1 of The Big Bang Theory. Hilarious! I'm eager to see more episodes, but that'll have to wait until Season 2 is aired. [Continue reading]

  • Live@NYC: Days 13, 14 and 15

    Day 13: Not much to comment on, other than I finally got my welcome Google t-shirt!

    Day 14: Moved to the new apartment. It was pretty annoying to have to move all my stuff using the subway, basically due to the incredible heat in the streets and inside the subway. But finally, I have a relatively decent place to stay for the four months. Shared with another person, cheaper than the old apartment and... just one block away from Google! Oh, and at night I went to one of those ultra-classy clubs (230 Fifth Avenue) with several other interns and later to a couple of bars. Wasted way too much money.

    Day 15: Properly settled into the apartment by emptying the bags and filling the closet and drawers. Finally! I had had all my stuff in the bags for three weeks already and it was pretty annoying to find anything this way. Also walked around the neighborhood and checked out several book shops. I saw many interesting books that I would like to buy one day (in particular, one about Haskell!), but just before leaving the last shop I was visiting (Barnes & Noble), I passed by a set of books that were discounted 50%. And one of them immediately caught my eye: A Manual for Writers of Research Papers, Theses, and Dissertations, Seventh Edition: Chicago Style for Students and Researchers. At the ridiculous price it was selling for, I couldn't resist buying it! Let's hope it is useful :) Now... maybe it's time to go out again, but this time with my new roommate and his brother. [Continue reading]

  • Live@NYC: Day 12

    At last, my new apartment is... OMG confirmed!  I will be moving on August 1st and will stay there for four months. The location is good because it is just one block away from Google and is also very close to lots and lots of pubs and restaurants. I haven't done much work today though.  I spent the whole morning applying for an SSN, and then I spent part of the evening opening a bank account and getting used to its online services. Hope that tomorrow will be more productive. [Continue reading]

  • ATF talk at NYCBSDCon 2008

    NYCBSDCon 2008 will take place in New York City on October 11th and 12th.  Given that I am already in NYC and will still be here by that time, I submitted a presentation proposal about ATF.  I have just been notified that my proposal has been accepted and, therefore, I will be giving a talk on ATF itself and how it relates to NetBSD on one of those two days.  The conference program and schedule have not been published yet, though, so stay tuned.  Hope to see you there! :) [Continue reading]

  • Live@NYC: Day 11

    One more day and nothing special to say. Just that I tried to open a bank account and they require two different IDs, which I was not carrying. Any idea why that is? Let's hope I can open the account tomorrow... but even then, the transfer of the money I have in Spain to this account will not happen on the same day, so I'm not sure how I'll deal with the housing payment... Edit (23:10): Oh, and I just finished watching Dexter Season 1. Highly recommended! [Continue reading]

  • Live@NYC: Day 10

    Not much to say today other than I was too lazy to go running in the morning and that I have finally settled on an apartment. I will move in on Friday as long as I can figure out how to make the payment!  (Probably need to deal with money orders, because I can't really get enough cash.) Also, I think the time when I'm assigned a project at Google is approaching. I hope it will be soon. [Continue reading]

  • Live@NYC: Photos

    I've started uploading the photos of my stay in NYC to Picasa Web Albums. Feel free to take a look at my page! [Continue reading]

  • Live@NYC: Day 9

    Yet another exhausting day. After breakfast (which was pretty late today because I woke up late), I headed to the Apple Store on 5th Avenue. Instead of taking the subway, I walked all the way down through Central Park, which means 40 streets and a couple of avenues. Doing so was pretty nice, as the views in Central Park are amazing — which is why my camera had 93 photos on it when I got home. Lots of tourists along the way, though, and a rather funny sign.

    Once in the Apple Store, I went downstairs. Wow. The place is big and really nice. But it was crowded. Stayed there for quite a while, trying their gadgets... and I think I want a real iPod. You know, the Shuffle I have now is OK for running... but not for other "styles" of listening to music (e.g. at work, on the plane...). Even so, I'm not sure which one I should get: the Nano is cool for running (no hard disk), but the Classic can easily hold my whole music library. Then again, getting the Touch seems pointless; I'd rather get an iPhone 3G for that price and size.

    Out of the store, I felt hungry so I tried one of those typical hot dogs shown in virtually all movies and/or TV series recorded in NYC. Rather disappointing, though, because they are way too small... so when I got home I had to have a real lunch.

    After the hot dog, I walked to Rockefeller Plaza to see what it was and then entered the NBC Experience Store. I saw a couple of nice t-shirts from Friends: one said "We were on a break!" and the other "How YOU doin'?". I think I'll eventually get one of those. But I should hold back the compulsive-buying feeling I'm experiencing...

    And then I went to Times Square again to take some (many) photos. Unfortunately, my journey ended there because it started to rain. So I entered a shop, bought a NY t-shirt and returned home much earlier than I wanted. It was only 4pm. Had I waited half an hour or so, the rain would likely have stopped.

    As mentioned before, when I got home I had some more lunch, surfed the internet a bit and went running through Central Park, taking a different route than on the other days. Very, very nice. This is something I will miss a lot when I move to another apartment downtown :(

    After the running session, I had dinner and... I can't stay here! So... I went out for a walk to see if I could find any nice bar to have a drink at. Nothing. What a crappy neighborhood! Well, in fact I saw a couple of bars that seemed promising, but they were rather empty. Will try again another day.

    And finally, time to relax. Will now watch some Dexter and sleep. [Continue reading]

  • Live@NYC: Day 8

    Exhausting. Yes, that's the best word to describe today. I woke up early due to an unexpected phone call and went to have breakfast at some random place close to my current house. Then, and for the first time ever, I went to do the laundry. What a waste of time. Sure, I could have returned home while the washing machine and the dryer were working but, at 30 minutes each, it's difficult to do anything productive in those separate periods of time. I think that next time I'll just bring my clothes and let the people there do everything themselves so that I can pick up the clean clothes in the evening... sure, you have to pay for that service, but it's worth the time savings!

    Then I paid this week's rent and, after that, headed downtown to deal with housing. I had to see a room at 4pm, so I went a couple of hours in advance to walk around the neighborhood and have lunch somewhere. It took a while to settle on a lunch place, and I ended up in a, I guess, Mexican restaurant, where I ate a burrito. The place was very touristy (aka, not cheap) but the food was really, really good.

    Immediately after lunch, I went to see the room. This was in a 3-bedroom apartment located in Greenwich Village. Sincerely, the flat was quite disappointing (but most of them seem to be here in Manhattan), but the surroundings were, I think, excellent. Lots of bars and restaurants around, which certainly promises cool nightlife! If I took that place, I would share it with a couple of guys in their mid-twenties too, so they'd be good party-mates I guess.

    Later, I walked to Google to see how long it took to get there from the apartment. I didn't take the most optimal route, but it was only a 20-minute walk; with some experience, I'm sure it'd turn into a 15-minute walk. Why did I go to Google, you say? Well, to check my email and keep browsing Craigslist. (Side note: people here have a problem with air conditioning... it was so damn cold inside the building!)

    Anyway, from there I called another person who was offering a room (I had already exchanged some emails yesterday, so this was planned) and agreed that I'd see it today but late in the evening. OK, so I went for a walk to kill time and stopped by a supermarket to buy some food for breakfast. Then I walked back to Google because I had not heard from the girl (she was supposed to call me just an hour after I called her) and it was pretty late. From there I called again and, well, I had to wait until almost 10pm to be able to see the apartment because she got out of work late and still had to have dinner.

    Buuuut, the good thing is that I was able to see this other apartment, which is just one block away from Google and pretty close to 5th Avenue. I met the girl, a 28-year-old Mexican, and I think I finally have a place to stay. Quite a bit cheaper than the other option, cool roommate, and a lot closer to work! She will confirm tomorrow, but I guess (well, hope!) I'm done looking.

    And at last, I headed back home and ate something for dinner. And here I am now, blogging and "enjoying" one of those alcohol-free beers.

    So why was the day exhausting? I tracked my walking path on Google Earth and it was almost 10km long! All day walking and carrying my laptop on my back... so it's time for a good sleep. Finally, the jet lag is going away. [Continue reading]

  • Live@NYC: Days 6 and 7

    Wow.  I got some complaints today for not publishing day 6 on time!  Sorry, I was too tired to write something yesterday evening. So what did I do yesterday?  It was a pretty regular day, with probably two things worth noting. First, I have not been able to find housing yet, so I asked my current landlord if I could stay one more week in the place where I am now; fortunately, it worked, so now I have one more week to look for something else. And, second, when I got home I went for a (very) short walk to find a decent bar in the neighborhood. Nothing! I really want to move to some other place with more nightlife to explore... And what about today, you say? Well, some more work and, in the evening, I went for dinner plus a couple of drinks with several other interns. I finally tried one of those drinks with fancy names, colors and glasses — a strawberry daiquiri — and it was pretty good indeed! I think I was the oldest of the group, given that most of the interns around here are still taking undergraduate college courses... and it is really annoying to have to show one's ID in every single bar to get a drink served. Tomorrow I have to go see a couple of apartments and also do the laundry in one of those ugly places... I have never done it, so I will have to ask someone how the machines work and what the procedure is. I can't understand why most people don't have a washing machine at home!  Lack of space? And one last thing: if I keep up with the current reading "speed", I'll probably have a project assigned by next Wednesday. Really looking forward to it, as I want to start doing something cool and useful but... I don't know what I'll be able to tell you... [Continue reading]

  • Live@NYC: Day 5

    Today was a pretty cool day. Everything was like usual, which means running in Central Park in the morning and then going to work... but then, in the evening, several interns met up to go for some beers at some random bar downtown.  Had a pretty nice time there and met several more people from Google! Not much else to say, other than it´s already late and I need to sleep quite a bit.  Oh, yes, I´m noticing that the English keyboard that I have at work is now confusing me while writing with my Spanish one! Ah, and I can´t forget to say that living alone is soooo cool... nobody is watching what I do. I can get home at any time I want without giving any explanations at all. Sure, I have to do some chores by myself, such as cleaning or the laundry, but those are bearable enough compared to the feeling of complete freedom. Really. [Continue reading]

  • Live@NYC: Day 4

    Got up early (I'm still jet lagged so this is not difficult at all) and went running in Central Park. What a nice jogging track around the lake! Also surprising was the number of people running at that hour of the day (7.30). Then I headed to Google early enough to have breakfast there and started to do some work. I'm starting to understand stuff, and it looks like my work will be exciting! Can't wait until tomorrow to get there again and continue learning. I really wanted to experience this feeling again.

    Then the typical stuff: had lunch, worked some more and even had dinner there. Having dinner early was good because I then went on foot up to Times Square. Just WOW. This place is small but amazing; it seems like a completely different city on its own. Will need to go back again with my camera, which I wasn't carrying today. At last, did some shopping at CVS and went home. Now it's time for blogging and watching yet another episode of Dexter while enjoying a beer. (Well, how do I dare call it a beer? It's alcohol-free. I picked it up incorrectly at the supermarket.)

    So, finally, today I think I'm starting to really understand the subway system. A colleague at work, Patrick, explained to me the rationale behind the local and express trains, which in fact seems like a pretty good idea. Tomorrow I'll try taking the 4 or the 5 to get to Google to see how long it takes. Let's hope the extra train switch doesn't make the trip last longer than the 6 on its own, or otherwise I'll probably miss breakfast.

    Also, at Google today, I was trying to figure out how the espresso coffee machine worked and asked a guy who was using it. While he was explaining the details of how to use it to me, I quickly noticed that particular English accent that Spanish people have (I do too, for sure). Guess I'll have someone to go partying with this weekend!

    And at last, I'm trying to figure out housing once again. I visited three different flats today. Let's hope I get an answer by tomorrow...

    Phew, this blog is starting to look like a diary... well, it will be good for remembering this nice experience in the future. [Continue reading]

  • Live@NYC: Day 3

    Today I spent most of the day at Google.  I took a tour of the offices, dealt with paperwork and chatted with my boss.  I don't know what things I can say about what I saw, so I will only mention one thing: the place is great.  Looks like it'll be hard to leave when the internship finishes! Later this evening, I went shopping at Whole Foods per a suggestion from my boss.  Everything in there seems pretty darn expensive but also of good quality.  And the place was incredibly crowded. Ah, and one more thing that surprised me about the city and that I forgot to mention yesterday: almost everyone who rides a bike wears a helmet.  That's "hard" to see in Barcelona (not unheard of, just not typical). Last random note: my annoying quest for housing continues... [Continue reading]

  • Live@NYC: Days 1 and 2

    Finally, my adventure in NYC started yesterday. I had to catch the flight at 14.50 but it got delayed by an hour.  In the end, the plane landed at around 19.15 local time (which means a damn lot of hours inside the plane).  Going through immigration and customs was boring but easy.

    Getting into Manhattan was quite a mess though.  Instead of taking a taxi, I decided to try to make my way through the subway system.  So I first took the AirTrain and, instead of getting to the E train, which is the one I needed, I ended up on the A train.  OK, looking at the map it was clear that the A could take much more time to get to the destination than the E, but it could bring me there anyway; so I waited for that train instead of going back.  Going through Brooklyn took quite a bit and, when the train got to Manhattan, something happened (it was announced out loud, but I couldn't understand it) and the train changed its route to another line. So I couldn't get to the station I had planned and decided to get off at another one to later take the 6 line. However, to take the 6 line in the correct direction, I had to go out of the metro system and reenter at some other place. At this point I was so bored (from carrying all the luggage) and stressed that I stopped a taxi.

    The thing is that I had to be at the apartment between 9pm and 10pm so that the landlady could give me the keys. As I was certainly going to be late, I tried to call her when the plane landed, but she didn't pick up the phone. As a result, when I got to the apartment, nobody was there. Uhh... scam?  No. Fortunately, I could check my email through my mobile phone and saw a mail telling me to go to another address to pick up the keys.  This mail also had her mobile number, and I noticed that I had written it down incorrectly... hence why nobody picked up before. So taxi downtown again, pick up the keys, and another taxi uptown. Expensive, yes, but I was not going to attempt the subway again carrying all my stuff.

    At around 11pm I got to the apartment, made the Internet connection work on my laptop and went to sleep. 30-hour-long day finished. (Note to self: I had wished multiple times to have longer days. Don't say that again!)

    As regards today, I have read the New York Times (pretty darn expensive), explored the surroundings of the apartment (located on the Upper East Side), checked the way to get to Google using the subway (not that difficult, now that I wasn't stressed), had lunch downtown, bought a local SIM card for my cell phone and went running in Central Park. Yay!

    Now, some things that have surprised me about the city so far.

    It is amazing how widespread and easy the use of credit cards is to pay for virtually anything and everywhere (even inside the taxis!).  Of course, it's also frightening that there is no ID check for the use of the credit card, so losing it is... uh... scary.  Also frightening is the way you spend money... virtual money is much easier to give away than physical money!

    Tipping is annoying. Come on, just tell me how much I owe and don't make me figure out how much to add to make it right. You know, taxi drivers, waiters... everyone expects tips and there are guidelines on how much you are supposed to leave.  I guess some taxi drivers got angry yesterday...

    The subway system is quite... "interesting". Everything seems very old, and the way it works is not too clear. Some stations don't open all day as others do, in some you have to change direction by going outside, some trains don't have any clear indication of what the next station is... so far I think Barcelona's system is much nicer. Except, maybe, for the MetroCard.

    Water is free. What do I mean by this, you say? This morning I sat down in a bar to get a coffee and, before I even ordered, the waiter served me a big glass of water with ice. Similarly, when having lunch, I also got water without having ordered it. And what's more, the waiter refilled the glass as it emptied.

    Speaking of glasses... mixed drinks seem to be common here. The lunch menu included a mixed drink (some with champagne and some with vodka), and I saw several bars in which the happy hour started as early as 4pm. Oh, and the names of these drinks are quite "funny". Guess I'll have to learn them and what they are made of.

    I don't know which language to use to talk to people. Many of them seem to understand both English and Spanish.

    The blocks in Manhattan are not as big as some people made me think.  You can, in fact, do trips that span multiple blocks on foot.

    And, at last, the weather is unbearably hot and humid. Not good when you sweat. [Continue reading]

  • Recent news

    Micro-blogging services are preventing me from writing real posts in my blog... so here comes a summary of recent happenings. I finished my master's degree in Computer Architecture, Networks and Systems a week ago, when I presented the master's thesis titled Task scheduling on the Cell processor. I'll try to post it somewhere online when I have a good internet connection. Then, I've spent this whole week at the ACACES Summer School, a meet-up of people from the HiPEAC project to take courses on several computer architecture topics and get to know new people who work in areas similar to yours. This meet-up happens at a campus in L'Aquila, a small town in Italy. I don't understand why some people at my university said that they did not want to come... because for me, it has been a great and fun week! And well, tomorrow I'm leaving ACACES and flying directly to New York City, to start my 4-month internship in the Google SRE group on Monday. [Continue reading]

  • Reinstalled Mac OS X in multiple partitions, again

    Last weekend, for some strange reason, I decided to dump all of the MBP's hard disk contents and start again from scratch. But this time I decided to split the disk into multiple partitions for Mac OS X, to avoid external fragmentation slowdowns as much as possible.

    I already did such a thing back when the MBP was new. At that time, I created a partition for the system files and another for the user data. However, that setup was not too optimal and, when I got the 7200RPM hard disk drive six months later, I reinstalled again on a single partition. Just for convenience.

    But external fragmentation hurts performance a lot, especially in my case because I need to keep lots of small files (the NetBSD source tree, for example) and files that get fragmented very easily (sparse virtual machine disks). These end up spread all over the physical disk, and as a result the system slows down considerably. I even bought iDefrag, and it does a good job at optimizing the disk layout... but the results were not as impressive as I expected.

    This time I reinstalled using the following layout:

    System: mounted on /, HFS+ case insensitive, 30GB.
    Users: mounted on /Users, HFS+ case insensitive, 50GB.
    Windows: not mounted, NTFS, 40GB.
    Projects: mounted on /Users/jmmv/Projects, HFS+ case sensitive, 30GB.

    Windows had to go before Projects so that the MBR partition table was constructed correctly; otherwise Windows failed to start after installation. The Projects partition holds those small files as well as the virtual machines. And Users keeps all the personal stuff such as photos, music and documents, which are mostly static.

    Using this layout, the machine really feels a lot faster. Applications start quickly, programs that deal with personal data such as iPhoto and iTunes load their libraries faster, and I don't have to deal with stupid disk images to keep things sequential on disk. However, the price to pay for such a layout is convenience, because now the free disk space is spread across multiple partitions. [Continue reading]

  • Blacklisting a device in HAL

    I have an old Aiptek mini PenCam 1.3 MPixels, identified by USB vendor 1276 and product 20554. I want to use this webcam for videoconferencing on the machine I am setting up for this purpose. This machine carries a Fedora 9 x86_64 installation, as already mentioned in the previous post.

    Whenever I connect the camera to the machine, HAL detects the new device and then GNOME attempts to "mount" it using gphoto2. The result is that I get a new device on the desktop referring to the camera, which is pretty nice, but it does not work at all: accessing it raises an unexpected error and thus the photos stored in the webcam cannot be seen.

    Anyway, I do not care about the photo capabilities of this camera, just about its ability to stream video. Hence, I installed the gspca and kmod-gspca packages from the livna repositories and, according to the gspca driver, my camera is, supposedly, fully supported.

    Unfortunately, I was not able to get the /dev/video device: it didn't exist, even with the kernel modules loaded. After some manual investigation on the console (so that gphoto2 couldn't get in the way), I found that the video device really does appear but vanishes as soon as gphoto2 attempts to access the camera. I suspect it is not possible to use the photo and video capabilities of the camera at once with the current drivers.

    So, how to avoid this problem? I had to tell HAL to omit this device, so that GNOME did not get any notification of its existence and therefore the interface did not attempt to mount the camera using gphoto2. However, there is little documentation on how to do this, so I had to resort to reading the files in /usr/share/hal/fdi/ and guessing what to do.

    I ended up creating a 10-broken-cameras.fdi file in /etc/hal/fdi/preprobe/ with the following contents:

        <?xml version="1.0" encoding="UTF-8"?>
        <deviceinfo version="0.2">
          <device>
            <match key="usb.vendor_id" int="1276">
              <match key="usb.product_id" int="20554">
                <merge key="info.ignore" type="bool">true</merge>
              </match>
            </match>
          </device>
        </deviceinfo>

    What this snippet does is match the camera device using some of the properties that are attached to it and, once there is a match, append the info.ignore property to the device description to tell HAL not to use this device any more. In order to set up the matching of a device, you can see the full list of properties of all device descriptors using the hal-device command. [Continue reading]

  • Desktop effects with an nVidia card and Fedora 9

    I'm setting up a machine at home to act as a videoconferencing station so that my family can easily talk to me during the summer, while I'm in NYC. This machine is equipped with a 64-bit Athlon processor and an nVidia GeForce 6200 PCI-Express video card. I decided to install Fedora 9 on this computer because this is the distribution I'm currently using everywhere (well, everywhere except on the Mac ;-). Plus it just works (TM), or mostly.

    The 3D desktop is not something that is really needed for daily work, but I wanted to try it. Unfortunately, I could not get the desktop effects to work the first time I tried. I enabled the livna repositories, installed the nVidia binary drivers and configured the X server to use them. However, telling the system to enable the Desktop Effects failed, and running glxinfo crashed with a "locking assertion failure" message.

    Googling a bit, I found a page mentioning that one has to run the livna-config-display command to properly configure the X server. I think I had not done this, so I just ran it manually and later restarted X. No luck.

    Fortunately, that same page also contained a snippet of the xorg.conf configuration file that looked like this:

        Section "Files"
            ModulePath "/usr/lib64/xorg/modules/extensions/nvidia"
            ModulePath "/usr/lib64/xorg/modules"
        EndSection

    Effectively, my configuration file was lacking the path to the nVidia extensions subdirectory. Adding those lines fixed the problem: now the server loads the correct GLX plugin, instead of the "generic" one that lives in the modules directory. I guess livna-config-display should have set that up automatically for me, but it didn't...

    The desktop effects are now working :-) Now I need to figure out why compiz feels so slow... especially because I have the same problem at work with an Intel 965Q video card. [Continue reading]

  • lib64 problems

    Linux distributions for the x86_64 platform have different approaches when it comes to the installation of 32-bit and 64-bit libraries. On a 64-bit platform, 64-bit libraries are required to run all the standard applications, but 32-bit libraries need to be available to provide compatibility with 32-bit binaries. In this post, I consider 64-bit applications to be the native ones and the 32-bit ones to be foreign.

    The two major approaches I have seen are:

    lib32 and lib64 directories, leaving lib to be just a symbolic link to the directory required by the native applications. This is the approach followed by Debian. The advantage of this layout is that the lib directory is the correct one for native applications. However, foreign applications that have built-in paths to lib, if these exist, will fail to work.

    lib and lib64 directories. This is the approach followed by Fedora. In this layout, the foreign applications which have built-in paths to lib will work just fine, but the native applications have to be configured explicitly to load libraries and plugins from within lib64.

    I have found so far two instances where the Fedora approach fails because native 64-bit applications hardcode the lib name in some places, instead of using lib64. One of these was the NetworkManager configuration files, which had an incorrect setup for the OpenVPN plugin, and it failed to work. This issue has already been fixed in Fedora 9. The other problem was in gnome-compiz-manager, where the application tries to load plugins from the lib directory but, as it is a 64-bit binary, it fails due to a bitness mismatch; a sketch of this failure mode follows below. This has been reported but is not yet fixed upstream. I'm sure several other similar problems remain to be discovered.

    I personally think that the Debian approach is more appropriate because it seems weird that all standard system directories, such as bin or libexec, contain 64-bit binaries but just one of them, lib, is 32-bit specific.

    As a side note, NetBSD follows a slightly different approach: lib contains 64-bit libraries and lib32, if installed at all, contains the 32-bit ones. [Continue reading]
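
    As a rough illustration of that failure mode (my own sketch, not code from any of the projects mentioned; the plugin path and the LIBDIR macro are made up for the example): a 64-bit program that hardcodes lib when building a plugin path will, on a Fedora-style layout, try to dlopen a 32-bit library and fail with a bitness mismatch, whereas taking the directory from a build-time setting avoids the problem.

        // plugin_load.cpp: sketch of loading a plugin from a configurable
        // library directory.  LIBDIR would normally be injected by the build
        // system (e.g. -DLIBDIR='"/usr/lib64"'); the fallback below is the
        // kind of hardcoding that breaks on a lib/lib64 layout.
        #include <dlfcn.h>
        #include <cstdio>
        #include <string>

        #ifndef LIBDIR
        #define LIBDIR "/usr/lib"   // Wrong for 64-bit binaries on Fedora.
        #endif

        int main(void)
        {
            // Hypothetical plugin name; any shared object would do.
            const std::string path = std::string(LIBDIR) + "/myapp/plugin.so";

            void* handle = dlopen(path.c_str(), RTLD_NOW);
            if (handle == NULL) {
                // With a hardcoded /usr/lib, a 64-bit binary typically gets
                // an error along the lines of "wrong ELF class: ELFCLASS32".
                std::fprintf(stderr, "cannot load %s: %s\n",
                             path.c_str(), dlerror());
                return 1;
            }
            dlclose(handle);
            return 0;
        }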

  • Twitter and other news

    Don't know why but I finally succumbed to Twitter today, as if I did not have enough things to waste time on.  You can follow me under the jmmv nick. I just noticed this post comes more than a month after the last one; my apologies.  I do not have any free time these days to think about writing decent posts or to do anything else.  My current work is basically attending class, writing reports, reading papers and going to the gym (this last thing only when possible). One of the things that took a lot of my time recently was the writing of a paper for WIOSCA 2008, and I have just been notified of its acceptance.  Most likely you won't see me there though. Oh, and it's definitive.  I will be interning at Google NYC from late July to late October this year!  Extremely impatient for July to arrive.  I will be joining the Site Reliability Engineering team. [Continue reading]

  • Lost 4x05

    Watched episode 5 from season 4, titled The Constant, yesterday night.  As a couple of friends put it: "Best. Episode. Ever."  Let's hope this trend doesn't end here! 3 more left to catch up on. [Continue reading]

  • Back to Stone Age

    For a rather long while I had been able to avoid using the Subversion services offered by my research group, even though they are omnipresent. But today, this lucky streak ended. I have been "forced" to use one of these devilish repositories to add some of my stuff. Using this goes against my "principles", as a colleague said. If you don't know it, Subversion is a centralized version control system.  Linear history, the non-transparent way to back up the master server, primitive merging interfaces and, the worst thing of all, the need to access the network for every single operation are unbearable facts. Using a centralized VCS is like going back in time a million years. (Oh, excuse me, a million is too few.) I hate it!  I recently went on a trip and had no Internet access either on the plane or at the hotel; do you know how cool it was to still have full access (not just the working copy, that is) to my code, documents and everything else?  And even if you have Internet access, can you imagine how fast you can work without having to wait for the network? Well, I can't really blame the administrators. As far as I can tell, they are not too familiar with VCSs and, when making a decision, they just went for what is everywhere, which unfortunately is Subversion. Everybody is making that mistake in this department and university. Let's see when I will have some free time to prepare a presentation about DVCSs (including Monotone as a case study) and give it to the whole department.  Given today's events, I should do this as soon as possible. Administrators, I know you are reading this.  Don't take it the wrong way! ;-) [Continue reading]

  • NetBSD talk at Isla Cristina

    Yesterday night, I got back from the "I Jornadas Tecnológicas Isla Cristina", a small technological conference organized at Isla Cristina, a little town in Huelva, Spain. The main organizers were the teachers of a local technical school (the IES Padre José Miravent), and they invited me to give a talk about NetBSD development.  I will publish the slides soon, but I have to warn you that you will not like the source format, aka PowerPoint. Being part of the university personnel, I was given a copy of Office 2008 for Mac and I wanted to give it a serious try before judging it.  It is certainly more powerful (or easier to use) than OpenOffice Impress, but it is also a lot slower; I don't know what they have done there, but the application feels really, really sluggish. Anyway, back to the point of the conference.  It was great and surpassed all the expectations I had.  The organization was excellent, the people were very nice, the food was (very) abundant and the talks were interesting (with a couple of exceptions).  What else could you ask for? As a point of fact, there were around 300 registered people, and I guess around 100 of them came to my talk (it was first thing in the morning); that's a much bigger audience than I have ever had before, and it was really exciting.  I hope the listeners enjoyed it as much as I did. The only thing I regret is not staying there one more day (after the conference) so I could go around the town and take some cool photos.  Maybe next year :-)  Ah, speaking of next year: if you get invited to give a talk, don't think twice and accept the offer! [Continue reading]

  • New Apple keyboard

    I recently went from this: To this: The reason for the change was that the old keyboard was not comfortable any more after around two years of usage. I think that the old keyboard model (in general, not the specific unit I had) developed problems after some months of intensive use: its keys lost the smooth pressure feeling they once had. (Maybe adding some kind of oil beneath them might fix this problem, though, as the keys can be easily detached from the keyboard.) Due to that, it was becoming extremely hard to type on that keyboard without mistakes. Plus, I have lately become used to laptop-style keyboards: short and soft keys. The new keyboard model feels nice so far. It surely is, basically, a desktop-sized laptop keyboard, as its keys are very short and soft. But overall I like how I type on it, and my error ratio has dropped back to almost zero again :-) If you are dubious about buying this keyboard, give it a try! [Continue reading]

  • Problems booting Debian on the PS3

    I had been running Fedora 8 for a long while on my PlayStation 3, but I got sick every time I had to run a yum update: that process was very slow. Furthermore, I prefer Debian as a Linux distribution due to its administration utilities and strong policies, so I thought I'd give it a second chance on my PS3. Second? Yes, I already installed it a while ago, but the fact that the Cell SDK is only packaged for Fedora made me switch. Anyway, as I'm not doing as much Cell development at home as I thought, I don't care any more and want to install something I'll enjoy. Eventually that'll be NetBSD... So I installed Debian 4.0 using the experimental installer. The process went flawlessly, but I chose to do manual partitioning: I created a 1GB partition for swap as /dev/ps3da1 and a 9GB ext3 partition for the root file system as /dev/ps3da2, in that order. Then, after installation, I was greeted by an unbootable system: for some reason, kboot failed to mount /dev/ps3da2 as its root file system and hence it couldn't parse its kboot.conf. Salvaging that situation was easy though: just mount that file system by hand, check the contents of /etc/kboot.conf and manually enter the command on the command line. But why was that failing? I already hit this problem some months ago with my first attempt at installing Debian. And, for what it's worth, things worked fine in Fedora 8, which had the boot loader configuration files in the same place. After Googling a bit I found no answer, so I opted to read kboot's init code. And there was the explanation: when looking for a root file system, it checks whether the file system is marked as active and skips it if not. The solution was to install the pmac-fdisk package under Debian, access the partition table of /dev/ps3da, mark /dev/ps3da2 as active, save the changes and reboot to see the system boot automatically. [Continue reading]

  • Google Summer of Code 2008 and NetBSD

    Google has launched the Summer of Code program once again this year, and NetBSD is a mentoring organization for the fourth time, as announced in a netbsd-announce post. Unless things go very wrong in the following days, I will not take part this year as a student because I will be interning at Google SRE during the Summer! However, I will try to become a mentor for the "Convert all remaining regression tests to ATF" project. If you are looking for an interesting idea to apply for, this is a good one! Why?
      - It will let you get into NetBSD internals in almost all areas of the system: you'll need to understand how the source tree is organized, how to add new components to it (because tests are, in almost all respects, regular programs), how the current pieces of the system interact with each other...
      - You will need to gain knowledge in some areas (such as the kernel or the libraries) to be able to port tests from the old framework (if it deserves that name ;-) to the new one and, if you are really up to it, even add new tests for functionality that is currently uncovered by the test suite. But adding new tests is something you will not be required to do, because the sole task of migrating the existing ones is a huge task already.
      - Get involved in ATF's development because, as you study the existing test cases and their requirements, you will most likely find that it lacks some important functionality to make things really straightforward.
      - And, of course, make an invaluable contribution to the NetBSD operating system. Having a public test suite with high coverage means that the system will gain quality. Yes, you will most likely uncover bugs in many areas of the system and give them enough exposure so that someone else may fix them.
    Note that this project is really a Summer of Code project: it does not have a long design phase of its own, so, once you have got used to the system and ATF, you'll just code and immediately make useful contributions. In the past, projects that involved a heavy design phase did not work out well because, in the end, the student did not finish the code on time. So... don't hesitate to apply! I'm looking forward to seeing your applications for this project :-) [Continue reading]

  • Software bloat, 2

    A long while ago — just before buying the MacBook Pro — I already complained about software bloat. A year and two months later, it is time to complain again. I am thinking of renewing my MacBook Pro, assuming I can sell this one for a good price. The reasons for this are to get slightly better hardware (more disk, a better GPU and maybe 4GB of RAM) and software updates. The problem is: if I am able to find a buyer, I will be left without a computer for some days, and that's not a good scenario. I certainly don't want to order the new one without being certain that I will be paid enough for the current one. So yesterday I started assembling some old components I had lying around, aiming at having an old but functional computer to work with. But today I realized that I also had the PlayStation 3 with Fedora 8 already installed, and that it'd be enough to use as a desktop for a week or so. I had trimmed down the installation to the bare minimum so that it'd boot as fast as possible and leave free resources for testing Cell-related stuff. But if I wanted to use the PS3 as a desktop, I needed, for example, GNOME. Ew. Doing a yum groupinstall "GNOME Desktop Environment" took quite a while, and not because of the network connection. But even if we leave that aside, starting the environment was painful. Really painful. And Mono was not there, at all! It is amazing how unusable the desktop is with "only" 256MB of RAM; the machine is constantly going to swap, and the disk being slow does not help either. I still remember the days when 256MB was a lot, and desktop machines were snappy enough with only half of that, or even less. OK, so GNOME is too much for 256MB of RAM. I am now writing this from the PS3 itself running WindowMaker, which unfortunately does not solve all the problems — the biggest one being that it is not a desktop environment. Firefox also requires lots of resources to start, and doing something else in the background still makes the machine use swap. (Note that I have disabled almost all of the system services enabled by default in Fedora, including SELinux.) If I finally sell my MBP, this will certainly be enough for a few days... but it's a pity to see how unusable it is. (Yeah, by today's standards, the PS3 is extremely short on RAM, I know, but GNOME used to run quite well with this amount of RAM just a few years ago.) [Continue reading]

  • ATF's error handling in C

    One of the things I miss a lot when writing the C-only bits of ATF is an easy way to raise and handle errors. In C++, the normal control flow of the execution is not disturbed by error handling because any part of the code is free to report error conditions by means of exceptions. Unfortunately, C has no such mechanism, so errors must be handled explicitly. At the very beginning I just made functions return integers indicating error codes, reusing the standard error codes of the C library. However, that turned out to be too simple for my needs and, when a function's natural return value was not an integer, it was not easily applicable. What I ended up doing was defining a new type, atf_error_t, which must be returned by all functions that can raise errors. This type is a pointer to a memory region that can vary in contents (and size) depending on the error raised by the code. For example, if the error comes from libc, I mux the original error code and an informative message into the error type so that the original, non-mangled information is available to the caller; or, if the error is caused by the user's misuse of the application, I simply return a string that contains the reason for the failure. The error structure contains a type field that the receiver can query to know which specific information is available and, based on that, cast down the structure to the specific type that contains the detailed information. Yes, this is very similar to how you work with exceptions. When there is no error, a null pointer is returned. This way, checking for an error condition is just a simple pointer check, which is no more expensive than an integer check. Handling error conditions is more costly, but given that they are rare, it is certainly not a problem. What I don't like too much about this approach is that any other return value must be passed back as an output parameter, which makes things a bit confusing. Furthermore, robust code ends up cluttered with error checks all around, given that virtually any call to the library can produce an error somewhere. This, together with the lack of RAII modeling, complicates error handling a lot. But I can't think of any other way that would be simpler and, at the same time, as flexible as this one. Ideas? :P More details are available in the atf-c/error.h and atf-c/error.c files. [Continue reading]
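
    For illustration only, here is a minimal sketch of this kind of error-as-pointer pattern in C. All the names (my_error_t, my_libc_error, parse_positive) are made up for this example and do not match ATF's actual API, which lives in atf-c/error.h and atf-c/error.c.

        #include <errno.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Hypothetical error type: a NULL pointer means "no error". */
        struct my_error {
            const char *type;   /* e.g. "libc" or "usage" */
            int errnum;         /* only meaningful for "libc" errors */
            char msg[128];
        };
        typedef struct my_error *my_error_t;

        static my_error_t
        my_libc_error(int errnum, const char *msg)
        {
            my_error_t err = malloc(sizeof(*err));
            if (err == NULL)
                abort();    /* out of memory while reporting an error */
            err->type = "libc";
            err->errnum = errnum;
            snprintf(err->msg, sizeof(err->msg), "%s", msg);
            return err;
        }

        /* A function that can fail returns my_error_t; its real result goes
         * out through an output parameter. */
        static my_error_t
        parse_positive(const char *text, long *result)
        {
            char *end;
            long value = strtol(text, &end, 10);
            if (*end != '\0' || value <= 0)
                return my_libc_error(EINVAL, "not a positive number");
            *result = value;
            return NULL;    /* success: the caller only checks for NULL */
        }

    A caller then writes something along the lines of: my_error_t err = parse_positive(arg, &value); if (err != NULL) { inspect err->type, report, free(err); }. The success path stays as cheap as a pointer comparison, while the (rare) failure path pays for allocating and inspecting the structure.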

  • Rewriting parts of ATF in C

    I have spent part of the past week and this whole weekend working on a C-only library for ATF test programs. An extremely exhausting task. However, I wanted to do it because there is reluctance in NetBSD to write test programs in C++, which is understandable, and delaying it any longer would have made things worse in the future. I ran into this situation myself some days ago when writing tests for very low-level stuff; using C++ there felt clunky, but it was still possible of course. I have had to reimplement lots of things that come for free in any other, higher-level (not necessarily high-level) language. This includes, for example, a "class" to deal with dynamic strings, another one for dynamic linked lists and iterators, a way to propagate errors until the point where they can be managed... and I have spent quite a bit of time debugging crashes due to memory management bugs, something that I rarely encountered in the C++ version. However, the new interface is, I believe, quite neat. This is not because of the language per se, but because the C++ interface has grown "incorrectly". It was the first code in the project and it shows. The C version has been written from the ground up with all the requirements known beforehand, so it is cleaner. This will surely help in cleaning up the C++ version later on, which cannot die anyway. The code for this interface is in a new branch, org.NetBSD.atf.src.c, and will hopefully make it into ATF 0.5: it still lacks a lot of features, which is why it is not on mainline yet. Ah, the joys of a distributed VCS: I have been able to develop this experiment locally and privately until it was decent enough to be published, and now it is online with all its history available! From now on, C++ use will be restricted to the ATF tools inside ATF itself, and to those users who want to use it in their own projects. Test cases will be written using the C library, except for those that unit-test C++ code. [Continue reading]
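
    To give an idea of the kind of plumbing this involves, here is a tiny, self-contained sketch of a dynamic string "class" in plain C. It is a simplified stand-in written for this post's sake, not ATF's actual code, and its error handling is reduced to plain return codes.

        #include <stdlib.h>
        #include <string.h>

        /* Minimal growable string; a stand-in for what a real library needs. */
        struct dynstr {
            char *data;
            size_t len;
            size_t cap;
        };

        static int
        dynstr_init(struct dynstr *s)
        {
            s->cap = 16;
            s->len = 0;
            s->data = malloc(s->cap);
            if (s->data == NULL)
                return -1;
            s->data[0] = '\0';
            return 0;
        }

        static int
        dynstr_append(struct dynstr *s, const char *text)
        {
            size_t need = s->len + strlen(text) + 1;
            if (need > s->cap) {
                /* Double the capacity until the new contents fit. */
                size_t newcap = s->cap;
                while (newcap < need)
                    newcap *= 2;
                char *newdata = realloc(s->data, newcap);
                if (newdata == NULL)
                    return -1;
                s->data = newdata;
                s->cap = newcap;
            }
            memcpy(s->data + s->len, text, strlen(text) + 1);
            s->len = need - 1;
            return 0;
        }

        static void
        dynstr_fini(struct dynstr *s)
        {
            free(s->data);
        }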

  • C++: Little teaser about std::set

    This does not build. Can you guess why? Without testing it?

        std::set< int > numbers;
        for (int i = 0; i < 10; i++)
            numbers.insert(i);
        for (std::set< int >::iterator iter = numbers.begin();
             iter != numbers.end(); iter++) {
            int& i = *iter;
            i++;
        }

    Update (23:40): John gave a correct answer in the comments. [Continue reading]

  • BenQ RMA adventures, part 2

    My monitor is back from service! It was picked up on January 30th, and it has been returned today, just 6 days later (4 work days). Note that the technical service's office is located in Portugal, at the opposite side of the peninsula. And best of all, the monitor is fixed: the firmware was updated, so I can now disable the Overscan feature and get a perfect 1:1 pixel mapping on the HDMI input. Kudos to BenQ's RMA department for such quick and effective service! [Continue reading]

  • ATF 0.4 released

    I'm pleased to announce that the fourth release of ATF, 0.4, just saw the light. The NetBSD source tree has also been updated to reflect this new release. For more details please see the announcement. [Continue reading]

  • Home-made build farm

    I'm about to publish the 0.4 release of ATF. It has been delayed more than I wanted due to the difficulty in getting time-limited test cases working and due to my laziness in testing the final tarball on multiple operating systems (because I knew I'd have to fight portability problems). But finally, this weekend I have been setting up a rather automated build farm at home, which is composed so far of 13 systems. Yes, 13! But do I really have that many machines? Of course not! Ah, the joys of virtualization. What I have done is set up a virtual machine for each system I want to test using VMware Fusion. If possible, I configure both 32-bit and 64-bit versions of the same system, because different problems can arise in each. Each virtual machine has a builder user, and that user is configured to allow passwordless SSH logins by using a private key. It also has full sudo access to the machine, so that it can run root-only tests and shut down the virtual machine. As for installed software, I only need a C++ compiler, the make tool and pkg-config. Then I have a script that, for a given virtual machine:
      1. Starts the virtual machine.
      2. Copies the distfile into the virtual machine.
      3. Unpacks the distfile.
      4. Configures the sources.
      5. Builds the sources.
      6. Installs the results.
      7. Runs the build-time tests.
      8. Runs the install-time tests as a regular user.
      9. Runs the install-time tests as root.
      10. Powers down the virtual machine.
    Ideally I should also run some different combinations of compilers inside each system (for example, SUNpro and GCC in Solaris) and make tools (BSD make and GNU make). I'm also considering replacing some of the steps above with a simple make distcheck. I take a log of the whole process for later manual inspection. This way I can simply call this script for all the virtual machines I have and get the results of all the tests for all the platforms. I still need to do some manual testing on non-virtual machines such as my PS3 or Mac OS X, but these are minor (and yes, they should also be automated). Starting and stopping the virtual machines was the trickiest part, but in the end I got it working. Now I would like to adapt the code to work with other virtual machines (Parallels and qemu), clean it up and publish it somehow. Parts of it certainly belong inside ATF (such as the formatting of all logs into HTML for later publication on a web server), and I hope they will make it into the next release. For the curious, I currently have virtual machines for: Debian 4.0r2, Fedora 8, FreeBSD 6.3, NetBSD-current, openSUSE 10.2, Solaris Express Developer Edition 2007/09 and Ubuntu Server 7.10. All of them have 32-bit and 64-bit variants except for Solaris, which is only 64-bit. Setting all of them up manually was quite a tedious and boring process. And the testing process is slow: each system takes around 10 minutes to run through the whole "start, do stuff, stop" process, and SXDE almost doubles that. In total, more than 2 hours to do all the testing. Argh, an 8-way Mac Pro would be so sweet now :-) [Continue reading]

  • unlink(2) can actually remove directories

    I have always thought that unlink(2) was meant to remove files only but, yesterday, SunOS (SXDE 200709) proved me wrong. I was sanity-checking the source tree for the imminent ATF 0.4 release under this platform, which is always scary, and the tests for the atf::fs::remove function were failing — but only when run as root. The failure happened in the cleanup phase of the test case, in which ATF attempts to recursively remove the temporary work directory. When it attempted to remove one of the directories inside it, it failed with an ENOENT error, which in SunOS may mean that the directory is not empty. Strangely, when inspecting the left-over work tree, that directory was indeed empty, yet it could not be removed with rm -rf nor with rmdir. The manual page for unlink(2) finally gave me the clue to what was happening: If the path argument is a directory and the filesystem supports unlink() and unlinkat() on directories, the directory is unlinked from its parent with no cleanup being performed. In UFS, the disconnected directory will be found the next time the filesystem is checked with fsck(1M). The unlink() and unlinkat() functions will not fail simply because a directory is not empty. The user with appropriate privileges can orphan a non-empty directory without generating an error message. The solution was easy: as my custom remove function is supposed to remove files only, I added a check before the call to unlink(2) to ensure that the path name does not point to a directory. Not the prettiest possibility (because it is subject to race conditions, even though it is not critical), but it works. [Continue reading]
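
    A minimal sketch of that kind of guard, assuming a hypothetical remove_file() helper rather than ATF's actual atf::fs::remove implementation:

        #include <sys/stat.h>
        #include <errno.h>
        #include <unistd.h>

        /* Remove a file, refusing to touch directories even on systems where a
         * privileged unlink(2) would happily orphan them (e.g. UFS on SunOS). */
        static int
        remove_file(const char *path)
        {
            struct stat sb;

            if (lstat(path, &sb) == -1)
                return -1;
            if (S_ISDIR(sb.st_mode)) {
                errno = EPERM;  /* the usual error for unlinking a directory */
                return -1;
            }
            /* Still racy: the path could be replaced by a directory between
             * the lstat() and the unlink(), as noted above. */
            return unlink(path);
        }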

  • Linux is just an implementation detail

    You can't imagine how happy I was today when I read the interview with KDE 4 developer Sebastian Kuegler. Question 6 asks him: "6. Are there any misconceptions about KDE 4 you see regularly and would like to address?" And around the middle of the answer, he says: "Frankly, I don’t like the whole concept of the “Linux Desktop”. Linux is really just a kernel, and in this case very much a buzzword. Having to mention Linux (which is just a technical implementation detail of a desktop system) suggests that something is wrong. Should it matter to the user if he runs Linux or BSD on his machine? Not at all. It only matters because things just don’t work so well (mostly caused by driver problems, often a matter of ignorance on some vendor’s side)." Thanks, Sebastian. I couldn't have said it better. What virtually all application developers are targeting —or should be targeting— is KDE or GNOME. These are the development platforms; i.e. what provides the libraries and services required for easy development and deployment. It doesn't make any sense to "write a graphical application for Linux", because Linux has no standard graphical interface (unless you mean the framebuffer!) and, again, Linux is just a kernel. I think I have already blogged about the problems of software redistribution under Linux... will look for that post and, if it is not there, it is worth a future entry. [Continue reading]

  • Interview on NetBSD 4

    I'm happy to have been part of the "Waving the flag: NetBSD developers speak about version 4.0" interview.  Enjoy! [Continue reading]

  • BenQ RMA adventures, part 1

    A couple of weeks ago, I called BenQ's RMA service to ask for a fix for my new FP241W Z. I have problems with the HDMI digital input: the monitor crops part of the image on each side and makes it slightly bigger to fill the whole screen. It turns out that there is a firmware upgrade for this specific monitor that adds a configuration option to turn off overscan, which should effectively resolve this problem. So I called them to get the firmware in my monitor updated. The operator was very polite and helpful. After asking for the details of the monitor and the problem I was having, she confirmed that, indeed, the problem I was describing was due to an outdated firmware and that they'd fix it for free. I gave the necessary data and then —after 20-something minutes— they told me that a carrier would come to my location and pick the monitor up for delivery to the technical center. But first, the carrier had to contact me to set up a date for the pickup. Two weeks later, no one had called me or sent me any email. In fact, they had told me that I should be receiving an email, so I simply waited patiently because, when it comes to email, things can be very slow. I even thought that I might have deleted that message as spam. But two weeks was already too much. So yesterday afternoon, I called the RMA service again and explained the situation. They promised to fix it that evening. And indeed, they did. They called me this morning asking for another detail about the monitor, so I assumed things were being dealt with. And this afternoon, at around 15.30, I received a funny SMS from the carrier that had to pick the monitor up. They basically said "We will come to pick up the package on the 30th (aka today). It will be between 9.00 to 12.00 or between 12.00 to 19.00.". Great! Could it be any more imprecise? First of all, the mentioned times cover practically the whole day. Second, most of that time had already passed. And third, I was getting the notice the same day they were coming. I immediately rang home and asked my mother if somebody had come by. As they had not shown up yet, I returned home quickly, packed the monitor up in a hurry and, soon after, the carrier arrived. It was just a matter of luck that I was able to deal with this on time! This isn't really a critique; I just want to explain how the process is going as things progress. The BenQ service has been very responsive and polite so far, and I think I can only blame the carrier service. Let's hope things move fast and well from now on. [Continue reading]

  • A request to virtualization software developers

    Here is a request for a feature I have not yet seen in any virtualization application — I have used Parallels Desktop 2, VMware Fusion 1.1 and another product I can't speak of yet — that I'd love to have. It'd make things so much easier for me... So here is an open request, just in case one of the developers of the free alternatives (e.g. VirtualBox) reads it and decides to get ahead of the competition by implementing it. Before explaining my feature request, let's consider you have a server on your network on which you run multiple virtual machines (VMs) for whatever purpose. These machines are exported to the network using bridged networking so that other computers in the network can access them transparently, as if they were physical computers. To make this setup trivial, you have a DHCP server on your network that hands out static IP addresses to these virtual servers, and you also have a DNS server that maps these addresses to static names. This way, users on your network can access the virtual machines by simply spelling out their host names. Now let's move to the laptop world, where you are connected to different networks all the time (e.g. at home or at work) or to no network at all. Here I will assume that you will want to access the VMs exclusively from your laptop. In this case, you should not use bridged networking because you'd be exporting all your virtual machines to a possibly untrusted network. And you cannot rely on the external DHCP or DNS servers to deal with static IP addresses or host names for you, because in many situations you have no control over them. Your best bet is to use shared networking to configure your VMs (or host-only networking if they needn't access the outside world). But if you do so, your VMs will get random IP addresses because you have no control over the DHCP server bundled with the virtualization application. And as a result, you cannot assign host names to them. As a workaround, you can manually configure each operating system running in a VM to have a static IP (bypassing DHCP), then add an entry to the host's /etc/hosts file to assign a host name to the guest OS and, at last, add an entry to the guest's /etc/hosts file to assign a host name to the host OS. Which is painful. In my ideal world, virtualization applications would have the ability to fine-tune the bundled DHCP server to hand out specific addresses to the virtual machines and a way to specify DNS host names for them, all from the configuration interface and without having to touch any configuration file in the host system (nor in the guest, for that matter). E.g. add a little configuration box for the IP address and host name of the guest OS alongside the box that already exists to configure the MAC address. Then have the bundled DHCP server hand out the appropriate entries to the guests, add an entry to the host's /etc/hosts and provide a virtual DNS server to the guests so that they can resolve each other's names. A use case for this? I have two VMs that I carry around on my MacBook Pro, that I use very frequently and that I do not want to expose to the outside network at all. One is a Fedora 8 installation and the other a NetBSD one. I start them up from the graphical interface and then access them through SSH exclusively. But in order to reliably use SSH, I need to do the above manual steps to set up a host name for them, or otherwise using SSH is a pain. I am also trying to set up an automatic build farm for ATF (composed probably of 10-15 VMs), and the need to set all these details manually is extremely boring. [Continue reading]

  • Testing the process-tree killing algorithm

    Now that you know the procedure to kill a process tree, I can explain how the automated tests for this feature work. In fact, writing the tests was the hardest part, due to all the race conditions that popped up and to my rusty knowledge of tree algorithms. Basically, the testing procedure works like this:
      1. Spawn a complete tree of processes based on a configurable degree D and height H.
      2. Make each child tell the root process its PID so that the root process can have a list of all its children, be they direct or indirect, for control purposes.
      3. Wait until all children have reported their PID and are ready to be killed.
      4. Execute the kill-tree algorithm on the root process.
      5. Wait until the children have died.
      6. Check that none of the PIDs gathered in point 2 are still alive (they could be, reparented to init(8), if they were not properly killed). If some are, the recursive kill failed.
    The tricky parts were 3 and 5. In point 3, we have to wait until all children have been spawned. Doing so for direct children is easy because we spawned them, but indirect ones are a bit more difficult. What I do is create a pipe for each of the children that will be spawned (because, given D and H, I can know how many nodes there will be) and then each child uses the appropriate pipe to report its PID to the parent when it has finished initialization and is thus ready to be safely killed. The parent then just reads from all the pipes and gets all the PIDs. But what do I mean by safely killed? Preliminary versions of the code just ran through the children's code and then exited, leaving them in zombie status. This worked in some situations but broke in others. I had to change this to block all children in a wait loop and then, when killed, take care to do a correct wait for all of their respective children, if any. This made sure that all children remained valid until the attempt to kill them. In point 5, we have to wait until the direct children have returned so that we can be sure that the signals were delivered and processed before attempting to see if there is any process left. (Yes, if the algorithm fails to kill them we will stall at that point.) Given that each child can be safely killed as explained above, this wait does a recursive wait along the whole process tree, making sure that everything is cleaned up before we do the final checks for non-killed PIDs. This all sounds very simple and, in fact, looking at the final code it is. But it certainly was not easy to write, basically because the code grew in ugly ways and the algorithms were much more complex than they ought to be. [Continue reading]
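
    As a rough illustration of the PID-reporting trick (point 2), here is a stripped-down sketch for a single child. In the real test there is one pipe per node, and the mechanism matters because the root does not otherwise know the PIDs of its indirect descendants; none of these names come from ATF itself.

        #include <sys/types.h>
        #include <sys/wait.h>
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int
        main(void)
        {
            int fds[2];
            pid_t child, reported;

            if (pipe(fds) == -1)
                abort();

            child = fork();
            if (child == -1)
                abort();
            if (child == 0) {
                /* Child: report our PID through the pipe and then block, so
                 * that we remain a valid target until we are killed.  (Here
                 * the parent already knows our PID from fork(); the pipe only
                 * becomes essential for grandchildren further down the tree.) */
                pid_t self = getpid();
                close(fds[0]);
                write(fds[1], &self, sizeof(self));
                close(fds[1]);
                for (;;)
                    pause();
            }

            /* Parent: learn the child's PID from the pipe, kill it, reap it
             * and verify that it is really gone. */
            close(fds[1]);
            read(fds[0], &reported, sizeof(reported));
            close(fds[0]);

            kill(reported, SIGKILL);
            waitpid(reported, NULL, 0);
            printf("pid %ld gone: %s\n", (long)reported,
                   kill(reported, 0) == -1 ? "yes" : "no");
            return EXIT_SUCCESS;
        }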

  • How to kill a tree of processes

    Yesterday I mentioned the need for a way to kill a tree of processes in order to effectively implement timeouts for test cases. Let's see how the current algorithm in ATF works:
      1. The root process is stopped by sending a SIGSTOP to it so that it cannot spawn any new children while being processed.
      2. Get the whole list of active processes and filter it to keep only those that are direct children of the root process.
      3. Iterate over all the direct children and repeat from 1, recursively.
      4. Send the real desired signal (typically SIGTERM) to the root process.
    There are two major caveats in the above algorithm. First, point 2: there is no standard way to get the list of processes of a Unix system, so I have had to code three different implementations so far for this trivial requirement: one for NetBSD's KVM, one for Mac OS X's sysctl kern.proc node and one for Linux's procfs. Then the worst one comes in point 4. Some systems (Linux and Mac OS X so far) do not seem to allow one to send a signal to a stopped process. Well, strictly speaking they allow it, but the second signal seems to be simply ignored, whereas under NetBSD the process' execution is resumed and the signal is delivered. I do not know which behavior is right. If we cannot send the signal to the stopped process, we can run into a race condition: we have to wake it up by sending a SIGCONT and then deliver the signal, but in between these two events the process may have spawned new children that we are not aware of. Still, being able to send a signal to a stopped process does not completely resolve the race condition. If we are sending a signal that the user can reprogram (such as SIGTERM), that process may fork another one before exiting, and thus we'd not kill this one. But... well... this is impossible to resolve with the existing kernel APIs as far as I can tell. One solution to this problem is to kill a timed-out test by using SIGKILL instead of SIGTERM. SIGKILL would work in any case because it means die immediately, without giving the process a chance to mess with it. Therefore SIGCONT would not be needed at all — because you can simply kill a stopped process and it will die immediately as expected — and the process would not have a chance to spawn any more children after it had been stopped. Blah, after writing this I wonder why I went with all the complexity of dealing with signals that are not SIGKILL... call it over-engineering if you want... [Continue reading]
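
    For illustration, here is a sketch of that stop-enumerate-recurse-kill sequence in C. list_direct_children() is a placeholder for the platform-specific enumeration (KVM, sysctl kern.proc, procfs), and the whole thing is a simplified sketch, not ATF's actual code.

        #include <sys/types.h>
        #include <signal.h>
        #include <stddef.h>

        /* Placeholder for the platform-specific part: fill 'children' with the
         * PIDs of the direct children of 'pid' and return how many there are.
         * Real implementations would use KVM on NetBSD, the sysctl kern.proc
         * node on Mac OS X and procfs on Linux.  This stub just pretends there
         * are none so that the sketch stays self-contained. */
        static size_t
        list_direct_children(pid_t pid, pid_t *children, size_t max)
        {
            (void)pid; (void)children; (void)max;
            return 0;
        }

        static void
        kill_tree(pid_t pid, int signo)
        {
            pid_t children[128];
            size_t i, n;

            /* 1. Freeze the root so it cannot spawn new children meanwhile. */
            kill(pid, SIGSTOP);

            /* 2-3. Enumerate its direct children and recurse into each one. */
            n = list_direct_children(pid, children, 128);
            for (i = 0; i < n; i++)
                kill_tree(children[i], signo);

            /* 4. Deliver the real signal.  On systems that do not act on a
             * signal sent to a stopped process, a SIGCONT is also needed,
             * which reopens the race; SIGKILL avoids the issue entirely. */
            kill(pid, signo);
            if (signo != SIGKILL)
                kill(pid, SIGCONT);
        }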

  • Implementing timeouts for test cases

    One of the pending to-do entries for ATF 0.4 is (was, mostly) the ability to define a timeout for a test case after which it is forcibly terminated. The idea behind this feature is to prevent broken tests from stalling the whole test suite run, something that is already needed by the factor(6) tests in NetBSD. Given that I want to release this version this coming weekend, I decided to work on this instead of delaying it because... you know, this sounds pretty simple, right? Hah! What I did first was to implement this feature for C++ test programs and add tests for it. So far, so good. It was indeed easy to do: just program an alarm in the test program driver and, when it fires, kill the subprocess that is executing the current test case. Then log an appropriate error message. The tests for this feature deserve some explanation. What I do is program a timeout and then make the test case's body sleep for a period of time. I try different values for the two timers and, if the timeout is smaller than the sleeping period, then the test must fail; otherwise there is a problem. The next step was to implement this in the shell interface, and this is where things got tricky. I did a quick and dirty implementation, and it seemed to make the same tests I added for the C++ interface pass. However, when running the bootstrap testsuite, it got stalled at the cleanup part. Upon further investigation, I noticed that there were quite a lot of sleep(1) processes running when the testsuite was stalled, and killing them explicitly let the process continue. You have probably noticed where the problem was already. When writing a shell program, you are forking and executing external utilities constantly, and sleep(1) is one of them. It turns out that, in my specific test case, the shell interpreter was just waiting for the sleep subprocess to finish (whereas in the C++ version everything happens in a single process). And killing a process does not kill its children. There you go. My driver was just killing the main process of the test case, but not everything else that was running; hence, it did not die as expected, and things got stalled until the subprocesses also died. Solving this was the fun part. The only effective way to make this work is to kill the test case's main process and, recursively, all of its children. But killing a tree of processes is not an easy thing to do: there is no system interface to do it, there is no portable interface to get a list of children and I'm still unsure whether this can be done without race conditions. I reserve the explanation of the recursive-kill algorithm I'm using for a future post. After some days of work, I've got this working under Mac OS X and have also got automated tests to ensure that it effectively works (which were the hardest part by far). But, as I foresaw, it fails miserably under NetBSD: the build was broken, which was easy to fix, but now it also fails at runtime, something that I have not diagnosed yet. Aah, the joys of Unix... [Continue reading]
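
    Here is a rough sketch of the single-process case described above: a deadline armed in the driver that kills the test case's subprocess when it fires. It only kills the direct child, which is exactly why a recursive kill of the whole process tree is needed for the shell interface, and none of this is ATF's literal code; the timeout value and names are made up.

        #include <sys/wait.h>
        #include <errno.h>
        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        static pid_t test_case_pid = -1;

        static void
        on_timeout(int signo)
        {
            (void)signo;
            if (test_case_pid > 0)
                kill(test_case_pid, SIGTERM);   /* note: only the direct child */
        }

        int
        main(void)
        {
            int status = 0;
            const unsigned int timeout = 3;     /* seconds; made-up value */

            signal(SIGALRM, on_timeout);

            test_case_pid = fork();
            if (test_case_pid == -1)
                abort();
            if (test_case_pid == 0) {
                sleep(60);                      /* pretend to be a stalled test */
                _exit(EXIT_SUCCESS);
            }

            alarm(timeout);
            /* If the alarm fires, waitpid may be interrupted; retry until the
             * (now killed) child has actually been reaped. */
            while (waitpid(test_case_pid, &status, 0) == -1 && errno == EINTR)
                continue;
            alarm(0);

            if (WIFSIGNALED(status))
                fprintf(stderr, "test case timed out and was killed\n");
            return EXIT_SUCCESS;
        }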

  • Got a BenQ FP241W Z flat panel

    As I already mentioned, I was interested in buying a 24" widescreen monitor for both my laptop and PlayStation 3. I considered many different options but, based on my requirements (1920x1200, 1:1 pixel mapping, dual HDMI/DVI-D inputs), I ended up choosing the BenQ FP241W Z (yeah, did it again). This thing is gorgeous, as the following photos will show you. Lots of real screen estate to work with — the ability to have many different, non-overlapping editors and terminals open at once is very convenient — and great for watching videos. But it has a "small" problem (I want it fixed!) that I'll explain after them... So here are two photos of the MacBook Pro working in clamshell mode, connected to the new monitor: And here are a couple of images showing the PlayStation 3 in action: OK, this last image is the one I wanted to discuss. It is showing the "PlayStation Store", accessible directly from an option in the XMB interface. It is easy to see that the image is cropped on all four sides: some letters are cut, and the top and bottom buttons are shown extremely close to the screen borders. This is not what I expected. What's more, booting Linux reports that the framebuffer's dimensions are 1688x964 even though the screen says it is working in 1080p mode (1920x1080). If I force Linux to go to full 1080p, the terminal is also cropped on the four sides, making it unusable. According to this thread, this is caused by the monitor assuming that the HDMI input has overscan, so it crops the image. (Note that the image is being slightly scaled up to fill the whole screen, because the visible area is smaller than the displayed one! And I certainly don't want that.) It looks like a firmware update released in May 2007 adds an Overscan option to the settings, which allows you to disable this feature and thus get the whole image. But unfortunately my monitor was manufactured in April 2007, so it has the old firmware. Grr. Will call BenQ support tomorrow and see if they can do anything about it (I guess they'll be able to do a firmware upgrade, but they may need to take the monitor for several days^Wweeks). Otherwise I may end up returning this unit. Heck, I searched for 1:1 pixel mapping like crazy and now I find this other, unexpected problem. No way. Other than that, a great display. Now, if only I had a Mac Pro to accompany it... ;-) [Continue reading]

  • 24" widescreen comparison

    As promised in the previous post, Choosing a 24" widescreen monitor, here comes the brief analysis I did before deciding which monitor to buy. Refer to the comparison table (or the PDF version if the XHTML one does not work for you) for more details. I'm linking this externally because putting it here, in this width-limited page, would be unsuitable. The data in that table has been taken from the official vendor pages when possible, even though they failed to list some of the details. I tried to look for the missing ones around the network and came up with, I think, fairly trustworthy data. But of course some of it may be wrong. By the way, be especially careful when comparing the Contrast ratio and Response time fields. Each vendor likes to advertise these in different ways, so you cannot really compare them without knowing what each value really means (and I don't, because they generally don't specify it). Anyway, even though the table is not complete (some fields are marked N/A because I could not easily come up with an answer), I hope it will be useful to some of you. [Continue reading]

  • Interferences in CVS tagging

    Once again, CVS shows its weaknesses. Last night I committed a fix to pkgsrc and soon after noticed I had a prior e-mail from Alistair, a member of the PMC and the one responsible for the preparation of pkgsrc releases, asking developers to stop committing to the tree because he was going to tag it for pkgsrc-2007Q4. It turns out that my fix did not get into the branch because the directory it went into (devel/monotone) had already been tagged. Had I committed the fix to, say, x11/zenity, it would have gone into the branch. Or worse, had I committed a fix that spanned multiple files, some of them would have made it to the branch and others not. So what, am I supposed to read e-mail before I can do a commit? What if the mail does not arrive on time? What if the commit had affected many more directories, some of them already tagged and some not? This is just another example of CVS showing its limitations and stupidities. Given that each file's history is stored independently — i.e. there are no global changesets — the only way to tag the repository is to go file by file and set the tag on each. And then, you need to check which revision of each file is the one to be tagged. I do not know why this is so slow even when you do an rtag on HEAD (so the server alone does the work), but in the case of pkgsrc this process took more than 2 hours! OK, OK, I'm hiding the truth. The thing is, there are some ways around this: for example, using the tag command will tag the exact revisions you have in your working copy, or passing a date to rtag will tag the repository based on the provided timestamp. This way you ensure that the tagging process will be consistent even if people keep committing changes to the tree. However, the first of these commands requires a lot of network communication and the second puts a lot of stress on the server, making the command even slower (or so I've been told). In virtually all other version control systems that support changesets, a tag is just a name for a given revision identifier, and defining it is a trivial and quick process. Well, Subversion is rather different because tags are just copies of the tree, but I think they deal with those efficiently. [Continue reading]

  • Welcome, 2008

    It is a new year again. Let's see if I can, at least, accomplish one goal: I should try not to delay things as much as I have been doing until now. This especially refers to replying to some e-mails and working on some stuff I once started but have not had the time to finish (bad excuses, I know). The clearest example that comes to my mind is Boost.Process, for which I have already got many status requests... but there are also some tiny pet projects such as genfb support for NetBSD/mac68k and whiteouts for tmpfs. Of course, there is also the conversion of more NetBSD tests to ATF. But, and this is a big but, the first semester of the year will probably keep me extremely busy with my Ph.D. courses... and, to make things worse, when I get home in the evening I'm so tired that I don't want to do more work. I will have to try to organize tasks a bit better so that there is time for everything. Anyway, happy new year to everyone! And thanks to your continuous visits and support, this is the 400th post :-) [Continue reading]