• Recent GNOME fixes

    A week has almost passed since someone told me that D-Bus' session daemon was broken in NetBSD. I curse that day! ;-) I've been investigating that problem since then and (very) belatedly fixing some issues in other GNOME programs along the way.

    D-Bus' session daemon did not work under NetBSD because it couldn't authenticate incoming connections; that was due to the lack of socket credentials. After some days of investigation — which included discovering that NetBSD does indeed support socket credentials through LOCAL_CREDS — and multiple attempts to implement them, I finally got the D-Bus session daemon to authenticate properly.

    This also let me fix gnome-keyring, which was broken for the exact same reason, and gnome-keyring-manager, the application I was using to check whether gnome-keyring worked or not.

    At last, I also sat down and solved an annoying problem in the gnome-applets package that caused the Sticky Notes applet to crash when adding a new note; this had been happening since 2.12.0 if I recall correctly. I am sure that the root of this problem was also producing incorrect behavior in other panel applets.

    For more details check these out:

      - dbus: #7798 - Generalize kqueue support
      - dbus: #8037 - Improve debugging messages in exchange_credentials
      - dbus: #8041 - Add LOCAL_CREDS socket credentials support
      - gnome-keyring: #353105 - Implement LOCAL_CREDS socket credentials
      - gnome-applets: #353239 - Get rid of AC_DEFINE_DIR
      - gnome-keyring-manager: #353251 - Better handling of null paths

    Ouch... and GNOME 2.16 is around the corner... I'm afraid of all the new problems to come! [Continue reading]

  • More on LOCAL_CREDS

    One of the problems of learning new stuff through trial-and-error iterations is that it is very easy to miss important details... but that's the price to pay when there is no decent documentation available for a given feature. We saw yesterday multiple details about LOCAL_CREDS socket credentials and, as you may deduce, I missed some.

    First of all, I assumed that setting the LOCAL_CREDS option only affected the next received message (I didn't mention this explicitly in the post though). It turns out that this is incorrect: enabling this option makes the socket transmit credentials information with each message until the option is disabled again.

    Secondly, setting the LOCAL_CREDS option on a server socket (one configured with the listen(2) call) results in all sockets created from it through accept(2) carrying the flag as well. In other words, it is inherited.

    These features are interesting because, when used in combination, they avoid the need for the synchronization protocol outlined in the previous post — in some cases only. If the credentials are to be transmitted at the very beginning of the connection, the server can follow these steps (see the sketch below):

      1. Create the server socket and configure it with bind(2) and listen(2).
      2. Before entering the accept(2) loop, set the LOCAL_CREDS option on the server socket.
      3. Enter the accept(2) loop and start accepting clients.
      4. For each new client:
         1. Receive its first message.
         2. Get the credentials from it.
         3. Disable the LOCAL_CREDS option on the socket used to communicate with that specific client.

    It couldn't be easier! This is still different from all other socket credentials methods I know of, but it can be easily adapted to protocols that were not designed to support LOCAL_CREDS (i.e. that do not implement the synchronization explained in the previous post). [Continue reading]
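    To make the steps above more concrete, here is a minimal sketch of such a server, written in the same style as the example from the previous post. It is not taken from the original article: the socket path, buffer sizes and printed fields are made up for illustration, and error handling and cleanup are kept to a minimum.

        #include <sys/param.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            struct sockaddr_un addr;
            int fd, on = 1;

            /* 1. Create the server socket and configure it with bind(2)
             *    and listen(2).  The socket path is just an example. */
            if ((fd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
                err(EXIT_FAILURE, "socket");
            memset(&addr, 0, sizeof(addr));
            addr.sun_family = AF_UNIX;
            strncpy(addr.sun_path, "/tmp/creds.sock",
                    sizeof(addr.sun_path) - 1);
            (void)unlink("/tmp/creds.sock");
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1)
                err(EXIT_FAILURE, "bind");
            if (listen(fd, 5) == -1)
                err(EXIT_FAILURE, "listen");

            /* 2. Enable LOCAL_CREDS on the *listening* socket; sockets
             *    returned by accept(2) inherit the option. */
            if (setsockopt(fd, 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "setsockopt");

            /* 3. Accept clients. */
            for (;;) {
                char buf[128], cbuf[CMSG_SPACE(SOCKCREDSIZE(NGROUPS))];
                struct iovec iov;
                struct msghdr msg;
                struct cmsghdr *hdr;
                int cfd, off = 0;

                if ((cfd = accept(fd, NULL, NULL)) == -1)
                    err(EXIT_FAILURE, "accept");

                /* 4.1/4.2. The client's first message carries its
                 *          credentials in the control data. */
                iov.iov_base = buf;
                iov.iov_len = sizeof(buf);
                memset(&msg, 0, sizeof(msg));
                msg.msg_iov = &iov;
                msg.msg_iovlen = 1;
                msg.msg_control = cbuf;
                msg.msg_controllen = sizeof(cbuf);
                if (recvmsg(cfd, &msg, 0) == -1)
                    err(EXIT_FAILURE, "recvmsg");

                hdr = CMSG_FIRSTHDR(&msg);
                if (hdr != NULL && hdr->cmsg_type == SCM_CREDS) {
                    struct sockcred *sc =
                        (struct sockcred *)CMSG_DATA(hdr);
                    printf("Client UID: %d\n", (int)sc->sc_uid);
                }

                /* 4.3. Later messages on this connection do not need to
                 *      carry credentials, so disable the option again. */
                if (setsockopt(cfd, 0, LOCAL_CREDS, &off,
                               sizeof(off)) == -1)
                    err(EXIT_FAILURE, "setsockopt");

                close(cfd);
            }
        }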

  • LOCAL_CREDS socket credentials

    Socket credentials are a feature that allows a user process to receive the credentials (UID, GID, etc.) of the process at the other end of a communication socket in a safe way. The operating system is in charge of managing this information, which is sent separately from the data flow, so that user processes cannot fake it. As you can imagine, there are many different implementations of this concept out there.

    For some reason I assumed for a long time that NetBSD didn't support any kind of socket credentials. However, I recently discovered that it does indeed support them through the LOCAL_CREDS socket option. Unfortunately, it behaves quite differently from other methods. This poses some annoying portability problems in applications not designed from the start to support it (e.g. D-Bus, the specific program I'm fighting right now).

    LOCAL_CREDS works as follows:

      1. The receiver interested in remote credentials uses setsockopt(2) to enable the LOCAL_CREDS option on the socket.
      2. The sender sends a message through the channel either with write(2) or sendmsg(2). It needn't do anything special other than ensuring that the message is sent after the receiver has enabled the LOCAL_CREDS option.
      3. The receiver gets the message using recvmsg(2) and parses the out-of-band data stored in the control buffer: a struct sockcred message that contains the remote credentials (UID, GID, etc.). This does not provide the PID of the remote process though, as other implementations do.

    The tricky part here is to ensure that the sender writes the message after the receiver has enabled the LOCAL_CREDS option. If this is not guaranteed, a race condition appears and the behavior becomes random: sometimes the receiver will get socket credentials, sometimes it will not.

    To ensure this restriction there needs to be some kind of synchronization protocol between the two peers. This is illustrated in the following outline, which assumes a client/server model and a "go on" message used to synchronize. The server could do:

      1. Wait for a client connection.
      2. Set the LOCAL_CREDS option on the remote socket.
      3. Send a "go on" message to the client.
      4. Wait for a response, which carries the credentials.
      5. Parse the credentials.

    And the client could do:

      1. Connect to the server.
      2. Wait for the "go on" message.
      3. Send any message to the server.

    (A sketch of how this handshake could look with two real processes is included at the end of this post.)

    To conclude, here is a sample program that shows how to manage the LOCAL_CREDS option. socketpair(2) is used for simplicity, but this can easily be extrapolated to two independent programs.

        #include <sys/param.h>
        #include <sys/types.h>
        #include <sys/inttypes.h>
        #include <sys/socket.h>
        #include <sys/un.h>

        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            int sv[2];
            int on = 1;
            ssize_t len;
            struct iovec iov;
            struct msghdr msg;
            struct {
                struct cmsghdr hdr;
                struct sockcred cred;
                gid_t groups[NGROUPS - 1];
            } cmsg;

            /*
             * Create a pair of interconnected sockets for simplicity:
             * sv[0] - Receive end (this program).
             * sv[1] - Write end (the remote program, theoretically).
             */
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
                err(EXIT_FAILURE, "socketpair");

            /*
             * Enable the LOCAL_CREDS option on the reception socket.
             */
            if (setsockopt(sv[0], 0, LOCAL_CREDS, &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "setsockopt");

            /*
             * The remote application writes the message AFTER setsockopt
             * has been used by the receiver.  If you move this above the
             * setsockopt call, you will see how it does not work as
             * expected.
             */
            if (write(sv[1], &on, sizeof(on)) == -1)
                err(EXIT_FAILURE, "write");

            /*
             * Prepare space to receive the credentials message.
             */
            iov.iov_base = &on;
            iov.iov_len = 1;
            memset(&msg, 0, sizeof(msg));
            msg.msg_iov = &iov;
            msg.msg_iovlen = 1;
            msg.msg_control = &cmsg;
            msg.msg_controllen = sizeof(struct cmsghdr) +
                                 SOCKCREDSIZE(NGROUPS);
            memset(&cmsg, 0, sizeof(cmsg));

            /*
             * Receive the message.
             */
            len = recvmsg(sv[0], &msg, 0);
            if (len == -1)
                err(EXIT_FAILURE, "recvmsg");
            printf("Got %zd bytes\n", len);

            /*
             * Print out the credentials information, if received
             * appropriately.
             */
            if (cmsg.hdr.cmsg_type == SCM_CREDS) {
                printf("UID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_uid);
                printf("EUID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_euid);
                printf("GID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_gid);
                printf("EGID: %" PRIdMAX "\n", (intmax_t)cmsg.cred.sc_egid);

                if (cmsg.cred.sc_ngroups > 0) {
                    int i;
                    printf("Supplementary groups:");
                    for (i = 0; i < cmsg.cred.sc_ngroups; i++)
                        printf(" %" PRIdMAX,
                               (intmax_t)cmsg.cred.sc_groups[i]);
                    printf("\n");
                }
            } else
                errx(EXIT_FAILURE, "Message did not include credentials");

            close(sv[0]);
            close(sv[1]);

            return EXIT_SUCCESS;
        }

    [Continue reading]
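    As a complement to the program above — and not part of the original post — here is a minimal sketch of the "go on" handshake with the two ends living in separate processes, using fork(2) over a socketpair. The single-byte token and the compact error handling are purely illustrative.

        #include <sys/param.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <sys/un.h>
        #include <sys/wait.h>

        #include <err.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            int sv[2];
            pid_t pid;

            /* A pair of connected sockets stands in for the client/server
             * connection; each end is driven by a different process. */
            if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
                err(EXIT_FAILURE, "socketpair");

            if ((pid = fork()) == -1)
                err(EXIT_FAILURE, "fork");

            if (pid == 0) {
                /* Client: wait for "go on", then send any message. */
                char token;
                close(sv[0]);
                if (read(sv[1], &token, sizeof(token)) == -1)
                    err(EXIT_FAILURE, "read");
                if (write(sv[1], &token, sizeof(token)) == -1)
                    err(EXIT_FAILURE, "write");
                close(sv[1]);
                return EXIT_SUCCESS;
            }

            /* Server: enable LOCAL_CREDS first, then tell the client to
             * go on, then receive its reply with the credentials. */
            {
                char token = 'X', cbuf[CMSG_SPACE(SOCKCREDSIZE(NGROUPS))];
                int on = 1;
                struct iovec iov;
                struct msghdr msg;
                struct cmsghdr *hdr;

                close(sv[1]);
                if (setsockopt(sv[0], 0, LOCAL_CREDS, &on,
                               sizeof(on)) == -1)
                    err(EXIT_FAILURE, "setsockopt");
                if (write(sv[0], &token, sizeof(token)) == -1)
                    err(EXIT_FAILURE, "write");

                iov.iov_base = &token;
                iov.iov_len = sizeof(token);
                memset(&msg, 0, sizeof(msg));
                msg.msg_iov = &iov;
                msg.msg_iovlen = 1;
                msg.msg_control = cbuf;
                msg.msg_controllen = sizeof(cbuf);
                if (recvmsg(sv[0], &msg, 0) == -1)
                    err(EXIT_FAILURE, "recvmsg");

                hdr = CMSG_FIRSTHDR(&msg);
                if (hdr != NULL && hdr->cmsg_type == SCM_CREDS) {
                    struct sockcred *sc =
                        (struct sockcred *)CMSG_DATA(hdr);
                    printf("Client UID: %d, GID: %d\n",
                           (int)sc->sc_uid, (int)sc->sc_gid);
                } else
                    errx(EXIT_FAILURE, "No credentials received");

                close(sv[0]);
                (void)waitpid(pid, NULL, 0);
            }

            return EXIT_SUCCESS;
        }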

  • A split function in Haskell

    Splitting a string into parts based on a token delimiter is a very common operation in some problem domains. Languages such as Perl or Java provide a split function in their standard library for this, yet I'm often surprised to see how many languages do not have one. As far as I can tell neither C++ nor Haskell has it, so I have coded such a function multiple times in the past in both languages. (This is not exactly true: Haskell has the words function, which splits a string on whitespace characters. Nevertheless, I didn't know this when I wrote my custom implementation.)

    When I implemented a custom split function in Haskell I was really amazed to see how easy and clean the resulting code was. I'm sure there is some better and even cleaner way to write it because I'm still a Haskell newbie! Here it is:

        split :: String -> Char -> [String]
        split []     delim = [""]
        split (c:cs) delim
            | c == delim = "" : rest
            | otherwise  = (c : head rest) : tail rest
            where rest = split cs delim

    The above code starts by declaring the function's type; this is optional because Haskell's type system is able to deduce it automatically. It then uses pattern matching to specify the algorithm's base and recursive cases. Finally, the recursive case is defined piecewise, just as you would do in mathematics. Oh, and why recursion? Because iteration does not exist in functional programming in the well-known sense of imperative languages. Also note the lack of variables (except for the input ones) and that everything is an evaluable expression.

    Let's now compare the above code with two implementations in C++. A first approach to the problem, following common imperative programming thinking, results in an iterative algorithm:

        std::deque< std::string >
        split_iterative(const std::string& str, char delim)
        {
            std::deque< std::string > parts;
            std::string word;

            for (std::string::const_iterator iter = str.begin();
                 iter != str.end(); iter++) {
                if (*iter == delim) {
                    parts.push_back(word);
                    word.clear();
                } else
                    word += *iter;
            }
            parts.push_back(word);

            return parts;
        }

    This is certainly uglier and much more difficult to prove right; iteration is a complex concept in that sense. In this code we have variables that act as accumulators, temporary objects, commands, etc. Be glad that I used C++ and not C, to take advantage of the STL containers.

    OK, to be fair the code should be implemented in a recursive way to be really comparable to the Haskell sample function. Let's attempt it:

        std::deque< std::string >
        split_recursive(const std::string& str, char delim)
        {
            std::deque< std::string > parts;

            if (!str.empty()) {
                std::string str2 = str;
                parts = split_recursive(str2.erase(0, 1), delim);
                if (str[0] == delim)
                    parts.push_front("");
                else
                    parts[0] = str[0] + parts[0];
            } else
                parts.push_front("");

            return parts;
        }

    This split_recursive function follows the same algorithm as the split written in Haskell. I find that it is still harder to read and more delicate (I had some segmentation faults until I got it right).

    Of course Haskell is not appropriate for everything (which is true for every language out there). I have yet to write a big and useful program in Haskell to really see its power and to be able to really compare it to other languages. All I can do at the moment is compare trivial stuff like the above. [Continue reading]

  • What have I learned during SoC?

    One of SoC's most important goals is the introduction of students to the free software world; this way there is a good chance that they will keep contributing even when SoC is over. Students already familiar with FOSS (as was my case both years) are also allowed to participate because they can seize the summer to learn new stuff and improve their skills.

    As I expected, the development of Boost.Process has taught me multiple new things. First of all, I wanted to get familiar with the Win32 API because I knew nothing about it. I have achieved this objective by learning the details of process and file management and making Boost.Process work under this platform. Honestly, Win32 is overly complex, but it has some interesting features.

    Secondly, I have become a lot more fluent with C++ templates and have learned some curious coding techniques that I never thought about in the past. The most impressive one in my opinion is that templates can be used to achieve build-time specialization, avoiding expensive virtual tables at run time and inheritance when these are not really needed; a small sketch of this idea is included below. (I had only considered them for polymorphic containers before.)

    Finally, I have also gotten familiar with several utilities used for Boost development. Among them are Quickbook for easy document writing, Boost.Build v2 for portable software building and the Boost Unit Test library for painlessly creating automated test suites.

    All in all I'm happy with the outcome of the project and the new knowledge. If SoC happens again, you should really consider joining if you have the chance! [Continue reading]
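    The following is not from the original post, just a tiny sketch of the "build-time specialization" idea mentioned above: the same call written against a virtual interface and against a template parameter. All class and function names here are made up for illustration.

        #include <iostream>
        #include <string>

        // Run-time polymorphism: the call goes through a virtual table.
        class writer {
        public:
            virtual ~writer(void) {}
            virtual void write(const std::string& msg) const = 0;
        };

        class console_writer : public writer {
        public:
            void write(const std::string& msg) const
            {
                std::cout << msg << std::endl;
            }
        };

        void
        log_dynamic(const writer& w)
        {
            w.write("hello");   // Resolved at run time.
        }

        // Build-time specialization: the compiler instantiates
        // log_static for each concrete Writer type, so the call can be
        // resolved (and inlined) at compile time; no virtual table and
        // no inheritance are required.
        template< class Writer >
        void
        log_static(const Writer& w)
        {
            w.write("hello");   // Resolved at compile time.
        }

        int
        main(void)
        {
            console_writer cw;
            log_dynamic(cw);
            log_static(cw);     // Works with any type that has write().
            return 0;
        }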

  • Boost.Process 0.1 published

    SoC 2006 is officially over — at least for me in my timezone. Given that the Subversion repository has some problems with public access, I've tagged the current sources as the first public version and uploaded a couple of tarballs to the Boost Vault. Both the tag and the tarballs will also serve historical purposes, especially when newer ones come ;-)

    You can download the archives from the Process directory in tar.gz and ZIP formats. Enjoy! [Continue reading]

  • Boost.Process tarballs posted

    As not everybody is comfortable accessing Subversion repositories to download source code, I've posted two tarballs with Boost.Process' sources. They include an exported copy of the repository contents as well as prebuilt documentation in the libs/process/doc/html subdirectory.

    You can download the compressed archive either in tar.gz format or in ZIP. Keep in mind that these will be updated very frequently, so please do not use them to prepackage the library.

    Changes from yesterday's announcement are minor at this point. For the curious ones: there is now a list of pending work, and the Last revised item on the main page has been fixed. As a side effect of this last change, Boostbook will support SVN's $Date$ tags if my patch is integrated :-) [Continue reading]

  • Blog migrated to new Blogger beta

    Blogger announced yesterday multiple improvements to their service. These are still in beta — as almost all other Google stuff, you know ;-) — and are being offered to existing users progressively. To my surprise, the option to migrate was available on my dashboard today, so I applied for it; I was very interested in the post labelling feature.

    The migration process has been flawless and trivial. Afterwards, nothing seemed to have changed except for some minor nits in the UI. I looked around for the labels feature but discovered that it is only available once you migrate to the new "layouts system", an easier way to design your blog's look.

    The switch to layouts scared me a bit because I was afraid of not being able to integrate the Statcounter code back again. But after verifying that the change was reversible, I tried it. I can confirm that the new customization page is much, much easier to use than before, although still too limited (direct HTML editing is not available yet). Oh, and I seized the opportunity to switch to a slightly different theme (yes, it was available before).

    Aside from that there are some new nice features such as RSS feeds (weren't they there before?), better archive navigation (see the right bar), integration with Google accounts and many other things I'm surely missing.

    Summarizing: it has taken a long while for the Google people to upgrade Blogger's service, but the wait has been worth it. Now more than ever, I don't regret migrating from Livejournal to this site almost a year ago. [Continue reading]

  • SoC: Boost.Process published

    In a rush to publish Boost.Process before the SoC deadline arrives, I've been working intensively during the past two days to polish some issues raised by my mentor. First of all, I've added some Win32-specific classes so that the library does not seem Unix-only. These new classes provide functionality only available under Windows and, on the documentation side, they come with a couple of extra examples to demonstrate their functionality.

    Speaking of documentation, it has been improved a lot. The usage chapter has been rewritten almost completely; it has gained a couple of tutorials, and all the platform-specific details in it have been moved to two new chapters. One of them focuses on explaining those features available only under a specific operating system, while the other summarizes multiple portability issues that may arise when using the generic classes. Additionally, a chapter about supported systems and compilers has been added.

    There are still two big things missing that shall be worked on in the (very) short term: adding a design decisions chapter to the documentation and incorporating asynchronous functionality into the library by using Boost.Asio. This last item is needed to keep things simple from the user's point of view (i.e. no threads in his code).

    Check out the official announcement for more details.

    I guess that this closes SoC for me this year. There are still going to be some changes before Monday but don't expect anything spectacular (I'll be away during the weekend... hopefully ;-). But don't be afraid! Work on this project will continue afterwards! [Continue reading]

  • SoC: Status report 3

    Only 8 more days and SoC will be officially over... Time has passed very fast and my project required much more work than I initially thought. It certainly cannot be completed before the deadline, but I assure you that it will not fall into oblivion afterwards; I have spent too much time on it to forget ;-)

    There have been many changes in Boost.Process' code base since the previous status report; let's see a brief summary:

      - The library has been simplified, removing all those bits that were aimed at "generic process management". Now it is focused on managing child processes only, although extending it to support other process-related functionality is still possible (preserving compatibility with the current API). It'll be better to design and implement these features when really needed because they will require a lot of work and cannot be planned right now; doing so might result in an incomplete and clumsy design. Yup... my mentor (Jeff Garland) was right when he suggested going this simplified route at the very beginning!
      - Due to the above simplifications, some classes are not templated any more (the stuff that depended on the template parameters is now gone). I bet some of them could still be, but this can be easily changed later on.
      - There is now a specialized launcher in the library to painlessly start command pipelines. This also comes with a helper process group class to treat the set of processes as a single entity.
      - The user now has much more flexibility to specify how a child process' channels behave. While documenting the previous API it became clear that it was incomplete and hard to understand.
      - Code from all launchers has been unified in a private base class to avoid duplication and ensure consistency across those classes. Similar changes have occurred in the test suite, which helped in catching some obscure problems.
      - Related to the previous point, much of the code used to do the actual task of spawning a process has been moved out of the individual launcher classes into some generic private functions. This was done to share more code and improve cohesion and readability.
      - The documentation is now much better, although it still lacks a chapter about design issues. See the online snapshot for more details.
      - And, of course, multiple bug fixes and cleanups.

    Still, I haven't had a chance to ask for a public review on Boost's developers mailing list. The problem is that I continuously find things to improve or to complete and prefer to do them before asking for the review. However, as time is running out, I'll be forced to do this in the forthcoming week to get some more feedback in time. [Continue reading]

  • IMAP gateway to GMail

    Update (Oct 24, 2007): OK, this is one of the most visited posts on my blog. Don't bother reading this. As of today, GMail supports IMAP without the need for external hacks! Just go to your settings tab, enable it, configure your mailer and that's it! More information is on their help page.

    Wouldn't it be great if you could access your GMail account using your favourite email client from multiple computers, yet keep all of them synchronized? That's what you could do if they provided support for the IMAP protocol, but unfortunately they currently don't.

    So yesterday I was wondering... would it be difficult to write an IMAP gateway for GMail? Sure it would but... guess what? It already exists! The GMail::IMAPD Perl module implements this functionality in a ready-to-use service. All you need to do is copy/paste the sample program in the manual page, execute it and you've got the gateway running.

    Unfortunately, it's still quite incomplete, as it only supports some mail clients and lacks some features — the documentation gives more details on this. I could get it to work with Apple Mail, but it was very slow overall (maybe because I have a lot of mail in my account) and had random problems. You might get better results though.

    For your pleasure, it is now in pkgsrc as mail/p5-GMail-IMAPD alongside a patch to accommodate a change in GMail's login protocol. There is also the programmatic interface to the web service used by the former in mail/p5-Mail-Webmail-Gmail, but be aware that the former includes a somewhat obsolete copy of the latter due to unofficial modifications.

    Update (August 26th): I am not the author of the above mentioned Perl module and therefore I cannot provide support for it. Please read the manual page and, if it is not clear enough or if it does not work as you expect, ask the real author (Kurt Schellpeper) for further details. Anyway, to answer some of the questions posted:

      1. To get this module to work, install it using CPAN or pkgsrc (recommended). Using the latter has the advantage that the module receives a fix for the login procedure. If you install it manually, be sure to apply the required patch!
      2. Then open up an editor and paste the example code from the synopsis section of the module's manual page:

             # Start an IMAP-to-Gmail daemon on port 1143
             use GMail::IMAPD;
             my $daemon=GMail::IMAPD->new(LocalPort=>1143,
                                          LogFile=>'gmail_imapd.log',
                                          Debug=>1);
             $daemon->run();

      3. Save the file as, e.g., gmail-imap.pl and execute it from a terminal using: perl gmail-imap.pl (/usr/pkg/bin/perl gmail-imap.pl if you are using pkgsrc). Once running, configure your mail client to connect to localhost:1143 using IMAP v4.

    If it does not work, I'm sorry but you are on your own. (Again, contact the module's author.) Hope this helps. [Continue reading]

  • GNOME 2.14.3 hits pkgsrc

    Last night I finished updating the GNOME meta packages in pkgsrc to the latest stable version, 2.14.3. Yes, I had to take a break from Boost.Process coding (which is progressing nicely by the way; check the docs).

    The meta packages had been stalled at 2.14.0 since the big update back in April, which shows how little time I've had to do any pkgsrc work — well, you can also blame the iBook with its Mac OS X, if you want to ;-) Luckily the packages are now up to date, but I hope they'll not get stalled at this version for too long: 2.16.0 is around the corner (due in a month or two!).

    I must thank Matthias Drochner and Thomas Klausner for all their work on the GNOME packages during this period of time. Although they did not touch the meta packages, almost all of the components were brought up to date very promptly after each stable release; in fact, I just had to update a dozen packages on my own to get a complete 2.14.3 installation, aside from tweaking the meta packages.

    Let me finish with a call for help: the biggest thing missing (in my opinion, that is) in GNOME under NetBSD right now is HAL. It shouldn't be too difficult to get it to work, but it will certainly require several days of discussion and coding. Should you want to help here (which basically boils down to adding a kernel driver and porting the userland utilities), feel free to contact me for more details. [Continue reading]

  • X11 and the Win keys

    For quite some time I've been having issues with the Windows keys on my Spanish keyboard under X11. I like to use these as an extra modifier (Mod4) instead of a regular key (Super_L), because it is very handy when defining keybindings. The X11 default seems to treat them as Super_L only. For example, trying to attach Win+N as a keybinding to one of the actions in the GNOME Keyboard Shortcuts panel resulted in just Super_L being recorded instead of the Mod4+N combination, hence not working at all.

    Fortunately, I found how to fix that within GNOME a while ago. It is simply a matter of enabling the "Meta is mapped to the left Win-key" option in the Keyboard configuration panel. But... I was now forced to use Fluxbox while I rebuilt some parts of GNOME, and the modifier was not working because the system was using the X11 defaults again.

    After inspecting /etc/X11/XF86Config and some of the files in /usr/X11R6, I found how to enable this behavior in the regular X11 configuration files, bypassing GNOME. It is a matter of adding the following line to the keyboard section of XF86Config (a full section is sketched below):

        Option "XkbOptions" "altwin:left_meta_win"

    I guess this works the same for X.Org. [Continue reading]
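    For context — and not part of the original post — this is roughly how the keyboard InputDevice section of XF86Config could look with the option in place. The identifier, model and layout values are only illustrative (a Spanish layout is assumed here); the relevant line is the XkbOptions one.

        Section "InputDevice"
            Identifier  "Keyboard0"
            Driver      "keyboard"
            Option      "XkbModel"   "pc105"
            Option      "XkbLayout"  "es"
            Option      "XkbOptions" "altwin:left_meta_win"
        EndSection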