Ikke's Blog

Aug 22
ReST + git + hooks = useful

Recently I had to write an article. There are several formats to write articles in: one can use plain text, some use something like OOo Writer or MS Word, others use LaTeX, some XML fanatics use DocBook, etc. Personally I like writing texts in plain text format, especially using ReST (reStructuredText) markup: the text document stays very easy to read, including some basic formatting, and you can transform it to XHTML, LaTeX/PDF and several other formats using the simple tools provided by the DocUtils project.

As I'm using a plain text format, it's pretty useful to make use of some revision control system. And guess what, once again Git is a good choice ;-)

Now in this specific case, I wanted the article online on my webserver too. I have a bunch of public git repositories on that machine, including the one containing the article, but that alone is not very useful, as the HTML file rendered from the ReST source should not be kept in git.

So here's what I did: first I created a very basic Makefile which does the output file generation. Here it is:

# Options passed to all rst2* tools
RSTOPTS=--time --strict --language=en
# Extra options for rst2html.py only
RSTHTMLOPTS=--embed-stylesheet

# List of ReST input files
txt_SOURCES=myinputfile.txt
# Output and temporary file names derived from the input list
HTML=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).html)
PDF=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).pdf)
TMPS=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).txt.tmp)

# Commit date and SHA1 of the current HEAD, used to expand the
# @DATE@ and @REV@ tags in the sources
COMMIT_DATE=$(shell git-show | grep ^Date | sed "s/^Date: *//")
COMMIT_REV=$(shell git-show | grep ^commit | sed "s/^commit *//")

default: all

# Expand @DATE@ and @REV@ in the source before rendering
%.txt.tmp: %.txt
        @sed -e "s/@DATE@/$(COMMIT_DATE)/" -e "s/@REV@/$(COMMIT_REV)/" $^ > $@

%.html: %.txt.tmp
        rst2html.py $(RSTOPTS) $(RSTHTMLOPTS) $^ > $@

%.pdf: %.txt.tmp
        rst2latex.py $(RSTOPTS) $^ > $(basename $@).tex
        pdflatex $(basename $@).tex
        rm -f $(basename $@).log $(basename $@).out $(basename $@).tex $(basename $@).aux

html: $(HTML)
pdf: $(PDF)

all: html pdf

clean:
        rm -f $(HTML)
        rm -f $(PDF)
        rm -f $(TMPS)

There are 3 variables at the top you should or could edit: RSTOPTS, the options passed to all rst2* tools; RSTHTMLOPTS, the options passed only to rst2html.py; and txt_SOURCES, a simple list of all input files.

I also added support for @DATE@ and @REV@ tags in your source files. These are expanded to the commit date and the commit SHA1 sum of the tree you're working with. In ReST this is useful in the top part of your document, where you define the author, contact address, version information, date, ...
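
To make the expansion concrete, here is a minimal sketch of what the Makefile's sed rule does, run by hand. The date and SHA1 values are made-up examples, not real commits:

```shell
# Sketch of the @DATE@/@REV@ expansion performed by the %.txt.tmp
# rule; these two values are fake examples, in practice they come
# from git-show on the current HEAD.
COMMIT_DATE="Thu Aug 9 23:43:48 2007 +0200"
COMMIT_REV="1234567890abcdef1234567890abcdef12345678"

# A ReST-style header fragment with the tags, piped through the
# same sed invocation the Makefile uses:
printf ':Date: @DATE@\n:Version: @REV@\n' \
    | sed -e "s/@DATE@/$COMMIT_DATE/" -e "s/@REV@/$COMMIT_REV/"
```

This prints the header with both tags replaced, which is exactly what ends up in the .txt.tmp file that rst2html.py then renders.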

I committed this Makefile and my input ReST text file to some bare repository on my server (eg. /home/me/public_git/myarticle.git). Once this was done, I created a directory in my htdocs directory (let's call it "/home/me/public_html/myarticle"), chdir'ed into it, and made a clone of the repository: git-clone /home/me/public_git/myarticle.git

Now the Makefile and ReST file were in place, and I could run make html. After adding a symlink so that index.html points to myarticle.html, the article was available online.
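
Wrapped up as a small function, those steps look roughly like this (a sketch; the directory layout is just the example from this post, and the function name is my own):

```shell
# Hypothetical helper wrapping the publish steps described above:
# rerender the HTML in a checkout and (re)point index.html at it.
publish() {
    # $1: the checkout directory containing the Makefile and sources
    cd "$1" || return 1
    make html || return 1
    # index.html is just a symlink to the rendered article
    ln -sf myarticle.html index.html
}
```

Usage would be something like: publish /home/me/public_html/myarticle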

Now one more feature had to be added: I wanted the public article HTML to stay up-to-date with my repository whenever I push new changes from my laptop to my server. This is very easy to achieve using git's hook script support. In /home/me/public_git/myarticle.git/hooks I created a file called post-update and ran chmod +x on it. Here's the content:

#!/bin/bash
# Update the public checkout and rerender the HTML whenever
# new changes are pushed to this repository.
CODIR="/home/me/public_html/myarticle"
export GIT_DIR="$CODIR/.git"

pushd "$CODIR" > /dev/null
/usr/bin/git-pull > /dev/null 2>&1
make html > /dev/null
popd > /dev/null

exec git-update-server-info

Now whenever I run git-push myserver in my local tree, this script is executed. When it runs, the checkout inside my public_html directory is updated and the HTML file is rerendered, so the latest version is online.

Pretty useful!

Aug 15
XKCD "Compiling"

Today's XKCD is just hilarious:

Aug 15
ASUS laptops, multimedia keys and INPUT

For quite a while now, several systems have used ACPI to deliver key press events to the operating system (mostly laptops, which is why we have the asus_acpi, thinkpad, toshiba, sony, ... drivers in the kernel). In userspace, acpid fetched these events and ran a callout script, which in turn used some application to inject the corresponding keycode (based on the ACPI event ID, which is just "randomly chosen" by the hardware manufacturers) into some input device, so applications (e.g. your X server) could see the key press events. This acpid -> callout -> map scancode to keycode -> inject into input device chain isn't such a "nice" solution, but it worked, most of the time.

Recently people (one of them Richard Hughes of GPM fame, you know, that power-eating applet ;-)) started reworking these in-kernel drivers to use the kernel input subsystem directly, so no more round-trip to userspace is necessary.
There are 2 possibilities here: you can store a scancode-to-keycode mapping table in the driver itself, or you can remap scancodes to keycodes from userspace using ioctls on the device.

This last method is made easy by HAL (0.5.10 or git), which includes a callout that allows you to store the mappings in an FDI file and then performs the remapping when a matching device is found on the system. More information about this can be found here.

IMHO this method is the cleanest, as it doesn't force us to store big device-specific mapping tables in the kernel driver. In userspace it's so much easier to make changes, add new device information, ...

ASUS laptop owners couldn't make use of this yet, as neither the in-kernel driver, asus_laptop, nor its -cvs version provides relaying to a kernel input device. Yesterday I decided to implement this, and today it got somewhat finished, after some minor issues. You can find the resulting patch here; it's against current acpi4asus CVS, but I think it should apply cleanly against a vanilla kernel too. I'll submit it to the acpi4asus list tomorrow. The git repository is here; you want the "acpi_keys_to_input_layer_no_internal_mapping" branch.

After compiling the driver and insmod'ing it (or modprobe'ing it if you install it too; make sure you don't load the old version by accident), you should see some information in dmesg: a message that the driver is loaded, the name of your laptop model, and the fact that a new input device called "Asus Extra Buttons" was created. Now when you press some multimedia buttons and check dmesg again, it should tell you some keys were pressed that it can't map to a known keycode, also providing the scancode. You can use this information to generate an FDI file as described in the pages I linked to before.

If you're using an "old" HAL version, you can download a small utility here to emulate HAL's behaviour using hard-coded information. You need to make a list of scancodes and their meanings using KEY_* values. You can find these constants in /usr/include/linux/input.h, or in include/linux/input.h of your local kernel source tree.
Once you know which scancode maps to which KEY_* value, add the pairs to the "mappings" variable defined at the top of the C file. The format is simple: scancode1, keycode1, scancode2, keycode2, ..., -1. Make sure you always add pairs of scancode/keycode values, and end the array with a -1.

Once you've entered the values, compile the program (gcc -o set-asus-keymap set-asus-keymap.c), figure out the event device node for the Asus Extra Buttons device (check /proc/bus/input/devices, find the entry whose Name is "Asus Extra Buttons", and see which eventX is listed after Handlers), and run the program as root using ./set-asus-keymap /dev/input/eventX (and no, I don't want *any* comments on this entry saying "I have no eventX node").
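
If you don't feel like reading /proc/bus/input/devices by hand, the lookup can be scripted. A rough sketch (it reads from stdin so it can be fed sample data; the Name/Handlers line format is an assumption based on typical /proc output):

```shell
# Sketch: extract the eventX handler for a named input device from
# /proc/bus/input/devices style data on stdin. On a real system you
# would redirect the /proc file into it.
find_event_node() {
    awk -F'[ =]' -v name="$1" '
        # The N: line carries the quoted device name
        $0 ~ "Name=\"" name "\"" { found = 1 }
        # The first H: (Handlers) line after it lists eventX
        found && /^H:/ {
            for (i = 1; i <= NF; i++)
                if ($i ~ /^event[0-9]+$/) { print $i; exit }
        }'
}
```

Usage would be: find_event_node "Asus Extra Buttons" < /proc/bus/input/devices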

When you're done with this, create a nice FDI file too, and send it to the HAL mailing list so others can enjoy your research too :-)

That's about it. Your special keys should "just work" now, and acpid is no longer necessary to handle them.

Aug 9
An introduction to the git bisect feature

Recently git has become a very popular revision control system for several projects. I tend to use it regularly too: for my own projects, because it's used by projects I hack on, or, through git-svn, to get all the powerful git features when working on a project which uses SVN as its RCS (which is a good thing in some environments).

One of the very nice features of git is "bisect", which allows you to pin down the commit which broke code (or functionality) pretty easily. I used it, for example, some days ago to figure out which commit in the "avivo" driver caused the driver to break on my system.

How does it work? Basically, you clone a git repository, start a bisect session, say "this commit did work" and "this one doesn't work", and that's it. git checks out a commit "in the middle" of the good and bad revisions; you compile/run/test the result and tell git whether this version worked as expected or not. If it did work, git checks out the revision halfway between the current one (which was already at the middle between good and bad) and the bad one for you to test. If it did not work, you get the revision halfway between the good revision and the current one.
Think of binary search; it's similar (actually, bisect *is* a binary search).
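
To get a feel for why this is fast: every test halves the number of candidate commits, so the number of builds you have to do only grows logarithmically. A toy illustration (my own sketch, not git's actual implementation):

```shell
# Toy illustration of the halving behind bisect: how many test
# rounds are needed to pin down one commit among n candidates,
# when each test rules out half of the remaining range.
rounds_needed() {
    n=$1
    rounds=0
    while [ "$n" -gt 1 ]; do
        n=$(( (n + 1) / 2 ))      # each test halves the range
        rounds=$(( rounds + 1 ))
    done
    echo "$rounds"
}
```

So rounds_needed 1024 prints 10: ten compile/test runs instead of a thousand.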

This way you're able to figure out which commit broke the app in a very short time (unless it crashes your system and it takes 3 minutes to boot ;-)).

There is even more: if your application has a nice test suite consisting of unit tests and the like, you can automate the whole process. The only thing you need is some script/tool/... which runs the test suite and returns a 0 exit code on success, or something else on failure. If you have this, git will do the whole bisect automagically and tell you, without any intervention, what broke your application.

To demonstrate this I wrote a simple script. You can find it here using gitweb, or git-clone http://git.nicolast.be/git-bisect-sample.git.

Once you have the script on your system, you can test it. It takes 2 arguments: the number of commits to make, and which commit should break the application. The script will create a branch, then start creating a simple bash script. All this generated script does is assign 0 to i, then increment and decrement i. Every revision, one increment and one decrement are added; at the "bad" revision, one extra increment is added. At the end, i is used as the return value.
As you can see, before the bad revision the script will return 0, and after it 1, so it's a test suite on its own ;-)
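
Based on that description, the generated testcase.sh would look roughly like this (a sketch of my own; the real createrepo.sh is in the linked repository, and gen_testcase is just a hypothetical name):

```shell
# Sketch of the testcase.sh that createrepo.sh is described to
# generate: one increment/decrement pair per commit, plus one extra
# increment from the "bad" commit on, so the exit code flips from
# 0 to 1 exactly at the breaking commit.
gen_testcase() {
    commits=$1   # how many commits have been made so far
    bad=$2       # the commit that "breaks" the application
    echo '#!/bin/sh'
    echo 'i=0'
    n=1
    while [ "$n" -le "$commits" ]; do
        echo 'i=$((i + 1))'
        echo 'i=$((i - 1))'
        # the bad commit adds one unbalanced increment
        [ "$n" -eq "$bad" ] && echo 'i=$((i + 1))'
        n=$(( n + 1 ))
    done
    echo 'exit $i'
}
```

Running the generated script exits 0 before the bad commit exists and 1 afterwards, which is exactly the success/failure signal git-bisect run needs.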

Here's a sample run:

$ sh createrepo.sh 20 16
Switched to a new branch "git_bisect_test"
Doing commit number 1
Created commit 9ce7f50f937b4e0e82c9c7fe143bb0483eb8e308
 1 files changed, 5 insertions(+), 0 deletions(-)
 create mode 100755 testcase.sh
Creating good_tag
Doing commit number 2
.....

Doing commit number 20
Created commit 0298fc50cb61eca005053a1c948f8651b2a346eb
 1 files changed, 2 insertions(+), 0 deletions(-)
Creating bad_tag

So, it created a branch called "git_bisect_test" (just to keep our repository clean), did a first commit, and tagged it as "good_tag" (we assume our first commit is a working application here; the good and bad revisions don't have to be tagged, but it makes things easier for us further on). Then the script is created/updated and committed 20 times, breaking at the 16th commit. At the end, "bad_tag" is created; once again, this is not necessary at all.

Now we found out our application is broken, so we'll bisect to figure out where it broke.

$ git-bisect start
$ git-bisect good good_tag
$ git-bisect bad bad_tag 
Bisecting: 10 revisions left to test after this
[632c9c04b97578d674a2980648ee4ab748a8b147] commit number 11

We're at "revision #11".

Now the magic can start:

$ git-bisect run ./testcase.sh
running ./testcase.sh
Bisecting: 5 revisions left to test after this
[d6514e8d9fe55827fe78d650548d948a4647fc50] commit number 16
running ./testcase.sh
Bisecting: 3 revisions left to test after this
[79e4ce719557e53c203789e1991f1ec00be823f7] commit number 14
running ./testcase.sh
Bisecting: 1 revisions left to test after this
[09f09d9a4e322a49b40de3f9d595f5132198c9b3] commit number 15
running ./testcase.sh
d6514e8d9fe55827fe78d650548d948a4647fc50 is first bad commit
commit d6514e8d9fe55827fe78d650548d948a4647fc50
Author: Nicolas Trangez < >
Date:   Thu Aug 9 23:43:48 2007 +0200

    commit number 16

:100755 100755 bd66b47cf24d9d3d00ac289395851d436b404774 ba01220423f64ea8b92a9df75c36a1a7ddda89ac M      testcase.sh
bisect run success

Exactly as it should, it figured out that commit 16 (well, git has no commit numbers; this is just the commit message, to keep things easily understandable) broke our application.

All we have to do now is stop the bisect process

$ git-bisect reset

and fix our application by looking up a diff of the bad revision.

In this sample case we should also clean up the cruft the createrepo.sh script created:

$ git-tag -d good_tag
$ git-tag -d bad_tag
$ git-checkout master
$ git-branch -D git_bisect_test

That's it. More information can be found in the git manual and in the git bisect manpage.

May 8
OpenOffice rant

As I wrote in my last entry, I gave a presentation on Django yesterday.
I started creating the presentation some days ago, but still had to write most of it yesterday. So in the afternoon I took my laptop, sat back, and added new slides.

I must confess, I'm not a frequent saver (that will change now, read on). Anyway, when I had written about 30 new slides, I pressed Ctrl-S in OpenOffice Impress. Disaster struck. All of a sudden Impress crashed, and the OOo document recovery dialog popped up. Before recovering the document, I made a copy of the on-disk original file, then did the recovery. Result: the same file as I had before the save; 30 slides lost.

The problem: my / partition (on which /tmp resides) was at 100% (only some bytes left). The reason: some beagled-helper had left >150MB of .tmp files in there.
I almost went insane.

So:

  • OOo guys, *please* make your app not crash when a user attempts to save a file and /tmp is at 100%. Use some other scratch location, warn the user (and automagically make a quick backup/recovery dump in case something goes wrong), ... whatever, but do not crash on a save operation; that's completely irrational.
  • beagled-helper, please remove cruft earlier and don't flood my /tmp, thanks

//EOR


