10/29/07
Some time ago I got a very basic card reader to use with my eID. It was fairly easy to get this working under Linux; I only had to create one ebuild, for the acr38 driver. It looks like you don't even need the Zetes/FedICT tools to do authentication in Firefox; the standard OpenSC libs work too.
For the record: what you need is opensc, pcsc, and the acr38 driver; that's about it to start playing around. The FedICT tools are nice for playing around and viewing which data is stored on the card.
Anyway, on-topic now :-) In my previous post I wondered whether it'd be possible to get an SSL certificate, using the key on my card. Looks like this is easier than I thought :-)
You need to have openssl (du-uh) and engine-pkcs11 installed to do this.
To generate a request, open a console and launch openssl. Once at the OpenSSL prompt, issue these two commands:
engine -t dynamic -pre SO_PATH:/usr/lib/engines/engine_pkcs11.so -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD -pre MODULE_PATH:/usr/lib/opensc-pkcs11.so
Adjust paths if necessary, of course. This loads the pkcs11 engine inside OpenSSL.
req -engine pkcs11 -new -days 100 -key id_02 -keyform engine -out myrequest.csr -subj "/C=BE/ST=O-VL/O=My Organisation/CN=My Name/emailAddress=my@email.tld"
Adjust the days, out and subj parameters, at least. The key ID can be found using
pkcs15-tool -c
Use the ID of the Authentication X509 certificate.
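For reference, on my card the relevant entry in the pkcs15-tool -c output looks roughly like this (illustrative and trimmed; the exact fields depend on your card and OpenSC version):

X.509 Certificate [Authentication]
        ID : 02

That ID value (02 here) is what the id_ prefix in the req command above refers to.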
You'll be asked to enter your PIN code. Once that's done, your certificate signing request is stored in myrequest.csr (or whatever filename you chose), ready to be sent to some CA administrator. He can then sign the request (I added code for this to CAAdmin some minutes ago, about to commit) and send back the certificate, and you're all set.
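On the CA side, by the way, the signing step itself can be as simple as one OpenSSL invocation (a sketch; ca.crt, ca.key and the validity period are placeholders for whatever your CA uses):

openssl x509 -req -days 100 -in myrequest.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out mycert.crt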
How to use the certificate depends on your application, of course. You can add the pkcs11 authentication provider to Firefox, OpenVPN has some pkcs11-related settings, etc.
I'll try the OpenVPN stuff in a minute :-)
Pretty cool stuff, if this works out... Both VPN and SSH authentication will be done using my eID if it turns out well.
Edit: right, I was able to sign my generated request (using my eID's authentication certificate) with our VPN's CA, but now I hit an issue with issuer certificates: OpenVPN seems to look for an issuer certificate matching the C/CN/SN/GN/serialNumber of the certificate on my eID. This is, obviously, not the way I'd want it to work... Isn't it possible to tell OpenVPN to use some_file.crt as the certificate, but use the key in some slot on my eID as the key? Using PKCS11 seems to disable the ability to use file-based certificates :-(
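For reference, the PKCS#11 bits of the client config I'm experimenting with look roughly like this (option names as in OpenVPN's pkcs11 support; the id string is whatever openvpn --show-pkcs11-ids prints for your card):

pkcs11-providers /usr/lib/opensc-pkcs11.so
pkcs11-id '<serialized id from --show-pkcs11-ids>'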
Lately at VTK we started to use SSL (and X509 keys) in more places than just one webserver. We figured using a central CA (instead of one per server) and managing keys centrally would be A Good Thing.
So I created a LUKS volume on one of our servers (which is only usable by us admins) to store CA data. OpenSSL is kinda tough to work with though (well, lots of commands with lots of command line parameters ;-)), so I decided to create some sort of text-based interface around it, inspired by OpenVPN's EasyRSA scripts.
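For the curious: setting up such a volume boils down to a handful of commands (a sketch; the device node and mount point are examples, not our actual setup):

cryptsetup luksFormat /dev/sdb1        # one-time: initialise the LUKS volume
cryptsetup luksOpen /dev/sdb1 cadata   # map it to /dev/mapper/cadata
mke2fs -j /dev/mapper/cadata           # create an ext3 filesystem on it
mount /dev/mapper/cadata /mnt/cadata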
I titled the end result CAAdmin. You can find a gitweb view (including pull URL) here if interested. Fixes or patches to add functionality are very welcome (email :-)).
Currently it allows you to:
- Create a new CA
- Generate server keys and certificates
- Generate client keys and certificates (both password protected and without password)
- List your CA's CRL
- Create a CRL file to distribute to your servers
- Revoke a certificate (the underlying OpenSSL calls are sketched below)
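Under the hood these are mostly plain openssl ca invocations; revoking a certificate and regenerating the CRL, for instance, comes down to something like this (a sketch, assuming an OpenSSL CA config in ca.cnf):

openssl ca -config ca.cnf -revoke someclient.crt
openssl ca -config ca.cnf -gencrl -out crl.pem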
Functionality to sign an incoming certificate request still needs to be added. I'd love to figure out whether it's possible to use my (Belgian) eID card (and reader): I can read the data on it and use it for SSH authentication, but I haven't figured out yet whether it's possible to get a certificate signing request out of it, so I can use the private key stored on it to access some of our key-based SSL services... Any pointers?
09/02/07
Most people most likely saw the YouTube movie on content-aware image resizing which got blogged about quite a lot lately. I read the corresponding paper and wrote an implementation (not finished/perfect at all, but well) in Python. If it ever becomes "production quality", a Gimp and/or GEGL plugin would be nice.
Here's a sample:
[Image: original image]
[Image: resized using Gimp, cubic interpolation, 150px]
[Image: resized image, 150px]
[Image: overview of removed pixels]
This transformation is done in about 2 seconds (mainly because of some calculations in pure Python; for most calculations I use the Python Imaging Library and SciPy/NumPy, which are mainly C modules and much faster). As you can see, the implementation still needs lots of love.
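The energy function from the paper is basically an image gradient magnitude, which is one of the parts SciPy handles; a sketch of that step (using Sobel derivatives; my actual code may differ):

import numpy as np
from scipy import ndimage

def energy_map(gray):
    # gray: 2-D float array containing the grayscale image
    dx = ndimage.sobel(gray, axis=1)  # horizontal derivative
    dy = ndimage.sobel(gray, axis=0)  # vertical derivative
    return np.abs(dx) + np.abs(dy)    # L1 gradient magnitude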
You can see another sample (image resized from 1000 to 250px in 8 seconds) here.
Git repository is here. Please email any patches!
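If you just want the gist before cloning: the core is a single dynamic-programming pass over the energy map (the array from the sketch above), plus a backtrack to recover the cheapest path. A minimal sketch of that idea (not my actual implementation):

def cheapest_seam(energy):
    # cost[y, x] = energy[y, x] + cheapest of the (up to) three neighbours above
    cost = energy.copy()
    h, w = cost.shape
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)
            cost[y, x] += cost[y - 1, lo:hi].min()
    # backtrack from the cheapest bottom pixel, one x coordinate per row
    seam = [int(cost[-1].argmin())]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam.append(lo + int(cost[y, lo:hi].argmin()))
    seam.reverse()
    return seam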
The algorithm itself is surprisingly "simple" and easy to understand, great job by the researchers! More on that later. I should be studying mathematical analysis now, 2nd time I got to redo this exam, bloody university :-(
Update:
[Image: resized using the very expensive algorithm]
This image was generated by:
- Loading the input image
- 150 times:
  - Calculate energy and cost of the current working picture
  - For every pixel in the top row, calculate the cost of the "best path" starting at this pixel
  - Figure out which path is the cheapest
  - Create an image which is the working image, minus this best path
  - Replace the working image with the image generated in the previous step
This took 273 seconds on my system, as the complexity is something like O(150 * N^6 * M), where M is the complexity of the gradient magnitude calculation.
Conclusion: not a workable solution :D
Do notice there are significant changes between this image and the one posted above. As I wrote this as a quick hack, I didn't include code to show which paths were removed from the original image.
08/22/07
Recently I had to write some article. There are several formats to write articles in: one can use plain text, some use something like OOo Writer or MS Word, others use LaTeX, and some XML fanatics use DocBook. Personally I like writing texts in plain text format, especially using the ReST (reStructuredText) markup: the source document stays very easy to read, including some basic formatting, and you can transform it to XHTML, LaTeX/PDF and several other formats using the simple tools provided by the DocUtils project.
As I'm using a plain text format, it's pretty useful to make use of some revision control system. And guess what, once again Git is a good choice ;-)
Now in this specific case, I needed to have this article online on my webserver too. I've got a bunch of public git repositories on that machine, including the one containing the article, but obviously that alone is not very useful, as the HTML file rendered from the ReST source file should not be in Git.
So here's what I did: first I created a very basic Makefile which does the output file generation. Here it is:
RSTOPTS=--time --strict --language=en
RSTHTMLOPTS=--embed-stylesheet

txt_SOURCES=myinputfile.txt

HTML=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).html)
PDF=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).pdf)
TMPS=$(foreach t,$(filter %.txt,$(txt_SOURCES)),$(basename $(t)).txt.tmp)

COMMIT_DATE=$(shell git-show | grep ^Date | sed "s/^Date: *//")
COMMIT_REV=$(shell git-show | grep ^commit | sed "s/^commit *//")

default: all

%.txt.tmp: %.txt
	@sed -e "s/@DATE@/$(COMMIT_DATE)/" -e "s/@REV@/$(COMMIT_REV)/" $^ > $@

%.html: %.txt.tmp
	rst2html.py $(RSTOPTS) $(RSTHTMLOPTS) $^ > $@

%.pdf: %.txt.tmp
	rst2latex.py $(RSTOPTS) $^ > $(basename $@).tex
	pdflatex $(basename $@).tex
	rm -f $(basename $@).log $(basename $@).out $(basename $@).tex $(basename $@).aux

html: $(HTML)

pdf: $(PDF)

all: html pdf

clean:
	rm -f $(HTML)
	rm -f $(PDF)
	rm -f $(TMPS)
There are 3 variables on top you should/could edit: RSTOPTS, the options passed to all rst2* tools; RSTHTMLOPTS, the options passed to rst2html.py only; and txt_SOURCES, a simple list of all input files.
I also added functionality to use @DATE@ and @REV@ tags in your source files. These are expanded to the commit date and the commit/revision SHA1 sum of the tree you're working with. In ReST this is useful in the top part of your document, where you define the author, contact address, version information, date,...
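In the ReST source that part looks something like this (standard docinfo fields; the values are just examples):

:Author: My Name
:Contact: my@email.tld
:Version: @REV@
:Date: @DATE@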
I committed this Makefile and my input ReST text file to some bare repository on my server (eg. /home/me/public_git/myarticle.git). Once this was done, I created a directory in my htdocs directory (let's call it "/home/me/public_html/myarticle"), chdir'ed into it, and made a clone of the repository: git-clone /home/me/public_git/myarticle.git
Now the Makefile and ReST file were in place, and I could run make html. After making an index.html symlink pointing to myarticle.html, the article was available online.
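Spelled out, with the paths from above:

cd /home/me/public_html/myarticle
make html
ln -s myarticle.html index.html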
Now one more feature had to be added: I want the public article HTML to be up-to-date with my repository, whenever I push new changes from my laptop to my server. This is very easy to achieve using git's hook script support. In /home/me/public_git/myarticle.git/hooks I created a file called post-update and ran chmod +x on it. Here's the content:
#!/bin/bash
# Update the public_html checkout and rerender the HTML on every push.
CODIR="/home/me/public_html/myarticle"
export GIT_DIR=$CODIR/.git
pushd $CODIR > /dev/null
/usr/bin/git-pull > /dev/null 2>&1
make html > /dev/null
popd > /dev/null
exec git-update-server-info
Now whenever I run git-push myserver in my local tree, this script is executed: the checkout inside my public_html directory is updated and the HTML file is rerendered, so the latest version is online.
Pretty useful!
08/15/07
Today's XKCD is just hilarious: