Over the past few months I’ve been building up some frameworks for large-scale metadata mining; there are still a few pieces left to go, but I’m reasonably satisfied with the general architecture.

I’m using a distributed approach layered over MPI. MARC parsing is handled by custom code that tries to minimize copying and consing.
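For the curious, the zero-copy idea looks roughly like this in C: keep the raw record bytes where they are, walk the 24-byte leader and the directory in place, and bump a counter every time you pass a subfield delimiter. This is a minimal sketch rather than the actual parser; `count_record` and the `counts` table are illustrative names, and it assumes purely numeric tags, which holds for LC bibliographic data.

```c
#include <stddef.h>
#include <stdint.h>

#define FT 0x1E  /* MARC field terminator */
#define SD 0x1F  /* MARC subfield delimiter */

/* Illustrative counter table (not the real data structure):
 * 1000 numeric tags x 256 possible subfield-code bytes. */
uint64_t counts[1000][256];

/* Walk one raw MARC 21 record in place, bumping tag/subfield-code
 * counters. All pointers stay inside the caller's buffer; nothing
 * is copied or allocated. */
void count_record(const unsigned char *rec, size_t len)
{
    /* Base address of the data portion: leader positions 12-16. */
    size_t base = 0;
    for (int i = 12; i < 17; i++)
        base = base * 10 + (rec[i] - '0');

    /* Directory starts at byte 24; each 12-byte entry holds a
     * 3-digit tag, 4-digit field length, 5-digit start offset. */
    for (size_t d = 24; d + 12 <= len && rec[d] != FT; d += 12) {
        int tag = (rec[d] - '0') * 100 + (rec[d+1] - '0') * 10 + (rec[d+2] - '0');
        size_t flen = 0, fstart = 0;
        for (int i = 3; i < 7; i++)   flen   = flen   * 10 + (rec[d+i] - '0');
        for (int i = 7; i < 12; i++)  fstart = fstart * 10 + (rec[d+i] - '0');
        if (base + fstart + flen > len) break;  /* malformed record */

        const unsigned char *f = rec + base + fstart;
        if (tag < 10) { counts[tag][0]++; continue; }  /* 00X control fields have no subfields */
        for (size_t i = 0; i + 1 < flen; i++)
            if (f[i] == SD)
                counts[tag][f[i+1]]++;  /* count the tag/subfield-code pair */
    }
}
```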

Performance is quite reasonable: using six cores across three low-end machines, I can count all tag/subfield-code usage in 7 million gzip-compressed MARC records (the LC MARC bibliographic records from 12/2006) in just over 23 seconds.
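The distribution side of that run is essentially embarrassingly parallel: each rank counts its own share of the input files into a local table, and the results are folded together with a single MPI_Reduce at the end. Here's a rough sketch reusing the record walker above; the static file partition and the zlib-based framing loop are simplified stand-ins for what actually runs, not the real driver.

```c
#include <mpi.h>
#include <stdint.h>
#include <stdio.h>
#include <zlib.h>

#define NTAGS  1000
#define NCODES 256

void count_record(const unsigned char *rec, size_t len);  /* from the sketch above */
extern uint64_t counts[NTAGS][NCODES];                    /* this rank's local table */

static uint64_t totals[NTAGS][NCODES];  /* reduced result on rank 0 */

/* Pull whole records out of one gzipped file: each MARC record
 * begins with its own 5-digit length, so the stream can be framed
 * with no inter-record state. */
static void count_file(const char *path)
{
    unsigned char buf[100000];  /* MARC records are at most 99999 bytes */
    gzFile gz = gzopen(path, "rb");
    if (!gz) return;
    while (gzread(gz, buf, 5) == 5) {
        size_t len = 0;
        for (int i = 0; i < 5; i++) len = len * 10 + (buf[i] - '0');
        if (len < 24 || len > sizeof buf) break;
        if (gzread(gz, buf + 5, (unsigned)(len - 5)) != (int)(len - 5)) break;
        count_record(buf, len);
    }
    gzclose(gz);
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Static partition: rank r takes every size-th file from argv. */
    for (int i = 1 + rank; i < argc; i += size)
        count_file(argv[i]);

    /* Sum every rank's table into rank 0's totals. */
    MPI_Reduce(counts, totals, NTAGS * NCODES,
               MPI_UINT64_T, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        for (int t = 0; t < NTAGS; t++)
            for (int c = 0; c < NCODES; c++)
                if (totals[t][c])
                    printf("%03d %c %llu\n", t, c ? (char)c : '-',
                           (unsigned long long)totals[t][c]);

    MPI_Finalize();
    return 0;
}
```

Because the per-record work dwarfs the single reduction at the end, this shape scales close to linearly with core count, which is what makes the 23-second figure possible on such modest hardware.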

What I’d really like to do is install grid agents on the public-access machines in the university library. With a bit of luck, it should be possible to run a complete pass of support-set counts in under a second. Moore’s law scares me sometimes.

Simon
