martini 2.0 design decision

This is a message I just sent to the martini-devel mailing list. I’m posting it here as well because it’s useful to me as a blueprint, and because I often get useful feedback on things I post here.

***************

I’m in the process of rewriting the indexing module for Martini. Before I decide how to do it, I would appreciate some feedback from community members about what would be most useful to you.

Currently, you would have your XML files, probably Olive XML files, in a directory. You would then take an ant build file, configure it to point at the files you want to index, run it, and some time later you would have an index built. The current ant build file looks like this:

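[What follows is a sketch rather than the original listing: the $indexlocation and $datadir properties, the “newspapers” target, and the field names come from the description in the next paragraph, but the <index> task and its attributes are stand-ins, not Martini’s actual task API.]

    <project name="martini-index" default="newspapers">

      <property name="indexlocation" value="/path/to/index"/>
      <property name="datadir"       value="/path/to/olive/xml"/>

      <target name="newspapers">
        <!-- hypothetical stand-in for Martini's indexing task -->
        <index indexlocation="${indexlocation}" datadir="${datadir}">
          <field name="doctype"      type="keyword"/>
          <field name="title"        type="text"/>
          <field name="displayTitle" type="stored"/>
          <field name="language"     type="keyword"/>
          <field name="body"         type="text"/>
          <field name="date"         type="date"/>
        </index>
      </target>

    </project>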

So (for those who haven’t used Martini before) the above code creates an index at $indexlocation from the files in $datadir, with fields called doctype, title, displayTitle, language, body, and date, each of which has indexing instructions particular to the type of data it holds. You run “ant newspapers” on the command line, and it goes away and does its thing.

While this has worked pretty well in the past (and Peter, I know you’ve made changes that aren’t reflected here; sorry), I think it is unnecessarily complicated. If we’re using solr anyway, I would rather have the user configure a solr schema file. Solr schema files are easier to read, more configurable, and there’s a larger community writing documentation for them. I think that decision is a no-brainer. Here’s an example of a solr schema file:

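[Again a sketch, not the original: the field names match the build file above, while the id/uniqueKey field, the types, and the analyzer are typical solr boilerplate assumed here for illustration.]

    <schema name="martini" version="1.1">
      <types>
        <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
        <fieldType name="text"   class="solr.TextField">
          <analyzer class="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
        </fieldType>
        <fieldType name="date"   class="solr.DateField"/>
      </types>
      <fields>
        <field name="id"           type="string" indexed="true"  stored="true" required="true"/>
        <field name="doctype"      type="string" indexed="true"  stored="true"/>
        <field name="title"        type="text"   indexed="true"  stored="true"/>
        <field name="displayTitle" type="string" indexed="false" stored="true"/>
        <field name="language"     type="string" indexed="true"  stored="true"/>
        <field name="body"         type="text"   indexed="true"  stored="false"/>
        <field name="date"         type="date"   indexed="true"  stored="true"/>
      </fields>
      <uniqueKey>id</uniqueKey>
      <defaultSearchField>body</defaultSearchField>
    </schema>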

All the XPath determination of which part of the document belongs in which index field happens when the solr document is prepared. Solr won’t run against your XML files natively; you have to interpret them into a form solr can understand. So what does this mean for the indexing workflow? Assuming standard Olive XML files, we can include an XSL file to transform each article into a solr document. Then we can POST each solr document to the solr servlet to add it to the index.
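
For a concrete (if hypothetical) picture, the per-article XSL might look something like this; the article, title, text, and date element names are placeholders for the real Olive schema, and the output is solr’s <add><doc> update format:

    <xsl:stylesheet version="1.0"
                    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="xml" indent="yes"/>

      <!-- placeholder element names; the real Olive schema will differ -->
      <xsl:template match="/article">
        <add>
          <doc>
            <field name="doctype">newspaper</field>
            <field name="title"><xsl:value-of select="title"/></field>
            <field name="body"><xsl:value-of select="text"/></field>
            <field name="date"><xsl:value-of select="date"/></field>
          </doc>
        </add>
      </xsl:template>
    </xsl:stylesheet>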

So here’s what I’m planning (a rough sketch of the per-article loop follows the list):

  1. Have distributed ant tasks. The ant tasks should be able to run on any machine, not just the machine hosting your XML files. (Added later: I might also do this with a simple .jar file. I know ant has been a big part of this project, but for ease of use, isn’t it easier to copy a .jar file to multiple servers than to make people install and configure ant in multiple places?)
  2. Each ant task is given a list of URLs to index. These should be URLs from which each article XML file can be fetched over the network.
  3. Ant is configured with an XSL file to turn the article XML into a solr document. It grabs the document via HTTP, transforms it, and writes the output to a tempfile on local disk. (Or, better yet, each ant task grabs the XSL from the main cocoon instance each time; that way you don’t have to update every ant instance whenever the XSL changes.)
  4. The tempfile is POSTed to solr.
  5. The tempfile is deleted.
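
Here’s a rough sketch of that per-article loop in plain Java (class name, method, and URLs are all placeholders; this is the shape of the thing, not a finished implementation):

    import java.io.File;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.file.Files;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class ArticleIndexer {

        public static void indexArticle(String articleUrl, String xslUrl,
                                        String solrUpdateUrl) throws Exception {
            // Fetch the XSL fresh each time, so a change on the central
            // server propagates without touching every indexer (step 3).
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(xslUrl));

            // Grab the article over HTTP, transform it, and write the
            // resulting solr document to a tempfile on local disk (step 3).
            File temp = File.createTempFile("solr-doc", ".xml");
            transformer.transform(new StreamSource(articleUrl),
                                  new StreamResult(temp));

            // POST the tempfile to the solr update servlet (step 4).
            HttpURLConnection conn = (HttpURLConnection)
                    new URL(solrUpdateUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
            try (InputStream in = Files.newInputStream(temp.toPath());
                 OutputStream out = conn.getOutputStream()) {
                in.transferTo(out);
            }
            if (conn.getResponseCode() != 200) {
                throw new RuntimeException("solr returned " + conn.getResponseCode());
            }

            // Delete the tempfile once solr has accepted it (step 5).
            temp.delete();
        }
    }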

This gets around one of my big questions, which is “We don’t want to keep all those solr files sitting around on disk, do we?” It seems like they would take up too much space and go stale too quickly. Instead, you could have a report that generates a list of all the XML files that have changed, and pass that to the indexer so that it only reindexes where needed.
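
For instance (purely illustrative; neither the jar name nor any of these flags exists yet), the report’s output could be handed to the indexer like so:

    java -jar martini-indexer.jar \
         --urls changed-articles.txt \
         --xsl  http://cocoon-host/article-to-solr.xsl \
         --solr http://solr-host:8983/solr/update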

So, that’s what I’m thinking. I hope this hasn’t been too incoherent. Writing it has helped me clarify exactly what I’m trying to do, at least.

Comments, anyone? Is there a reason I’m not seeing why this is a bad idea? How could I improve this process? Peter and Tricia, how might this fit with the pre-processing step you’ve been doing?

Bess
