Comparing LambdaXML to XML Pipelines

Here are some quick notes on the difference between what should be possible with lambdaXML and what is currently possible using the XML Pipeline Definition Language (XMPDL) as implemented by the Markup Technology Pipeline.

What is good-old-fashioned pipelining?

Pipelining is just sending input data into a single process and then, possibly, on into another process. Rather arbitrarily, I will call all concrete programs (such as an instance of an XSLT processor) processes, while I will call all abstractions of them functions. Indeed, the binding of an abstract function to a process is much like the binding of a variable in the lambda calculus. In lambda-calculus terms, a process can be considered as abstracted by a function, which we will call f, and the application of f to the argument x is just (lambda x.f x) x, which reduces to f x. So, if one wanted to create a good-old-fashioned linear "pipeline" through functions f1...fz, the pipeline just becomes lambda x.fz(...(f2 (f1 x))...). One could imagine a pipeline taking multiple arguments for the first process f1, and producing named arguments elsewhere that are stored. This take on pipelining is modelled on the UNIX pipeline, of course.
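
As a minimal sketch of this view (in Haskell rather than any pipeline syntax, with made-up stage names), a linear pipeline is nothing more than function composition:

    -- Minimal sketch: each stage is a function from document to document,
    -- and a linear pipeline is just their composition.
    type Doc = String                     -- stand-in for an XML infoset

    f1, f2, f3 :: Doc -> Doc              -- hypothetical stages
    f1 d = "<a>" ++ d ++ "</a>"
    f2 d = "<b>" ++ d ++ "</b>"
    f3 d = "<c>" ++ d ++ "</c>"

    -- lambda x. f3 (f2 (f1 x))
    pipeline :: Doc -> Doc
    pipeline = f3 . f2 . f1

    main :: IO ()
    main = putStrLn (pipeline "hello")    -- <c><b><a>hello</a></b></c>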

Problems

The crucial problems with good-old-fashioned pipelining are then apparent:

  1. Linear Order
    It seems like it would be good to have conditionals to allow the ordering to branch. Also, recursion would be useful for some tasks, but then termination becomes undecidable once variables are added (see the sketch after this list).
  2. A single anonymous argument and a single anonymous output per function
    This is not to say that the functions cannot take other arguments, such as an XSLT processor taking a particular stylesheet, and in general it seems that XMPDL gives us the ability to name multiple result and input infosets, although one is allowed to be anonymous.
  3. No anonymous functions
    In other words, functions are second-class citizens!
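
To make these limitations concrete, here is a small sketch (Haskell, with invented names and stages) of two things a purely linear, first-order pipeline cannot express: a branch whose ordering depends on the input, and a function built anonymously and passed around as a value:

    type Doc = String

    isAtom :: Doc -> Bool
    isAtom d = take 6 d == "<feed "        -- a stand-in test on the input

    toRss, toHtml :: Doc -> Doc            -- hypothetical stages
    toRss  d = "<rss>"  ++ d ++ "</rss>"
    toHtml d = "<html>" ++ d ++ "</html>"

    -- 1. A branching pipeline: which stage runs depends on the input.
    branchingPipeline :: Doc -> Doc
    branchingPipeline d = if isAtom d then toRss d else toHtml d

    -- 3. Functions as first-class citizens: an anonymous function is handed
    --    to another function as an ordinary value.
    applyTwice :: (Doc -> Doc) -> Doc -> Doc
    applyTwice f = f . f

    example :: Doc
    example = applyTwice (\d -> "<wrap>" ++ d ++ "</wrap>") "x"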

What does XMPDL 1.0 give us?

  1. Outputs and Inputs
    output and input simply map onto the output and input of every function (with optional named param inputs), with the inputs and outputs being named by a label. This gives us lx:resultset as output, and lx:bind as input and params. However, lx:bind should obviously not just bind variables (or rather names, in XMPDL) to values; it should also be able to bind functions to variables.
  2. Binding Functions to Processes
    It gives us a way of binding a process to a function through the processdef (see the sketch after this list).
  3. Naming
    It gives us the process input that names the function and its associated input, output and param units. Obviously this will have to be done, and it seems XMPDL just "names", and does not create real variables...or does it?
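
Viewed through a functional lens, what XMPDL gives us might be sketched roughly as follows (the types and names below are my own invention, not XMPDL syntax): a processdef binds a name to a concrete process, and each process works over named inputs, outputs, and params.

    import qualified Data.Map as Map

    type Doc    = String
    type Name   = String
    type Params = Map.Map Name Doc    -- named params, e.g. a stylesheet for XSLT

    -- A "processdef" binds a name to a process; a process here is a function
    -- from named params and named inputs to named outputs.
    type Process = Params -> Map.Map Name Doc -> Map.Map Name Doc
    type Env     = Map.Map Name Process

    identityProcess :: Process
    identityProcess _params inputs = Map.mapKeys ("out-" ++) inputs

    -- The environment "names" processes, much as XMPDL does, without yet
    -- treating those names as real variables.
    env :: Env
    env = Map.fromList [("identity", identityProcess)]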

Adding the lambda calculus

  1. conditional: the equivalent of cond, given by Henry as lx:cond, with lx:case and otherwise implementing the two parts of the traditional cond...else expression.
  2. lambda: this is obviously important, and Henry mentions it in the form of lx:lambda, which has no clear match in XMPDL - after all, we want anonymous functions.
  3. let and bind working right: We would have to implement not just names, but actual variables, which could then be bound to lambda expressions, not just processes.
  4. map: Apply a function, passed as an argument, over any data (see the sketch after this list).
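
A rough sketch of what these four additions buy us, in Haskell rather than any lx: syntax (all names below are illustrative only):

    type Doc = String

    -- 1. cond: choose between two branches based on the input.
    cond :: (Doc -> Bool) -> (Doc -> Doc) -> (Doc -> Doc) -> Doc -> Doc
    cond p caseF otherwiseF d = if p d then caseF d else otherwiseF d

    -- 2. lambda: an anonymous function, written in place.
    wrap :: Doc -> Doc
    wrap = \d -> "<x>" ++ d ++ "</x>"

    -- 3. let and bind: a variable bound to a lambda expression, not just to a process.
    example :: Doc -> Doc
    example d = let stage = \s -> "<y>" ++ s ++ "</y>"
                in stage (stage d)

    -- 4. map: apply a function, passed as an argument, across a sequence of documents.
    mapStage :: (Doc -> Doc) -> [Doc] -> [Doc]
    mapStage = map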

Non-linear Pipelines

    It appears that NetKernel is also trying to solve this problem in a fairly interesting way...

    Things that "non-linear pipelines" do that map onto the lambda calculus or lambdaXML clearly

  1. Tees
    One obvious thing is tees, as implemented by the Oberon people, in which a document is fed to multiple transformations happening in parallel. However, since in lambdaXML the order of operations is just left-associativity, there should be no reason not to declare two or more functions that take the same variable (see the sketch after this list).
  2. Iteration
    This is easily done using recursion.
  3. Aggregation
    This is done automatically by lambdaXML as the results of the function applications are stored as the result document.
  4. Exception Handling
    These can just be considered functions that take as arguments the functions used as processors, checking their correctness and giving their error state as output. As such, I think a less awkward mechanism than the traditional try...catch would be to say that each function has an associated failure state, much like the error element. Every function then just includes that as an optional parameter, although how the error state of the actual process invoked by lambdaXML should be bound to the error state needed by lambdaXML is unclear.
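
As promised above, a sketch of how these four features fall out of the calculus (Haskell, with invented names; the error handling in particular is only one possible reading of the "failure state" idea):

    type Doc = String

    -- Tees: the same input is handed to two (or more) functions.
    tee :: (Doc -> Doc) -> (Doc -> Doc) -> Doc -> (Doc, Doc)
    tee f g d = (f d, g d)

    -- Iteration: recursion, e.g. applying a stage n times.
    iterateStage :: Int -> (Doc -> Doc) -> Doc -> Doc
    iterateStage 0 _ d = d
    iterateStage n f d = iterateStage (n - 1) f (f d)

    -- Aggregation: the results of several applications collected into one result.
    aggregate :: [Doc -> Doc] -> Doc -> [Doc]
    aggregate fs d = [f d | f <- fs]

    -- Exception handling: each function carries an associated failure state,
    -- modelled here with Either rather than try...catch.
    safeStage :: (Doc -> Doc) -> Doc -> Either String Doc
    safeStage f d
      | null d    = Left "empty input"    -- this stage's error state
      | otherwise = Right (f d)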

Future Additions

  1. Web Services
    Just allow the functions to be bound to WSDLs. Simple!
  2. Types
    This would basically take the type of an XML document to be its XML Schema, and then a typed variable would just make sure the input and output were schema-validated, which would lead to less explicit schema-typing. Therefore, the input would have to be schema-validated before going through the processor. Obviously the type of a function would be given with respect to the schema of its output(s) (a sketch follows this list).
  3. Semantic Web as Types
    This would allow us to say things about XML documents not just at a schema level, but at a "semantic" level: for example, whether or not this document (or a portion of it) is a member of a given class. I would highly recommend we bind the Semantic Web information directly into the PSVI.
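
A small sketch of the "schema as type" idea, with a trivial predicate standing in for real XML Schema validation (all names here are placeholders):

    type Doc    = String
    type Schema = Doc -> Bool        -- stand-in for a real XML Schema validator

    -- A typed stage: the input is validated against inSchema before the process
    -- runs, and the output against outSchema after, so the "type" of the
    -- function is given by the schemas of its input and output.
    typedStage :: Schema -> Schema -> (Doc -> Doc) -> Doc -> Either String Doc
    typedStage inSchema outSchema f d
      | not (inSchema d)       = Left "input is not schema-valid"
      | not (outSchema result) = Left "output is not schema-valid"
      | otherwise              = Right result
      where result = f d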