I've been in touch with most of you about collaborating to make HTTP-NG
real. The Collaboration Workshop was very useful, and generated much grist
for the NG mill. Tim and I also met with Dave Clark last Friday to perform a
sanity check on goals for HTTP-NG.
Problem Statement
In order for the Web to continue to grow and prosper, we need to broaden the
range of environments in which the Web is usable and the scale at which it
works. Here are some example areas where current HTTP performs poorly or not
at all:
* You are attending a lecture in an auditorium full of 300 others with
laptops sharing a single radio or I/R link. You want to be able to see
the lecturer's slides (which contain both animations and video) while
also browsing elsewhere on the Web.
* You use your PDA or portable computer over a cellular telephone or
wireless connection. Research prototypes of such systems exist today.
* An advertiser buys a 30-second commercial during the World Cup or
Super Bowl, and at the next commercial break you, along with 2 million
other people, attempt to access the same URLs. We already see problems
of "flash crowds" on the Web, and these will become more and more
common.
* You want pages that contain both dynamically updating text and an
embedded video window. Such prototypes exist today in commercial
laboratories.
* You are using a laptop, disconnected from the Internet, on a beach
in the Caribbean. When you reconnect, your system should be able to
revalidate its cache, pick up notification of changes in the parts of
the Web you are interested in, and potentially post forms you've filled
out.
Many more examples are easy to generate. Each of these examples can be
characterized by one or more of the following issues:
* Scaling
* Latency
* Bandwidth
* Disconnected operation
While one can argue that the bandwidth and latency of the Internet will
improve to wired locations, the additional constraint of power consumption
for wireless PDAs and portable machines makes it clear that latencies in the
>1/2 second range and bandwidths in the 9600-19200 baud range will be with
us for a long time. Solutions to some of the above examples will likely
require use of multicast. Latency and bandwidth are also independent
variables; for example, satellite IP systems exist today which provide good
bandwidth to remote locations, but poor latency. Both round trips and
bandwidth usage must be minimized.
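To make these numbers concrete, here is a rough back-of-envelope sketch
(expressed in Python; the page and object sizes are illustrative assumptions,
not measurements) of what a half-second round trip and a ~14.4K link do to a
page of inline images when each object is fetched over its own connection:

    # Illustrative arithmetic only: the numbers below are assumptions.
    RTT_S = 0.5              # round-trip latency, seconds (>1/2 second range)
    BANDWIDTH_BPS = 14400    # link speed, bits/second (9600-19200 baud range)
    OBJECTS = 11             # one HTML page plus ten inline images (hypothetical)
    AVG_OBJECT_BYTES = 4000  # hypothetical average object size

    # Assume each object needs its own connection: roughly one round trip
    # for connection setup plus one for the request/response itself.
    setup_time = OBJECTS * 2 * RTT_S
    transfer_time = OBJECTS * AVG_OBJECT_BYTES * 8 / BANDWIDTH_BPS

    print(f"round-trip overhead: {setup_time:.1f} s")                  # 11.0 s
    print(f"raw transfer time:   {transfer_time:.1f} s")               # ~24.4 s
    print(f"total:               {setup_time + transfer_time:.1f} s")  # ~35 s

Even with generous assumptions, nearly a third of the total time is pure
round-trip overhead, which is exactly what reducing round trips attacks.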
Scaling is obviously a major issue for us. The Web is now the heaviest user
of bandwidth on the Internet. Any Web protocol deployed must deal with
issues that affect the network as a whole: congestion control, use of name
services, and so on. We must think seriously about how to scale the system
up to handle the high-load cases.
I see HTTP-NG as a family of protocols. These include protocols for:
* Caching and replication of Web servers and proxies
The caching/replication protocol will likely be primarily used between
replicas and caching proxy servers.
* Notification of changes
Users need to be notified of changes in the Web. This theme came up
many times at the Collaboration Workshop as an essential enabler for
collaborative systems. Exact requirements are not yet understood. Note
the potential interaction between notification, caching proxies, and
end users in disconnected operation.
* Client/server transport, primarily browsers to (proxy) servers.
Most current users of the WWW are now at home, optimistically a minimum
of 160 milliseconds from the closest part of the Internet (measured
from my home to my ISP, using a 28.8K-baud modem). Slower modems,
cellular modems and many wireless systems have even higher latency and
lower bandwidth. HTTP 1.X is a simple request/response protocol, not
designed for the environment where it is now most heavily used.
Persistent connections in HTTP 1.X will solve some, but not all of
these problems; HTTP itself still requires many unneeded round trips.
The current protocol also limits browsers' ability to pre-fetch or
post-fetch information as links are followed.
To address these realities of the global Internet, we need a
client/server protocol that enables out-of-order operation, priority
control over operations, streaming ("batching"), and good bit
efficiency, so that browsers perform better and the Web works better
over low-bandwidth, high-latency connections (a rough sketch of such an
exchange follows this list).
Simon Spero's work is a good starting point for discussion; he has
worked hard to minimize round trips in his design, to overcome latency
problems. He has a complete analysis of an HTTP session that is worth
reading. Simon has said he hopes to have an updated version of the
March 26th protocol specification done sometime next week.
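To make the transport requirements above concrete, here is a minimal, purely
illustrative sketch (in Python, and emphatically not Simon's wire format) of
the kind of exchange the client/server protocol should allow: many requests
batched onto one connection, each tagged with an id and a priority, so that
responses can come back in any order. Every message field below is invented
for the example.

    import json

    def encode_requests(requests):
        """Batch several requests into a single write, saving round trips.
        The message fields (id, priority, method, url) are hypothetical."""
        messages = []
        for i, (url, priority) in enumerate(requests):
            msg = {"id": i, "priority": priority, "method": "GET", "url": url}
            messages.append(json.dumps(msg).encode() + b"\n")
        return b"".join(messages)

    def decode_responses(lines):
        """Each response names the request it answers, so the server may
        return them out of order (e.g. high-priority HTML before images)."""
        for line in lines:
            msg = json.loads(line)
            yield msg["id"], msg["status"], msg.get("body")

    # Ask for a page and its inline objects in one batch, page first.
    batch = encode_requests([
        ("/lecture/slides.html", 0),   # priority 0 = most urgent
        ("/lecture/figure1.png", 1),
        ("/lecture/clip.mpg", 2),
    ])

A real design would of course use a compact binary encoding rather than
text; that is part of what "good bit efficiency" means above.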
I believe we can make progress most quickly on the client/server protocol,
but it cannot be designed in a vacuum, without regard to the other parts of
the problem. I do not mean to imply that a browser on a relatively
high-speed network might not also participate in the notification and
caching protocols, only that the client/server protocol is intended as the
human-driven transport protocol.
The WWW is only one of a number of important Internet protocols; our
challenge is also to allow other services to integrate smoothly into the
WWW, rather than believing that the HTTP-NG protocols are the universal
transport for all applications (though they will be very important ones...).
We need to think through issues like security, authentication and
authorization in this context; such information must be sharable with the
companion protocols that have already come up, and will continue to come up,
in the WWW.
Process
I'm sorry to bore you all with such a topic, but it is clear that the WWW
community has had major problems here.
Recent experience with IETF mailing lists for design work around the WWW has
not been encouraging; progress has often been dismayingly slow (e.g. on the
URI mailing list). The signal-to-noise ratio is so poor that good people
have been avoiding participation in Web design work. For example, I saw Van
Jacobson at
SIGCOMM and asked if he would be interested in participating in WWW design
issues; his response was that he had seen no venue in which he could make a
significant contribution. For W3C to succeed, this problem must be solved.
The IETF venue has worked best for polishing proposals, and possibly for
choosing between competing proposals. Simon Spero sent me the current
HTTP-NG mailing list, and it is up to over 110 names, some of which are
themselves mailing lists. W3C has talked to both John Klensin (IETF area
director) and Barry Leiner (DARPA) about these process problems, and they
concur that other venues for development are in order, particularly during
initial design. We need a venue where half-baked design ideas can be
explored without the chaos that ensues when such ideas are aired on a public
list.
As a straw man, I've been thinking of the following process for HTTP-NG:
1. Draft initial complete specification
(in private, by invitation only.)
(At this stage, it is possible there might be several proposals
developed, if appropriate.)
2. Editorial review board
(nominated by Consortium membership, to review our design).
(Also in private).
3. IETF working group review, and W3C member review.
(Public comment and review; there is a large overlap between IETF and
W3C).
Let me know if you foresee any problems with this process.
I see us in the first stage of this process.
We have two choices for how to work:
1. use conventional mailing lists, and archive them.
2. use Web technology for the discussion.
While we are not directly part of any working group that will result from
the Collaboration Workshop, the strong call there to start using the Web for
the Web's own development strikes a chord with me. Digital has a product
called Workgroup Web Forum, announced last week, that is well beyond WIT in
its capabilities. My suggestion is that we try to use it for HTTP-NG
development. Please visit the Web Forum registration desk yourself and try
it out; if people are uncomfortable with it, we can use mailing lists (or,
if others can suggest other Web-based tools worth looking at, we're open to
suggestions).
The upload applet for UNIX systems is available; it is written in TCL-X and
can be ported to your favorite UNIX box. There is also a Windows upload
applet available. I've set up an HTTP-NG Web Forum in the W3C access control
area; it has access control set up to allow you to read and write in that
Forum, after you have registered yourself. Please use your commonly used
user name for your Web Forum login name, as I've already set up a protection
group.
I've set up a mailing list for this group (http@webforum.w3.org); here are
its current contents:
jg@w3.org, jag@scndprsn.eng.sun.com, mogul@wrl.dec.com,
ses@tipper.oit.unc.edu, robm@netscape.com, ange@hplb.hpl.hp.com,
paulle@microsoft.com, stewart@openmarket.com, bcn@isi.edu, connolly@w3.org,
fielding@ics.uci.edu, frystyk@w3.org, timbl@w3.org, van@ee.lbl.gov
I anticipate adding a few more people to the list as I get in contact with
them.
Specification and implementation possibilities
There are (at least) two possible specification and implementation paths we
can see.
Hand crafted implementation path
The current specification was written by Simon Spero, and is a good starting
point for discussion of the browser/server protocol. Andy Norman of HP Labs
in Bristol has been working on a pair of proxy servers implementing (so far,
part of) this specification; the implementation has just started to run.
Such a proxy pair is clearly needed in any transition strategy. This
implementation is currently hand crafted. At the moment, there do not seem
to be any easily adopted tools for building stubs for this protocol;
commercial ASN.1 compilers are expensive, and still leave open the question
of language interfaces to the protocol. This path does not particularly
bother me; I've written more stubs and library interfaces than I care to
think about in my day, and while tedious, the outcome is quite certain.
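For those who haven't written one lately, here is a purely illustrative
sketch (in Python; the operation, header layout, and field names are all
invented for the example) of what a hand-crafted client stub amounts to:
marshal the arguments into an on-the-wire form, send, and unmarshal the
reply. This is the kind of code a stub compiler would otherwise generate for
us.

    import socket
    import struct

    def _read_exact(sock: socket.socket, n: int) -> bytes:
        """Read exactly n bytes from the connection."""
        data = b""
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise ConnectionError("peer closed connection")
            data += chunk
        return data

    def get_object(sock: socket.socket, request_id: int, url: str) -> bytes:
        """Hand-written stub for a hypothetical GET-style operation."""
        payload = url.encode("utf-8")
        # Marshal: fixed header (operation code, request id, payload length).
        sock.sendall(struct.pack("!BIH", 1, request_id, len(payload)) + payload)
        # Unmarshal the reply header, then read the body it promises.
        _op, _rid, body_len = struct.unpack("!BIH", _read_exact(sock, 7))
        return _read_exact(sock, body_len)

Tedious, as noted, but the outcome is certain; the question is only whether
a tool can generate this sort of code for us.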
ILU as a possible specification/implementation path
Dan Connolly and I have been looking into the ILU system developed at Xerox
as a possible strategy for specification and/or implementation of HTTP-NG.
ILU has already been ported to almost all of the platforms we would need it
on (the current exception is the Macintosh), and is free of intellectual
property rights problems. At a minimum, I hope we can use it for specifying
the interfaces to WWW services. It has support for both streaming and
message-based interfaces. Please take a look at it and let us know what you
think.
There are several attractions to the ILU approach beyond the avoidance of
manual stub generation.
* Separation of the specification from the messaging transport, with the
potential to move more easily to new transports as they may become
available (e.g. TTCP).
* Multiple language support across many platforms.
* A more object-oriented approach to the Web.
It remains to be seen if its promise can be realized.
I am currently recasting Simon's initial specification into ISL as an
initial experiment, both to learn about ILU and to expose the areas where it
will likely need extension. Our initial study has made us believe that ILU
comes closer to our needs than any other system we know of, but it is also
clear that it doesn't yet meet all of our requirements; ILU is also the most
modular of such systems and the most malleable to change. (In particular, we
believe we will need our own transport protocol.) Our attitude is to push on
it and see where it breaks; the Xerox folks are interested in working with
us to try to remedy problems, but if it cannot be made to meet our needs
adequately, we can feel free to "build it by hand". For the moment, regard
our work here as exploratory.
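As a flavor of what "specifying the interfaces to WWW services" might buy
us, here is a hypothetical rendering of the idea in Python (this is not
ILU's actual API, nor the ISL I am writing; every name below is invented):
the interface a client programs against is described once, and generated
stubs bind it to whatever transport we end up with.

    from abc import ABC, abstractmethod

    class WebResource(ABC):
        """Abstract interface a client sees; how calls travel over the wire
        (batched messages, a new transport, etc.) is left to generated stubs."""

        @abstractmethod
        def get(self, accept: list) -> bytes:
            """Retrieve a representation acceptable to the caller."""

        @abstractmethod
        def head(self) -> dict:
            """Retrieve metadata only, e.g. for cache revalidation."""

    class CachingProxyResource(WebResource):
        """A caching proxy can expose the same interface, so browsers,
        proxies and origin servers share one specification."""

        def __init__(self, origin: WebResource):
            self.origin = origin
            self._cached = None

        def get(self, accept: list) -> bytes:
            if self._cached is None:
                self._cached = self.origin.get(accept)
            return self._cached

        def head(self) -> dict:
            return self.origin.head()

The point is only the separation: the same interface description could drive
hand-written stubs, ILU-generated ones, or something else entirely.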
I'm currently working on a memorandum of understanding between W3C and Xerox
to help define our relationship; please do not spread this information
further at this time, until such a memo has been completed.
Status
Simon is completing another draft of his specification, which he hopes to
have done in the next week or so. In addition, Andy Norman (HP Labs,
Bristol) has a proxy server pair implementation underway, which is
beginning to show signs of life.
I've been rewriting Simon's springtime specification into ISL, and hope to
have something to look at around the end of this week.
How to progress from here
Some of you have suggested that we should schedule a weekly teleconference
to discuss progress and spur the work onwards. I'd like to know if the rest
of you think this is a good idea. I'm certainly willing to organize such a
teleconference.
I've also heard it suggested that we may want to get together for one or
more face-to-face meetings to hash things out. I'd prefer to schedule such a
meeting for a week or two after we have a good specification together, so
that we have something concrete to discuss and can make good progress.