Wade Hampton (whampton@staffnet.com)
Wed, 12 Jan 2000 16:36:45 -0500
Greg Ferguson wrote:
>
> On Jan 12, 8:54am, Aaron Turner wrote:
> > Subject: Re: ODE Summary Report #1
> > On Tue, 11 Jan 2000, Paul M. Foster wrote:
> >
> > ...
> > Maybe some other group would be interested in a local document
> > retrieval engine like you describe, but I personally find that
> > a much harder and less powerful solution. You either end up
> > requiring the user to install a ton of support programs to power
> > the features you want, or giving them a simple, stripped-down,
> > barely-useful system that is easy to install.
> > Maybe I'm oversimplifying things, but that's how I see it -- I can
> > build a much more complicated system on a dedicated server than
> > try to make the same system work on the 10 million different Linux
> > systems around the world.
> > ...
>
> I'll reiterate my point on this --
>
> A local solution does not require the entire engine, as you describe
> it. "It" should work with whatever content is at it's disposal
> on one's local machine.
It should be smart enough to know which areas to search/index, and
use those. I have been mentioning such a concept for a year or so.
The idea was indexing the standard locations: the whole /usr/doc
tree, the /usr/src/linux/Documentation tree, man pages, info pages,
and possibly a private tree for the doc system itself. It should
have the ability to "import" docs from external sources (read from
the WWW, or installed via deb or rpm?). It should also be capable
of indexing and/or referencing external documents via URL
(importing only an index).
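To make that concrete, here is a minimal sketch (Python) of such a
local indexer. Everything in it -- the default paths, the word-based
index, the build_index() function -- is an illustrative assumption of
mine, not an agreed design, and it skips details such as unpacking
compressed man/info pages:

#!/usr/bin/env python
"""Sketch: index the standard local documentation trees."""

import os
import re

# Standard locations mentioned above; these defaults are illustrative.
DOC_ROOTS = [
    "/usr/doc",
    "/usr/src/linux/Documentation",
    "/usr/man",
    "/usr/info",
]

WORD_RE = re.compile(r"[A-Za-z]{3,}")

def build_index(roots):
    """Map each word to the set of files that contain it."""
    index = {}
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, errors="ignore") as fh:
                        text = fh.read()
                except OSError:
                    continue  # unreadable file; skip it
                # Index each distinct word found in this file.
                for word in set(WORD_RE.findall(text.lower())):
                    index.setdefault(word, set()).add(path)
    return index

if __name__ == "__main__":
    idx = build_index([r for r in DOC_ROOTS if os.path.isdir(r)])
    # Look up a term the way a front end might:
    for hit in sorted(idx.get("kernel", set()))[:10]:
        print(hit)

Importing an external document via URL would then just mean merging
its index entries (not the document body) into the same structure.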
>
> Yes, it may obviously be a less powerful solution, but I (and others)
> may still find it extremely useful.
As would I. My main development machine is on a private intranet
and not on the Internet; the laptop is on the Internet. This makes
for a real pain....
>
> If we are looking into building this, why not create it in a scalable
> manner to allow for a local capability (using a shared code base,
> etc.)? Again, it doesn't need all the bells and whistles of the master
> site, but I think something lightweight and powerful can still be
> created.
The reusable technology would be the converters (man to YY, info to
YY, etc.), the core of the search technology, the GUI, and so on.
However, the back-end storage of docs on a public WWW server should
hold pre-converted docs (e.g., man pages pre-converted via man to YY
or man2html) and should be database driven -- hence different,
scalable back ends.
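To sketch that shared-code/swappable-back-end idea (Python again,
all class and method names hypothetical): man2html below stands in
for whatever converters we adopt, and the converter is assumed to
take the source file as its argument. A stand-alone machine gets a
single-file SQLite store; the public WWW server would put a real
database behind the same store()/fetch() interface, so the
converters, search core, and GUI would not change:

import sqlite3
import subprocess

# Converter registry: source format -> command emitting HTML.
CONVERTERS = {
    "man": ["man2html"],  # e.g., pre-convert man pages to HTML
}

def convert(fmt, source_path):
    """Run the registered converter and return the HTML it emits."""
    cmd = CONVERTERS[fmt] + [source_path]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

class LocalBackend:
    """Lightweight store for a stand-alone machine: one SQLite file."""

    def __init__(self, path="docs.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS docs (name TEXT PRIMARY KEY,"
            " html TEXT)")

    def store(self, name, html):
        """Save a pre-converted document under its name."""
        self.db.execute(
            "INSERT OR REPLACE INTO docs VALUES (?, ?)", (name, html))
        self.db.commit()

    def fetch(self, name):
        """Return the stored HTML for a document, or None."""
        row = self.db.execute(
            "SELECT html FROM docs WHERE name = ?", (name,)).fetchone()
        return row[0] if row else None

# A ServerBackend exposing the same store()/fetch() methods could sit
# on the public server's database; only this class would differ.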
>
> What about those in a closed/Internet-disabled environment? We (SGI)
> run into this situation all the time (secure sites/installations),
> especially within certain branches of the gov't we sell h/w to.
Many intranets are not connected to the Internet, or are only
connected for something like mail.... This is a corporate
REQUIREMENT, and a government requirement!
>
> If we can't do this, I think we're shooting ourselves in the foot
> right from the start. A well-thought-out, scalable architecture
> should provide for this, IMO. Perhaps this is not the group to
> provide such a solution, but it seems like the logical home to me.
As I stated earlier -- our first step should be agreeing upon the
overall requirements. Indexing and compatibility with tools such as
these should be a requirement.
Cheers,
--
W. Wade, Hampton <whampton@staffnet.com>
Support: Linux Knowledge Base Organization http://linuxkb.org/
Linux is stability, performance, flexibility, and overall very fun!
The difference between `Unstable' and `Usable' is only two characters: NT