
June 12, 2006

JCDL – Augmenting Interoperability

Don Waters convening and describing the OAI protocol for metadata harvesting and pointing out the paucity of Dublin Core for this use.
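For the uninitiated, OAI-PMH harvesting is just HTTP plus XML. Here is a minimal sketch, assuming a hypothetical repository endpoint; oai_dc is the required Dublin Core baseline whose thinness Waters is pointing at.

```python
# Minimal OAI-PMH harvest sketch. The endpoint URL is hypothetical; the verb
# and the oai_dc metadataPrefix are part of the protocol. Real harvesters also
# follow resumptionToken paging, omitted here.
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "http://repository.example.org/oai"  # hypothetical endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

url = BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as resp:
    tree = ET.parse(resp)

for record in tree.iter(OAI + "record"):
    header = record.find(OAI + "header")
    identifier = header.findtext(OAI + "identifier")
    # All you get back is flat descriptive metadata -- titles, creators,
    # dates -- not the objects themselves, which is Cliff Lynch's point below.
    titles = [t.text for t in record.iter(DC + "title")]
    print(identifier, titles)
```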

Tony Hey of Microsoft. Carl Lagoze of Cornell. Herbert Van de Sompel. Cliff Lynch.

Tony has slides. He says that Microsoft is committed to supporting scientists and engineers via open standards. He ran the equivalent of NDIIPP in the UK. e-Science (aka data-centered science) is the future. Microsoft has to live in the world of open source. By e-Science he means not so much data-centered as data-processing/IT-driven. The Contoso virtual science library looks like Amazon for scientists with some Microsoft icons on it. (Slide of Bill Gates.) e-Science mash-ups? Interoperability of repositories. Ginsparg (arXiv), Lipman (NLM), and Harnad (EPrints) cited.

Herbert has slides too. He’s at Los Alamos these days. Digital objects and repositories and their value chains. Need richer cross-repository services such as discovery — must have digital-object representation and semantics. Scholarly communication workflow for, say, an overlay journal by recombining and adding value. Too many data models and services that do not interoperate.

Carl on digital objects, data models and surrogates. Unbaked, says Carl, but here goes. He’s presenting findings from the Pathways Project, which comes from Herbert and Carl and others. Pathways, he says, sits above DSpace, Fedora, EPrints, etc.
Repository-centric identifier paradigm (see J. Kunze and others).

Cliff now moderates ?s. But he has his own comments first. How far we’ve come with the metadata harvesting protocol, but it has limits as it doesn’t really get you any objects, says CL. Dealing with very messy complex objects to the point that metadata and object are confused and conflated. Basically trees with decorations on them — how to decorate the trees. Pretty simplified way of looking at complex problems. How far should harvest be pushed toward search? http://msc.mellon.org/Meetings/Interop/ for notes from the April meeting. If I had it all to do again, I wouldn’t use ‘surrogate’ but ‘representations,’ says CL.

Many of the questioners recall the failed attempts of the ’80s and ’90s to do effective data harvesting etc. No working models yet, it is charged. I too complained to Kunze and to CL at OAI presentations that it was not ready for adoption outside the research community. In some ways it’s like watching Wile E. Coyote opening his latest Acme box with the product that will help him finally catch the Road Runner. This time we’ll be sure to catch him! Rich interoperability is done well within constrained domains but not so well across domains. Like not at all.

JCDL – Named Entities

The difficulty of automatic extraction of named entities from mid-19th-century American newspapers is discussed in the first paper, from the Perseus Project.
Futures include how to integrate user corrections etc. (Wikipedia-like additions as an adjunct/correction to auto-generation).

?s How to deal with new names particularly in modern and expanding universes.
Machine learning systems vs. rule writing. The Perseus folks are using rule writing, but machine learning is getting better and PP will prolly use one of the open-source versions (a toy sketch of the rule-writing approach follows these notes).
Have news correspondents’ use of pseudonyms caused problems and/or have they been identified? No.
Information network overlays using RDF and Fedora could be very interesting.
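Not Perseus’s actual rules, but a toy illustration of what the rule-writing approach looks like: hand-built patterns keyed on honorifics and capitalization, roughly the kind of cue such systems lean on for period newspaper text.

```python
# Toy rule-based person-name extractor. The honorific list and pattern are
# invented for illustration, not taken from the Perseus system.
import re

HONORIFICS = r"(?:Mrs|Miss|Mr|Dr|Gen|Col|Capt|Rev|Hon)\.?"
PERSON_RULE = re.compile(
    rf"\b{HONORIFICS}\s+(?:[A-Z][a-z]+\s+)?[A-Z][a-z]+\b"
)

sample = "Gen. Grant and Mr. Lincoln met with Col. Ellsworth near Richmond."
print(PERSON_RULE.findall(sample))
# -> ['Gen. Grant', 'Mr. Lincoln', 'Col. Ellsworth']
```

Writing and tuning rules like these by hand is exactly the labor that machine-learned taggers promise to replace.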

“Learning to Deduplicate,” from two Brazilian universities. Use of fitness functions discussed as a way to deduplicate records.
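A rough sketch of the idea (not the authors’ actual system, and the field names are invented): a candidate dedup function is a weighted combination of per-field similarities, and its fitness is simply how well it separates known duplicate pairs from non-duplicates in a labeled sample; an evolutionary search then keeps the best-scoring candidates.

```python
# Sketch of fitness-driven deduplication over bibliographic records.
# Records are dicts with hypothetical fields; difflib stands in for whatever
# similarity measures the real system uses.
from difflib import SequenceMatcher

FIELDS = ("title", "authors", "venue")

def field_sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_score(rec1, rec2, weights):
    # one candidate "individual": a weighted sum of field similarities
    return sum(w * field_sim(rec1[f], rec2[f]) for f, w in zip(FIELDS, weights))

def fitness(weights, labeled_pairs, threshold=0.75):
    # fraction of labeled pairs this candidate classifies correctly;
    # an evolutionary loop would mutate/recombine weights and keep the fittest
    correct = 0
    for rec1, rec2, is_dup in labeled_pairs:
        predicted = candidate_score(rec1, rec2, weights) >= threshold
        correct += (predicted == is_dup)
    return correct / len(labeled_pairs)
```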

Why people with common names need ClaimID! Lots of looking at the ACM Digital Library to notice multiple entities for a single author and also listings of common names (especially but far from exclusively Asian names — Jeff Ullman is a recurring example). ClaimID isn’t mentioned but it would help greatly. All of this work is highly mathematical and doesn’t involve humans — hmmmm.

“Also by the Same Author”: a UK-based project to move to the semantic web called Advanced Knowledge Technologies. How to integrate CiteSeer metadata with an ontology. Disambiguation amongst 3,000-plus citations. Observation: people cite their own work, so if they cite an author with a similar name then they are likely the same person (98% of the time). So you can get at a bunch of name variants that way, and self-citation is greatly helpful in increasing precision. Why would it ever fail, the 2–8%? Many, many co-authored papers are the biggest problem, especially if the co-authors are very diverse and the work is “not so coherent as to subject matter.”
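A minimal sketch of how that self-citation heuristic might be applied (data structures and the name-compatibility test are invented for illustration): when a paper’s author cites a compatibly named author, fold that cited name form into the same cluster.

```python
# Self-citation heuristic for author-name disambiguation, as described above:
# authors usually cite themselves, so a cited name that is "similar" to the
# citing author's name is treated as the same person. Helper names are mine.
def surname_and_initial(name):
    parts = name.replace(".", "").split()
    return (parts[-1].lower(), parts[0][0].lower())

def compatible(name_a, name_b):
    # "similar name" here means same surname and same first initial
    return surname_and_initial(name_a) == surname_and_initial(name_b)

def merge_self_citations(paper_author, cited_authors, name_clusters):
    """Fold cited name variants compatible with paper_author into its cluster."""
    cluster = name_clusters.setdefault(paper_author, {paper_author})
    for cited in cited_authors:
        if compatible(paper_author, cited):
            cluster.add(cited)  # likely the same person under another name form
    return name_clusters

clusters = merge_self_citations("J. Ullman", ["Jeffrey Ullman", "A. Aho"], {})
print(clusters)  # {'J. Ullman': {'J. Ullman', 'Jeffrey Ullman'}}
```

The 2–8% failures show up naturally here too: with many diverse co-authors, a crude compatibility test will occasionally merge two genuinely different people.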

Dr Fun throws in the towel

Bye Dr Fun
Peeps, spam, microbes, chickens and more will be missed. Dr Fun’s creator and perpetrator, Dave Farley, wrote last night to say that he had completed 10 years of the single-panel comic and that he was done. I’ve loved Dr Fun for the 13 years it took Dave to get the 10 years done. I’ve always been delightfully surprised and amazed. There are stories behind the panels, of course, and having known Dave for over a decade I can understand his frustrations as well as his triumphs. All of Dr Fun will still be available on ibiblio.org forever or longer.
Dave is really a creative and weird (in the best sense) guy. I’ve never met him even though we’ve worked together since 1993. I’m looking forward to seeing what he’s up to next.

Note: Dr Fun is now on Slashdot.

JCDL – Panel on Google and Scanning

Intros by Cathy Marshall of Microsoft

Dan Clancy – Engineering director at Google Books
David Ferriero – NY Public Library (a Google 5 site)
Dan Greenstein – California Digital Library
Cliff Lynch – moderator; dir of CNI

Dan Clancy starts with an overview of Google Books. Only 15% of the books are in print, but how to deal with the 85% that are out of print? The 15% is fine by the publisher since they hope to sell from that 15%. G5 = Harvard, Stanford, Oxford, Michigan and NY Public. Everything — not just that which is “valuable.” Sample pages (from publisher); snippet (Google claims fair use); full text (generally public domain). “Scanning is the easy part.”

David F – about NY Public’s involvement in G5. Non-exclusive agreement to do comprehensive scanning of public domain materials only.

Dan Greenstein – about the Open Content Alliance (note: UNC is a member, both the Libraries and the School of Information and Library Science). A bit about the formats that come with OCA, including JPEG2000 and DjVu files. Funding is different from Google’s: smaller, more diverse, and non-profit. A more complex entity whose members embrace the “qualities of open.” All scanning of public domain works. Third-party indexing is allowed by OCA and not by Google.

Cliff Lynch talks about some studies of the collections in G5. 430 different languages; only 49% English. This is supported by Clancy and Ferriero.

Differences between working with Google engineers and Google lawyers are discussed at length… Research limitations are being discussed. What should Google do to facilitate research in digital libraries? Tell Dan Clancy (he says). Greenstein says that public universities can’t possibly make all this available, but he’s mostly wrong, says Clancy. Support is always an issue.

Open to audience. Post-1923 stuff may be in the public domain; what is being done to identify that stuff? DG says nothing beyond the obvious (that being federal). Orphan works will be especially hard. DC says Google is in litigation so they have a very conservative interpretation, but once they are done scanning there will probably be more work on renewals and orphans. DF says NYPL is the same as OCA.

Linking, annotation, etc. for adding value to content? DC, speaking as a researcher and not as Google, mostly restates the question (he admits). DF says NYPL wants to enhance the content. The G contract allows content sharing with places like the Digital Library Federation. DG notes that these mostly open projects are closed because some of the problems are too big to be gotten at. Orphans, etc. Persistence.
