Can Linked Data Help Hypertext Search?

Harry Halpin, <H.Halpin@ed.ac.uk>

RDF


RDF data...

[Figure: an RDF graph as a set of circles and arrows]

...merges just like that.

[Figure: more circles and arrows superimposed, showing the merged graph]

Subject and object nodes using the same URIs

The Semantic Web’s real (and perhaps only) selling point is URI-based data integration.

Linked Data

The application of Web architecture and the 303 decision to the Semantic Web is the second-generation Web of linked data.

Claims billions of RDF triples!

The Problem

Users need to re-use URIs for vocabularies and entities in order to get the full benefit of the Semantic Web.

Semantic Web-enabled ontology mapping and entity integration does not yet work well in open domains.

Imagine how urgent this need is for the W3C RDB2RDF Working Group's proposed standard, which maps text strings used as identifiers in databases to Linked Data URIs.

Why not use Web search?

If a user searches for a concept via keywords, they get too many URIs, and it is very user-hostile to inspect the RDF (or lack thereof) at these URIs to ascertain their meaning.

How can we make semantic search results better, so that we can choose the "best" URI via an API?

Hypothesis

How can we create a virtuous cycle between hypertext search and semantic search that improves results over structured data, without asking users to do anything different from what they currently do?

Are there URIs in Practice?

Question: Are there too many Semantic Web URIs of interest, or none at all?

Answer: Sample the Semantic Web using a query log, pruning any query with fewer than 10 repetitions.

Use brute-force simple rules and gazetteers (the U.S. Census for person names, the Alexandria gazetteer for places, WordNet hyponyms and hypernyms) to discover named entities with low recall and high precision.

Data Set: Microsoft Live Search Query Log for 1 month from 2007
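To make the sampling pipeline concrete, below is a minimal Python sketch of the pruning and gazetteer lookup. The file names, gazetteer contents, and exact classification rules are hypothetical stand-ins, not the rules actually used in the paper.

```python
from collections import Counter

def load_gazetteer(path):
    """One lowercased term per line, e.g. names from the U.S. Census."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

# Hypothetical gazetteer files standing in for the real resources.
person_names = load_gazetteer("census_names.txt")
place_names = load_gazetteer("gazetteer_places.txt")
concept_terms = load_gazetteer("wordnet_nouns.txt")

def classify(query):
    """Brute-force rule: a gazetteer hit marks the query as an entity;
    queries made up entirely of WordNet nouns count as concepts."""
    tokens = query.lower().split()
    if any(t in person_names or t in place_names for t in tokens):
        return "entity"
    if tokens and all(t in concept_terms for t in tokens):
        return "concept"
    return "unknown"

# Prune queries repeated fewer than 10 times, then classify the rest.
log = Counter(line.strip().lower() for line in open("query_log.txt"))
labels = {q: classify(q) for q, n in log.items() if n >= 10}
```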

Too Many URIs!

There are too many URIs for the same thing...

An average of 1,339 URIs (S.D. 8,000) returned per query. That's a lot, but most may obviously be URIs that just mention the term...

[Figure: number of URIs returned per query by query frequency]

Top Entities

Number of Searches (RDF Hits) Entity:
  1. 7311 (99) david blaine
  2. 2997 (134) jessica alba
  3. 2100 (16723) nick
  4. 1280 (178) michael hayden
  5. 1098 (10) marcus vick
  6. 1092 (199) keith urban
  7. 1015 (43) lane bryant
  8. 990 (55) desmond dekker
  9. 922 (312) jennifer white
  10. 900 (100) clay aiken
  11. 883 (359) bill cosby

Top Concepts

    Number of Searches (RDF Hits) Concept:
  1. 11383 (10767) weather
  2. 10321 (7777) dictionary
  3. 3675 (434333) people
  4. 3217 (189115) music
  5. 3117 (7196) monster
  6. 2192 (1444) autism
  7. 1468 (149436) map
  8. 1198 (17562) travel
  9. 1191 (12067) pregnancy
  10. 1104 (82074) news

The power-law blues

Very few empirical studies have been done on Linked Data, and those that exist are often of the type "Look, it's a power-law!" without any proper statistical tests. See the Clauset, Shalizi, and Newman paper referenced in my paper for a better method!

Entity queries: alpha = 2.31, with long-tail behavior starting around a frequency of 17 and a Kolmogorov-Smirnov D-statistic of .0241, indicating a good fit.

Concept queries: alpha = 2.12, with long-tail behavior starting around a frequency of 36 and a Kolmogorov-Smirnov D-statistic of .017, likewise a good fit.
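For readers who want to check such claims on their own data, here is a minimal sketch of the maximum-likelihood fit and Kolmogorov-Smirnov distance in the style of Clauset, Shalizi, and Newman. It uses the continuous approximation for simplicity; the paper's fit should be assumed to use the full discrete method with an estimated cutoff.

```python
import numpy as np

def fit_power_law(freqs, xmin):
    """MLE for the exponent alpha over the tail x >= xmin
    (continuous approximation from Clauset, Shalizi & Newman)."""
    x = np.asarray([v for v in freqs if v >= xmin], dtype=float)
    alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
    return alpha, x

def ks_distance(tail, alpha, xmin):
    """Simplified KS distance between the empirical CDF of the tail
    and the fitted CDF  P(X <= x) = 1 - (x / xmin)^(1 - alpha)."""
    x = np.sort(tail)
    empirical = np.arange(1, len(x) + 1) / len(x)
    model = 1.0 - (x / xmin) ** (1.0 - alpha)
    return np.max(np.abs(empirical - model))

# Hypothetical usage on the entity query frequencies:
# alpha, tail = fit_power_law(entity_freqs, xmin=17)
# D = ks_distance(tail, alpha, 17)   # compare with the reported .0241
```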

Hypertext Query Frequency and Linked Data?

Question: Is the amount of Linked Data returned correlated with the popularity of the query?

No: For entity queries, Spearman's rank correlation was an insignificant .0077 (p > .05), while for concept queries the correlation was a still-insignificant .0125 (p > .05).

[Figure: query popularity vs. number of Semantic Web URIs returned]
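This sort of test is one line with SciPy; the numbers below are toy stand-ins lifted from the top-entities table, not the full data set.

```python
from scipy.stats import spearmanr

# freqs[i] = how often query i appears in the log;
# n_uris[i] = how many Semantic Web URIs query i returned.
freqs = [7311, 2997, 2100, 1280, 1098]
n_uris = [99, 134, 16723, 178, 10]

rho, p = spearmanr(freqs, n_uris)
print(f"Spearman rho = {rho:.4f}, p = {p:.3f}")  # p > .05: no correlation
```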


Looking at Status Codes

The majority of URIs, 51,873 (74%), served RDF via 303 redirection.

URIs returning a 200 status code without 303 redirection still form a substantial fraction (9%) of Semantic Web URIs: RDF served, just not as Linked Data.

All hash-convention URIs are by default served with a 200 status code, since the fragment is stripped before the request reaches the server, so no redirect is needed.

Yet hash URIs account for only a minority (27%) of those URIs returning a 200 status code.

The rest are likely caused by people serving RDF who do not have access to the Web server configuration needed to serve it with 303 redirection.
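A minimal sketch of how one such status-code check works, using Python's requests library: ask for RDF and see whether the server answers with a 303 redirect, as the Linked Data recommendation prescribes. The example DBpedia URI is just illustrative.

```python
import requests

def linked_data_status(uri):
    """Request RDF without following redirects: 303 means the URI
    follows the Linked Data convention; a bare 200 means RDF (or a
    hash URI's document) is served directly."""
    resp = requests.get(
        uri,
        headers={"Accept": "application/rdf+xml"},
        allow_redirects=False,
        timeout=10,
    )
    if resp.status_code == 303:
        return f"303 -> {resp.headers.get('Location')}"
    return str(resp.status_code)

# print(linked_data_status("http://dbpedia.org/resource/Sodium"))
```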

Top 10 HTTP Status Codes for crawled URIs

Number Percent Status Code
51,873 73.97% 303
6,061 8.65% 200
4,517 6.44% 404
4,257 6.07% 500
3,147 4.49% 300
246 0.35% 406
20 0.03% 403
4 0.00% 302
3 0.00% 502

Triple Level Analysis

Moving from the level of URIs to the level of the triples accessible from those URIs.

Some URIs were inaccessible; this reduced the sample to 60,972 URIs, a 13% reduction from the crawled URIs.

The accessible URIs comprised 24,074 concept URIs (95% of all crawled concept URIs) and 36,898 entity URIs (82% of all crawled entity URIs).

Accessing each of these URIs yielded a total of 59,228 Semantic Web documents, with only 48 URIs not allowing access to a Semantic Web document.

URIs of Vocabulary Terms

Number Percent Vocab URI
366,849 33.55% DBpedia URIs
109,300 9.99% RDF URIs
100,340 9.17% RDF(S) URIs
94,520 8.65% Cyc URIs
34,136 3.12% OWL URIs
6,563 0.60% SKOS URIs
4,728 0.43% dblp.l3s.de URIs
3,263 0.29% FOAF URIs
2,170 0.20% YAGO URIs
1,836 0.16% WordNet URIs

Triple Level Statistics

The crawl yielded a total of 411,574 RDF triples: 242,829 (59%) triples for concept URIs and 168,745 (41%) for entity URIs.

Of these triples, only 1,051 contained blank nodes, a measly .25% of all triples in the corpus; 772 blank nodes (73%) were in the subject position and only 279 (27%) in the object position.

This means that the use of blank nodes is almost non-existent in our sample.

Removing blank nodes, the remaining composition was split between URI nodes (66%) and a surprisingly large minority of RDF literal nodes (34%).

Of the literals, a total of 403,119 (almost 98%) were plain RDF string literals, while only 2% had some other datatype.

Common Data Types in Crawled Triples

Number Percent Data Type
403,119 97.95% RDF plain literal
3,103 0.75% xsd:integer
2,789 0.68% xsd:string
1,185 0.29% xsd:double
522 0.13% xsd:date
248 0.06% xsd:float
136 0.03% xsd:gYear
65 0.02% xsd:gYearMonth
59 0.01% dbpedia:Rank
46 0.01% dbpedia:Dollar
14 0.00% xsd:int
9 0.00% dbpedia:Percent

(xsd: abbreviates http://www.w3.org/2001/XMLSchema#)

What Language? RDF or OWL?

Of the total 1,093,212 URIs in triples harvested from the crawled accessible URIs, only 243,776 (22%) were from one of the primary W3C Semantic Web knowledge representation languages, either RDF, RDF(S), or OWL.
  1. RDF: 109,300 URIs (45%)
  2. RDF(S): 100,340 URIs (41%)
  3. OWL: 34,136 URIs (14%)
This does not mean OWL is irrelevant, as ontologies constructed with OWL could be deployed to model the concepts and entities employed in `instance' data.

The usage of OWL, RDF(S), and RDF terms does not form a power law: a few terms vastly dominate, while the vast majority of other terms are not used at all.

RDF and OWL Constructs in Crawled Triples

Number Percent Language Construct
73,451 30.31% rdfs:Class
47,044 19.30% rdfs:comment
44,113 18.10% rdfs:subClassOf
8,630 3.54% owl:Ontology
7,256 2.97% rdfs:label
6,618 2.14% rdf:subject
5,107 2.09% owl:ObjectProperty
3,642 1.49% rdfs:subPropertyOf
1,157 0.47% owl:sameAs
535 0.29% rdfs:range

The Great owl:sameAs Debate

One of the most popular OWL constructs is indeed the controversial owl:sameAs term used to declare equivalence.

One critique holds that owl:sameAs is being used to declare equivalence between things that are not actually equivalent, i.e. equivalence is being asserted too liberally.

At only .47% of overall Semantic Web modelling term usage, it is still far from insignificant, with 1,157 occurrences.

Given the amount of Semantic Web URIs returned by the queries, it appears that the manual discovery and publication of co-referential URIs using owl:sameAs falls far behind the actual growth of Linked Data.

Surprise! Likely owl:sameAs is not being used enough.

Conclusions

The Linked Data Web is full of interesting structured data, but mostly on DBpedia.

Most Linked Data seems to actually conform to the 303 redirect and other recommendations (...or is that mostly DBpedia?)

There is a lot of potential information that users may be interested in! So lots of work needs to be done on ranking linked data.

Yet is this data really relevant to users' needs? How many of the returned URIs are irrelevant? This is addressed in my SemSearch 2009 paper.

How badly biased is this sample by the use of FALCON-S? Can we repeat it using other search engines?

An Algorithm For Finding URIs

  1. Given a term, retrieve a set of web-pages (using Yahoo!).
  2. Given a term, retrieve a set of Semantic Web URIs and all triples (facts) associated with them, using FALCON-S.
  3. A human searches through the web-pages.
  4. For each web-page the human clicks:
    1. Strip out the HTML and reduce the page to words.
  5. For each Semantic Web URI returned by the query:
    1. Extract all text and typed data from each RDF fact.
    2. Decompose the RDF into a "bag of words" via lemmatization and the extraction of words from the end of each URI.
    3. Match the converted RDF to the HTML text using information retrieval techniques.
  6. Pick the URI with the best ranking score given by the IR techniques.
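A rough sketch of steps 4-6 follows, using off-the-shelf TF-IDF and cosine similarity as the matching function; the actual system evaluates more sophisticated vector-space and BM25 variants, and `rank_uris` with its inputs are hypothetical names.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_uris(clicked_pages_text, uri_bags_of_words):
    """clicked_pages_text: plain text of the web-pages the human clicked;
    uri_bags_of_words: {uri: 'bag of words' text from its RDF triples}.
    Returns candidate URIs sorted by similarity to the clicked pages."""
    uris = list(uri_bags_of_words)
    docs = [" ".join(clicked_pages_text)] + [uri_bags_of_words[u] for u in uris]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()
    return sorted(zip(uris, scores), key=lambda pair: -pair[1])

# best_uri, best_score = rank_uris(pages, uri_texts)[0]
```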

Relevance Feedback

Have human judges figure out which web-pages are actually relevant, and then use those pages to feed back into and expand the query, in order to re-rank the results.

In their instructions, relevance was defined as whether or not accurate information about the information need is expressed by the result. This excludes link farms, non-standard redirects, and legitimate hubs.

Experiment: Took the 200 queries from the previous work, retrieved the top 10 hypertext Web (Yahoo!) results and the top 10 Semantic Web (FALCON-S) results, each judged by 3 judges for relevance. Fleiss's kappa = 0.5724 (p < .05, 95% confidence interval [0.5678, 0.5771]), indicating rejection of the null hypothesis and moderate agreement.
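For reference, Fleiss's kappa itself is straightforward to compute; below is a minimal sketch with toy judgments, not the actual judgment data from the experiment.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss's kappa. ratings[i][j] = number of judges assigning item i
    to category j; every row must sum to the same number of judges."""
    ratings = np.asarray(ratings, dtype=float)
    n = ratings.sum(axis=1)[0]                   # judges per item
    p_j = ratings.sum(axis=0) / ratings.sum()    # category proportions
    P_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1 - P_e)

# 4 results x 3 judges, categories [relevant, non-relevant]:
print(fleiss_kappa([[3, 0], [2, 1], [0, 3], [3, 0]]))
```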


Results

Results of Relevance Judgements:

Results: Hypertext Semantic Web
Resolved: 197 (98%) 132 (66%)
Unresolved: 3 (2%) 68 (34%)
Top Relevant: 121 (61%) 76 (58%)
Top Non-Relevant: 76 (39%) 56 (42%)

Relevance Results: Hypertext


Results of Querying the Hypertext Web

Relevance Results: Semantic Web


Results of Querying the Semantic Web

Finding the Right IR Technique and Parameters


Average Precision Scores for Vector-space Model Parameters: Relevance Feedback From Hypertext to Semantic Web

The best result was BM25 with the slight performance-enhancing modifications used in the InQuery system, combined with standard Rocchio relevance feedback (with the slight modifications used by Okapi) and a window size of 100.
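For orientation, here is the textbook Okapi BM25 weighting in a few lines of Python; note this is the plain formulation, not the exact InQuery-modified variant reported above.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, docs, k1=1.2, b=0.75):
    """Plain Okapi BM25: score one tokenized document against a query,
    with IDF and length statistics taken from the collection `docs`."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc_terms)
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in docs if t in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
        denom = tf[t] + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf[t] * (k1 + 1) / denom      # saturating tf
    return score

docs = [["sodium", "metal"], ["sodium", "isotope", "neutron"], ["owl"]]
print(bm25_score(["sodium"], docs[0], docs))
```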

Finding the Right URI is now Acceptable!


Why?

Text automatically extracted from hypertext documents is `messy,' being of low quality and bursty, with highly varying document lengths.

So, it is unwise to normalise the models, as that will almost certainly dampen the effect of valuable features like crucial keywords.

The reason BM25-based vector models in particular perform so well is that they effectively track both term frequency and inverse document frequency accurately.

BM25 provides a slight amount of rather unprincipled non-linearity in the importance of the various variables, tracking term frequency and inverse document frequency accurately while massively dampening the effect of document length.

Run it in reverse!

Now apply relevance feedback from Semantic Web search engines to the hypertext Web!

Average Precision Scores for Language Model Parameters: Relevance Feedback From the Semantic Web to Hypertext

The best relevance models sampled over the top 10,000 words with a cross-entropy smoothing factor set to .5. Relevance models built over all concatenated relevant documents beat relevance models with documents sampled individually and then combined, as well as all vector-space models.
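A minimal sketch of a Lavrenko-and-Croft-style relevance model for query expansion follows. The smoothing here is simple linear interpolation with the collection model rather than the exact cross-entropy setup used in the paper, and the variable names are hypothetical.

```python
from collections import Counter

def relevance_model(query, feedback_docs, lam=0.5, top_words=10000):
    """Estimate P(w|R) from tokenized feedback documents weighted by
    their query likelihood, then keep the top words for expansion."""
    coll = Counter(w for d in feedback_docs for w in d)
    coll_len = sum(coll.values())

    def p_w_d(w, tf, dl):
        # Linearly interpolated document language model.
        return (1 - lam) * tf[w] / dl + lam * coll[w] / coll_len

    p_w_r = Counter()
    for doc in feedback_docs:
        tf, dl = Counter(doc), len(doc)
        q_lik = 1.0
        for q in query:
            q_lik *= p_w_d(q, tf, dl)        # P(Q|D)
        for w in tf:
            p_w_r[w] += p_w_d(w, tf, dl) * q_lik
    total = sum(p_w_r.values()) or 1.0
    return [(w, p / total) for w, p in p_w_r.most_common(top_words)]

# expansion_terms = relevance_model(["sodium"], rdf_bags_of_words)
```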

Looking at results

Results: Feedback FALCON-S
Top Relevant: 118 (89%) 76 (58%)
Top Non-Relevant: 14 (11%) 56 (42%)
Top Non-Relevant (Entity): 9 (64%) 23 (41%)
Top Non-Relevant (Concept): 5 (36%) 33 (59%)

A respectable 19% gain in average precision over the FALCON-S engine, which intuitively makes the system's ability to place a relevant URI in the top rank acceptable for most users.

The Semantic Web can help Hypertext Search


Why?

Semantic Web data is mostly high-quality, manually curated data, so the natural language fragments found on the Semantic Web (for example, in Wikipedia abstracts) are much better samples of natural language than those found in hypertext.

The distribution of `natural' language terms extracted from RDF terms, while often irregular, will either be repeated very heavily or fall into the sparse long tail, which can then be dealt with by relevance models.

The hypertext search engine is being `seeded' with a high-quality accurate description of the information need expressed by the query to be used for query expansion.

Please see the Semantic Search 2009 paper for the actual equations describing all the IR frameworks we used: relevance models, BM25, Ponte's method, local context analysis, and more.

Semantic Search 2010 Entity Resolution Task

Running the first large-scale evaluation of all Semantic Web search engines at Semantic Search 2010:

  1. Normalized Index: Billion Triple Challenge
  2. Standard Queries: 200 hand-annotated queries from Yahoo!
  3. Relevance Judgements: Made possible by crowd-sourcing via Amazon Mechanical Turk

This should bootstrap the semantic search community.

A Sample HIT


Human Computation for Spec Development

[Image: Amazon Mechanical Turk]

The future of data-driven semi-automated specification development?

Currently crawling the Linked Data Web for owl:sameAs examples; I hope to run thousands of HITs on Mechanical Turk to inspect various features of OWL and RDF usage.

Of course, one of the most interesting is the (ab)use of owl:sameAs!

When owl:sameAs isn't the same?

owl:sameAs: the built-in OWL property owl:sameAs links an individual to an individual. Such an owl:sameAs statement indicates that two URI references actually refer to the same thing: the individuals have the same identity.

Given that OWL has no unique name assumption, once owl:sameAs is applied to two different URIs, any statement made about one URI becomes true of every other URI reachable from it by owl:sameAs links.

owl:sameAs is both symmetric and transitive, so anyone can link to your data-set with owl:sameAs from anywhere else on the Web without your permission, and any statement they make about their own URI will immediately apply to yours.
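To see the mechanics, here is a small sketch that computes the symmetric, transitive closure of owl:sameAs links with union-find and then propagates a statement across the whole equivalence class; the URIs and the triple are hypothetical.

```python
from collections import defaultdict

parent = {}

def find(u):
    """Union-find with path compression."""
    parent.setdefault(u, u)
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

def same_as(a, b):
    """owl:sameAs is symmetric and transitive: merge the classes."""
    parent[find(a)] = find(b)

same_as("dbpedia:Sodium", "opencyc:Sodium")
same_as("opencyc:Sodium", "example:sodium")   # anyone can add this link

clusters = defaultdict(set)
for uri in list(parent):
    clusters[find(uri)].add(uri)

# One statement about one URI now applies to every alias.
s, p, o = "dbpedia:Sodium", "rdf:type", "dbpedia-owl:ChemicalElement"
for alias in clusters[find(s)]:
    print(alias, p, o)
```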

Same Thing As But Referentially Opaque

When the two URIs do refer to the same thing, but not all the properties ascribed to one URI are necessarily accepted of the other (the Principle of Substitution is violated).

Example: The OpenCyc ontology says that an element is the set (class) of all pieces of the pure element, so that sodium in Cyc has as a member any lump of pure metallic sodium. On the other hand, sodium as defined by DBpedia also includes isotopes, which have different numbers of neutrons than "standard" sodium.

Same Thing As But Different Context

Two URIs do refer to the same thing, and all properties hold of both URIs, but we cannot re-use the URI in a different context.

Example: To use an example from Lynn Stein, when at a meeting of the PTA (Parent-Teacher Association) she is Ms. Stein, Rachel's mum, not Professor Stein of MIT. This does not mean that in the PTA meeting Ms. Stein is somehow not a professor, but that within that context those properties do not matter.

Represents

While the term "representation" is often very contentious, its intuitive definition is that, just as a picture depicts something, a URI can be for a representation of a thing rather than the thing itself.

Example: Think of displaced reference: using a representation to refer to the represented, such as using a picture of Tim Berners-Lee to refer to Berners-Lee himself. Using an e-mail box URI to refer to a person is then not an error but rather a displaced reference.

Very Similar To

Two things are not identical but simply closely related in some manner.

Example: There are hard-to-specify relationships between things, such as the relationship between isotopes and elements, the quantity and a measurement of a quantity, and an image and a facsimile of that image.

The SKOS vocabulary has a number of "matching" predicates that are close in meaning to this, ranging from the hierarchically structured skos:broadMatch and skos:narrowMatch to the more suitable skos:closeMatch.

Conclusions

Results: Finding the best URIs on the Semantic Web can be built out of the social and empirical semantics implicitly given by the behavior (like search and deployment of RDF) of ordinary users.


We need to deploy methodology from IR, NLP, machine-learning, databases, and actual human evaluation in order to make the Semantic Web actually work.

Going "back to basics" with RDF, emphasizing URI re-use, high quality associated descriptions, links, and simple vocabularies and entities humans want to use rather than high-level ontologies and agents.