Archive for the “linguistics” Category

Now that registration is open, you may be thinking about attending this year’s ADHO Digital Humanities Conference, DH2010, which will be

hosted at King’s College London by the Centre for Computing in the Humanities and the Centre for e-Research, with the support of the School of Arts and Humanities, Information Services and Systems, and the Principal, Professor Rick Trainor.

Before you decide to register, you might wish to consider whether King’s can really afford to host an event of this size, especially in light of the savage cuts that King’s College has felt forced to make this year. The faculty cuts include Shalom Lappin! It’s like MIT laying off TimBL.

You might feel it would be better for King’s if you focused your attention and submissions on other venues and bypassed DH2010 altogether.

If you do, be sure to drop an email to Professor Trainor explaining your decision.


Those cynical chaps at the Speculative Grammarian make a mockery of all that is good and holy about Computational Linguistics and Information Retrieval.

Recision and Precall – Accuracy Measures for the 21st Century

[...]

So, rather than trying to dumb things down and arrive at such a single “accuracy” number, we propose instead to dumb things up—constructing measures that focus on the real needs of a measurable theory, including the meta-system/contextual-matrix in which it is embedded (including, explicitly and for the first time, the researchers and grad-students on the research team).

These two new measures are called recision and precall.

Recision is a measure of the amount of data that must be ignored (or surreptitiously dumped in the river with a new pair of cement shoes) in order to get publishable results. If 10% of your data must be “lost” in order to get good results that support your pre-computed conclusions, then your theory and your research team have a respectable recision score of 10%. If only 10% of your data is useable, then your recision score is a dismal 90%.
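If you fancy auditing your own recision, a throwaway sketch along these lines would do the trick; the function and variable names here are invented purely for illustration, not anything SpecGram endorses:

```python
def recision(total_items: int, ignored_items: int) -> float:
    """Recision: the share of your data that had to be 'lost'
    to make the results publishable. Lower is more respectable."""
    if total_items == 0:
        raise ValueError("No data at all: recision undefined (if suspiciously tidy).")
    return ignored_items / total_items

# 100 items, 10 quietly fitted with cement shoes: a respectable 10% recision.
print(f"{recision(100, 10):.0%}")
# Only 10 of 100 items usable, so 90 were ignored: a dismal 90%.
print(f"{recision(100, 90):.0%}")
```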

Precall is a measure of your team’s ability to quickly and correctly predict how well your algorithm or system will perform on a new data set that you can briefly review. Correctly predicting “This will give good results.” or “This is gonna suck!” 90% of the time translates directly into a precall score of 90%. Good precall (especially during live demos) can save a project when results are poorer than they should be. The ability to look at some data and accurately predict and, more importantly, explain why such data will give poor results shows a deep understanding of the problem space. Even when performance is decent, though, prefacing each data run with “We have no idea how this will turn out!” makes your team look lucky, at best, or, at worst, foolish.
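And for completeness, here is one way you might score your team’s precall; again, every name in this sketch is made up for the example:

```python
def precall(predictions: list[bool], outcomes: list[bool]) -> float:
    """Precall: how often the team correctly called 'this will give good
    results' or 'this is gonna suck' before each run on new data."""
    if not predictions or len(predictions) != len(outcomes):
        raise ValueError("One prediction per data run, please.")
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

# Nine correct calls out of ten runs: a precall score of 90%.
called_it = [True, True, False, True, True, True, False, True, True, True]
went_well = [True, True, False, True, True, True, True,  True, True, True]
print(f"{precall(called_it, went_well):.0%}")
```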
