INLS 740: Digital Library Evaluation

The IMLS/NISO Framework of Guidance for Building Good Digital Collections states:

Collections Principle 6: A good collection has mechanisms to supply usage data
and other data that allows standardized measures of usefulness to be recorded.

Many such mechanisms exist. For this exercise we will look at one of them: automatically collected usage statistics.
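
For the curious, here is a minimal sketch of what such a mechanism might look like under the hood: the Python below tallies per-collection request counts from a raw server access log. It assumes logs in Common Log Format and that each collection's pages live under a top-level path segment; both are assumptions about the setup, not facts about DocSouth.

<syntaxhighlight lang="python">
# Sketch: tally successful requests per collection from a server access log.
# Assumes Common Log Format, and that collection pages live under a top-level
# path segment (e.g. /neh/, /fpn/); these slugs are hypothetical examples.
import re
from collections import Counter

LOG_LINE = re.compile(r'\S+ \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+) [^"]*" (\d{3}) \S+')

def collection_counts(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = LOG_LINE.match(line)
            if match is None:
                continue  # skip lines that are not in Common Log Format
            url, status = match.groups()
            if status != "200":
                continue  # count only successful page requests
            segment = url.lstrip("/").split("/")[0]
            if segment:
                counts[segment] += 1
    return counts

if __name__ == "__main__":
    for collection, hits in collection_counts("access.log").most_common():
        print(f"{hits:>8}  {collection}")
</syntaxhighlight>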

The digital library that you will be building for this course will not be developed fully enough, nor used heavily enough, for evaluation data to be meaningful. We will therefore be looking at data from a well-established DL: Documenting the American South. DocSouth is the flagship project of the Carolina Digital Library and Archives, and contains several collections spanning a wide range of material types, file formats, and presentation styles.

Imagine that you are the Head of DocSouth, or the Head of the CDLA, or the University Librarian. What would you want or need to know about DocSouth, in order for you to do your job? Develop a set of evaluation questions about DocSouth. For example:

* Which of DocSouth's collections receive the most / least use?
* What user communities make the greatest / least use of DocSouth's collections? For what are different user communities using the DocSouth collections?
* How do users navigate the DocSouth site? What are users' browse paths? For what do users search? (One way to reconstruct browse paths is sketched after these suggestions.)
* Given users' behaviors on the DocSouth site, what functionality or tools might be most appropriate to provide?

These are just suggestions: other evaluation questions of your invention are fair game.
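
For the third question above, here is a minimal sketch of one way browse paths might be reconstructed from raw usage records: group each visitor's hits into sessions at 30 minutes of inactivity, then read off the URL sequences. The (visitor, timestamp, URL) input format and the 30-minute threshold are assumptions, not properties of any DocSouth data.

<syntaxhighlight lang="python">
# Sketch: reconstruct rough browse paths from (visitor, timestamp, url) records,
# splitting each visitor's hits into sessions after 30 minutes of inactivity.
from collections import defaultdict
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=30)

def browse_paths(hits):
    """hits: iterable of (visitor_id, iso_timestamp, url) tuples."""
    by_visitor = defaultdict(list)
    for visitor, ts, url in hits:
        by_visitor[visitor].append((datetime.fromisoformat(ts), url))
    paths = []
    for events in by_visitor.values():
        events.sort()  # order each visitor's hits by time
        current, last_time = [], None
        for ts, url in events:
            if last_time is not None and ts - last_time > SESSION_GAP:
                paths.append(current)  # gap too long: close the session
                current = []
            current.append(url)
            last_time = ts
        if current:
            paths.append(current)
    return paths

sample = [
    ("visitor-1", "2013-01-03T10:00:00", "/"),
    ("visitor-1", "2013-01-03T10:02:10", "/neh/"),
    ("visitor-1", "2013-01-03T11:30:00", "/fpn/"),  # new session after the gap
]
for path in browse_paths(sample):
    print(" -> ".join(path))
</syntaxhighlight>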

Then, look at the [http://docsouth.unc.edu/support/about/analytics.html Google Analytics data] for DocSouth.
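
Google Analytics reports can be exported as CSV, so a first pass at the most / least use question might look like the sketch below. The file name and the Page / Pageviews column headers are assumptions; match them to the headers in the actual export.

<syntaxhighlight lang="python">
# Sketch: rank pages by pageviews from a Google Analytics report exported as
# CSV. The "Page" and "Pageviews" column names are assumed, not guaranteed.
import csv

def top_pages(csv_path, n=10):
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    # Pageviews may include thousands separators, e.g. "1,234"
    rows.sort(key=lambda r: int(r["Pageviews"].replace(",", "")), reverse=True)
    return [(r["Page"], r["Pageviews"]) for r in rows[:n]]

for page, views in top_pages("docsouth_pages.csv"):
    print(f"{views:>10}  {page}")
</syntaxhighlight>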

You may or may not be able to answer your evaluation questions with the data available to you, or you may be able to answer different questions. Part of this exercise is for you to identify the questions that you ''ideally'' want to be able to answer, and then the questions that are ''actually answerable'' given the data available to you. Given that gap, make recommendations about additional data that would be useful to collect in order to answer important but currently unanswerable questions.

This assignment will proceed in two stages: individually, then collectively. First, individually develop a set of evaluation questions, look at the dataset, identify which of your questions are answerable or not, and brainstorm about additional data that DocSouth should collect. Next, make a post to the course site about all of that. Give your post the category ''DL Evaluation''. Finally, we will discuss your findings and recommendations.

This assignment will be evaluated according to the [http://www.wmich.edu/evalctr/archive_checklists/program_metaeval.pdf Program Evaluations Metaevaluation Checklist (short version)]. Not all points in this checklist will apply to this assignment and your evaluation in particular. Adhere to the standards and checkpoints that make sense for your DL evaluation, and ignore the ones that don't.

For the curious: Other sets of criteria for evaluating evaluations (metaevaluation) include:

