I spoke to Dave Lankes yesterday about Reference Extract and Bibliomancer, two new projects at the Information Institute of Syracuse. Dave explained these two projects very clearly: Both are specialty search engines. The corpus searched by Reference Extract is URLs scraped from dig ref transactions. The corpus searched by Bibliomancer is the content of the pages at those URLs.

Dave is interested in having some evaluation done of these search engines. We discussed some more or less straightforward IR-style evaluation: Start with questions with known answers (for example, the questions that led to the indexed transactions), search on those questions, and evaluate the quality of the returned results (for example, does the engine return the same resources the human librarian did?). The next step in this evaluation might then be to compare these results against, say, searching the same question in Google. Another next step might be to move on to new questions (that is, questions other than those that led to the indexed transactions) and, TREC-style, have experts evaluate the quality of the returned results. We also discussed what I suspect would be a much more difficult problem to tackle: categorizing the different contexts surrounding the provision of resources in a reference transaction. In other words, providing some sort of explanation for why a particular resource was provided in response to a particular question.
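For the curious, here's a rough sketch of what that first, known-answer step could look like in code. Everything in it is hypothetical, not anything Dave's team has actually built: `search` stands in for whichever engine is under test (Reference Extract, Bibliomancer, or a wrapper around Google), and `transactions` is assumed to be a list of (question, librarian-provided URLs) pairs drawn from the indexed dig ref archive. The idea is simply to measure how much of what the human librarian provided shows up in an engine's top results.

```python
def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the engine's top-k URLs that the librarian also provided.

    If the engine returns fewer than k results, we divide by however
    many it actually returned.
    """
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return len(set(top_k) & set(relevant)) / len(top_k)

def recall_at_k(retrieved, relevant, k=10):
    """Fraction of the librarian-provided URLs found in the engine's top-k."""
    if not relevant:
        return 0.0
    return len(set(retrieved[:k]) & set(relevant)) / len(set(relevant))

def evaluate(search, transactions, k=10):
    """Average precision@k and recall@k over all known-answer questions.

    search: a function mapping a question string to a ranked list of URLs.
    transactions: a list of (question, librarian_urls) pairs.
    """
    precisions, recalls = [], []
    for question, librarian_urls in transactions:
        results = search(question)  # top-ranked URLs from the engine
        precisions.append(precision_at_k(results, librarian_urls, k))
        recalls.append(recall_at_k(results, librarian_urls, k))
    n = len(transactions)
    return sum(precisions) / n, sum(recalls) / n
```

Running `evaluate` twice, once with each engine's search function, would give a head-to-head comparison on the same question set, which is exactly the Google-comparison step described above.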

Me, I’m interested in having some of the following questions answered:

  • Are the resources provided by a human reference librarian in response to a specific question useful as a baseline for evaluating the performance of automated searching or question-answering? (I believe yes, but it would be nice to have some empirical evidence.)
  • Under what circumstances can a resource that a librarian provided in answer to one question be “repurposed” to answer another?
  • Under what circumstances would a librarian provide a poor resource?
  • Do the questions asked or the resources provided differ across different types of dig ref services (e.g., those affiliated with an academic or public library, an AskA service, a statewide consortium), or across dig ref provided via different media (e.g., email vs. chat)?

Any SILS students who are interested in working on these or any related projects: get in touch.