Sitting now in the Supporting Education session at JCDL. Carl Lagoze is talking, & giving some lessons learned from the NSDL experience. They all boil down to:
- This was a lot harder than we thought it would be, &
- People satisfice.
Carl’s conclusion is that the NSDL was not a successful project. [Apparently I’m mistaken in writing this; see the comments below.] Chris Borgman says: “this is extremely depressing news.” Why not successful? Carl says: Smart people did good work & didn’t do anything wrong. So why? We don’t know. Carl suggests a conference session to discuss. (More study is required; we need more smart people to figure out why the last set of smart people failed. Will that work?)
Mimi Recker up now. She has a tough act to follow; we all want to throw ourselves out a window now. Talking about the Instructional Architect. What are teachers doing, when they are giving assignments & using resources from DLs? “Blogger meets del.icio.us.” Most requested feature: teachers want not to be limited to the NSDL, but want to be able to include materials from anywhere on the web. Most teachers do not fill out the checkbox set to indicate the topic & grade level of their project, or the applicable core curriculum standards. 37% of resources used came from NSDL search, while 43% came from domains under the NSDL… so a lot of browsing is going on.
I have to ask, does it make sense to build tools for packaging DL materials for teaching, around a perhaps defunct DL? Depends on how married to the DL the tool is, I suppose. So, an argument for modular tool development. I’ll be interested to see how much use the Instructional Architect gets, post-NSDL.
Not to contradict Carl, but I might put forward a few other reasons for NSDL’s failings:
1. Required standards in all NSDL grants (standards and requirements for metadata)
2. Ambiguous governance (is the CI really in charge?)
3. Lack of focused development on working tools
It would make a great case study some day.
I agree with Dave that the “ambiguous governance” is/was a major problem for NSDL, as were a number of questionable decisions about direction, leadership, etc.
On the other hand, I disagree with Carl that “we didn’t know how hard this would be” (and I’ve disagreed with him on this point publicly a number of times). Building production services is ALWAYS hard–much harder than doing research projects that can be easily discarded when the money dries up.
Determining whether NSDL is a failure or not requires more than Carl’s opinion, and I would say that the jury is still out on that.
(formerly NSDL Director of Library Services and Operations, but that’s another story)
Carl decidedly disagrees with the notion that he characterized NSDL as “not a successful project”. Indeed, that this paper won the Best Paper award is evidence that the project has produced major successes and benefits to the DL production and research community.
What Carl will repeat from his talk (and paper) is that the vision of semi-automated data flows from distributed metadata providers just doesn’t work without a good deal of human intervention. I’ll not repeat the paper, but this is not a matter of the “difficulty of building production services”. I believe it lies in the premise that distributed metadata production by non-experts in technology and metadata production can work without lots of hand holding. That level of hand holding contradicts the basic intention of the architecture and makes one at least question whether distributed metadata production, collection, and normalization can really be at the heart of a large-scale digital library. Or said differently, is the kind of content/context analysis that Google is doing the best we can do at a reasonable cost? I don’t propose that I know the answer to this question, but the NSDL experience is sobering.
I’ve said it elsewhere, but it may be time that we (digital) library types put aside notions that descriptive metadata can play a dominant role in resource discovery, and put our efforts into the mechanisms for knowledge enhancement, context building, etc. that can distinguish our efforts from the search engine vendors.
Lastly, this is what we are doing quite actively in NSDL CI. If you have followed our latest presentations and writings, you know we are in the midst of very exciting work that focuses on resource context rather than structured description.
Apologies to Carl; I really thought I recalled him saying that he considered the NSDL to have been not a successful project. I retract that statement. Personally, I think the NSDL has been successful, though perhaps for different reasons than the CI group thinks so. I consider learning that something is a lot harder than you thought it would be to be a successful outcome. Better still if you then communicate that to the wider world, which they have. Carl’s actual lessons learned (as he stated them, not as I interpreted them) all fell out of it being a lot harder than they thought it would be.
I disagree with Carl that we should probably ditch descriptive metadata. I agree that one-size-fits-all descriptive metadata, of the type we have traditionally used in libraries (e.g., AACR2, etc.), is probably not useful in the online or DL context. But I have high hopes for collaboratively-generated descriptive metadata, a la collaborative tagging. I think the next big research problem in descriptive metadata is how to filter out the noise in collaborative tagging systems.
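To make "filtering out the noise" concrete: one simple baseline (my own illustration, not anything NSDL or the Instructional Architect actually does) is to keep only tags that multiple independent users have applied, and that account for some minimum share of all taggings on a resource. The function name and thresholds below are hypothetical:

```python
from collections import Counter, defaultdict

def filter_tags(tag_assignments, min_users=2, min_share=0.05):
    """Consensus-based noise filter for collaborative tags.

    tag_assignments: list of (user, tag) pairs for one resource.
    Keeps a tag only if at least `min_users` distinct users applied it
    AND it makes up at least `min_share` of all taggings.
    (Thresholds are illustrative; a real system would tune them.)
    """
    counts = Counter(tag for _user, tag in tag_assignments)
    total = sum(counts.values())
    users_per_tag = defaultdict(set)
    for user, tag in tag_assignments:
        users_per_tag[tag].add(user)
    return {
        tag
        for tag, n in counts.items()
        if len(users_per_tag[tag]) >= min_users and n / total >= min_share
    }

# Example: three users agree on "python"; singleton tags are dropped.
taggings = [("ann", "python"), ("bob", "python"), ("cho", "python"),
            ("ann", "misc"), ("bob", "cool")]
print(filter_tags(taggings))  # → {'python'}
```

Frequency thresholds like this are only the crudest approach; the interesting research lies in handling synonyms, spam, and idiosyncratic personal tags that simple counts cannot separate from genuine consensus.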