Wednesday, January 30, 2008

Open Source ILS for Academic Libraries

This got forwarded to my email from the CNI listserv:
The Duke University Libraries are preparing a proposal for the Mellon Foundation to convene the academic library community to design an open source Integrated Library System (ILS). We are not focused on developing an actual system at this stage, but rather blue-skying on the elements that academic libraries need in such a system and creating a blueprint. Right now, we are trying to spread the word about this project and find out if others are interested in the idea.

We feel that software companies have not designed Integrated Library Systems that meet the needs of academic libraries, and we don’t think those companies are likely to meet libraries’ needs in the future by making incremental changes to their products. Consequently, academic libraries are devoting significant time and resources to try to overcome the inadequacies of the expensive ILS products they have purchased. Frustrated with current systems, library users are abandoning the ILS and thereby giving up access to the high quality scholarly resources libraries make available.

Our project would define an ILS centered on meeting the needs of modern academic libraries and their users in a way that is open, flexible, and modifiable as needs change. The design document would provide a template to inform open source ILS development efforts, to guide future ILS implementations, and to influence current ILS vendor products. Our goal is not to create an open-source replica of current systems, but to rethink library workflows and the way we make library resources available to our constituencies. We will build on the good work and lessons learned in other open source ILS projects. This grant would fund a series of planning meetings, with broad participation in some of those meetings and a smaller, core group of schools developing the actual design requirements document.

I agree that the current ILS marketplace doesn't deliver for academic libraries.

I'm not sure a traditional open source project is the best solution, though. Seems to me that the next generation ILS should follow more of a cloud computing model, a single shared hosted service, instead of many disparate systems each running their own copy of a common base of code.

In my opinion, the question of a next generation ILS should be approached first from the data side, and then the software application side. As Tim O'Reilly puts it, Web 2.0 means "data is the next Intel Inside." The next generation ILS should be all about large pools of programmable, shared data. Organizations like OCLC and Serials Solutions have some of this data, but lots of other data is dispersed across the net in various silos.

If libraries want to deliver a user experience at anywhere near the level of Google, we need to be using the same techniques that they are. And their most important technique is aggregating large amounts of data.

What am I talking about, more concretely? Our digital collections of unique materials should be managed in a centralized system that can leverage network effects in search, folksonomies, and more. Our library catalogs should simply be a subset of a larger shared catalog. Organization of licensed content should be facilitated by sharing metadata about that content. Even user/patron data should be managed in a network fashion using systems like OpenID.
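To make the metadata-sharing piece a little more concrete: many repositories already expose their records over OAI-PMH, the standard harvesting protocol, and a shared catalog could be built up by pulling Dublin Core records from responses like the one below. This is just a minimal sketch; the sample response is invented for illustration, not taken from any real system.

```python
# Minimal sketch: extracting Dublin Core titles from an OAI-PMH
# ListRecords response. The sample XML below is invented; a real
# harvester would fetch it over HTTP from a repository's OAI endpoint.
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>Annual Report, 1908</dc:title>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_titles(xml_text):
    """Pull every Dublin Core title out of an OAI-PMH response."""
    root = ET.fromstring(xml_text)
    return [elem.text for elem in root.iter(DC + "title")]

print(harvest_titles(SAMPLE_RESPONSE))  # ['Annual Report, 1908']
```

The point isn't this particular protocol; it's that once metadata lives in shared, programmable pools with standard interfaces like this one, any library (or vendor) can aggregate and build on it.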

In some ways, the next generation ILS is already emerging in the form of data-driven products like Serials Solutions 360 and WorldCat Local. Of course, there is still more work to be done.

If a consortium of universities creates its own ILS, I'm afraid it'll be a glacially moving monstrosity of a project like Fedora or DSpace: a theoretically wonderful piece of code that doesn't amount to much when it's installed in many isolated instances.

1 comment:

Kyle said...

The next generation ILS should be all about large pools of programmable, shared data

Bingo. What makes library services next generation is that our users increasingly need us to direct them to information maintained by a growing number of providers outside the library.

This problem is all about managing data, and it takes more than improved ranking algorithms, facets, and eye candy to do that.

Currently, there is enormous duplication of effort in maintaining systems and data. For example, redundant authority control alone, with each library maintaining the same headings independently, leads to an inferior result at much higher cost.

If we can eliminate inefficiencies like these, we will have more resources to provide services that will benefit our users.