Monday, February 15, 2010

The Orbis Cascade Alliance's Collaborative Tech Services Team has been charged with implementing "a Shared Best Practices working group to develop guidelines for effective technical services policies and operations that support the Alliance goal of a shared ILS (Integrated Library System)."
At our team's first meeting, I was charged with taking a shot at what an ideal Alliance shared ILS might look like. I am supposed to produce a white paper on this topic for consideration by the Team.
The work of the Shared Bibliographic Database Task Force serves as a good starting point in this thinking process. Their report (see scenario 3) points out many of the potential implications of a shared ILS, as well as its advantages and drawbacks.
Perhaps the most common notion of a shared ILS would be all 36 Alliance members piling on board a traditional ILS software package designed for large, complex organizations (but perhaps not designed to handle as many large, separate organizations and sub-organizations as make up the Alliance). A big advantage of doing this would be savings on system maintenance, local system administration, and hardware costs.
To make this happen successfully, member libraries would have to standardize their operations around certain system settings like circulation loan rules because the system simply couldn't handle as much local variation as is in place now. This standardization might create efficiencies in itself in that it would promote common best-practices workflows around the shared settings. A shared system could also create efficiencies through the reuse and sharing of data. For example, by sharing bibliographic records, we could reduce the overall time and effort required for database maintenance.
The big disadvantage of sharing a system this way would be a potential lack of flexibility to tailor the system to each institution's needs, whether that customization came in the form of special loan rules, distinct subject headings, etc. Another issue is that the system might simply become unwieldy for staff to use, because data from all the institutions would overload the staff user interfaces. Imagine having to wade through hundreds of item records for a given bibliographic record (of course, these effects could be mitigated by special views, etc.). And finally, another disadvantage would be the red tape potentially needed to make any change to system settings.
Another model of a "shared ILS" would provide every library with an independent "virtual" ILS delivered in a Software as a Service (SaaS) fashion. Because the software would be delivered over the web, we would "share" the underlying software and computing infrastructure. We'd be "sharing" an ILS with other Alliance members (and perhaps whomever else the vendor contracted with), but we wouldn't even know it. By doing things this way, we wouldn't lose any independence, and we'd still potentially save money on system maintenance (both vendor-supplied and via our own staff). But we wouldn't gain any of the potential benefits possible through the sharing of data.
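To make that distinction concrete, here's a minimal sketch of the multi-tenant idea behind this model (all names and classes are hypothetical, not any vendor's actual architecture): each library gets a logically isolated virtual ILS, while the vendor runs them all on shared infrastructure.

```python
# Hypothetical sketch of the SaaS "virtual ILS" model: tenants share
# infrastructure but see nothing of each other's data or settings.
class VirtualILS:
    """One tenant's independent ILS: its own settings and data."""
    def __init__(self, library):
        self.library = library
        self.loan_rules = {}    # entirely local; nothing is shared
        self.bib_records = []   # likewise

class SharedPlatform:
    """The vendor's shared infrastructure hosting every tenant."""
    def __init__(self):
        self.tenants = {}

    def provision(self, library):
        # Each Alliance member (or any other customer) gets an
        # instance that is invisible to the others.
        self.tenants[library] = VirtualILS(library)
        return self.tenants[library]

platform = SharedPlatform()
uo = platform.provision("uo")
osu = platform.provision("osu")  # shares hardware/software, not data
```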
The third option would be a hybrid of the two above, and is the one that probably corresponds to the most likely reality. The ILS would be shared where there were benefits to be gained, and separate where there were not. For example, in circulation, every library could have its own patron types and loan rules, but those patron types and loan rules would map to higher consortial levels of abstraction in order to support borrowing between institutions (kind of like they do now in Navigator, but in the same system). In acquisitions, data about what materials were on order at each institution would be shared, but fund data wouldn't be. In cataloging, we would share bibliographic records but perhaps control some of our own fields. In e-resources, both individual and group purchases and licenses would be supported.
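As a rough illustration of that circulation mapping (the institution codes, patron types, and loan periods below are all made up), each library's local patron types could resolve to consortial categories that carry the loan rules governing inter-institutional borrowing:

```python
# Hypothetical mapping of local patron types up to consortial levels
# of abstraction, so one shared system can resolve cross-institutional
# borrowing without erasing local patron types.
LOCAL_TO_CONSORTIAL = {
    ("uo", "grad-student"): "consortial-student",
    ("osu", "graduate"): "consortial-student",
    ("uw", "faculty"): "consortial-faculty",
}

# Loan policies for borrowing between institutions are defined once,
# at the consortial level, rather than per library.
CONSORTIAL_LOAN_RULES = {
    ("consortial-student", "book"): {"loan_days": 42, "renewals": 2},
    ("consortial-faculty", "book"): {"loan_days": 84, "renewals": 4},
}

def consortial_loan_rule(institution, local_patron_type, item_type):
    """Resolve a local patron type to its consortial category and
    return the loan rule that governs consortial borrowing."""
    level = LOCAL_TO_CONSORTIAL[(institution, local_patron_type)]
    return CONSORTIAL_LOAN_RULES[(level, item_type)]

# e.g. an OSU graduate student borrowing a book from another member:
print(consortial_loan_rule("osu", "graduate", "book"))
# -> {'loan_days': 42, 'renewals': 2}
```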
Another twist, of course, is the notion of sharing the ILS on a global level, which is more or less the vision of OCLC's Web-scale Management Services. This creates the need for an even higher level of abstraction than that of the consortium.
In coming up with a model for an ideal shared ILS for the Alliance, I'll be considering all of these scenarios.
Thursday, April 10, 2008
Innovative Interfaces abstains from DLF initiative
While waiting for paint to dry (literally) at 2 am, I came across this.
At code4lib, we heard from Terry Reese and Emily Lynema about the DLF's initiative to create standard interfaces for ILSs to support external discovery services. An announcement from Peter Brantley confirms that a basic set of these has been adopted under the title "ILS Basic Discovery Interfaces: A proposal for the ILS community."
The proposal's goals are modest, but nonetheless set a baseline of functionality that most ILS vendors should be able to provide without a whole lot of difficulty (a rough sketch of what the three interfaces might look like follows the list of signatories below):
1. Harvesting. Functions to harvest data records for library collections, both in full, and incrementally based on recent changes. Harvesting options could include either the core bibliographic records, or those records combined with supplementary information (such as holdings or summary circulation data). Both full and differential harvesting options are expected to be supported through an OAI-PMH interface.
2. Availability. Real-time querying of the availability of a bibliographic (or circulating) item. This functionality will be implemented through a simple REST interface to be specified by the ILS-DI task group.
3. Linking. Linking in a stable manner to any item in an OPAC in a way that allows services to be invoked on it; for example, by a stable link to a page displaying the item's catalog record and providing links for requests for that item. This functionality will be implemented through a URL template defined for the OPAC as specified by the ILS-DI task group.
The proposal is undersigned by the following vendors:
- Talis
- Ex Libris
- LibLime
- BiblioCommons
- SirsiDynix
- Polaris Library Systems
- VTLS
- California Digital Library
- OCLC
- AquaBrowser
Abstention:
- Innovative Interfaces, Inc.
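For the curious, here's a minimal sketch of what a client of these three interfaces might look like. The host name, the availability endpoint, and the link template are all hypothetical; only the OAI-PMH parameters come from that standard, and the REST interface and URL template syntax were explicitly left for the ILS-DI task group to specify.

```python
# Hypothetical client for the three ILS Basic Discovery Interfaces.
# The base URL, availability endpoint, and link template are made up.
from urllib.parse import urlencode
from urllib.request import urlopen

ILS_BASE = "https://ils.example.edu"  # hypothetical ILS host

def harvest(since=None):
    """1. Harvesting: full or differential harvest over OAI-PMH."""
    params = {"verb": "ListRecords", "metadataPrefix": "marc21"}
    if since:
        params["from"] = since  # incremental: only records changed since a date
    return urlopen(ILS_BASE + "/oai?" + urlencode(params)).read()

def availability(record_id):
    """2. Availability: real-time item status via a simple REST call
    (the endpoint shape is a guess; ILS-DI left it to be specified)."""
    return urlopen(ILS_BASE + "/availability?" + urlencode({"id": record_id})).read()

def opac_link(record_id):
    """3. Linking: a stable OPAC URL built from a defined template."""
    template = ILS_BASE + "/record/{id}"  # hypothetical template
    return template.format(id=record_id)
```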
Wednesday, January 30, 2008
Open Source ILS for Academic Libraries
This got forwarded to my email from the CNI listserv:
The Duke University Libraries are preparing a proposal for the Mellon Foundation to convene the academic library community to design an open source Integrated Library System (ILS). We are not focused on developing an actual system at this stage, but rather blue-skying on the elements that academic libraries need in such a system and creating a blueprint. Right now, we are trying to spread the word about this project and find out if others are interested in the idea.
We feel that software companies have not designed Integrated Library Systems that meet the needs of academic libraries, and we don’t think those companies are likely to meet libraries’ needs in the future by making incremental changes to their products. Consequently, academic libraries are devoting significant time and resources to try to overcome the inadequacies of the expensive ILS products they have purchased. Frustrated with current systems, library users are abandoning the ILS and thereby giving up access to the high quality scholarly resources libraries make available.
Our project would define an ILS centered on meeting the needs of modern academic libraries and their users in a way that is open, flexible, and modifiable as needs change. The design document would provide a template to inform open source ILS development efforts, to guide future ILS implementations, and to influence current ILS vendor products. Our goal is not to create an open-source replica of current systems, but to rethink library workflows and the way we make library resources available to our constituencies. We will build on the good work and lessons learned in other open source ILS projects. This grant would fund a series of planning meetings, with broad participation in some of those meetings and a smaller, core group of schools developing the actual design requirements document.
I agree that the current ILS marketplace doesn't deliver for academic libraries. But I'm not sure a traditional open source project is the best solution, either. It seems to me that the next generation ILS should follow more of a cloud computing model, instead of many disparate systems sharing a single base of code.
In my opinion, the question of a next generation ILS should be approached first from the data side, and then from the software application side. As Tim O'Reilly puts it, Web 2.0 means "data is the next Intel Inside." The next generation ILS should be all about large pools of programmable, shared data. Organizations like OCLC and Serials Solutions have some of this data, but lots of other data remains dispersed across the net in various silos.
If libraries want to deliver a user experience at anywhere near the level of Google, we need to be using the same techniques that they are. And their most important technique is aggregating large amounts of data.
What am I talking about, more concretely? Our digital collections of unique materials should be managed in a centralized system that can leverage network effects in search, folksonomies, and more. Our library catalogs should simply be a subset of a larger shared catalog. Organization of licensed content should be facilitated by sharing metadata about that content. Even user/patron data should be managed in a network fashion using systems like OpenID.
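Here's a minimal sketch of the "subset of a larger shared catalog" idea (the records and institution codes are invented for illustration): one shared pool of bibliographic records carries holdings per institution, and any local catalog is just a filtered view of that pool.

```python
# Hypothetical shared catalog: bibliographic records are pooled once,
# with holdings attached per institution.
SHARED_CATALOG = [
    {"id": "b1", "title": "Digital Libraries", "holdings": ["uo", "osu"]},
    {"id": "b2", "title": "Metadata Basics", "holdings": ["uw"]},
    {"id": "b3", "title": "Open Source ILS", "holdings": ["osu", "uw"]},
]

def local_view(institution):
    """Derive an institution's 'catalog' as a view on the shared pool."""
    return [r for r in SHARED_CATALOG if institution in r["holdings"]]

print([r["title"] for r in local_view("osu")])
# -> ['Digital Libraries', 'Open Source ILS']
```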
In some ways, the next generation ILS is already emerging in the form of data driven products like Serials Solutions 360 and WorldCat Local. Of course, there is also more work to be done.
If a consortium of universities creates their own ILS, I'm afraid that it'll be a glacially moving monstrosity of a project like Fedora or DSpace. A theoretically wonderful piece of code that doesn't amount to much when it's installed in many isolated instances.
Thursday, November 29, 2007
a hypothetical vendor response to the dis-integrating ILS
It's becoming clear that libraries can pick and choose from a variety of digital library products independent of their main ILS platform. These products include OpenURL resolvers, federated search systems, e-journal management services, digital asset management systems, and most recently, next generation catalogs.
It's now possible, and perhaps even common, for a library (or a consortium of libraries) to have an aging but reliable ILS performing the basic inventory management functions of the 'bought' collection, while a bevy of other digital library products perform the new digital library functions mentioned above.
But what about the vendor of that core ILS product? Is it not likely that, prior to this disconnected array of digital library products, they were able to reliably sell upgrades and add-ons to their installed base of ILS customers? Now they must compete with many other vendors for the finite resources libraries have for new technology. It's also likely that this vendor isn't positioned well to compete against some of the newer, more nimble firms in the digital library marketplace, and that even though they offer digital library products, they have trouble selling them even to their existing base of customers.
What is this vendor likely to do in this situation? Simply sit there, idly supporting their aging traditional ILS system and watch their chunk of revenue from the library technology marketplace decline? That would be pretty painful, I imagine.
The easy answer, of course, is that they need to reinvent themselves, innovate, and regain market share. But if that doesn't work, there is another option. Even though the traditional ILS is losing some of its strategic importance for libraries, it is still a core part of a library's operations. And generally speaking, libraries depend on their vendor's support to keep these systems running. Specifically, they depend on a support contract, which is a voluntary agreement between the vendor and the library. If a library system vendor doesn't like how a library (or perhaps a group of libraries) is spending its money, they could simply threaten to discontinue that contract in order to bring the library into line. After all, most libraries would be hard pressed to switch ILSs on short notice, as it is quite a costly and time-intensive undertaking.
Of course, this would be a dangerous game to play for a vendor in the long term, as a core ILS can be replaced with plenty of different options. But it is one tactic that a desperate ILS vendor could use against libraries in the short term.
Friday, November 9, 2007
Canadian Libraries
From my experience at Access 2007, I got the idea that Canadian academic libraries might be a little ahead of the curve in their incorporation of web technology, particularly open source solutions. This article (coming to me by way of the Frye listserv) reinforces this idea.
A number of institutions are looking at Evergreen, and there are some pretty cool digital collections systems put together from open source tools, especially the Drupal/Fedora combination they are using at UPEI.