Wednesday, December 9, 2009

Semantic Web at Liberal Arts Colleges

In a NITLE online seminar, NITLE's Director of Research Bryan Alexander gave a great overview of the semantic web and its potential applications in academia. It was impressive how much he fit into an hour-long session. Today, I was thinking about how this would have made a cool day-long workshop, but then we really would have needed to move from the theoretical to the practical.


I think that as more academic content gets marked up semantically, it should change the way we do research. The searching and organizing of content that academic research involves should become more sophisticated.

We could start marking up data in our digital collections using RDF. It looks like there is a good way of doing this for Creative Commons data, and search engines are starting to make more use of this information.
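To make this concrete, here is a minimal sketch (plain Python, with an invented item URI and invented metadata, not a real collection) of emitting Dublin Core and Creative Commons statements as RDF N-Triples:

```python
DC = "http://purl.org/dc/elements/1.1/"
CC = "http://creativecommons.org/ns#"

def triple(subject, predicate, obj):
    # Serialize one triple in N-Triples syntax; URIs get angle
    # brackets, plain strings become quoted literals.
    o = f"<{obj}>" if obj.startswith("http") else f'"{obj}"'
    return f"<{subject}> <{predicate}> {o} ."

# Hypothetical item URI and metadata, invented for illustration.
item = "http://library.example.edu/collection/item/42"
triples = [
    triple(item, DC + "title", "View of Mount Hood"),
    triple(item, DC + "creator", "Unknown photographer"),
    triple(item, CC + "license",
           "http://creativecommons.org/licenses/by/3.0/"),
]
print("\n".join(triples))
```

A crawler that understands RDF could pick statements like these out of a collection page and know not just that the words appear there, but what the item is and how it is licensed.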

Friday, December 4, 2009

Digital Initiatives at a Liberal Arts College Library

At work, I've been thinking about new digital services that the library could introduce, potentially in partnership with other units at L&C. Here is the draft of a short paper that I've put together on the topic:

Potential Future Digital Initiatives at Watzek Library

This paper is intended to stimulate thinking and discussion about potential digital initiatives involving
a variety of constituents at Lewis & Clark including faculty, the Library, IT, New Media, etc.

As the nature of research, scholarship, and learning changes in an increasingly digital environment, academic libraries need to rethink the services that they are providing. This rethinking is even more crucial in a tight budget environment where we need to maximize the impact of the funds expended on library services and resources.

In the past five years, Watzek Library has developed its capacity to support digital initiatives in three main areas: enhancements to the library website and the functions it provides to support research, the Visual Resources Collection, and Special Collections and Archives. Some of the projects that we have completed include the Special Collections and Archives Digital Collection, the MDID image collection, our senior theses collection, accessCeramics, and the William Stafford Archives. Work on these projects has allowed us to develop expertise in information architecture, design, web programming, metadata management, Web 2.0 technologies, and digital scholarship.

As we look towards the future, we would like to broaden the impact of Watzek's digital initiatives and make more connections with academic endeavors across the College. Below is a list of possible digital services that the library could offer in the future. In one form or another, they are being offered by colleges and universities around the United States. We are putting this list forward to gauge interest and applicability at Lewis & Clark.

Thematic Digital Collections:
Faculty may have interest in developing an online archive of primary materials, scholarship, or data associated with their scholarship and/or teaching. The Library might partner with faculty on the development of online collections of images, documents, or other media surrounding a particular topic. We could provide the expertise in digitization, software selection, database design, metadata schemas, information architecture and search engine optimization needed to develop such projects. Our collaboration might take the form of a consultation or a more extensive partnership for larger collections that would fit in with Watzek Library's long term digital collections. One example of such a project is accessCeramics, a database of contemporary ceramics images developed as a partnership between Assistant Professor of Ceramics Ted Vogel and Watzek Library. accessCeramics has paired Vogel's connections to the ceramics community and interest in curating an online collection of images with Watzek Library's expertise in digital collections. A few other examples of thematic archives arranged around faculty research interests include the Gerald Warner Taiwan image collection,
a project of Associate Professor Paul D. Barclay and Digital Initiatives Librarian Eric Luhrs at Lafayette College; the Anarchy Archives, a project of Professor David Ward at Pitzer College; and the Murals of Northern Ireland collection, a project of Tony Crowley, the Hartley Burr Alexander Chair in the Humanities at Scripps College, and one of numerous thematic collections in the Claremont Colleges Digital Library.

Institutional Repository:
The library could support an online digital archive devoted to storing and making accessible digital objects associated with the academic life of the College. The content might include faculty and student scholarship, materials from College symposia, and other media. The library already archives student theses, as do some individual departments. For an example of an institutional repository at a liberal arts college, see Macalester's Digital Commons.

Platforms for Collaborative Student Research:
In the digital environment, there are growing opportunities for students to work together on research projects. Using social bookmarking software and wikis, students can share resources with each other. Lewis & Clark's Environmental Studies program uses delicious.com to accumulate and organize research resources around particular sites. Software like the History Engine gives students a platform for the publishing of original research using primary sources. The library could serve as a consultant with faculty in the deployment and use of these resources. The library could also act as an agent to preserve the output of these collaborations over time.

Data Curation: Lewis & Clark has several active research laboratories in the sciences and social sciences. The library could serve as a consultant on the organization, long-term storage, and preservation of the data produced by this research, whether in local or remote repositories. The library could recommend remote digital archives, storage technologies, metadata schemas, and information architectures that suit the needs of a particular research lab. To our knowledge, this is a relatively new area for liberal arts colleges, and we do not have successful examples of this type of service.

Expanding Visual Resources:
Our Visual Resources Collection currently supports teaching with images of art and culture through a local collection of images (MDID) as well as licensed collections of images such as ARTstor. These images are used primarily by Art and Art History faculty, but also by faculty in other humanities disciplines as well as the social sciences. Should we expand our support for images to include scientific images and the scientific disciplines? Currently, our expertise is limited largely to still, two-dimensional images. Should we develop expertise in the acquisition and delivery of moving images as well as three-dimensional imaging technology?

Web Archiving: Content on the web represents a range of activities across Lewis & Clark, both academic and non-academic. Meeting minutes, departmental rosters, symposia programs, syllabi, campus news, etc. all live on the web. But much of this content is ephemeral: it is taken down and disappears after it no longer has currency. Should the library take responsibility for archiving all or part of Lewis & Clark's web output for the needs of future generations? Haverford, Bryn Mawr, and Swarthmore have a web archiving initiative underway using the Archive-It service from the Internet Archive.

Services to Support Scholarly Communication in the Digital Environment: The library could develop a menu of services to support faculty as they publish their research. These services could include consulting and education on copyright and open access, assistance with acquiring rights for digital assets (such as images) for use in publication, advice on publishing research data, and assistance with scholarly reputation management on the web. Oberlin College's Library has an initiative focused on transforming scholarly communication, which includes advising faculty on copyright and open access opportunities.

Mark Dahl
Associate Director for Digital Initiatives and Collection Management
Watzek Library
Lewis & Clark College

Monday, October 26, 2009

flatlands and failures of curation

As a counterpoint to my last post on the rise of the verticals, I've been thinking about the importance of horizontal library collections. On the one hand, if a library wants to make a difference in the web environment, it should develop unique vertical collections that focus on particular subject areas and are of interest globally.

But what of the notion that libraries, particularly college libraries like my own, should provide their users with a strong general collection in line with their institution's curriculum? In the long-tail, hybrid print/digital environment of the early 21st century, this idea of a broad and shallow local collection perhaps doesn't make as much sense. As we try to expand our patrons' information universe with consortial borrowing and large aggregations of e-content, not to mention awareness of what's out there on the web, the idea of a limited general book collection seems quaint, like your neighborhood bookstore.

Somehow, we still want our patrons to be able to identify the most important works in a subject area without getting overloaded with choices. One might argue that Google's success is based on doing something like this for the web as a whole: Google reliably pulls up the most popular and trusted websites on a given topic.

Our discovery systems need to do a better job of giving some relief to the information landscape. Our users should be able to tell if some titles are more popular, more widely cited, etc. than others. If a text is a classic work of literature or a classic in the field, that should be obvious in search results.

Ranking search results based partly on the number of holding libraries, as WorldCat.org does, is a step in the right direction: the collective intelligence of collection development work, if you will. FRBRization is another. Use of citation analysis could be another. Folksonomies and recommendation engines, another. Human curation also has a role.
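As an illustration, a holdings-aware ranking might blend an ordinary text-relevance score with the number of holding libraries. The formula, weight, and numbers below are invented for the sketch; they are not WorldCat.org's actual algorithm:

```python
import math

def blended_score(text_score, holdings_count, weight=0.3):
    # Blend text relevance with the "collective intelligence" of how
    # many libraries hold the title. log1p damps the holdings signal
    # so a handful of blockbuster titles don't swamp everything else.
    return (1 - weight) * text_score + weight * math.log1p(holdings_count)

# Two hypothetical results: a slightly better textual match held by a
# dozen libraries vs. a classic held by thousands.
results = [
    ("obscure reprint", blended_score(0.9, 12)),
    ("classic edition", blended_score(0.8, 4000)),
]
results.sort(key=lambda r: r[1], reverse=True)
print(results[0][0])  # the widely held classic rises to the top
```

The design point is simply that holdings counts act like inbound links do for Google: an independent, aggregate signal of which works a community has judged worth keeping.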

The commercial world is getting good at using these techniques. Libraries really have a chance to lead in the FRBRization arena, I think. This is something the commercial world hasn't figured out, as Mike Shatzkin points out here:
Recommendation engines aside ("based on what you bought before, have we got a book for you!"), online book retailers have a long way to go to enable the customized curation that seems both possible and desirable in the digital age. Even as sophisticated a retailer as Barnes & Noble will present multiple duplicate entries of a public domain scan from Google to an ebook search for a Shakespeare play. And even as sophisticated a retailer as Amazon will sell you a Kindle ebook that is a self-published tome in a way that is indistinguishable from a book from a legitimate publisher. These are failures of curation.

Monday, October 5, 2009

the rise of the verticals

Mike Shatzkin, a commentator on the book publishing industry, makes the following observation:
Horizontal aggregation was more efficient in a world of physical delivery. Vertical aggregation makes more sense in a world of digital delivery. And enabling the customer or user to have some control over the curation is possible in the digital world but hardly is in the physical.
Shatzkin sees the future information ecosystem trending towards niches or 'verticals' with global audiences.

He is contrasting this model with traditional bookstores and trade publishers that cover a wide range of subjects. It also seems the opposite of the way a traditional academic or public library is set up, with books spanning a wide range of subjects and positioned to serve a local audience.

old=local and horizontal
new=global and vertical

I would argue that in the academic repository arena, we can already observe the difference between these two approaches.

Institutional repositories aggregate scholarship that crosses a wide range of subject areas only tied together by affiliation with a single academic institution. They might be described as local and horizontal.

Disciplinary repositories like the Social Science Research Network and arxiv.org concentrate content in certain academic disciplines. They might be described as global and vertical.

Which model is more successful, the disciplinary repositories or the institutional ones? If this ranking is right, it is the disciplinary repositories. They have the most momentum and interest behind them.

Generally, I think that digital initiatives in libraries will be most successful if they are able to build on a vertical community. Projects that are too wide in scope end up being about nothing.

Wednesday, September 23, 2009

Summon 'web scale'? I don't think so.

I think it's strange that Serials Solutions is attempting to apply the "web-scale" adjective to their Summon Service.

As far as I can tell, the library community has really co-opted this term from its original use, which pertained to computing infrastructure that could support web sites that handle huge amounts of traffic. Perhaps Lorcan Dempsey widened the use of the term in January 2007:
'Web-scale' refers to how major web presences architect systems and services to scale as use grows. But it also seems evocative in a broader way of the general attributes of the large gravitational hubs which are such a feature of the current web (eBay, Amazon, Google, WikiPedia, ...).
This reference to 'web scale' is now at the top of Google results for the term, making me think that the library community has just about taken over the term.

I attended a webinar on Summon yesterday and found out that with Summon, Serials Solutions creates a broad index of the content available to your library: books, journals, digital collections, etc. It builds this index from data your library uploads and from the e-content vendors with which your library has relationships. The data goes into a Solr index, which can then serve as a comprehensive discovery tool for your library's content. Because it is built on local data and tailored for a particular user community, this sounds much more like an 'intranet' search than anything that is "web scale."
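For readers unfamiliar with how a Lucene/Solr-style index works underneath, here is a toy sketch in plain Python: each term maps to the set of records containing it, and an AND search intersects those sets. The records are invented for illustration; a real Summon index is of course vastly larger and more sophisticated (stemming, fields, relevance scoring, and so on):

```python
from collections import defaultdict

# Three invented catalog records, keyed by id.
records = {
    1: "digital collections at liberal arts colleges",
    2: "journal of ceramics research",
    3: "special collections and archives",
}

# Inverted index: term -> set of record ids containing that term.
index = defaultdict(set)
for rec_id, text in records.items():
    for term in text.split():
        index[term].add(rec_id)

def search(*terms):
    """Return ids of records containing every query term (AND search)."""
    sets = [index.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(search("collections"))  # records 1 and 3
```

The "silo" point follows directly: an index like this only knows about the records one institution loaded into it, however comprehensively.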

WorldCat Local with its upcoming metasearch features does something similar, but I think that it can make a more legitimate claim to the "web scale" designation because it is attached to the WorldCat.org database. In my opinion, WorldCat.org is web scale in the sense that it is used and improved by a global community.

Summon and WorldCat Local are competing in the same discovery interface space. On first glance, it appears that Serials Solutions is ahead of OCLC in the incorporation of article content, perhaps because of their close relations with content vendors. OCLC seems to have the edge in books: they are able to leverage holdings data in relevance rankings and they have a more sophisticated treatment of various editions of the same work (FRBR). OCLC is also endeavoring to provide delivery services in addition to discovery.

It will be interesting to see if OCLC can use its global database and the Web 2.0 principle "it gets better the more people use it" to differentiate its product from competitors like Summon.

I don't think it's obvious, but what OCLC is trying to do with WorldCat is much bolder than what Serials Solutions is doing with Summon. With Summon, libraries are basically throwing all of their content into one index to break down the data silos within an institution. But what you end up with is a big search silo for that institution.

With WorldCat, the vision is to break down not only the silos within institutions but also the silos between institutions. And not just break down those silos in the sense of harvest-and-search. The concept is that libraries and their patrons will be working together to improve a shared database through intentional and professional metadata. This shared database will be big enough to have a real impact on the web. Its records will surface in search engine results. Its interface will be familiar to many, and it will be customizable for a particular audience via the WorldCat Local route.

We'll see if this grand vision takes hold.

Wednesday, September 9, 2009

WorldCat Local Review

I've written a fair amount in the abstract about the benefits of WorldCat.org and WorldCat Local.

At Watzek, we launched "L&C WorldCat" around July 1. Here are some thoughts based on my experience with the implementation.
  • There is already a sense developing at our school that "everything" is in or should be in WorldCat Local. People expect all articles and books to be there (even though they aren't). I may post more on this later.
  • Compared with launching an III OPAC, the process of bringing WCL up is refreshingly simple. They have consciously limited customization to the very basics (logo, colors, etc.).
  • Even so, as I've said before in this blog, I'd prefer a greater level of customizability, kind of on the level of Blogger. Give me full access to the stylesheet. Let me add code snippets.
  • It's backward that the software pulls in live holdings data for print items from your ILS, but can't pull in links to digital content from your link resolver. When students come upon an article, they want the direct link to it up front, not a click or two away. OCLC should scrape resolvers like they do ILSs to embed link resolver links in records for articles.
  • I'm excited about the idea of OCLC partnering with content providers like EBSCO and indexing their content in WC. One thing I speculated on when writing the Digital Libraries book in '06 was that, following on the success of search engines, meta-indexing services for library content would eventually emerge. We now see that with Serials Solutions' Summon and WorldCat.
  • The idea of also incorporating traditional real-time metasearching seems like a backward compromise: OCLC should be firm with content providers and resolve to incorporate only content that they can put into their index.
  • The stats module for WCL is basically a commercial web analytics package slapped onto WCL with a few limited custom reports. Basically, you can look at your site traffic and search terms being used.
  • I like the idea of using standard web analytics software on WCL, but please let me drop the code snippet in for Google Analytics.
  • If they did some URL rewriting to map search and browsing activity onto clean URL paths (e.g. "/author/", "/title/", "/facet/video/"), web analytics software would become more useful, because you could collate like activities based on URL paths.
  • For a minute, I was thinking that to provide access to an e book package we purchased through WCL, all we'd need to do is "flip the switch" and activate our holdings for those records in WCL, forget about ILS records. But then I remembered: the URLs to that package need to go through our proxy server so they need to be drawn from our ILS. WCL is not making our lives easier yet.
  • A little off the subject, but now that OCLC owns EZproxy, aren't they in a great position to develop some better, more graceful form of remote authentication than proxy? OCLC could act as a trusted third party and provide single sign on to content provider websites.
I will likely post more comments at a later time.
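To flesh out the URL-rewriting suggestion in the list above, here is a hypothetical sketch of mapping catalog query parameters onto clean paths that analytics software could then group by prefix. The parameter names and path scheme are invented, not WCL's actual URLs:

```python
from urllib.parse import urlencode

def clean_path(params):
    # Map hypothetical catalog query parameters to clean, groupable
    # URL paths; fall back to an ordinary query string otherwise.
    if "author" in params:
        return "/author/" + params["author"].replace(" ", "-").lower()
    if "title" in params:
        return "/title/" + params["title"].replace(" ", "-").lower()
    if "facet" in params:
        return "/facet/" + params["facet"]
    return "/search/?" + urlencode(params)

print(clean_path({"facet": "video"}))            # /facet/video
print(clean_path({"author": "Ursula Le Guin"}))  # /author/ursula-le-guin
```

With paths like these, a stock analytics package can answer questions like "how much of our traffic is faceted browsing?" with a simple path-prefix report.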

Tuesday, September 8, 2009

Economist on Google Books

The Economist has a leader supporting the Google Books Deal, and an interview with Paul Courant, Dean of Libraries at Univ. of Michigan.

He talks some about the product that Google will be offering to libraries with this deal.

I have to wonder if this product will be the watershed moment for e books in academic libraries. If Google's library of books is big and broad enough to serve as a general library on its own, Google's platform for e books could become the place to do research in books.

Much of its success will depend on how much current content is in the index, and this really depends on Google striking deals with thousands of publishers. If Google's index is largely made up of older scanned books, it'll be a useful research tool, but not compelling as a place for general research.

Google might become the place to do research in books, whereas recreational e book reading will happen through other vendors like Amazon.

Thursday, September 3, 2009

What I did for my summer vacation


Our library picked up an Amazon Kindle for staff to try out, and I brought it on our family vacation to Manzanita on the Oregon Coast a couple weeks ago.

Let me give you some of my thoughts on it. My first impression was that it was kind of awkward to navigate. The little joysticky thing that functions as a mouse isn't all that intuitive. I kept wanting it to be like an iPhone/iTouch with a larger screen.

Once I figured out how to navigate content, I liked reading on it. The very simple presentation of text is refreshing. It eliminates the distractions of a PC operating system and really lets you concentrate on the text. The E Ink technology works well, though I do wish it could illuminate itself in the dark. It is slim and fits into a beach bag as easily as any paperback, though I popped it in a ziplock to keep out the sand.

I found myself navigating around the Amazon store some, reading samples of various books. Having this limited body of content to choose from--just books with a recreational bent, as opposed to the whole web--felt kind of relaxing. Sort of like being in another limited media environment, like a movie theater or flipping channels on TV. (I know it has a web browser built in, but I avoided it because I was on vacation.)

After I got back from vacation and was preparing for a class that I'm teaching on digital libraries, I decided to download a couple PDF reports to the Kindle, just to check out how they would work. (I had assigned a few reports for students to read and needed to read them fully for myself.) I used Stanza to convert them to Kindle format.

It was nice to go out in the backyard and hang out on the hammock and do the reading on the Kindle as opposed to my laptop.

The Kindle is a nice device for concentrated reading in the same way that a big flatscreen TV is a nice device for watching a feature length movie.

In some ways, I wish it wasn't even connected to the network. That way there would be even fewer distractions.

In other ways, I wish it was an iPhone with a bigger screen.

Wednesday, September 2, 2009

Video for Seattle Pacific U Retreat

I thought that I would post this video, shot as an introduction to my article in the Spring OLA Quarterly on the "Evolution of Library Discovery Systems in the Web Environment." Seattle Pacific University Library is using the article as a discussion piece for their retreat.



Evolution of Library Discovery Systems in the Web Environment

Thursday, July 23, 2009

funding models for digital projects

Thanks to Liberal Education Today for the reference to an Ithaka report called "Sustaining Digital Resources: An On-the-Ground View of Projects Today." We've been having discussions on how to sustain our comparatively tiny accessCeramics digital project and came up with a similar list of options offered in this report, including: subscription, licensing to publishers and users, custom services, corporate sponsorship, author fees, endowment, and grants. Not surprisingly, it doesn't have any easy answer regarding which one is best.

The report is critical of relying too much on what can be the invisible support of parent institutions.

To some extent, I think one just has to accept that these kinds of projects can be somewhat transitory in nature. The report appears to be reaching for some kind of formula for permanent sustainability. But if a project has a viable life for a decade and then its content migrates along to a new home, all is well.

The Symposium on Teaching with Digital Collections in the Liberal Arts in May at Reed College had a few cases of small-scale digital projects at liberal arts colleges. In most cases, they revolved around supporting a research and teaching interest of a particular faculty member. The product would be used in instruction at the local institution, but at the same time had a global reach. Lafayette College's image collection of Taiwan under Japanese Colonial Rule, co-curated by a historian at the school and the library's special collections unit, was one example. Claremont had several others. In these kinds of cases the institution is really supporting the work as part of faculty teaching and research, and the library is acting as a kind of institutionally-sponsored laboratory.

I wonder if there are system-wide solutions that could make it easier for small scale digital projects to create revenue streams. Should digital collections software like ContentDM make it possible to sell high quality images, for example? Should it facilitate donations or sponsorship of collections?

OCLC now offers the ability to post local digital collections into WorldCat. But what if a library wants to license out some of its digitized content? A player like OCLC could develop pools of topically oriented, "premium" digital content from member libraries and charge for it. I have to believe that libraries will strive to keep their digital projects open and free.

The reality is that we get a lot of information on the open web for free now. But what incentive is there to pay that back by contributing something ourselves?

Friday, July 17, 2009

How to gain efficiencies in technical services?

I'm on a task force whose mission is to, more or less, figure out a way that the Orbis Cascade Alliance consortium can save money in technical services operations across its institutions, which range from small private colleges to big universities. This conversation was started with a report from R2 Consulting, The Extended Library Enterprise: Collaborative Technical Services & Shared Staffing.

How can we achieve this? Let me take a stab at this question from the perspective of acquisitions, cataloging, and processing of physical materials. (Saving money on handling digital stuff like e books and e journals is another topic worthy of consideration, of course.) I'll add the disclaimer that these are my personal thoughts and not those of my employer, this task force, or anyone else.

My general belief is that libraries should outsource as much work as possible in this area. One approach is to outsource cataloging and processing work to book vendors. The vendors are already handling books and they have the economies of scale in their favor, so let them handle stuff like spine labels and matching to the correct OCLC record. One of the ideas that we've discussed in the task force is sharing expertise in the implementation of these services.

Even if a library outsources as much work as it can to its primary book vendor, it is still left with plenty of tech services work to do locally. For example, sending in orders, managing duplicates and superseded editions, interfacing with the institution's financial system, dealing with materials coming from non-mainstream book vendors, gift processing, repairs, weeding projects, etc. As we've found with WorldCat Cataloging Partners, the book vendor outsourcing helps speed up your main artery of materials coming in, but there are plenty of other categories of stuff to deal with.

The very specialized work such as cataloging foreign language materials, preservation work, etc. that can't be handled locally can be outsourced or shared with other institutions fairly easily. The Alliance could develop a better method of doing this, but this doesn't strike me as a high-impact area. There are already ways to outsource these things through providers like OCLC and MARCIVE.

The nuclear option in the context of this conversation is to consolidate institutional tech services departments into a single (or perhaps a few regional) tech services department(s) for the consortium. The obvious advantage would be greater economies of scale for both common tech services tasks and more specialized ones. And if indeed we're moving into a future with fewer and fewer printed materials, it makes sense to consolidate the expertise in handling them.

The main problem with this idea, as discussed in the R2 report in a few places, is that it creates an extra stop for the materials between the book vendor and the library, adding to shipping and logistical costs. It also removes employees from a home institution and probably makes their jobs more specialized and mundane. A disconnection between the tech services workers and the collections and institutions they support would likely develop.

I wonder, realistically, how much economy of scale would kick in in this scenario: there would still be idiosyncrasies in interfacing financial transactions to individual institutions, for example.

Ironically, the prospect of fewer and fewer print materials adds to the risk involved in building such a center: as soon as it is created, there might need to be a continual downsizing of it as its services are needed less and less.

These approaches all outsource and/or centralize the work of technical services: ordering, receiving, cataloging, processing, etc. But I wonder if this is where most of the savings are to be had? It might be that we'd gain more efficiencies by centralizing the management of technical services operations and leaving the technical services work distributed geographically.

It seems like every library has its own idiosyncratic practices for things like checking for duplicate orders, applying spine labels, choosing book vendors, copy cataloging procedures, and updating standing orders. (See this R2 report from Rollins College for some examples.) It can be hard and time-consuming for acquisitions and cataloging librarians to keep on top of the best ways of doing these things. If the methodologies and procedures for technical services were managed centrally, big efficiencies might be gained, both in time saved doing technical services tasks and in time saved by librarians figuring out how to do them and documenting them.

The Alliance could create a "virtual" centralized technical service department that establishes best practices across a variety of technical services tasks. Participation in the virtual department could be entirely voluntary, but in principle would go along with the idea of the Alliance having a shared collection. I'm sure this idea would encounter a lot of skepticism and resistance, but when seen in the context of the many other big changes our libraries have absorbed in the last couple decades, it might work, especially if it went along with some other systematic change like a migration to a new library management system.

Wednesday, July 1, 2009

upgrade/downgrade

We just upgraded our Innovative Interfaces system to their latest release. One interesting thing to note: they removed some of the data that they were providing through the XML access to item and checkin (serials) records.

I wonder, is their intention to hinder the use of third-party software with their systems? We had been using this data in our course reserves application and our journal title search.

The upgrade also broke a few connectors in place with WorldCat Local and WorldCat Navigator (which we use for Summit). We're quickly repairing everything now. All in all, it shouldn't be too painful, but it does demonstrate the difficulties of using a closed system like Innovative with other applications.

Tuesday, May 5, 2009

Kindle in Higher Ed

According to the Wall Street Journal by way of Inside Higher Ed, we will likely hear an announcement tomorrow about the use of the Amazon Kindle for textbooks. Portland's very own Reed College is partnering with Amazon as they test this out.

Thursday, April 30, 2009

Springtime in Ohio

OCLC has had some interesting announcements over the last few weeks regarding the WorldCat platform. Their new partnership with EBSCO will really enrich WorldCat Local as an article discovery tool and bring it closer to being a kind of Google for libraries. It'll be interesting to see how much content they index in full text vs. at the citation level. This could be a huge step forward on the search fragmentation problem that federated searching has long been trying to solve.

Andrew Pace commented recently in his blog on the spring weather in Ohio. Perhaps the warmer temperatures have those folks in Dublin thinking that they are in Northern California, looking out at the golden rolling hills around Silicon Valley rather than the verdant hills of central Ohio. His next post announces OCLC's plans to give away WorldCat Local for free (sort of)! Do these folks think they are running a Web 2.0 start-up company or what?

The bigger announcement was that OCLC is entering the ILS fray with a "web-scale" library management system. OCLC's description of the product makes the distinction between a SaaS model and what they are trying to achieve:
OCLC's vision is similar to Software as a Service (SaaS) but is distinguished by the cooperative "network effect" of all libraries using the same, shared hardware, services and data, rather than the alternative model of hosting hardware and software on behalf of individual libraries.
I think they are on the right track. The important idea here is that the OCLC community can aggregate library management data and gain huge advantages. OCLC has holdings data and bibliographic data, which it has put to use effectively in WorldCat.org searching. Circulation data, e-resource usage data, license data, etc. could bring major improvements in workflow and business intelligence.

The point that people miss here is that this endeavor is not about competing against other library management systems. It's about making libraries relevant in the broader, Google-centered information ecosystem. There are big problems with the way libraries currently work when viewed from the perspective of the modern-day web:
  • resource fragmentation: we have too many silos of data for searching; people want the kind of big indexes that Google provides
  • the finite collection: if people want to read an article or a book, they should be able to click to it and have it appear; there is an expectation of this on the web, in the blogosphere, etc.; waiting a day for an article that is already digitized somewhere to be scanned and sent over ILL is too long; libraries are still tied to the notion that they provide their patrons access to a finite physical and licensed collection
  • walled garden effect: often you have to go through the library's web gateway to benefit from its resources
  • web/library sector content divide: our systems are often only aware of information resources within the products we provide; there is a disconnect with the broader web that tools like Google Scholar bridge
  • local value: what kind of local customization are libraries providing around information resources? I think we often fall short in providing enough added value to justify our existence as middlemen
If we don't solve some of these, we may lose our position as information provider/mediator to our communities.

Beyond making existing processes more efficient, the network-level ILS should be an agent of change in the way that libraries purchase, license, and provide information. Its infrastructure and data should support more sophisticated arrangements with content providers (I think the aforementioned EBSCO arrangement demonstrates this).

In Karen Coyle's article on this initiative, she points out the connection between this project and some of the findings of the Working Group on the Future of Bibliographic Control:
A report from the Working Group on the Future of Bibliographic Control (www.loc.gov/bibliographic-future/news/lcwg-ontherecord-jan08-final.pdf) noted that libraries spend a great deal of time on repetitive tasks, such as cataloging best-sellers, while ignoring the most valuable aspects of their collections: the archives, the rare items, the unique collections. The report urged libraries to "transfer effort into higher value activity" and separately called for libraries to embrace the web as the primary technology infrastructure.
The web scale library management system should provide the tools for libraries to do this higher value work, including synthesizing and specializing resources for a local environment.

Furthermore, rather than competing with other library sector technology vendors, OCLC should build the infrastructure that allows those vendors to build services on top of the WorldCat platform in the same way that Flickr works with partner companies who add value to their services. I know this is a tricky process, but it probably starts with open APIs.

Monday, April 27, 2009

Evolution of library discovery systems in the web environment

An article that I've mentioned previously was just published in the Spring Issue of Oregon Library Association Quarterly, which is chock full of good articles on the future of library catalogs. It kind of sums up my thinking on library systems over the last several years.

Evolution of Library Discovery Systems in the Web Environment

Saturday, April 25, 2009

The gathering storm

I'm on my way home from Kentucky right now after giving a presentation last night on cloud computing at a NITLE workshop on collaboration at Centre College.



Had a few interesting questions about the presentation:

  • In the 80 core/20 context scenario, aren't you simply dumping the context work on your clientele, especially by sending them to mainstream applications?
  • In the 80 core/20 context scenario, won't most of your faculty, as is currently the case, be uninterested in going beyond the basics of technology? What will you do for them?

Monday, April 20, 2009

future scenarios for college IT departments

College IT in 2020, the dark scenario

In this case, the same forces that work to undermine the library also undercut the importance and influence of the IT department. Cloud computing, enabled by applications delivered from massive data centers over low-cost bandwidth, has allowed applications formerly managed in house to be run from the network.

In 2009, when the department began its first major cloud initiative with Gmail and Google Apps, it seemed like these applications were simply another type of enterprise software that it could control and manage centrally. But the trend has been toward a major decentralization in the management of IT resources.

Because the IT department no longer controls resources essential for networked, multi-user applications, the management of those apps has devolved to the departments. This is partly because there is less technology to manage, but also because the technology has become invisible: it's just an integral part of the work of each unit in the enterprise. The business office manages the financial side of the ERP application, while the registrar handles the academic side. The HR department, as part of its mission to promote organizational effectiveness, manages the use of institutional communications software (email, groupware, calendaring, wikis, etc.).

The course management system no longer exists. Various fairly generic communication tools, the descendants of Google Apps, are easy to bring together for shared communication among students and faculty in a course, and institutional data can be mashed up within them. And there are many more discipline-specific apps on the network. Many departments, academic and not, and individual faculty members buy applications on the network.

There remains some demand for academic technology support, but most faculty have personal networks, external and internal, where they can get the support that best fits their teaching/research niche. Faculty whose academic work is not technologically intensive are able to navigate generic applications effectively; it's an invisible part of doing their work. Those at the cutting edge, pushing cyberinfrastructure to its limits, need highly focused help that they acquire remotely.

End-user technology, including PCs, laptops, and handheld devices, has also gone decentralized. Over the years, these have become such personalized devices that people prefer to buy and configure their own, and the IT department no longer provisions the campus with desktop PCs, whether for computer labs or for employee desks. Employees are given a pay subsidy to provide their own personal devices.

IT's main role is to maintain the physical network and installed devices on campus, which it does by contracting out much of the work. It also continues to play an important but limited role in systems integration and security, stitching together external applications and supplying them with institutional data.

College IT in 2020, the bright scenario

In this case, we still see many applications move to the network. But there is still a need in the organization for the kind of concentrated expertise in data management, programming, software configuration, systems integration, and security that comes with a centralized information technology unit. Furthermore, there are several important organization-wide applications that benefit from centralized administration and integration. These include communication (email, groupware), ERP, fundraising, and the descendant of the current-day CMS. These systems may be in the cloud, but they are 10X more sophisticated than their ancestors of today. Positions devoted to installing patches and tweaking databases of the old ERP system have evolved into new jobs analyzing, manipulating, and mashing up the data in these new systems.

With applications on the network, personal computers have indeed evolved into personal devices and, as in the dark scenario, IT has given up buying and installing desktop PCs for staff and student labs alike. The positions formerly supporting desktop installation and troubleshooting have been repurposed to academic technology support. Digital technology has become a huge part of research and teaching, with remote cyberinfrastructure resources serving as virtual laboratories in many disciplines. Students do their academic work in a digitally sophisticated manner that mirrors the way they'll need to work in 21st-century organizations. Faculty, more overloaded than ever, turn to their local academic technologists for help with course design and research challenges.

The trend we see at present, where higher education is scrambling to apply consumer applications like microblogging, wikis, lightweight video production, mobile apps, etc., has run its course. These technologies are still important and have become part of the way the organization works. But a new wave of technologies (in 3D visualization, remote sensing, or ???) has emerged. These technologies involve expensive physical devices and favor implementation at the organizational level, and IT has stepped in to support them. On-site personnel are needed to install and configure a growing set of devices that we wouldn't recognize today.

IT is recognized as strategically critical to the competitiveness of the institution as progress in research and teaching is highly dependent on its effective use. The IT department is more important than ever.

Wednesday, April 15, 2009

future scenarios for the college library

I'm giving a talk on cloud computing next week at a library/IT conference sponsored by NITLE at Centre College, located in Danville, KY, in the heart of Kentucky Bluegrass country.

One of the things I'd like to discuss is possible futures for college library and IT departments given current trends in cloud computing and digital technology more broadly. Guess this ties back to that "core vs. context" session at the NITLE Summit. My idea is to present two visions of the future: a "dark" future and a "bright" future, the dark one making the case that libraries and IT departments will basically shrink in size and importance, the bright one supporting the idea that their role will in fact grow in importance and influence.

A college library in 2020, the dark scenario:


In this case, libraries play a much less important role in bringing people and information together. Electronic access to book and journal content through open access academic publishing models combined with new models for purchasing content on an on-demand, per individual basis have removed the library as intermediary. Because the network allows it, smaller actors with specific needs now purchase, license, and manage content in more focused ways. Faculty license access to research databases for specific courses and maintain their own mini digital libraries in the cloud. Students purchase e-content on their own as they do their research, similar to the way they buy textbooks.

The library still exists as a rump organization. Physically, it serves as a somewhat charming study hall. Much space formerly devoted to books has been cannibalized by various other interests on campus. The library still provides a few general-purpose electronic research tools to the community as a whole, doles out micro-credits to purchase electronic content, and maintains a small collection of print materials for those disciplines still interested in the physical book. The reduced physical and electronic collections and correspondingly low usage statistics have led to smaller staffs in all library departments supporting the discovery-to-delivery chain: acquisitions, cataloging, collection development, systems, circulation, and ILL.

With more sophisticated search systems, finding basic academic articles and books on a topic has gotten easier, and this has undermined the role of reference/instruction librarians. Students still need help with research, but because librarians no longer manage the most important research sources, their tacit authority in this area has waned. Students turn to other figures on campus for research help, such as faculty, more senior undergrads, graduate students, etc.

Compared to other library departments, special collections has fared rather well, maintaining its existing staffing levels. The digital environment has amplified the impact of its work, making it visible to a wider audience, and because that work is of a unique nature, it faces little competition from the network. Nevertheless, the department's ability to grow is hampered because it is somewhat disconnected from the teaching mission of the institution. Efforts to offer digital archiving services for various constituencies have fallen flat, as most campus departments prefer to self-manage digital archives in the cloud.

A college library in 2020, the bright scenario:

In this case, the role of the library as information provider and mediator stays strong and even grows.

The library still maintains its role as purchaser and provider of information for its institution for several reasons. The marketplace for academic information products remains complex, with many different commercial and non-profit providers, a wide range of formats both physical and virtual (many of which we've never heard of right now), knotty copyright restrictions, and a wide range of purchasing and licensing options. The library is needed to manage this complexity. This environment is also ever-changing, and consequently the library has a particularly important role in providing access over time to information in out-of-date formats.

Furthermore, there is continued consensus on the value of giving students at an institution a bundle of information sources they can explore freely without incremental cost. Finally, a general inertia in academia and in the publishing and library worlds prevents too much change in the way academic information is bought and sold. The libraries love their budgets too much, and so do the publishers, and the symbolic value of the library prevents most schools from being too ruthless with budget cuts.

For these reasons, staffing in the entire discovery-to-delivery chain has remained fairly strong, though the roles have shifted from lower-paid physical processing positions to somewhat fewer, higher-paid, higher-skill digital content management positions. Circulation and traditional acquisitions and book processing work have fallen off as less printed content is purchased. ILL has become mostly irrelevant except for esoteric items, as economical digital purchasing and delivery of per-item content has taken over.

Collection development has shifted away from picking individual books to purchasing and licensing aggregated sets, and the management of these sets is done using a globally connected integrated library system, where much of the management data is already populated. Managing (or synthesizing) this content requires strong analytical skills, and the positions in charge of this work are fewer than the old paper acquisitions/serials management jobs but pay more and require more knowledge and skill. The systems work required to specialize and mobilize this content for the college lightens as it shifts to the network level.

As digitally formatted information becomes more of the norm, the outside demand for expertise in older printed and digital information formats unexpectedly grows and some librarians specialize in this kind of expertise. For instance, there is now a "printed materials" librarian specializing in book preservation and the nuances of the traditional codex. This person works in special collections and the main collection, which more and more is about book as art and artifact rather than book as just information delivery device.

Because of the very complex information environment, the demand for reference and instruction increases. As scholarship and scholarly communication evolve in the digital environment, navigating them becomes ever more complex. Expectations for what constitutes a college research project increase, with faculty demanding more than the traditional 10-page typewritten paper. These increasing expectations could include: the increased use of images, multimedia, sophisticated manipulation of statistics, mining digital archives, and actually making the research a public contribution to a body of work. They correspond to the types of demands placed on students when they go to work in 21st-century organizations after graduation. Faculty, already overworked, are even more so in 2020, and they need to leverage the library and librarians to make these complex research projects happen.

The evolution of research, scholarship, and teaching in the digital environment creates new opportunities for what would have been cataloging and systems personnel in the old library. Faculty, together with their students, are creating organic niche digital collections of knowledge that they build on over time. Digital initiatives librarians and metadata experts serve as consultants in the construction of these archives, which provide a Web 2.0-style participatory mode of learning and advance knowledge in their own right.

The physical library, while perhaps relinquishing some of the space formerly occupied by physical books and journals, becomes ever more the congregating space for this type of collaborative learning and scholarship, and can now incorporate an array of student support services. The library remains a sanctuary for individual study and learning but is also a collaborative place.

Special collections enjoys ever more relevance in the long-tail world, especially as it makes its case that its presence on the network increases the institution's prestige globally. As the web matures and people begin to miss material from earlier decades that is suddenly lost, digital archiving becomes a high priority and a role that the library can fill for the college. Some of the positions devoted to circulating and processing print materials are repurposed in this area.

Overall, the library plays a bigger, better role than ever on campus.

***************

Next up, the two scenarios for IT. And then a prescription to make the bright scenario happen. Actually, no, I'll be making the case that the library has some influence over which of these plays out but that much of it is out of our control.

Friday, April 10, 2009

Middlemen

Nick Carr has an interesting take on Google as the "middleman", how it has sort of stolen that role from the newspapers. Funny, I was just talking about newspapers and libraries as "middleman" organizations threatened by the Internet in a recent post.

I'm not sure I agree with his prescription for the news business.

Friday, April 3, 2009

Ed Ayers on Digital Scholarship

One of the more enjoyable parts of the NITLE Summit was the final keynote by Ed Ayers, president of the University of Richmond. Ayers is a historian previously at U Va and is one of the people behind the much vaunted Valley of the Shadow project.

The main thesis of his presentation was that collaborative digital scholarship that engages students can broaden the horizons of students at smaller institutions while providing a tighter sense of community and connectedness for students at larger institutions. The History Engine, which lets students and scholars contribute to a crowdsourced collection of historical episodes drawn from primary sources, was his prime example of this type of digital scholarship.

When Ayers came to Richmond, one of the things he asked for was a Digital Scholarship Lab, which has done The History Engine and a few other projects.

Ayers spoke of a missed opportunity by the academy to really embrace the revolution in networked technology. In his view, digital technology can be transformative for humanities scholarship, not just teaching.

I was encouraged by the talk, especially to have someone at the highest level of liberal arts college leadership encouraging the types of digital projects that we're trying to foster here at Watzek. He said that when college presidents get together and eat rubber chicken these are the kinds of things they like to show off to each other. What an endorsement!

In my mind, The History Engine falls into a category of project in which you have undergraduate students doing research and contributing publicly to an evolving body of knowledge. Ayers thought that students learn better when they know that their work is public and making an original contribution to a body of knowledge.

I can think of a few projects like that around here, chief among which would be our situated research initiative in Environmental Studies. We've also had interest from our SoAn folks in building a digital library of senior project bibliographies. Many of the science labs on campus accumulate data over the years through student research projects, too.

Should academic libraries develop competencies in building these collaboratively created digital knowledgebases? Will this kind of project be an aspect of future library expertise and service? If the library has been the laboratory for humanists to date, are these a future version of that laboratory?

Thursday, April 2, 2009

Moving Metadata into the Cloud

Here's the ppt of a poster presentation that I gave at the NITLE Summit 2009 in Philadelphia this past weekend. It's about moving metadata from local databases to global ones and cites applications of WorldCat Local, delicious.com, and flickr as examples.

Nothing that I haven't blogged on before, but thought some folks might appreciate it.

Wednesday, April 1, 2009

The Kindle Question

At the "Core vs. Context" session at the NITLE Summit, Rick Holmgren of Allegheny College likened the dilemma of higher ed library and IT services to that of newspapers, making reference to Clay Shirky's piece on the latter.

Hopefully, I'll find some time to blog more on that session. But for now, I thought I'd toss out an idea that came to me this morning. Ever since that session, I've been thinking about what trigger might cause the current "organizational form" of the academic library to collapse.

Could new models for e-books be that trigger?

A liberal arts college library like ours does around 100,000 circulation transactions a year. We spend around $500K to buy print books. The staff time devoted to book selection, acquisitions, cataloging, ILS administration, circulation, stacks maintenance, etc. would easily equal $300K a year. I imagine that the space used to house the book stacks for a 400,000-volume collection in a library like ours is worth $200K a year or more, not including study spaces in the library, computer labs, special collections, etc. (it would cost that much to rent similar space with all utilities included).

What if all the books that a college like ours needed were available electronically at the Kindle price of $10 each? $10 x 100,000 circ transactions = $1,000,000, about what libraries like ours are spending right now to keep up a print collection. So our library should just give Amazon a million dollars a year and be done with it, right?
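For what it's worth, the back-of-envelope arithmetic can be laid out as a quick script. All of the figures are the rough guesses from this post, not real budget data:

```javascript
// Back-of-envelope comparison of the annual cost of a print collection
// vs. the hypothetical all-Kindle model. All figures are the rough
// estimates from the post above, not real budget numbers.

const printCosts = {
  bookPurchases: 500_000, // annual print book buying
  staffTime: 300_000,     // selection, acquisitions, cataloging, circulation, etc.
  stackSpace: 200_000,    // estimated yearly value of the book stack space
};

const annualPrintTotal = Object.values(printCosts).reduce((a, b) => a + b, 0);

const circTransactions = 100_000; // circulation transactions per year
const kindlePrice = 10;           // hypothetical per-title e-book price
const annualKindleTotal = circTransactions * kindlePrice;

console.log(annualPrintTotal);  // 1000000
console.log(annualKindleTotal); // 1000000
```

Either way you slice it, the two models land at roughly the same million dollars a year, which is the whole point of the thought experiment.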

That raises the question: why even have the library, or the college, as the middleman? Instead of raising tuition for another year in a row, why not pass the $1,000,000 savings on to students and let them buy the books they need themselves?

Of course, this is an extreme scenario, impossible at the moment. Currently, the Kindle has only a small fraction of the books we purchase and house. Most of our books cost more than $10 each, though this might be different if the economics of book sales changed. Institutionally purchased content does have potential advantages in cost savings and perhaps in the incentive it gives students to read and research widely without thinking about incremental costs.

The point, I guess, is that the network can remove the advantages that print institutions had in bundling information together. The bundle of information that a library provides in its stacks (and on its website) might lose its value in the same way that the bundle that is a print (or even online) newspaper has.

Information is atomized in the network environment, and middleman organizations are increasingly irrelevant. Libraries, bookstores, and publishers are essentially middlemen between the author and the reader, just as newspapers are (were) middlemen between journalist and reader.

Tuesday, March 24, 2009

thinking more about OneBoxing WorldCat Local



I just checked with our implementation guy at OCLC about including some code in the header of our WorldCat Local instance that would allow us to add customized widgets to WorldCat Local search result screens. Sounds like it's a no-go for now. WorldCat Local has refreshingly simple branding customization options compared to what we're used to with Innovative's OPAC. But that simplicity will keep us from inserting some magic Javascript to achieve the OneBox effect.

I'm not sure if I was clear enough about what I'm interested in. Another way of looking at this is as analogous to Google Ads. Google has established that placing context-sensitive ads alongside search engine results is an effective way to drive traffic to advertiser websites.

If WorldCat Local becomes our library's search engine, shouldn't our library be able to put context-sensitive "ads" next to results? These "ads" (or OneBoxes) would appear based on the search term and offer things like:
  • links to library-created research guides that seem relevant to the search at hand
  • links to course reserves if a professor's name is searched
  • results from a site search of our library's site
  • image results from ARTstor (a la Google Images)
  • results (if any) from the library's digital collections
To offer something like this, OCLC wouldn't need to do anything unprecedented. Lots of web applications (including the one I'm using right now, Blogger) allow you to embed bits of HTML. Javascript widgets inside OPACs have been around for a while too, a prime example being LibraryThing for Libraries. If OCLC put the search results data into some nicely formatted JSON and allowed Javascript to be inserted in various places, it wouldn't be hard for libraries or third parties to build these little things.
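As a rough illustration of what such a widget layer might look like, here is a sketch in Javascript. Everything in it is hypothetical: the rule format, the URLs, and the professor-name matching are made up for the example, since OCLC exposes no such hook today:

```javascript
// Hypothetical sketch of the OneBox idea; none of this reflects a real
// OCLC API. Assume the library can register "rules" that match a search
// query and return a bit of HTML to place beside WorldCat Local results.

const oneBoxRules = [
  {
    // If the query looks like a professor's name we know about, link to
    // course reserves. (The name list here is a stand-in.)
    matches: (query) => /smith|jones/i.test(query),
    render: (query) =>
      `<div class="onebox"><a href="/reserves?q=${encodeURIComponent(query)}">` +
      `Course reserves for "${query}"</a></div>`,
  },
  {
    // Always offer a library site search as a fallback OneBox.
    matches: () => true,
    render: (query) =>
      `<div class="onebox"><a href="/search?q=${encodeURIComponent(query)}">` +
      `Search the library site for "${query}"</a></div>`,
  },
];

// Collect the HTML for every OneBox whose rule matches the query.
function buildOneBoxes(query, rules) {
  return rules
    .filter((rule) => rule.matches(query))
    .map((rule) => rule.render(query))
    .join('\n');
}

console.log(buildOneBoxes('prof. smith', oneBoxRules));
```

The point is just how little would be required on OCLC's side: hand the query (and ideally the result JSON) to library-supplied Javascript, and let libraries or third parties write the rules.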

The WorldCat API is nice and all, but who (besides Terry Reese) wants to build an entire interface from scratch using it?

Click on the image above for an illustration of the WorldCat Local OneBox concept.

Tuesday, March 17, 2009

OneBoxes for WorldCat Local

In thinking through the options for placing the search box for WorldCat Local on our website, I'm inclined to argue for a single search box on the homepage rather than the somewhat confusing tabbed box that we have now. After all, WCL should get us to our "catalog" content and journal titles, and provide a general article search.


The problem with offering a single search box to patrons is that we miss content that they might want from a library site search: links to research databases, course reserves, library hours, librarian contact info, etc.

WorldCat Local might be improved if it had the option to integrate Google style "OneBoxes" in its results display. A little box off to the side might highlight items like course reserves or matches from a site search.

I have a feeling I'll probably lose the argument regarding a single search box on our website. After all, even Google offers users multiple silos of content to search (Books, Web, Blogs, News, etc.).

academic web pages and student recruitment

Given the uncertainties about enrollment at colleges in general and my institution specifically, there's a movement afoot on our campus to get faculty to update their web pages. Our admissions office knows that prospective students browse our website intensively to help decide whether or not they want to enroll. They don't simply gloss over the top-level pages, either. They drill down deep into departmental and faculty pages to get a sense of what's going on here. The idea is that the more faculty convey about the interesting things they are doing, the more attractive the institution will be to incoming students.

I think this is a great idea.

There are a couple of approaches faculty can take. They can view their website as a kind of brochure that they update once a semester or so with details about their teaching and research.

Or they can actually do some of their teaching and research through the web medium, posting syllabi, research data, photos, blogs, etc. More and more faculty are doing this, though in some cases it's behind the password wall of a CMS.

This movement to make our academic activities more visible on the web kind of breaks down the barrier between an external, recruitment focused web presence and a more internal academically focused one.

We've generally thought of the library website as mostly an internal tool, but indeed it must play a role in recruiting students as they explore our virtual presence.

Monday, March 16, 2009

on e books

Our library is dipping its toes into e-books. It's a complex world. A small academic library has a few options for providing e-books:
  1. Public domain e-books from Project Gutenberg, Google Books, etc.
  2. Licensed packages of e-books paid for with a yearly fee
  3. E-book aggregators that sell books by the title, the big ones being Ebrary and EBL
This is a good discussion of academic library options regarding e-book purchasing.

I'm most comfortable with the licensing option because it doesn't seem like as long-term a commitment as paying full price or more for permanent rights to individual titles on a potentially questionable platform. We recently licensed the ACLS Humanities E-Book collection and are trialing the Safari digital library and the Ebrary platform.

Safari is a good example of providing e-book content in a way that makes sense for the web. It is surprisingly pleasant to use. All book content is browsable as HTML web pages, and there are links between relevant sections of materials. Books may be downloaded as PDFs and used on mobile devices. New titles in the library are available via RSS feed. It seems like the content was designed from the ground up to be navigated digitally.
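
A nice thing about a new-titles RSS feed is how easy it is to consume programmatically. Safari's actual feed structure isn't documented here, so this is just a sketch against a made-up RSS 2.0 snippet (the feed contents and URLs are invented), using only Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed of newly added e-book titles. The real
# Safari feed's fields may differ; this is illustrative sample data.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>New Titles</title>
    <item>
      <title>Learning Python</title>
      <link>http://example.org/books/1</link>
    </item>
    <item>
      <title>RESTful Web Services</title>
      <link>http://example.org/books/2</link>
    </item>
  </channel>
</rss>"""

def new_titles(feed_xml):
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in new_titles(SAMPLE_FEED):
    print(title, "->", link)
```

A library could run something like this on a schedule to republish new acquisitions on its own site, rather than asking patrons to visit the vendor platform.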

Ebrary presents a clunky web interface plus a reader plug-in that lets you do various things with the book: cut and paste, take notes, etc. It seems like a "walled garden" approach, and it doesn't strike me as very practical or realistic. Patrons are unlikely to adopt research habits that depend on features available only in some electronic content. Ebrary books feel like old-fashioned books shoehorned into an electronic interface. Ebrary has titles from many academic publishers that are available to purchase at full price from library book vendors like Yankee and Blackwell.

The ACLS collection also has a somewhat clunky interface, but at least it doesn't push you to download a reader. It looks somewhat JSTOR-inspired. Weirdly, you can download books in PDF format, but only in 5-page chunks. You can also see the plain text, but in a hard-to-read format.

The Google Books interface has a nice way of flipping between the scanned image and the plain text and is also generally pleasant to use considering it's working with mostly analog-derived content.

Unsurprisingly, the library sector vendors are behind the curve in user interface design. I don't think our patrons will have much sympathy for this.

I hope that we can eventually buy all our e-books in a form that lets them be used in a best-of-breed interface. I'm also hoping that we can leverage our regional consortium's buying power for e-book packages and individual e-books. Right now, purchasing an e-book for our library is a bummer because it provides no broader benefit to the consortium collection. A shared e-book collection is on the Orbis Cascade Alliance Strategic Agenda, I'm told.

Tuesday, March 3, 2009

at CAA in LA

I had the pleasure of heading down to LA last week with Margo Ballantyne to speak at the College Art Association Annual Conference about accessCeramics. This is the main professional conference for art and art history profs.

The trip started out great on Wednesday evening with mojitos courtesy of Margo at the Figueroa Hotel (a Moroccan-themed place that looks better and better as night falls and more mojitos are consumed). Then our panel headed out to the breathtaking Getty Center for the conference's opening reception, which featured the academically fashionable CAA crowd; excellent wine, food, and desserts; and open access to many of the Getty's galleries and research center.

The next day I was part of a panel presentation put on by the Visual Resources Association with the theme "You can do it, we can help: building digital image collections together."

The panel started off with an introduction by Maureen Burns from UC Irvine about a shared image collection for teaching that the University of California schools are creating using ARTstor software. I kept thinking that this would be a perfect collaborative project among NITLE schools; I'll bet someone has already thought of that.

Next, Margo and I did our bit on accessCeramics, Lewis & Clark's own collaboratively created digital collection, which uses flickr as the underlying digital asset management system.

Next, Alka Patel, a scholar of Islamic art and architectural history at UC Irvine, described the process of publishing her personal image collection with ARTstor.

We also heard from Ann Whiteside of MIT about the SAHARA project, another collaborative effort with ARTstor that will allow scholars of architectural history, librarians, and others to develop a shared collection of architecture images. It also aspires to be a kind of framework for digital scholarship around the images. Loyal readers of synthesize-specialize-mobilize might recall that I mentioned this project when it was first announced last spring.

Finally, we heard from Cara Hirsch, Assistant General Counsel at ARTstor, about the intellectual property issues surrounding collaborative collection building.

ARTstor is clearly interested in distributed collection building and is moving into this space on several fronts. Given its network-level software platform, ARTstor is in a good position to do so. With our relatively tiny flickr-based project, Margo and I felt a bit like the renegades of the group, though it was clear that our project shared many similarities with the others.

Friday, January 23, 2009

on Google Scholar and 'electricity sucking mosques'

Came across this post by Paul Kedrosky on "Google Scholar Suckitude." He's a financial commentator with a popular blog. It's interesting to read an outsider's perspective on the scholarly information ecosystem. Kedrosky is frustrated by a few things about Google Scholar. From the comments:
But I also find it messes up dates all the time, with recent papers too hard to find. And I'm unimpressed with its authoritativeness measures, with many quack pieces from quack journals making it through the cracks.

More broadly, I'd like it to tell me more clearly if there is a PDF somewhere. Perhaps back-index from authors/paper to author websites and look for the original paper there. Too often I find the piece referenced, and then have to do a second set of searches to find the author's website where there is often a working paper available. That shouldn't be required.
Funny that I was just admiring the addition of the Google Scholar feature that attempts to locate a free copy of a journal article online if available. Another comment on the post argues that academic libraries have shifted from being information disseminators to information gatekeepers:

The facilitation of easily accessible information, such as the good dope contained in publicly funded research, grants and other knowledge-transmitters, the stuff that's found in academic journals and other walled-enlightenment-gardens; in yesteryear fell in the domain and function of the "library".

However, in recent history, the library's role has changed. 180 degrees. Their actions are similar to the RIAA, only in some ways, much darker: their role as spreader-of-our-knowledge-treasures has petrified into the ghostly statuesque remnant that serves as a fortress of knowledge bigotry requiring secret user identifications and passwords.

The library system in America is 1000 times worse than the RIAA.

The Ivy-Leagued elites and celebrities have no problem gathering information. I'm sure that there have been many times when you've bumped into something like the JSTOR hurdle, only to have a colleague or well-intentioned ivy-league aristocrat quietly email you the journal, article or tidbit of enlightenment that you were seeking.

It's not that way for us ignoramus masses. We've got to unGREENy burn oil, use our gas hog cars, travel to the gigantically inefficient electricity sucking mosques that houses the remnants of old, vermin infested and inked-ladened dead tree parchments -- the method of infomatics-gathering used by our candle-burning-horse-and-buggy ancestors.

It's a system structure that benefits the Ivy Leagues. They have the access. They have the secret keys, passwords and easy pathways to the knowledge chambers. To withhold the public's golden knowledge, to keep the masses enslaved to illiteracy and darkness -- isn't it unconscionable?

Thursday, January 22, 2009

Less is Moore

This piece in the Economist points out that there is now a movement to use Moore's law to save money rather than to add new bells and whistles to computers. I've always thought it was kind of a racket that computers never got much cheaper; they just got more powerful and stayed around the same price. Netbooks seem to counteract that trend.

I just put Ubuntu on a six-year-old laptop whose performance in Windows XP (loaded up with virus protection and who knows what other add-ons and spyware) had slowed to a crawl. It works great now.

Friday, January 16, 2009

GMail apps

I was just ordering some books that faculty had requested over email. Wouldn't it be cool if we could write our own apps that lived inside GMail and operated on the information in email messages? In the same way that Google Calendar recognizes dates and times in email messages, an application could recognize book titles and ISBNs, check them against our catalog, and eventually help place an order for the title if desired.

Since GMail is web-based, I suppose you could use a browser-embedded application like Zotero or LibX to take these kinds of actions, too.
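
The ISBN-recognition piece of the idea is easy enough to sketch. Here's a rough, hypothetical example in Python that pulls checksum-valid ISBN-13s out of free text; ISBN-10 handling and the actual catalog lookup (which would depend on whatever API the catalog exposes) are left out:

```python
import re

# Loose pattern for a 13-digit ISBN, allowing hyphens or spaces
# between digits (e.g. "978-0-306-40615-7").
ISBN13_RE = re.compile(r'\b97[89][-\s]?(?:\d[-\s]?){9}\d\b')

def isbn13_valid(digits):
    """Check the ISBN-13 checksum: digits weighted 1,3,1,3,...
    must sum to a multiple of 10."""
    if len(digits) != 13 or not digits.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

def extract_isbns(text):
    """Return checksum-valid ISBN-13s found in free text, separators stripped."""
    results = []
    for match in ISBN13_RE.finditer(text):
        digits = re.sub(r'[-\s]', '', match.group())
        if isbn13_valid(digits):
            results.append(digits)
    return results
```

An app along these lines would run something like `extract_isbns` over the message body and then query the catalog with each hit before offering to place an order.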

Tuesday, January 13, 2009

the evolution of library discovery environments in the web era

I'm working on an article for OLA Quarterly about the evolution of library "discovery environments" during the web era. Maybe I'm getting a little too theoretical here, but I'm trying to come up with three distinct phases of evolution. Roughly, they are:

1. Bringing pre-web indexing systems onto the web platform (mid-to-late 90s)
2. Systems that increasingly (1) match the consumer web experience of the wider web and (2) manage (synthesize) online full-text content, in both cases with an increasingly dis-integrated set of tools (early-to-mid 2000s)
3. Two-way, network-level systems that benefit from both global scale and local customization (specialize), that get better as more people (library staff and patrons) use them, and that syndicate (mobilize) resources (late 2000s)

The first phase is sort of Web .5. The second phase involves trying to catch up to Web 1.0. The third phase is Web 2.0 and beyond.

1. Web OPACs, static library websites, A&I databases on the web
2. JSTOR, Serials Solutions, SFX, ContentDM, DSpace, lipstick-on-a-pig library catalogs, Endeca, federated search
3. Google Scholar, Google Books, WorldCat.org, WorldCat Local, Flickr Commons

I realize that trying to write history as it's happening is hazardous.

Yammering

A group of people in our library is trying out Yammer (NYT coverage), a micro-blogging service designed for organizations. We're the first people on the lclark.edu domain to use it. It's easy for people in a library to get a little isolated in their work, and we're hoping this will promote more communication and collaboration among staff.

This is my first foray into micro-blogging and I like it. The operative question in Yammer is "what are you working on?"

It would be fun to view a busy Yammer feed from a large and diverse organization like a college. We'll see if it takes off beyond the library here at L&C.