Wednesday, February 27, 2008

broke breakout

I proposed a breakout at code4lib today but I didn't really get enough takers for it to fly. So here are some notes on it, DOA:

The topic was to be "cloud computing and network level services": a discussion of Nick Carr's Big Switch thesis and its application to library environments, plus consideration of which library applications, services, and databases should be provided as network-level services.

Cloud computing questions:
  • Is Nick Carr's thesis about utility-style computing applicable to the library software world?
  • Do some of our open source projects (e.g., LibraryFind, VuFind, Scriblio, eXtensible Catalog, Evergreen, Koha) miss out on the network effects of a more centralized model of data/services provision?
  • Should open source projects be approached differently from a cloud computing perspective?
  • Who in the library world is positioned to provide utility-level services? OCLC? Talis? Internet Archive? What should they provide?
  • Are there two visions of cloud computing out there, one more commercial and another more open?
  • How does the cloud computing model intersect with the semantic web?
  • Are people using utility-style computing as they build applications (Talis platform, OCLC web services, BiblioCommons, Amazon S3, etc.)? (See the short sketch below this list.)
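
For what "utility-style computing" can look like in practice, here's a minimal sketch of pushing library data into Amazon S3 instead of running your own storage layer. This is just an illustration, not anything from the projects named above; it assumes Python with the boto3 library, configured AWS credentials, and a hypothetical bucket and key.

    # Illustrative only: storing and retrieving a record via Amazon S3,
    # treating storage as a network-level utility rather than a local service.
    # Assumes boto3 is installed and AWS credentials are configured;
    # the bucket and key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-library-records"  # hypothetical bucket

    # Push a MARC record (or any file) up to the utility
    with open("ocm12345678.mrc", "rb") as f:
        s3.put_object(Bucket=BUCKET, Key="records/ocm12345678.mrc", Body=f)

    # Pull it back down later, from anywhere with network access
    obj = s3.get_object(Bucket=BUCKET, Key="records/ocm12345678.mrc")
    record_bytes = obj["Body"].read()
    print(len(record_bytes), "bytes retrieved")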

Examples of network level services floating around the conference:
  • OCLC grid services
  • Talis platform
  • LibraryThing
  • OpenLibrary
  • BiblioCommons
  • Zotero 2.0

code4lib 2008 trends

Some observations regarding the first day of code4lib 2008:

There are a couple of big players here from outside the library and library-vendor worlds doing big, important library-related things: the Internet Archive and the Center for History and New Media.

The Internet Archive considers itself a library, a basically altruistic, nonprofit institution. But unlike any library I know, it has ambitions as far-reaching as Google's. It wants to create a digital archive of as much of the human record as it can get its hands on. I gotta say I'm impressed with what they've done so far, and I support their other efforts with the OpenLibrary.

The Center for History and New Media approaches research in the digital age from the perspective of the humanities scholar. They've brought us Zotero, which I was impressed to learn already has a user base of over 500K. I'm also eager to hear about their digital collections/digital exhibit software Omeka, which I see as a possible replacement for ContentDM at our shop.

It's really evident that there are a lot of competing solutions out there for various library technology problems. Throughout the day we heard of several solutions for catalog search/metasearch: the WorldCat API, VuFind, eXtensible Catalog, and, tangentially, LibraryFind. I can only wonder if there'll be a shakeout here sometime in the near future.

Tuesday, February 26, 2008

Brewster Kahle - code4lib 2008

Just took in the opening keynote at code4lib '08, Portland.

The Internet Archive/OpenLibrary project takes the most open approach to digital collections and digitization projects. Also the most centralized.

They've got big aspirations: an archive of all web pages, a catalog of every book, a major digitization initiative.

The overlap between their project's goals and those of Google Books, WorldCat, and other more localized digitization projects makes you wonder which projects will last. Or can they all coexist?

Friday, February 8, 2008

professional vs. consumer media

This post by Tim O'Reilly about a talk by the Reuters CEO at MoneyTech makes some interesting references to professional vs. consumer media, semantic markup and metadata, and the concept of "curation", all of which have parallels in the academic information arena.

Basically, Reuters is making the case that their professional-level information products for the financial industry will deliver value above and beyond consumer financial information products through semantic metadata. The CEO's points, as summed up by Tim:
  1. The impact of consumer media on professional media. As young people who grew up on the web hit the trading floor, they aren't going to be satisfied with text. Reuters needs to combine text, video, photos, internet and mobile, into a rich, interactive information flow. However, he doesn't see direct competition from consumer media (including Google), arguing that professionals need richer, more curated information sources.

  2. The end of benefits from decreasing the time it takes for news to hit the market. He describes the quest for zero latency in news, from the telegraph and early stock tickers and the news business that Reuters pioneered through today's electronic trading systems. (Dale Dougherty wrote about this yesterday, in a story about the history of the Associated Press.) As we reach the end of that trend, with information disseminated instantly to the market via the internet, he increasingly sees Reuters' job to be making connections, going from news to insight. He sees semantic markup to make it easier to follow paths of meaning through the data as an important part of Reuters' future.

I think you could make the argument that these two points apply to the types of "professional" information resources that academic libraries provide to their patrons.