For too long (decades now, really) reference publishers have pumped out a cascade of books, and now databases, but done very little to address a fundamental problem: discoverability. Reference books have always required a conduit, the librarian, to be used properly and fully, because their contents don’t show up in any card catalog. A student writing a paper about the Battle of Gettysburg has no idea that the multivolume encyclopedia buried in a far corner of the library can tell her everything she needs to know, unless a librarian is there to help her, and unless that librarian is himself familiar with the set. As a result, as studies have shown, print reference sections in libraries everywhere have been gathering dust, day by day, year by year, decade by decade. The familiar library-convention discussion topic, “Is Print Reference Dying?”, is both mordantly funny and terrifyingly legitimate. The truth is that plenty of print reference is still published and bought, but most of the new material joins its ancestors: it sits on a shelf, unused.
The situation is only modestly better with electronic reference. Tech-savvy students may indeed be more likely to stumble upon resources they can use in this setting, but “stumble” is still the operative word. First, they have to navigate a myriad of siloed databases with inscrutable names and idiosyncratic search interfaces. Then, they have to be careful enough to pick the gems from what may be a torrent of search hits.
Adam Hodgkin reflects on the London Book Fair, as well as on the meaning of the Google Book Agreement (ExactEditions):
If we think of this rather large and hangar-like hall being occupied by the books that are currently the focus of the commercial market for books, we can also imagine a skyscraper of 30 or perhaps 40 stories being built above the Earls Court Stadium. The stacked stories of this skyscraper will each contain another 300,000 mostly older books, but this time all of them ordered, regimented and deployed in total silence and precise obedience, with no noisy haggling or discordant trading. Such a skyscraper would be a serious obstacle on the flight path for planes approaching Heathrow, but its towering shadow does give us an idea of the relative scale of the Google Book Search project as set against the current (this year, last year) output of publishers in the English language. The 10,000,000+ books that Google will have in its arsenal when the Google Book Search library goes live in a year or two will completely dwarf the current activity.

The 40-odd stories of the Google Books skyscraper will not need the traditional tools and mechanisms of the book trade. The transactions, accessibility, searchability, and reading of these millions of books will all be a matter of database and web-driven activity. Commercial arrangements will be settled by the Book Rights Registry or the publishers’ agreements with Google, and the commercial transactions and access rules will be executed by Google or its contracted distributors. There will be very little need for human intervention, except at the periphery: when authors, agents and publishers decide to put things into the system, or, at the consumer edge, when readers, searchers, librarians or consumers decide that they wish to have some form of access to the repository.

Of course Google will also not need a skyscraper at all. The few hundred terabytes, possibly by then one or two petabytes, that may be needed for the nearly-complete Google library in 2012 will fit comfortably in the confines of the whirling, bladed and racked systems housed in a single standard freight container. We should add a few more trailers to cope with the bandwidth of a billion users, but it would all fit nicely in the underground loading bay that they have at Earls Court. The efficiency and reliability of the Google system does not require large physical infrastructure. Push on a couple of years, and by 2014 I think one can be sure that Google will have most of the world’s published literature in the Google database. How will new books then work in relation to the 50, 60, 70, 80 stories-high skyscraper of previously published but now completely databased and universally accessible digital books?
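Hodgkin’s storage figures are easy to sanity-check. Here is a rough back-of-envelope sketch in Python; the per-book sizes are my own assumptions, not numbers from his post:

```python
# Back-of-envelope check on the Google Books storage estimate.
# Assumed sizes (not from Hodgkin's post): a plain-text book runs
# on the order of 1 MB; a full page-image scan more like 50 MB.
BOOKS = 10_000_000           # "10,000,000+ books"
TEXT_MB_PER_BOOK = 1         # extracted text only (assumption)
SCAN_MB_PER_BOOK = 50        # page images plus OCR (assumption)

text_tb = BOOKS * TEXT_MB_PER_BOOK / 1_000_000       # MB -> TB
scan_pb = BOOKS * SCAN_MB_PER_BOOK / 1_000_000_000   # MB -> PB

print(f"text only:  ~{text_tb:,.0f} TB")   # ~10 TB
print(f"full scans: ~{scan_pb:.1f} PB")    # ~0.5 PB
```

Under those assumptions, both figures land squarely inside his “few hundred terabytes, possibly one or two petabytes” range, which is indeed freight-container-sized.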
Brian O'Leary had some thoughts on curation as a way to cut through the increasing clutter of information (Magellan Partners):
But that’s not quite a business model. As my colleague Mac Slocum noted, “The world definitely needs a clear-headed curation advocate, particularly one that links it directly to revenue (this labor of love stuff only goes so far ...).”
Historically, reviewers were paid by newspapers, which relied on some mix of advertising and subscription revenue to create and deliver a comprehensive product. Few people, presumably, read everything, but the “one size fits all” model was accepted by readers and advertisers alike.
Booksellers are paid for curation when they sell something. Unfortunately, the small, independent bookstore whose title mix reflects a niche, a neighborhood or a community is hard to sustain. This curation question came to a head two weeks ago, when Amazon appeared to delist titles with gay and lesbian content.
Joe Wikert posted some video from the CEO Roundtable at the TOC conference earlier this year. It features Bob Young (Lulu), Mike Hyatt (Thomas Nelson), Tim O'Reilly and Clint Greenleaf (Greenleaf Publishing). (2020 Blog)
Back on the curation theme, Clay Shirky talks about “filter failure” and how collaboration may be the answer to finding the best information and news content (Pub 2.0):
The #swineflu hashtag on Twitter serves as a good point of reference for what Clay Shirky called “filter failure.” The problem is not that there’s a wild abundance of useful information, overloading us with detail, facts, and commentary; the problem is that we don’t have the proper filtering system set up to separate trusted sources and reliable resources from rumors, jokes, misinformation, and ephemera. If those seeking to provide links to reliable information started using a hashtag such as #therealswineflu, it would likely be overtaken — quickly — by tagged content with less value, whatever its source.
So how do we solve filter failure?
We depend on humans to serve as our filters. We do this all the time, when we ask a friend a question or talk with someone we know who happens to be an expert on a given topic. (I imagine the world’s epidemiologists are fielding a huge number of Facebook messages from old friends this week.)
When it comes to reliable sources for news that breaks on a massive scale, our best sources are likely to be Wikipedia for facts, and journalists for explanation, clarification, context, and meaningful analysis.
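To make the “humans as filters” idea concrete, here is a minimal sketch of a source-based filter over a hashtag stream. The tweet structure and the trusted-source whitelist are hypothetical, invented for illustration; the point is that the curation lives in the list, which humans maintain:

```python
# Minimal sketch: filter a hashtag stream down to trusted sources.
# The data shapes and the whitelist entries are illustrative assumptions.
from typing import NamedTuple

class Tweet(NamedTuple):
    author: str
    text: str

# A human-curated whitelist is the "filter" Shirky says we lack.
TRUSTED = {"CDCgov", "WHO", "nytimes"}

def filter_trusted(stream: list[Tweet]) -> list[Tweet]:
    """Keep only tweets whose author is on the curated whitelist."""
    return [t for t in stream if t.author in TRUSTED]

stream = [
    Tweet("CDCgov", "Updated #swineflu case counts: ..."),
    Tweet("random_user", "#swineflu lol we're all doomed"),
    Tweet("WHO", "#swineflu guidance for travelers: ..."),
]
print(filter_trusted(stream))  # only the CDCgov and WHO items survive
```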
Tim O'Reilly discusses a new book project written in (on?) PowerPoint (Radar):
Of course, modularity isn't the only thing that publishers can learn from new media. The web itself, full of links to sources, opposing or supporting points of view, multimedia, and reader commentary, provides countless lessons about how books need to change when they move online. Crowdsourcing likewise.
But I like to remind publishers that they are experts in both linking and in crowdsourcing. After all, any substantial non-fiction work is a masterwork of curated links. It's just that when we turn to ebooks, we haven't realized that we need to turn footnotes and bibliographies into live links. And how many publishers write their own books? Instead, publishers for years have built effective business processes to find and promote the talents they discover in the wider world! (Reminder: Bloomsbury didn't write Harry Potter; it was the work of a welfare mom.) But again, we've failed to update these processes for the 21st century. How do we use the net to find new talent, and once we find it, help to amplify it?
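O'Reilly’s “live links” point is mechanically simple. A toy sketch, assuming bibliography entries are stored as plain strings containing bare URLs (the entry below is made up):

```python
import re

# Wrap bare URLs in bibliography entries in HTML anchors, so an
# ebook's back matter becomes clickable rather than dead text.
URL = re.compile(r"(https?://\S+)")

def linkify(entry: str) -> str:
    """Turn a plain-text bibliography entry into HTML with live links."""
    return URL.sub(r'<a href="\1">\1</a>', entry)

entry = "Shirky, C. 'It's Not Information Overload.' http://example.com/shirky"
print(linkify(entry))
# ... 'It's Not Information Overload.' <a href="http://example.com/shirky">...</a>
```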
Kassia Krozser expresses some concern over the privacy issues that may arise if the Google Book Settlement goes ahead (Booksquare):
If the settlement is approved, then Google owns lots and lots of readers. We’re locked into the Google service if we want the best possible search results. Yet our concerns were not addressed in the settlement. One such worry is the privacy factor.
Every move we make online is tracked and traceable. Generally, this is not a concern; so much data is being crunched that individuals are rarely singled out for close examination. But this audit trail can be used against us, and I hadn’t really considered the implications of my online activity in light of GBS until I read a recent call from the Electronic Frontier Foundation.
Physical libraries have long held firm against law enforcement seeking to use customer records against individuals (and it’s just one more reason to love librarians!). What we read should remain private to us. However, once we, as a society, move beyond the physical into the digital, new rules seemingly apply. Now is the time to ensure that the GBS includes consumer privacy protections.