
Friday, November 15, 2013

Judge Chin Decides and Everybody Wins

About eight years ago I was one of the group at the Association of American Publishers that voted to file suit against Google for the unauthorized copying of upwards of 10 million books (and other bound material) from the collections of five large academic libraries.  It wasn't long after the vote that I wished I had missed the meeting.

If nothing else, the passage of time since that suit was filed has proven that Google's activities were not the real enemy.  Google wanted to expose content locked away on shelves (off the network) to the same readers, researchers and other content users that the very same publishers were interested in selling to.  The risk publishers anticipated - that all content would be devalued and royalties due publishers would be circumvented - looks with hindsight to have been a red herring.  The real problem for publishers with respect to the value of intellectual property quickly became apparent from the more obvious culprit: Amazon.com, which has successfully fought off the publishers, Apple, Google and everyone else and has won the battle over value.  After eight years, the (maybe) final end to the Google Books legal wrangle is a side show to the very real problems publishers have with their business models.

Judge Chin's ruling that the scanning program at Google was indeed fair use (delivered so emphatically that you have to wonder what took him so long) may mean we see some new products come out of this content library.  For example, I wrote of a subscription database product that Google could launch; others have spoken about the analysis of language, semiotics and culture that this database could facilitate, and there will no doubt be other products.  What should be clear is that the access this ruling enables will enhance our knowledge and understanding of the content itself and the evolution of the printed word.  The next eight years might actually result in something useful.

There are four tests for fair use, and Judge Chin has ruled as follows on each:

On purpose and character of use:
Google's use of the copyrighted works is highly transformative. Google Books digitizes books and transforms expressive text into a comprehensive word index that helps readers, scholars, researchers, and others find books. Google Books has become an important tool for libraries and librarians and cite-checkers as it helps to identify and find books. The use of book text to facilitate search through the display of snippets is transformative.
On the 'nature of the copyrighted work':
While works of fiction are entitled to greater copyright protection, Stewart v. Abend, 495 U.S. 207, 237 (1990), here the vast majority of the books in Google Books are non-fiction. Further, the books at issue are published and available to the public. These considerations favor a finding of fair use.

On the "amount and substantiality of the portion used":
Google limits the amount of text it displays in response to a search.
Lastly, on the "effect of the use upon the potential market for or value of the copyrighted work":
...a reasonable factfinder could only find that Google Books enhances the sales of books to the benefit of copyright holders. An important factor in the success of an individual title is whether it is discovered -- whether potential readers learn of its existence. (Harris Decl. ¶ 7 (Doc. No. 1039)). Google Books provides a way for authors' works to become noticed, much like traditional in-store book displays.
In his concluding comments the Judge states:
In my view, Google Books provides significant public benefits. It advances the progress of the arts and sciences, while maintaining respectful consideration for the rights of authors and other creative individuals, and without adversely impacting the rights of copyright holders. It has become an invaluable research tool that permits students, teachers, librarians, and others to more efficiently identify and locate books. It has given scholars the ability, for the first time, to conduct full-text searches of tens of millions of books. It preserves books, in particular out-of-print and old books that have been forgotten in the bowels of libraries, and it gives them new life. It facilitates access to books for print-disabled and remote or underserved populations. It generates new audiences and creates new sources of income for authors and publishers. Indeed, all society benefits.
The wider relevance of this opinion on fair use may well extend the law and could have implications for other media and content businesses.

Tuesday, July 02, 2013

Fair Use Google?

Judge Denny Chin doesn't move so fast. His current colleagues on the Second Circuit, one of whom is considered a fair use scholar, may have forced him to push the Google Books case along a little quicker than he has in the past.

As you may have heard by now, the Second Circuit has determined that it was premature of Judge Chin (now a colleague) to certify a class in the case the Authors Guild had brought against Google over its book scanning program. This is clearly Google's victory, for several reasons. First, they will get to argue the fair use case, which - aside from settling - is what they've always wanted to do and where they expect an outright win. Second, the Authors Guild now has far less expectation of a large payout but probably faces crippling legal expenses should they continue.

A copy of the Second Circuit decision:
Putting aside the merits of Google’s claim that plaintiffs are not representative of the certified class — an argument which, in our view, may carry some force — we believe that the resolution of Google’s fair use defense in the first instance will necessarily inform and perhaps moot our analysis of many class certification issues, including those regarding the commonality of plaintiffs’ injuries, the typicality of their claims, and the predominance of common questions of law or fact, (denying plaintiffs’ request for class certification “because of the fact-specific inquiries the court would have to evaluate to address [defendants’] affirmative defenses [including fair use of trademarks]”);
....
Moreover, we are persuaded that holding the issue of class certification in abeyance until Google’s fair use defense has been resolved will not prejudice the interests of either party during the projected proceedings before the District Court following remand. Accordingly, we vacate the District Court’s order of June 11, 2012 certifying plaintiffs’ proposed class, and we remand the cause to the District Court, for consideration of the fair use issues.
Michael Boni, the lead lawyer for the Authors Guild, was quoted by Reuters as saying they are going to litigate fair use and 'that is the shooting match'.  Good luck with that.

In March 2011, Judge Chin rejected a proposed settlement between publishers, authors and Google valued at $125 million, saying it raised (or didn't address) some important copyright issues and questions.

So, this decision bounces the adjudication back to the district court so it can address the fair use issue - and back to Judge Chin, who currently sits on the Second Circuit but held on to this case when he moved up.  Will he continue to hold on to the case, or will he finally decide to hand it over?

Thursday, November 08, 2012

Pearson's Blue Sky Project

At lunch several weeks ago, a media banker had me thinking differently about the proliferation of aggregated content platforms such as Deepdyve, Credo Reference and others (particularly in medical).  In education, there's a rush for content going on, and big incumbent education companies such as Blackboard and Pearson are starting to take notice. While these companies are big and retain influence, they may not find it easy to establish agreements with partner publishers and content providers unless they can prove mutual benefit and trust. It could be a long process, which is why my banker friend suggested there is a lot of M&A activity in this area, where the decision turns on a buy-versus-build analysis.

I was reminded of this conversation by Monday's Pearson announcement of Project Blue Sky, which will allow faculty to search for and include open educational resource content in a custom Pearson textbook.

From their website:
Project Blue Sky allows instructors to search, select, and seamlessly integrate Open Educational Resources with Pearson learning materials. Using text, video, simulations, Power Point and more, instructors can create the digital course materials that are just right for their courses and their students. Pearson’s Project Blue Sky is powered by Gooru Learning, a search engine for learning materials.
Noting that there is so much open resource content out there that it is impossible to ignore, Don Kilburn, vice chairman of Pearson's higher ed division, said the inclusion of this content became an imperative. To my mind this may be a slight smoke screen; after all, what took them so long?  Second, it seems only logical that they would be planning to add more content from more publishers to expand the universe of content available to their faculty users. Adding content beyond the Pearson materials will expand the options available to faculty, and faculty are already seeking easy access to multiple publishers' content; they don't want to be force-fed "off the shelf" products that diminish their ability to teach their classes the way they want.  Their ability to build their own learning products is in process and inevitable, and publishers and education providers like Pearson understand this.  That's what Project Blue Sky is about.  Take a look at their video.

The inclusion of Gooru in this effort is also interesting, since it suggests that indexing, collating, organizing and presenting open resource and publisher content is no easy get. Even Google is listed as a partner of Gooru. Either the effort was too great even for a company like Pearson, or they wanted to move quickly into a market they feel they should dominate. Perhaps both issues are at play here.

Pearson and other large education publishers have direct relationships, via their large sales forces, with the faculty who buy their books.  What they don't want to see happen is something like the Amazon experience, where a faculty member defaults all their activity to a common provider of both content and user experience.  Imagine if a faculty member could go to one site to select and build their course content using content from every source available, mimicking their current trade book experience.  Once they tried that there would be no going back, which is the scenario the incumbent education publishers want to avoid - unless, that is, they are that platform.

Monday, September 17, 2012

MediaWeek (Vol 5, N 38): Career & For Profit Education, Saylor's Free Content, Google Ed Software + More

If employers won't value for-profit education, what's the prospect? (WaPo):
Education researchers have actually conducted a number of studies about this. As of a few years ago, the findings were pretty bleak for the industry. A literature review in 2009 found that ”all scholarly research to date has concluded that the ‘gatekeepers’ [human resources managers, executives, etc.] have an overall negative perception about online degrees.” But online teaching has gotten a lot better in the past three years, and the results are starting to show up on surveys of employers. One study found that half of executives viewed MBAs earned online as no different from ones earned in person. That’s still substantial stigma, though. If half of employers don’t think your degree is worth as much as those of other people applying for the same position, that’s not a great position to be in.
Can on-line higher education be free? The Saylor Foundation thinks it can (IHeD):
This fall will be Saylor’s launch, for all practical purposes. Although the foundation has gotten some notice among higher-education reformers, the fleshed-out majors make the concept tangible. Based in a sleek but noisy office in Washington's ritzy Georgetown neighborhood, the foundation’s 20- and 30-something employees are working with faculty members to put the finishing touches on courses.

Angela Bowie is one of those faculty members. Based in Philadelphia, Bowie has worked as a lobbyist and teaches political science and history, mostly at community colleges or regional public universities. She saw a job ad for Saylor, and tossed her hat in the ring. Bowie said the foundation put her through the wringer, asking for information on every course she’d taught over the last decade.

“They did a very thorough vetting,” Bowie said. “More than any other college I’ve ever worked for.”

Saylor has a training program for faculty members, which Bowie also described as thorough. In particular, she praised the foundation’s focus on learning outcomes. Saylor's faculty are paid on an hourly basis, a foundation official said. And Saylor is a side gig for most, who work as professors or adjuncts at traditional universities.
Google unveils open source education software (PC):
"The Course Builder open source project is an experimental early step for us in the world of online education," Norvig said. "It is a snapshot of an approach we found useful and an indication of our future direction. We hope to continue development along these lines, but we wanted to make this limited code base available now, to see what early adopters will do with it, and to explore the future of learning technology."

In addition to offering a new platform for empowering educators, the effort is also a unique opportunity to connect with Google's research team. Over the course of the next two weeks Google plans to directly interact with Course Builder users via Google Hangouts. The Course Builder support site is already live and the free software download has already received its first update. For those unsure about their level of skill as it relates to the possible use of the software, Google's Course Builder Checklist offers a reassuring primer on exactly what to expect and how to get started.
Peter Osnos in The Atlantic on the Cruel Paradox of Self-Publishing (Atlantic):
And therein is the essential fact about self-publishing: Digital and print-on-demand technology has made the manufacture of books and their distribution through the Internet vastly more accessible than the traditional publishing model. But for every instance of a self-published work that gains meaningful traction because its author succeeds in finding an audience for it, the overwhelming majority of books do not. There is a menu of services available from companies like Author Solutions, including editing, design, and basic marketing that can cost up to about $5,000 and will give the book a qualitative boost. But with so many books pouring forth, gaining any attention is a formidable challenge. In its sale announcement, Author Solutions said Bowker Market Research, which is a primary source for how many books are published, reported that 211,000 self-published titles were released in 2011 in print or e-books, an increase of almost 60 percent over 2010. Presumably, that number will grow substantially again by the end of 2012.

Very few books that come from self-publishing companies end up on the shelves of bookshops, but as the percentage of the brick and mortar market gradually declines, the attractions of selling through online retailers and other e-book distributors grows. Amazon, covering all bases, has a thriving self-publishing business and now is also an acquirer of books by established authors.
From Twitter this week:

Chart: Top 100 iPad Rollouts by Enterprises & Schools (Updated Sept. 15, 2012)
Today's is the best Doonesbury ever (and education related):

Monday, August 20, 2012

MediaWeek (Vol 5, No 34): Copyright, Digital Public Library, Colleges & Big Data + More

University of California at Berkeley professor Pamela Samuelson, writing in The Chronicle a few weeks ago on copyright reform and the creation of a comprehensive digital library (Chron):
The failure of the Google Book settlement, however, has not killed the dream of a comprehensive digital library accessible to the public. Indeed, it has inspired an alternative that would avoid the risks of monopoly control. A coalition of nonprofit libraries, archives, and universities has formed to create a Digital Public Library of America, which is scheduled to launch its services in April 2013. The San Francisco Public Library recently sponsored a second major planning session for the DPLA, which drew 400 participants. Major foundations, as well as private donors, are providing financial support. The DPLA aims to be a portal through which the public can access vast stores of knowledge online. Free, forever.
Initially the DPLA will focus only on making digitized copies of millions of public-domain works available online. These include works published in the United States before 1923, those published between 1923 and 1963 whose copyrights were not renewed, as well as those published before 1989 without proper copyright notices, and virtually all U.S.-government works. If a way can be found to overcome copyright obstacles, many millions of additional works could be made available.
It's no secret that copyright law needs a significant overhaul to adapt to today's complex information ecosystem. Unfortunately the near-term prospects for comprehensive reform are dim. However, participants at a conference last spring at Berkeley Law School on "Orphan Works and Mass Digitization: Obstacles and Opportunities" believe that modest but still meaningful reforms are possible.
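As an aside, the eligibility rules in that excerpt reduce to a simple decision procedure. Here is a minimal sketch in Python - not DPLA code, just the quoted rules of thumb made explicit, with hypothetical field names; real public-domain determinations require legal review:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Work:
    year: int                    # year of first US publication
    renewed: Optional[bool]      # was the copyright renewed? (relevant for 1923-1963 works)
    had_notice: Optional[bool]   # published with a proper copyright notice?
    is_us_government: bool = False

def likely_public_domain(w: Work) -> bool:
    # Virtually all U.S.-government works.
    if w.is_us_government:
        return True
    # Published in the United States before 1923.
    if w.year < 1923:
        return True
    # Published 1923-1963 and the copyright was not renewed.
    if 1923 <= w.year <= 1963 and w.renewed is False:
        return True
    # Published before 1989 without a proper copyright notice.
    if w.year < 1989 and w.had_notice is False:
        return True
    return False

# Example: a 1940 title whose copyright was never renewed.
print(likely_public_domain(Work(year=1940, renewed=False, had_notice=True)))  # True

The hard part, of course, is not the rules but the data: for millions of titles nobody knows whether the copyright was renewed, which is exactly the orphan works problem.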
Her comment about the institutional license for the Google database reminded me of the analysis I completed in 2010.

The Atlantic takes a more detailed look at the Digital Public Library (of America):
The DPLA is the most ambitious entrant on the digital library scene precisely because it claims to recognize this need for scale, and to be marshaling its resources and preparing its infrastructure accordingly. With hundreds of librarians, technologists, and academics attending its meetings (and over a thousand people on its email listserv), the DPLA has performed the singular feat of convening into one room the best minds in digital and library sciences. It has endorsement: The Smithsonian Institution, National Archives, Library of Congress, and Council on Library and Information Resources are just some of the big names on board. It has funding: The Sloan Foundation put up hundreds of thousands of dollars in support. It has pedigree: The decorated historian Darnton has the pages of major publications at his disposal; Palfrey is widely known for his scholarship on intellectual property and the Internet; the staging of the first meeting on Harvard's hallowed campus is not insignificant. Ideally, the consolidation of resources—specialized expertise, raw manpower, institutional backing and funding—means that the DPLA can expand its clout within the community, attract better financial support, and direct large-scale digitization projects to move toward a national resource of unparalleled scope and functionality. "We believe that no one entity—not the Library of Congress, not Harvard, not the local public library—could create this system on its own," Palfrey says. "We believe strongly that by working together, we will build something greater."
The Economist takes a look at an exhibition at the British Museum on the life of Shakespeare (Econ):
Shakespeare is such a global brand that the man himself almost disappears. The aim of “Shakespeare: Staging the World”, at the BM until November 25th, is to make the playwright specific and particular, to root him in his time, 400 years ago. The exhibition summons his physical world with an array of culturally evocative objects, many of which were used in “Shakespeare’s Restless World”, a splendid BBC radio series presented by the BM’s director, Neil MacGregor, earlier this year.
The show unfolds in a dark circular space, with curving rooms that wind from one to the next, each subtly lit and discreetly atmospheric of its contents: arrow slits in the room about the history plays, a hint of trees to suggest Warwickshire and the Forest of Arden, a touch of charring on black walls for the gunpowder and witchcraft of James I’s reign (when Shakespeare wrote “Macbeth”), and finally a pale dawn for the Americas, the “brave new world” of “The Tempest”. All this sits within the embrace of the old Reading Room, its shelves and dome dimly glimpsed through gaps here and there. This globe within a globe, as it were—one full of artefacts, the other of books—glances at the play between word and object that underlies the exhibition.

Fascinating (potentially spooky) article at the NYT on how colleges are beginning to use "big data" to manage student performance and even play matchmaker (NYT):
Data diggers hope to improve an education system in which professors often fly blind. That’s a particular problem in introductory-level courses, says Carol A. Twigg, president of the National Center for Academic Transformation. “The typical class, the professor rattles on in front of the class,” she says. “They give a midterm exam. Half the kids fail. Half the kids drop out. And they have no idea what’s going on with their students.”

As more of this technology comes online, it raises new tensions. What role does a professor play when an algorithm recommends the next lesson? If colleges can predict failure, should they steer students away from challenges? When paths are so tailored, do campuses cease to be places of exploration?
“We don’t want to turn into just eHarmony,” says Michael Zimmer, assistant professor in the School of Information Studies at the University of Wisconsin, Milwaukee, where he studies ethical dimensions of new technology. “I’m worried that we’re taking both the richness and the serendipitous aspect of courses and professors and majors — and all the things that are supposed to be university life — and instead translating it into 18 variables that spit out, ‘This is your best fit. So go over here.’ ”
From the Twitter this week:

ALA Releases Report on Library E-book Business Models

Media Decoder: Google to Buy Frommer's From Wiley Publishing

Wednesday, June 27, 2012

Making Your Metadata Better: AAUP Panel Presentation - M is the New P

(Video of this presentation here)

The last time I was asked to speak at an AAUP meeting was in Denver in 1999 and naturally the topic was metadata. As I told the audience last week in Chicago, I don’t know what I said at that meeting but I was never asked back!  I am fairly confident most of what I did say in Denver still has relevance today, and as I thought about what I was going to say this time, it was the length of time since my last presentation that prompted me to introduce the topic from an historical perspective.

When the ISBN was established in the early 1970s, the disconnect between book metadata and the ISBN was embedded into business practice.  As a result, several businesses like Books In Print were successful because they aggregated publisher information, added some of their own expertise and married all this information with the ISBN identifier.   These businesses were never particularly efficient, but things only became problematic when three big interrelated market changes occurred.  First, the launch of Amazon.com caused book metadata to be viewed as a commodity; second, Amazon (and the internet generally) enabled a none too flattering view of our industry’s metadata; and last, the sheer explosion of data supporting the publishing business required many companies (including the company I was running at the time, RR Bowker) to radically change how they managed product metadata.

The ONIX standard initiative was the single most important program implemented to improve metadata, providing a metadata framework for publishing companies.  As a standards implementation, ONIX has been very successful, but its advent has not changed the fact that metadata problems continue to reside with the data owners.

More recently, when Google launched their book project a number of years ago, it quickly became apparent that the metadata they aggregated and used was often atrocious, proving that little had changed since Amazon.com had launched ten years earlier.  When I listened to Brian O’Leary preview his BISG report on the Uses of Metadata at the Making Information Pay conference in May, I recognized that little progress had been made in the way publishers manage metadata today.  When I pulled my presentation together for AAUP, I chose some slides from my 2010 BISG report on eBook metadata as well as some of Brian’s slides.  Despite the 2-3 year interval, the similarities are glaring.

Regrettably, the similarities are an old story, yet our market environment continues to evolve in ever more complex ways.  If simple metadata management is a challenge now, it will become more so as ‘metadata’ replaces ‘place’ in the four Ps marketing framework.   In traditional marketing, ‘place’ is associated with something physical: a shelf, distribution center, or store.  But ‘place’ is increasingly less a physical location; even when a good is only available ‘physically’ - such as a car - a buyer may never actually see the item until it is delivered to their driveway.  The entire transaction, from marketing to research to comparison shopping to purchase, is done online and is thus dependent on accurate and deep metadata.  “Metadata” is the new “Place” (M is the new P): and place is no longer physical.

This has profound implications for the managers of metadata.  As I wrote last year, having a corporate data strategy is increasingly vital to ensuring the viability of any company.  In a ‘non-physical’ world, the components of your metadata are also likely to change, and without a coherent strategy to accommodate this complexity your top line will underperform.   And as if that weren't enough, we are moving towards a 'unit of one' retail environment where the product I buy is created just for me.

As I noted in the presentation last week, I work for a company whose entire focus is on creating a unique product specific to a professor's requirements.  Today, I can go to the Nike shoe site and build my own running shoes, and each week there are many more similar examples.   All such applications require good, clean metadata.  How is yours?

As with Product and Place (metadata), the other two components of marketing’s four Ps are equally dependent on accurate metadata.  Promotion needs to direct a customer to the right product and give them relevant options when they get there.  Similarly with Price, we now rely on a presumption of change rather than an environment where prices change infrequently.  Obviously, in this environment metadata must be unquestionable, yet it rarely is.  As Brian O’Leary found in his study this year, things continue to be inconsistent, incorrect and incomplete in the world of metadata.  The opposites of these adjectives are, of course, the descriptors of good data management.
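Those three adjectives - incomplete, incorrect, inconsistent - map directly onto checks a publisher could automate. Here is a minimal sketch in Python (not real ONIX tooling; the field names and rules are illustrative assumptions) of what such validation might look like:

REQUIRED = ("isbn13", "title", "contributor", "publication_date", "price_usd")

def check_record(rec):
    """Return a list of problems found in one product-metadata record."""
    problems = []
    # Incomplete: a required field is missing or empty.
    for field in REQUIRED:
        if not rec.get(field):
            problems.append("incomplete: missing " + field)
    # Incorrect: the ISBN-13 check digit must validate.
    isbn = str(rec.get("isbn13", ""))
    if len(isbn) == 13 and isbn.isdigit():
        total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(isbn))
        if total % 10 != 0:
            problems.append("incorrect: ISBN-13 check digit fails")
    elif isbn:
        problems.append("incorrect: ISBN-13 must be 13 digits")
    # Inconsistent: two fields that contradict each other.
    pub, onsale = rec.get("publication_date"), rec.get("on_sale_date")
    if pub and onsale and onsale < pub:
        problems.append("inconsistent: on-sale date precedes publication date")
    return problems

# A clean record produces an empty list.
print(check_record({"isbn13": "9780306406157", "title": "Example Title",
                    "contributor": "A. Author", "publication_date": "2012-05-01",
                    "price_usd": "24.95"}))  # []

Real pipelines validate full ONIX records against the standard's code lists, but the principle is the same: completeness, correctness and consistency are testable properties, not aspirations.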

Regrettably, the metadata story is consistently the same year after year, yet there are companies that do consistently well with respect to metadata.  These companies assign specific staff and resources to the metadata effort, build strong internal processes to ensure that data is managed consistently across the organization, and proactively engage the users of their data in frequent reviews and discussions about how the data is being used and where the provider (publisher) can improve.

The slides incorporated in this deck from both studies fit nicely together, and I have included some of Brian’s recommendations, of which I expect you will hear more over the coming months.  Thanks to Brian for providing these to me, and note that the full BISG report is available from their web site (here).

Thursday, January 19, 2012

An Apple for the Teacher

To the average Apple aficionado, today’s spectacle around Apple’s entry into the education space would have seemed just par for the course.  Hype attends everything Apple does in these coordinated announcements, and today was no different.  To the average textbook publisher, today’s hype would have seemed of another world: that another entity - Apple, no less - would view the staid, traditional and, let’s face it, relatively small textbook publishing business as an opportunity to do big things must seem odd. There was some irony today as well, because a company I recognize as possibly one of the first brands ever to enter my consciousness - KODAK - declared bankruptcy.  KODAK sold film and processing, and their cameras (hardware) were secondary to the revenue they got from film.  In the context of today’s announcement, Apple’s model is the opposite, and it is unlikely textbook companies and other content providers will see themselves as the ‘KODAK’ in this relationship.  How healthy that approach may be for publishers only time will tell.

According to the National Association of College Stores, the average price of a college textbook approaches $70.  Many people make value judgments about college textbook pricing, but how does this very real data point jibe with Apple's desire to sell much cheaper textbooks?  According to today's presentation, Apple plans to hold pricing of K-12 texts at less than $15.  Why would traditional publishers participate in a model that undercuts their business model to such an extent?  While I admit to assuming Apple's pricing strategy will be consistent from K-12 to college, my conclusion is based on a primary premise of their presentation, which was that textbooks are expensive.  What is clear in education today is that there's significant interest in 'reinventing' education from the content perspective, and who can argue with the attention that a company like Apple can bring to a slow-moving business like textbook publishing?  How sustainable their attention will be is another matter entirely.  Let's not forget that the hype that preceded the Google bookstore and the Apple iBook store did not result in any appreciable changes in the business.  Apple's motivations are also suspect: they are motivated by selling more hardware (unlike KODAK) and are only interested in content to the extent that it is cheap and plentiful and helps to drive hardware sales.  A content model with imposed low pricing, just like iTunes, will help sell more units.  I don't believe Apple's 30% cut of $14.95 will ever be significant relative to their sales of iPads at $500 a pop (give or take).
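That last bit of arithmetic is worth making explicit. A quick back-of-the-envelope sketch in Python, using only the figures cited above:

# Figures from the paragraph above: a $14.95 textbook, Apple's 30% cut,
# and an iPad at roughly $500 "give or take".
TEXTBOOK_PRICE = 14.95
APPLE_CUT = 0.30
IPAD_PRICE = 500.00

revenue_per_book = TEXTBOOK_PRICE * APPLE_CUT    # ~$4.49 per textbook
books_per_ipad = IPAD_PRICE / revenue_per_book   # ~111 textbook sales
print(f"${revenue_per_book:.2f} per textbook; "
      f"{books_per_ipad:.0f} textbook sales = one iPad sale")

On those numbers, it takes roughly 111 textbook sales to match the revenue of a single iPad; content is a rounding error next to hardware.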

While the twitter feed this morning was overwhelming at times, it was interesting that some very critical questions quickly arose regarding Apple’s new self-publishing platform.  Fundamentally, the platform does (or will) attempt to address educators' desire for more control over the content they assign their students.  That’s a good thing.  This desire/need of faculty is very much in nascent form; however, it may have more to do with expectations - what they see as their options given the concentration of content around fewer than five large publishers - than a true desire to create their own material.  Motivated faculty will want to build their own content for their classes, and if they can do this cost-effectively and easily, so much the better.  Whether they will do it on the Apple platform remains to be seen.

Offering an ‘open’ platform for faculty and others to build their content poses many potential problems.  (Since this is an Apple platform, technically it isn’t ‘open’.)  The functionality of the platform could be very impressive, but what's the betting the ‘store’ becomes full of crackpot theories, spurious pedagogy and denier/revisionist historians unless Apple ‘censors’ (a more polite word would be ‘curates’) the content?  And if they censor, what will be the basis of the censorship?  I am a believer in self-publishing - it often throws up gems on the trade side, and there is no reason why it won’t on the education side - but the cost may be a lot of dross (or worse).  Frustration will simply drive people away.

One of the other items quickly identified as a potential problem was the management of copyright; specifically, what’s to stop someone uploading content they don’t have rights to?  A user who hasn't cleared material appropriately and arranged to pay the copyright owner will be contravening copyright.  Most publishers price their content at a level that makes its (legal) inclusion in a product that can’t be priced above $15.00 uneconomical.  So that begs the question: who will clear content properly if they categorically won’t make it up in volume?  And if there is a check on copyright, what will the process be that enables the use of this content without causing the entire self-publishing process to unravel?

There are more questions about this Apple initiative in addition to those I raise above; however, I believe the only objective Apple wants to achieve is to sell more hardware.  The education initiative is a smokescreen, and we shouldn’t forget that Apple sold PowerSchool, a school management platform it had acquired, to Pearson in 2006.  At the time, Apple didn’t find the education market attractive enough, even though PowerSchool was considered an impressive tool.  Interestingly, some commentators on the twitter felt the Apple education announcement represented an attack on Blackboard and other LMS platforms.  I’m not sure I see that either.

I believe the involvement of Apple in education will be a good thing for all the players in education and will spur new investment and new initiatives.  This is a dynamic space at the moment, with technology companies, content companies, wholesalers and distributors all vying for advantage.

Tuesday, December 20, 2011

MediaWeek (Vol 4, No 51): ChromeBooks, Durrell eBooks, Hitchens & Dogs, Unbound & Vogue

Google's Chromebook lending program is set for a series of tests (DigitalTrends):
Google has been working with public libraries recently in order to circulate its Chromebook concept. At least three libraries have been working towards lending out Chromebooks to patrons for a period of time.
Most notably, the Palo Alto, California Library will begin making Chromebooks available for loan in January; patrons will be able to check-out the Google devices for up to one week. The pilot project is a first-of-its-kind, though the library had previously made Windows laptops as well as Chromebooks available to patrons in the Downtown, Main and Mitchell Park libraries for two-hour checkouts with library cards.
Along with Palo Alto, September brought Chromebooks to New Jersey’s Hillsborough Library where patrons were allowed to use the netbooks for four-hour time slots, with an additional two-hour renewal period. Also, Wired points out the Multnomah County Library has been testing 10 Chromebooks at five libraries in Portland, Oregon, though patron’s access has been limited and supervised.

I had to read a Gerald Durrell book in middle school (in Oz).  News that his titles are being released in eBook format (Telegraph):
Pan Macmillan has launched a new digital imprint offering 10 Durrell titles as e-books, with five to follow in the New Year. The mixture of fiction and non-fiction includes Beasts In My Belfry, Catch Me A Colobus and Ark On The Move - the latter inspired a television series of the same name.
The advent of e-books could be a godsend for authors whose books are no longer in print. While reissuing backlists as physical books is a costly process, reviving them for Kindles and other e-readers is comparatively cheap.

Pan Macmillan is billing its new imprint, Bello, as a means of “reviving 20th century classics for a 21st century audience”. Other authors on the launch list include Vita Sackville-West and DJ Taylor.
Christopher Hitchens in quotes from The Telegraph. My favorite:
“[O]wners of dogs will have noticed that, if you provide them with food and water and shelter and affection, they will think you are god. Whereas owners of cats are compelled to realise that, if you provide them with food and water and shelter and affection, they draw the conclusion that they are gods.” 
Is Unbound the next big thing?  (Or was it, based on all those stories last year?)  Here's the crux of the issue from the Observer:
So far, the company has had nine books funded (of which only Jones's and Fischer's have actually been published), with another 10 in the pledging phase, including a sci-fi novel by Red Dwarf star Robert Llewellyn. Traffic has been impressive: last month, the site attracted more than 200,000 unique users. Pollard reports that interest from authors has been "huge". And, surprisingly, agents have been enthusiastic.
If you are into fashion, then perhaps you would want to subscribe to the new Vogue content database (NYT):
There are roughly 2,800 issues in the archive (Vogue was published weekly until 1912, and has been monthly, with the exception of some war years, only since 1973) and so it holds the potential for endless examination. The entire contents are searchable, so it is possible, for example, to see all of its Cher covers at once. (There were five, all published between 1972 and 1975.)
The covers alone provide a window into the evolving design of Vogue and its distinct looks under different editors: the elegant, iconic and occasionally abstract or surreal covers of Edna Woolman Chase; the frosted confections of Diana Vreeland; the peppy close-ups of models’ faces from the Grace Mirabella years; the celebrities in lavish settings from Anna Wintour.
Vogue, which developed the site with the trend-forecasting company WGSN, has positioned it for professional use, with an annual subscription price of $1,575. (Vogue provided temporary access for review purposes.) For designers or scholars researching fashion history, or, paradoxically, for those nostalgic for the way magazines used to be before the Internet, it may be worth the price. I could tell you more, but I am currently distracted by an article from Nov. 15, 1949, called “When I Entertain,” by Wallis Windsor.
From Twitter:

Georgia O'Keeffe's visit to Hawaii

Cal Senate President pro Tem Darrell Steinberg proposes slashing textbook prices via legislation. LINK

OCLC Report: Libraries at Webscale, by Michael Cairns

Wednesday, October 05, 2011

Judge Dismisses UCLA Copyright Case: Could Impact Authors Guild/HathiTrust Case

The Chronicle reports on a case brought against UCLA for breaching copyright by using streaming video.  James Grimmelmann, a New York Law School professor, believes the dismissal of the case could have repercussions for the recently filed Authors Guild/HathiTrust case.  Grimmelmann is quoted as saying, "If the HathiTrust suit were to be decided tomorrow by the same court, it would be dismissed.”  (Grimmelmann has been a close follower of the Google books digitization program from its inception on his blog.)

From the article: 
But U.S. District Court Judge Consuelo B. Marshall found multiple problems with their arguments. Among the most important: She didn’t buy the plaintiffs’ claim that UCLA had waived its constitutional “sovereign immunity,” a principle that shields states—and state universities—from being sued without their consent in federal court. The judge also held that the association, which doesn’t own the copyrights at issue in the dispute, failed to establish its standing to bring the case.
The decision means “universities will have a little more breathing room for using media,” says James Grimmelmann, an associate professor at New York Law School.

Sunday, September 11, 2011

MediaWeek (Vol 4, No 36): Amazon Digital Library, Piracy, Newspaper Disruption, Private Blackboard + More

Jeff Trachtenberg of the WSJ reports that Amazon may be looking to launch a digital lending library for books.
It's unclear how much traction the proposal has, the people said. Several publishing executives said they aren't enthusiastic about the idea because they believe it could lower the value of books and because it could strain their relationships with other retailers that sell their books. Amazon didn't immediately respond to requests for comment Sunday. The proposal is another sign that retailers are looking for more ways to deliver content digitally, as customers increasingly read books and watch TV on computers, tablets and other electronic devices. Seattle-based Amazon makes the popular Kindle electronic reader and is expected to release a tablet to rival Apple Inc.'s iPad in coming weeks, people familiar with the gadget have said.
Spotting the Pirates, from The Economist; like most things, piracy isn't simple:
Media piracy is more common in the developing world than in the rich world (see chart). The most piratical countries are places like China, Nigeria and Russia, where virtually all media that is not downloaded illegally is sold in the form of knock-off CDs and DVDs. But there is also great variation among rich countries. Piracy is far more widespread in the Mediterranean than it is in northern Europe, including Britain. America may be the least piratical country of all—oddly, since Napster was born there.
...
The result is that big labels have pruned their Spanish operations. Universal Music has shed a third of its Spanish staff. Max Hole, who runs Universal’s businesses outside America, says the firm is “holding out” in Spain, but largely in the hope that it will discover an artist who appeals to Hispanics in the United States. Mr Kassler says EMI is spending five or six times as much in Germany, a low-piracy market where music sales are declining more gently—by 11% between 2006 and 2010.
Again from The Economist, the last essay in their recent special section on the newspaper industry (Econ):
In the past decade the internet has disrupted this model and enabled the social aspect of media to reassert itself. In many ways news is going back to its pre-industrial form, but supercharged by the internet. Camera-phones and social media such as blogs, Facebook and Twitter may seem entirely new, but they echo the ways in which people used to collect, share and exchange information in the past. “Social media is nothing new, it’s just more widespread now,” says Craig Newmark. He likens John Locke, Thomas Paine and Benjamin Franklin to modern bloggers. “By 2020 the media and political landscapes will be very different, because people who are accustomed to power will be complemented by social networks in different forms.” Julian Assange has said that WikiLeaks operates in the tradition of the radical pamphleteers of the English civil war who tried to “cast all the Mysteries and Secrets of Government” before the public.
News is also becoming more diverse as publishing tools become widely available, barriers to entry fall and new models become possible, as demonstrated by the astonishing rise of the Huffington Post, WikiLeaks and other newcomers in the past few years, not to mention millions of blogs. At the same time news is becoming more opinionated, polarised and partisan, as it used to be in the knockabout days of pamphleteering.
Not surprisingly, the conventional news organisations that grew up in the past 170 years are having a lot of trouble adjusting. The mass-media era now looks like a relatively brief and anomalous period that is coming to an end. But it was long enough for several generations of journalists to grow up within it, so the laws of the mass media came to be seen as the laws of media in general, says Jay Rosen. “And when you’ve built your whole career on that, it isn’t easy to say, ‘well, actually, that was just a phase’. That’s why a lot of us think that it’s only going to be generational change that’s going to solve this problem.” A new generation that has grown up with digital tools is already devising extraordinary new things to do with them, rather than simply using them to preserve the old models. Some existing media organisations will survive the transition; many will not.
From New York magazine, Boris Kachka on Why Debut Novels Are Big Business Again:
But a funny thing happened on the way to austerity: growth. Book-publishing revenue is up almost 6 percent since 2008, and the monster advances are back, too—seven for first novels in the past year. “They’re coming back in a big way,” says Eric Simonoff, an agent at William Morris who sold a debut for more than $1 million in February. It may be that overspending isn’t such a luxury after all but a publicity prerequisite, the only marketing gesture anyone believes anymore. HarperCollins’s Jonathan Burnham, who bid in the Harbach auction, often makes news by dropping seven figures on debuts, like Jonathan Littell’s The Kindly Ones, which posted disappointing sales, and S. J. Watson’s Before I Go to Sleep, which was more of a success. Burnham acknowledges the power of “measured overpaying”—“if it’s the right fit for the list, and it makes the right statement,” he says. “It creates a sort of sense of destiny, and in most cases, that’s a huge advantage. It becomes a source of gossip and excitement in the trade. Everyone’s twittering away about it—in the old-fashioned sense of twitter.”
Kachka refers to a Slate article entitled MFA vs NYC written by Chad Harbach (Slate).
The IHE-hosted Digital Tweed blog poses an interesting set of speculations regarding the consequences of the privatization of Blackboard. This clip is from the intro of a long post (DigitalTweed):
It’s been 20 days since the announcement that Providence Equity Partners would acquire Blackboard. The acquirer became the acquired in this transaction: Blackboard, which has spent more than a half a billion dollars over the past five years to buy a range of technology firms that focus on the education market (Angel Learning, Elluminate, iStrategy, NTI, Presidium, WebCT, and Wimba, among others) agreed to be purchased for approximately $1.64 billion.

Not surprisingly, the message from Blackboard – as reflected in a public letter from Blackboard CEO Michael Chasen, a blog post from Blackboard Learn president Ray Henderson, and an accompanying set of FAQs – is, in essence, that the sale to Providence will have little impact on the company’s relationship with clients. Chasen’s letter announced that he and the current management team “will remain in place and we do not anticipate any changes [in Blackboard’s] day-to-day operations.” Henderson’s blog provided some additional rationale for the financial deal, suggesting that private ownership frees management from the quarterly pressure to report continuously rising revenue and profits: “private equity now provides an alternative ownership model that’s more agile…firms like Providence include very long-term investors…willing to take longer-term perspectives.” Henderson also proclaimed that the new owners “share our vision of improving the education experience for our clients, and of the mission critical role [Blackboard’s] platforms play in teaching and learning today.”
Repost from The Chronicle of Higher Ed: An Era of Ideas.
To mark the 10th anniversary of the September 11 attacks, The Chronicle Review asked a group of influential thinkers to reflect on some of the themes that were raised by those events and to meditate on their meaning, then and now. The result is a portrait of the culture and ideas of a decade born in trauma, but also the beginning of a new century, with all its possibilities and problems.
From the twitter this week:

Google to Buy Zagat:

Elsevier's Scopus Data Drives Visualization of International Collaborative Research Networks for Max Planck MarketWatch

HathiTrust full-text index to be integrated into OCLC services [OCLC]:

Amazon’s Future Is So Much Bigger Than a Tablet | Epicenter | Wired.com

Tuesday, August 02, 2011

Beyond the Book Visits The GooglePlex

A new “Beyond the Book” podcast from Copyright Clearance Center, in which CCC’s Chris Kenneally gets a special tour of the Googleplex and speaks with Steven Levy, a senior writer at Wired and author of In the Plex, his book about Google. Levy began writing about Google in 1999, and for the book he spent two years inside the Googleplex. “It’s really hard to imagine a company that has a bigger effect on us in our daily lives than Google does.”

In the book, Levy had to refer to Google’s social network, Google+, by its codename “Emerald Sea” because it hadn’t been released yet. “… people at Google told me that they felt that this was the one epic fail…they’d done a lot of things right over the years…but one thing they didn’t do that would have been a smart thing to do would be to master the social networking aspect - to build people into their product, as they put it.”

Levy goes on to describe the salespeople in relation to the engineers. “Google had this odd relationship to its sales force… they [knew they] were necessary, but didn’t consider them, in a way, equal to the engineers…the engineers are in the center of it. But they see the ads section as also an engineering center. They think they’re making innovative products in engineering.”

Q: Right. Well, you know, of course, the company was founded, begun at Stanford, and many of the early employees came from that environment – that elite campus environment where, I guess, they work hard, they play hard – is that about what it’s like at Google?

A: Yeah, it is. There’s two strains, I think, in the culture of Google that are really important. And one of them, as you say, is the university idea there. And it is very much campus-like, not only in the amenities but in how they argue things. The job interviews, some people say, are almost like defending your dissertation. You’ll go on there. And there’s a lot of colloquy and back and forth, and they try to keep it based on data. The other strain in the culture comes from, I think, the fact that both founders – Larry and Sergey – were Montessori kids. So there’s a streak of irreverence that goes on there. They question authority. And it’s OK to ask any question of anyone – even the founders. And once a week they have a meeting – an all-hands meeting – where anyone can attend and ask any question that they want.

And of course Google has a system where people can submit things online and vote them up and down, but they can also ask questions directly. And sometimes the questions are quite pointed, straight at Larry and Sergey, and they don’t take it personally. They’ll try to answer the questions very straightforwardly, and no one really worries about whether you’re offending Larry or Sergey when you ask a pointed question. I remember I was at a meeting once – at a TGIF meeting – where someone asked, why does our CFO get so much money? Why do we have to spend so much money to get a chief financial officer? And Sergey thought for a minute, and he said, well, you know, basically we looked at it and we felt that this was the going rate for CFOs, and we wouldn’t get a good one otherwise. And the other person sort of stood down and said, OK, that makes sense and, you know, that’s a good answer there. And they go on to the next thing.

The podcast is available here.

Wednesday, March 23, 2011

Taking it on the Chin

Maybe Google tried too hard to stir the ocean with a swizzle stick. Way back in time, Google believed that indexing printed content they had scanned represented a fair use of that content, compatible with the method they used to index the web pages upon which they (and many other companies) had built search businesses. Judge Chin’s ruling that the court would not approve the Google book settlement is a setback for Google and the AAP/AG, but it is not the end, and Google still hasn’t “lost” in the sense that their original claim has yet to be adjudicated.

Who knows where this case will go next? One possible scenario is that the parties agree a new settlement that removes the problematic issues Chin identified, but the core issues related to the extent of fair use and the ‘orphan’ works problem are likely to remain unaddressed, both by this case and by anyone else. Congress is not going to provide a solution, and in the unlikely event that it did address these issues, the outcome would probably be as detrimental to the public interest as its prior work on copyright in recent decades. Remember also that the ‘orphan’ problem is not restricted to books but extends to pretty much all media, from watercolors to video. It is a complicated and expansive problem: a veritable ocean of competing and conflicting interests that is unlikely to be concluded anytime soon. One result I thought possible, had the settlement been approved, was that Congress might have been more likely to act, with the settlement establishing a framework Congress could use in enacting a law.

Google has moved the copyright peanut forward. Most commentators interviewed prior to 2005 would not have known or understood what an ‘orphan’ was in this context, and many of us now at least conceptually understand what or who they represent. With so many people now educated about the issue, perhaps the ongoing conversations can serve to move the industry(ies) in a positive direction. Unfortunately, we still don’t know the size of the potential problem in books – maybe it really doesn’t matter – and while my analysis was widely discussed and cited, no one really disputed the conclusions, which either means I was right on (unlikely) or no one else could be bothered to draw different conclusions.

Over the past three years, the Book Rights Registry (run by friend of the blog Michael Healy) has been quietly building a repository of data on claimants. As many predicted, the number of claimants that have come forward is small in comparison with the number of titles that have been scanned, and the database may not shed any greater light on the orphan problem than we already have: orphans either don’t exist, don’t care or don’t know. Whether we will ever see any data out of this collection effort isn’t clear, but if it were provided, the information might represent another opportunity to educate us about the ‘orphan’ community. More data would help us understand the issue and work toward a possible solution. The BRR could still be an effective clearing house for copyright claims – and few would dispute we need one – but who would pay?

Setting aside that the settlement was by no means perfect and represented an agreement between parties of ‘questionable status’ appeasing ‘bad behavior’, if your position in this fight was that the settlement made a vast collection of content available for the greater good, then I don’t think you’ve ‘lost’ just yet. What happens next, and particularly what happens to all the scanned archives at libraries across the US, will be broadly discussed in the coming weeks. Some resolution will be announced in the short term: as this decision took longer and longer to arrive, the parties must have been thinking about alternative scenarios. One significant issue, however, is that two of the main protagonists in this agreement – Richard Sarnoff from Random House and Dan Clancy of Google – have moved on to other things. How that will impact the proceedings from here is hard to determine, although it might not be negative.

Meanwhile, Mr. Healy and the Book Rights Registry will probably continue to wait for further resolution, though how professionally satisfying this will be for him and his team is a fair question. By now the BRR was, without doubt, supposed to be operational – adjudicating, licensing and effecting use of a vast archive of human knowledge. Instead it will be hurry up and wait again at the BRR and at libraries across the world.

Friday, July 16, 2010

Silos of Curation - Repost

Originally posted on April 29, 2009.


Publishers curate content, but they don’t really do it well. And that’s a shame, because curation will be a skill in high demand as attention spans waver, choices proliferate and quality is diluted by a preponderance of spurious material. That’s why companies such as LibraryThing, Goodreads, Shelfari, weread, BookArmy and others like them may be positioned to leverage communities of interest that mimic the ‘siloed’ offerings that larger and more mature publishing companies have successfully been offering their customers in law, tax and education for many years.

Trade ebook sales are only 1% of total revenues and, while they are growing rapidly, they may take as long as five years to reach 5%. In other publishing segments, this growth curve would be viewed as a failure, since information and education publishers actively manipulate the market to their advantage to force faster adoption of e-content. In pushing adoption of ebooks, trade publishers lack many of the advantages that information and education publishers enjoy. Generally speaking, information content is more transactional: looking up a reference, seeking a specific citation, definition or article. Educational content is modular, with material created for specific purposes that can also be “rebuilt” for other purposes. In both cases, the producers of these products exert some control over their delivery and have established delivery platforms in their markets. (I have discussed this platform approach here.)

There are several reasons information and educational publishers are able to pull off the platform approach. One is that they have successfully aggregated content around discrete segments: law, tax, financial, higher ed, k-12, etc. This, in turn, has enabled them to clearly identify their markets and build solutions that match their customers’ needs precisely. That is not the case in trade publishing.

Trade publishing can be anarchic: it is not uncommon for a publisher of primarily gardening and lifestyle books to publish three or four mysteries as well. While aggregation into silos of content – becoming the Science Fiction publisher or the Christian publisher (as West and Lexis became the legal content silos, for example) – is possible, it may not be likely. While this strategy may appear logical, I don’t believe there is the sense of purpose within trade that there is in information and education. And as publishers continue to be preoccupied with their own corporate brands, the desire to focus on content silos becomes less apparent. Since content is the foundation of the ‘platform’ approach evident in the other segments, a similar strategy will not apply in trade.

Something similar to the platform approach may take shape in a different way, with intermediaries playing the role of curator. This is an approach that companies such as Publishers Weekly or The New York Review of Books might have adopted had they been more prescient. The capability to guide consumers to the best books, stories and professional content within a specific segment (without regard to publisher or commerce) may come to define publishing in the years to 2020. (See Monday’s post.) Expert curation can simplify the selection process for consumers, aggregate interest around topics and build homogeneous markets for commerce. As an added benefit to these intermediaries’ customers, publishers will choose to focus intensely on each segment and offer specialized value-adds particular to that segment. As content provision expands – witness the delivery of all the books in the Google Book project – readers will become increasingly confused and will look for help. It seems inevitable that intermediaries between publisher and e-commerce will meet that need.

BookArmy was launched by HarperCollins earlier this year, while the other companies have been around for a while. BookArmy has taken a (thus far) universal approach and lists all books rather than only those published by HarperCollins. BookArmy may or may not be successful – it is intensely difficult to launch a site like LibraryThing or Goodreads that grips the imagination and passion of its audience – but that is what makes these sites ideal incubators for new thinking and new approaches to publishing. In their current state, most (if not all) of the curation is provided by the community and tends to be focused post-publication. That said, it would not be difficult to see a new ‘layer’ of curators emerging who could provide direction and recommendations to readers on forthcoming titles. These curators could also manage their subject “silo” to help readers better understand and explore it without regard to publisher. Experiments in this area have started: LibraryThing, for example, is working with publishers to provide a reviews program for forthcoming titles.

Readers don’t really care how many books are published in a year, but they do care about knowing which titles they should read based on their interests. Increasingly, help will be on the way, but it is most likely to be presented as publisher-agnostic and curated around logical subject classifications.


See also Brand Presence - some earlier thoughts on publisher branding.

Thursday, April 22, 2010

A Database of Riches: The Business Model Behind The Google Book Settlement

Late last year, I wrote an analysis of the possible size of the orphan works issue that was core to the initial criticism of the Google Book Settlement. That analysis was a subset of a wider analysis of the business opportunity that the Google Book Settlement represented once the database was cleared for sale.

Following is that further analysis. In this report, I have estimated the market opportunity the Google Book database could represent, organized around which customers are likely to purchase the product, how much and to what degree they will purchase, and how Google might go about selling and marketing the product. I have excerpted the management summary section below; the full report is also available below.

Anyone interested in discussing this report in more detail is encouraged to set up a conference call or meeting with me; I can review my methodology in more detail and explore the options available to Google as it rolls out this product.

Introduction:

Almost five years ago, Google embarked on the most ambitious library development project ever conceived: to create a “Noah’s Ark” of every book ever published, starting by digitizing books held by a rarefied group of five major academic libraries. The immediate response from US publishers was muted until the implications of the project became clear: Google proposed no boundaries to the digitization effort and initiated the scanning of books both in and out of copyright and in and out of print. Adding to publishers’ concerns, Google planned to display “snippets” (small selections) of each book’s content in search results. Despite some hurried conversations among publishers, author groups and Google, Google remained convinced that what it was doing represented a social ‘good’ and that the partial display of the scanned books was legally within the boundaries of fair use.

From the publisher perspective, this was a make-or-break moment, and the implications were most acutely felt by trade publishers, who saw the potential for their business models to be obliterated by easy and ready access to high-quality content via a Google search over which they would exert little or no control. Even worse was the fear that rampant piracy of content would develop – a debated and contentious point – given easy access to a digitized version of a work that could be e-mailed or printed at will. The publishers determined that if Google were to ‘get away with it’ unchallenged, then anyone would be able to digitize publisher content and possibly replicate what had been going on in the music and motion picture industries for almost ten years. In mid-2005, prompted by a lawsuit filed by The Authors Guild, the Association of American Publishers (AAP), led by four primary publishers, filed suit against Google in an effort to halt the scanning of in-copyright materials. (The Authors Guild and AAP ultimately combined their filings.)

The initial Google Book Settlement (GBS) agreement, given preliminary approval by a court in October 2008, generated a vast amount of argument both in support of the agreement and in challenge to it. A revised agreement was drafted after the Federal District Court for the Southern District of New York and Judge Chin agreed to delay the adjudication, and final arguments were heard in late February 2010. To date, Judge Chin has given neither a timetable nor an indication of when and how he will decide the case.

From the perspective of the early leading library participants, Google’s arrival and promise to digitize their purposefully conserved print collections looked like a miracle. Faced with forced declines in the dollars spent on monographs and the ever-rising expense of maintaining over 100 years of print archives, libraries saw in the Google digitization program a possible solution to many problems. All libraries believe they hold a social covenant to collect, maintain and preserve the materials most relevant to their communities, but maintaining that covenant becomes a challenge in an environment of increasing expenses and the parallel challenge of migrating to an on-line world(1).

The library world is typically segmented into public and academic institutions, and while these often varied ‘communities’ may differ in their philosophy towards, for example, collection development or preservation, they do share some common practices. Most importantly, all libraries are committed to resource sharing, and while materials use has historically been primarily ‘local’ to the library, every institution wants to make its collections available to virtually any patron or institution that requests them. In short, these library collections were always ‘accessible’ to all regardless of geography or copyright: first US Mail, then FedEx, e-mail and the Internet progressively made this sharing easier, but until Google arrived with its digitization program, any sharing beyond the local institution was via physical distribution(2). In effect, it could be argued that the Google scanning program simply makes an existing practice vastly more efficient.

Even as approval of the Google Book Settlement (GBS) hangs in the balance under review by Judge Chin of the Federal District Court for the Southern District of New York, an Executive Director has been named to head the Book Rights Registry (BRR)(3) and is preparing the groundwork to establish the organization in advance of approval. This report represents an attempt to analyze the size of the market opportunity for Google as it seeks to exploit the Google Book Settlement.

Following are our summary findings, which are discussed in more detail in the ensuing pages of this report.

Summary Findings of the Report:

  • Libraries will see tremendous advantages – both immediate and over time – from the GBS, although concerns have been voiced (notably from Robert Darnton of Harvard)(4)
  • Google’s annual subscription revenue for licensing to libraries could approach $260mm by year three of launch (a purely illustrative decomposition of a number at this scale follows the footnotes below)
  • Over time, publishers (and content owners) will recognize the GBS service as an effective way to reach the library community and are likely to add titles to the service(5)
  • Google will add services and may open the platform for other application providers to enhance and broaden the user experience
  • The manner in which the GBS deals with orphan works will provide a roadmap for other communities of ‘orphans’ in photography, arts, and similar content and intellectual property
[1] It is important to acknowledge that, initially, the GBS may have been seen as a solution to libraries’ conservation and preservation needs; however, subsequently, libraries have determined that they need to develop their own preservation options in which The Hathi Trust is a clear leader.
[2] Resource sharing and improvements in the ‘logistics’ provided by OCLC (WorldCat) or via consortia such as OhioLink has made physical distribution effective and comparatively efficient.
[3] The BRR is the management body tasked with administering the GBS and representing the interests of authors and publishers once approval has been granted by the court.
[4] Robert Darnton, NY Review of Books
[5] The settlement doesn’t provide for adding content prior to 1/5/09; however, we are suggesting that, by mutual consent, additional published content may be added as an expedient method of reaching the library market.
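

To give a sense of how a headline number like the $260mm library-subscription estimate can be decomposed, here is a back-of-envelope sketch in Python. Every figure in it – the institution tiers, counts, adoption rates and prices – is a hypothetical placeholder chosen only to illustrate the shape of such a calculation; these are not the inputs or the methodology of the report itself.

```python
# Back-of-envelope sizing of an institutional subscription market.
# Every tier, count, adoption rate and price below is a HYPOTHETICAL
# placeholder for illustration -- none of these are the report's inputs.

# (tier, number of institutions, assumed year-3 adoption, annual price in $)
hypothetical_tiers = [
    ("Large research universities",     300, 0.90, 200_000),
    ("Other colleges & universities", 3_500, 0.60,  50_000),
    ("Large public library systems",  1_500, 0.50,  80_000),
    ("Other public libraries",        9_000, 0.30,  15_000),
]

total = 0.0
for tier, count, adoption, price in hypothetical_tiers:
    revenue = count * adoption * price  # subscribing institutions x annual price
    total += revenue
    print(f"{tier:32s} ${revenue / 1e6:6.1f}mm")

print(f"{'Illustrative year-3 total':32s} ${total / 1e6:6.1f}mm")
```

Re-weighting any of these placeholder assumptions obviously moves the total; the point is only that a year-three figure in the $260mm range implies adoption by a few thousand institutions at price points comparable to existing academic database subscriptions.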



Monday, November 23, 2009

CCC Podcast on Google Book Settlement Revision

Copyright law expert Lois Wasoff discusses the most important changes and revisions to the Google Book Settlement in a podcast (here). Some of her main points include:
  • the underlying structure of the agreement and many of the economic terms of the agreement have not changed
  • the revised proposal makes it more difficult for Google to simply decide a work is not commercially available and start to use it for display uses
  • procedurally, the parties have taken a step back by asking the court for preliminary approval of the settlement and of the class – something they had already obtained for the prior version a year ago. Coupled with that request, however, is a proposal for a fairly aggressive timeline to keep the agreement review and approval process moving
A complete transcript can also be seen here.

Monday, November 02, 2009

Revisiting Scan this Book

I was clearing out my file drawer this weekend – ‘back in the day’ I actually used to save print articles of interest – and I came across the Kevin Kelly article about the Google Book scanning project, which he wrote in May 2006. It is still an interesting read, as he concludes (NYT):

Search opens up creations. It promotes the civic nature of publishing. Having searchable works is good for culture. It is so good, in fact, that we can now state a new covenant: Copyrights must be counterbalanced by copyduties. In exchange for public protection of a work's copies (what we call copyright), a creator has an obligation to allow that work to be searched. No search, no copyright. As a song, movie, novel or poem is searched, the potential connections it radiates seep into society in a much deeper way than the simple publication of a duplicated copy ever could.

We see this effect most clearly in science. Science is on a long-term campaign to bring all knowledge in the world into one vast, interconnected, footnoted, peer-reviewed web of facts. Independent facts, even those that make sense in their own world, are of little value to science. (The pseudo- and parasciences are nothing less, in fact, than small pools of knowledge that are not connected to the large network of science.) In this way, every new observation or bit of data brought into the web of science enhances the value of all other data points. In science, there is a natural duty to make what is known searchable. No one argues that scientists should be paid when someone finds or duplicates their results. Instead, we have devised other ways to compensate them for their vital work. They are rewarded for the degree that their work is cited, shared, linked and connected in their publications, which they do not own. They are financed with extremely short-term (20-year) patent monopolies for their ideas, short enough to truly inspire them to invent more, sooner. To a large degree, they make their living by giving away copies of their intellectual property in one fashion or another.

Sunday, October 18, 2009

MediaWeek (Vol 2, No 42): GBS Frankfurt Panel, Libreka, FTC

On the waning Friday of the Frankfurt book fair there was a contentious, apparently somewhat 'anti-Google' discussion of the Google Book Settlement, as reported by Richard Nash on the fair's blog:

The impact of the Google Book Settlement, in whatever form it might eventually take, promised to be one of the most controversial panels at this year’s Fair and the participants, especially Prof. Roland Reuss, author of the Heidelberg Appeal, a vehement critique of the Google scanning project, did not disappoint. He denounced as “garbage of hysterical propaganda” the claims by Google that they were enhancing access, maintain that “if you want to finance production, you have to shelter the ones who produce,” not those that consume, and that moreover any student who is completely dependent on the Internet for “must be stupid.” .... Reuss was largely unmoved. “It has always been possible for scholars to get the information,” he said, “since the 5th century.” He believes that the focus on access is inappropriate, “fetishistic,” and that the true issue with scholarship is to produce, not to access.

Reuss' comments seemed directed as much against the internet as against the issue of copyright; nevertheless, there appeared to be some in the audience who applauded his commentary. The panel discussion sits neatly as a bookend to Chancellor Merkel's pre-Frankfurt oration on the perils of the Google Book Settlement and the institution of German copyright. Curiously, not a subject I would have expected a head of state to draw attention to, but then perhaps the subject was thematic with respect to the opening of the fair. Richard noted the German bookseller- and publisher-supported site Libreka, which was launched three (possibly four) years ago (PND) to great fanfare and has managed to amass 120,000 books available for full-text search. Libreka was created to provide a platform for German published full-text content and continues to announce content and publisher deals. Throughout the significant discussion of Merkel's comments – were they valid, were they informed, for example – no one mentioned Libreka, which speaks to its irrelevance and lack of traction. A review of Libreka's web traffic seems to support that last point. The Börsenverein is both the operator of the Frankfurt book fair and the 'publisher' of Libreka, and perhaps this relationship suggests a more practical motivation for Merkel's copyright comments.

The Interactive Advertising Bureau has asked the FTC to rescind its recent statement on blogger disclosure, saying (Reuters),
"What concerns us the most in these revisions is that the Internet, the cheapest, most widely accessible communications medium ever invented, would have less freedom than other media," said Mr. Rothenberg, "These revisions are punitive to the online world and unfairly distinguish between the same speech, based on the medium in which it is delivered. The practices have long been afforded strong First Amendment protections in traditional media outlets, but the Commission is saying that the same speech deserves fewer Constitutional protections online. I urge the Commission to retract the current set of Guides and to commence a fair and open process in order to develop a roadmap by which responsible online actors can engage with consumers and continue to provide the invaluable content and services that have so transformed people`s lives."
Google launched – or re-launched – its on-line bookstore, which will initially contain 500,000 titles. Some commentators have gone so far as to suggest – absurdly – that Amazon.com is smoke (Guardian):

Editions is set to launch in the first half of 2010, potentially giving readers in America and Europe access to around half a million titles including best-sellers and back catalogue books. Crucially, the store will be compatible with a number of devices - including mobile phones, computers and ebook readers - that could allow it to market services to millions of people worldwide.

Under Google's plans, readers will be able to download texts straight from Google Books website, or from the websites of book retailers or directly from publishers who choose to work with the Silicon Valley company. Executives said they are targeting partnerships with major retailers such as WH Smith and Blackwell - many of which already have existing partnerships with the site.

Tuesday, October 06, 2009

BISG Webcast: ONIX 3.0 & Metadata for E-Books

In April 2009, EDItEUR announced the release of a major new version of the ONIX for Books standard: ONIX 3.0. This release of ONIX is the first since 2001 that is not backwards-compatible with its predecessors and, more importantly, provides a means for improved handling of digital products. (A rough sketch of what a minimal ONIX 3.0 record looks like appears at the end of this post.)

During this FREE 60-minute BISG Webcast, David Martin from EDItEUR's ONIX Support Team and Brian Green, Executive Director of the International ISBN Agency, will focus on how ONIX 3.0 provides new support for digital publishing, along with requirements for identifying ebooks in our industry's complex new supply chain. Along the way they will answer four key questions about ONIX 3.0:
  • How does ONIX 3.0 provide new support for digital publishing?
  • What are the requirements for the standard identification of ebooks in the complex new supply chain?
  • What are other important benefits of ONIX 3.0?
  • How should publishers and other ONIX users respond to the new release?
Wednesday, October 07, 2009

11:00 AM to 12:00 PM



BISG is pleased to present this FREE 60-minute Webcast in partnership with the International Digital Publishing Forum.
Register today!
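
As promised above, here is a rough, illustrative sketch – in Python, using only the standard library – of what a minimal ONIX 3.0 product record for an ebook looks like. The element names and code-list values follow the published ONIX 3.0 reference tags as I understand them (ProductIDType 15 is ISBN-13, ProductForm ED is a digital download, ProductFormDetail E101 is EPUB), but the ISBN and title are invented placeholders, and a real message would also carry a Header composite, the ONIX namespace and many more descriptive composites.

```python
# Illustrative sketch of a minimal ONIX 3.0 <Product> record for an ebook.
# Element names follow the ONIX 3.0 reference-tag scheme; the ISBN and
# title below are placeholders, not real data.
import xml.etree.ElementTree as ET

def build_minimal_product(isbn13: str, title: str) -> ET.Element:
    """Assemble a bare-bones ONIX 3.0 product record for a digital product."""
    product = ET.Element("Product")
    ET.SubElement(product, "RecordReference").text = isbn13
    ET.SubElement(product, "NotificationType").text = "03"   # confirmed on publication

    ident = ET.SubElement(product, "ProductIdentifier")
    ET.SubElement(ident, "ProductIDType").text = "15"        # ISBN-13
    ET.SubElement(ident, "IDValue").text = isbn13

    # DescriptiveDetail is one of the new "block" groupings in 3.0: the
    # product's form travels alongside its bibliographic description.
    detail = ET.SubElement(product, "DescriptiveDetail")
    ET.SubElement(detail, "ProductComposition").text = "00"  # single-item product
    ET.SubElement(detail, "ProductForm").text = "ED"         # digital download
    ET.SubElement(detail, "ProductFormDetail").text = "E101" # EPUB

    title_detail = ET.SubElement(detail, "TitleDetail")
    ET.SubElement(title_detail, "TitleType").text = "01"     # distinctive title
    element = ET.SubElement(title_detail, "TitleElement")
    ET.SubElement(element, "TitleElementLevel").text = "01"  # product level
    ET.SubElement(element, "TitleText").text = title
    return product

message = ET.Element("ONIXMessage", {"release": "3.0"})
message.append(build_minimal_product("9780000000000", "A Placeholder Title"))
print(ET.tostring(message, encoding="unicode"))
```

Even this toy record shows one of 3.0's structural changes: form and description are grouped into a single DescriptiveDetail block rather than scattered across the record, which is part of what makes the new release better suited to describing ebooks than its predecessors.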