
Tuesday, January 30, 2024

Time for A Publisher ID?


 

In the 1870s R.R. Bowker began publishing The American Catalog, which collected publisher titles into one compendium. The first edition was surprisingly large, but its most useful aspect was that it organized publishers' books into a usable format. The concept was not sophisticated: the Bowker team gathered publisher catalogs, then bound and reprinted them so that they were more or less uniform. In subsequent years, Books In Print grew into three primary components: the Subject Guide, the Author Guide and the Publisher Index (or PID). Each was a distinct part, but it was the PID which held everything together.

When a user found a title in the author or subject index they would also be referred to the PID to find specific information about the publisher, including (most obviously) how to order the book. At some point, Bowker began applying an alphanumeric “Bowker ID” to publisher names so that the database could be organized around the publisher information.

In the late 1970s and early 1980s, the ISBN was introduced to the US retail market, and Bowker was (and still is) the only agency able to assign ISBNs in the US. Included in the ISBN syntax is a “publisher” prefix, so that a block of numbers can be assigned specifically to one publisher. The idea, while good in concept, did not work well in practice. For example, in an effort to encourage adoption of ISBNs, the agencies assigned some large publishers a short two-digit publisher prefix, which resulted in a very large block of individual ISBNs (seven digits plus the check digit). Even after 50 years, many of these blocks are only partially used (and wasted) because the publishers' output was far less than anticipated. A second problem was that publishers, imprints and lists were bought and sold, which made a mess of the whole idea. (In the above image the prefix is four digits.)
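The block arithmetic above can be made concrete. This is a small Python sketch (not Bowker code, just an illustration of the public ISBN-13 algorithm): after the "978" prefix, the registration group and the check digit, the remaining digits are split between the publisher (registrant) element and the title element, so the shorter the publisher prefix, the larger the block of individual numbers that publisher controls.

```python
def isbn13_check_digit(first12: str) -> str:
    """Compute the ISBN-13 check digit: digits are weighted
    alternately 1 and 3, and the check digit brings the total
    to a multiple of 10."""
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

# The well-known example ISBN 978-0-306-40615-7:
#   978   = EAN "Bookland" prefix
#   0     = registration group (English language)
#   306   = registrant (publisher) element -- 3 digits here
#   40615 = title element -- 5 digits, i.e. up to 100,000 titles
#   7     = check digit
print(isbn13_check_digit("978030640615"))  # -> "7"

# A two-digit registrant in the same group would leave six digits
# for titles: a block of 1,000,000 numbers, which is how large,
# mostly unused blocks came about.
```

The example ISBN and its element breakdown are the standard illustration from the ISBN documentation; the trade-off shown in the comments is exactly the prefix-length problem described above.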

At Bowker, we recognized that our Publisher Information Database was a crown jewel and a key component of our Books In Print database. Despite many requests we never licensed this data separately and this was a significant reason retailers such as Barnes & Noble, Borders, Follett and others licensed Books In Print. Because the information was so important, we spent a lot of time maintaining the accuracy and the structure of the data.

Publishers who acquired ISBNs from the Bowker agency were a key input to this database – beginning in the 1980s and continuing to the present. Not all new ISBNs go to small independent publishers, and there remains consistent demand from established publishers for new numbers even today. To be useful, this publisher information needs to be structured and organized accurately, which is only possible with the continued application of good practice. During my time at Bowker, the editorial team met regularly with publishers both to improve the timeliness and accuracy of their book metadata and to confirm their corporate structure. We wanted to ensure that all individual ISBNs rolled up to the correct imprint, business unit and corporate owner. This effort was continuous, frequently detailed and time-consuming, and sometimes engaged the corporation's office of general counsel.

A few years after I left Bowker, one of my consulting clients presented me with a proof of concept to programmatically create a publisher ID database. In concept it looked possible to do; however, I pointed out all the reasons why this would be difficult to complete and then to maintain. They went ahead anyway but, after a year or so, abandoned the work because they could not accurately disambiguate publisher information nor confirm corporate reporting structures.

Today there is no industry-wide standard publisher ID code, but the idea comes up frequently as one the industry should pursue. As with many new standards efforts, it is the roll-out and adoption of the standard which will prove difficult. Seeding the effort with data that might already be available, or available for license, could represent a promising start.

Bowker (like all global ISBN agencies) is required to publish all new publisher prefixes each year, and this information could also be a useful starting point. Bowker is not the only aggregator with publisher data (we were just the best by a significant margin), and another supply-chain partner might be willing to contribute their publisher data as a starting point. This could establish a solid foundation to build on, but realistically any effort will fail if the maintenance burden is not understood and recognized, and if a strong market imperative isn't widely agreed and supported.

When the (I)SBN was launched in the UK in the late 1960s it succeeded because the largest retailer (W.H. Smith) enforced the strong business case for its adoption. Globally, the ISBN has gone on to become one of the most successful supply-chain initiatives in (retail) history, and the entire industry is dependent on this standard. (It has even survived Amazon's cynical ASIN.) If there is a business case for the publisher ID it needs to be powerful, obvious and confer universal benefits: mutual interest and money can be powerful motivators, but having a policeman like W.H. Smith would help as well.


More:

The ISBN is Dead

ChatGPT "thoughts" on the history of identifiers.

Note: I ran R.R. Bowker for a while and was also Chairman of ISBN International.

Sunday, March 03, 2013

MediaWeek (Vol 6, No 9): ISBNs, Books & Commuting, Course Guides, Music Money + More

The Economist readers in the group may have seen that the lowly ISBN made it into the newspaper this week. It wasn't a particularly good article and I said so. (Economist):
This is a curious article: In some cases, it misses the point and, in others, it misinforms the reader about how the publishing industry currently works.

There is no doubt that the ISBN--as a global standard for the identification of physical product--is facing, or will soon face, a challenge as physical books become eBooks, but its irrelevance is still a fair distance off. A mix of formats (electronic and paper) is likely to exist for many years (particularly given the variability in eBook adoption across markets around the world), and use of the ISBN is long and deeply embedded in all significant publishing systems, from editorial to marketing to royalty accounting.

Further, it is hard to agree with your statement that the ISBN hampers small publishers when the past ten years have seen the most significant growth in small- and medium-sized publishers in history. Both Bowker and Nielsen report these numbers each year for the US and UK markets. One circumstance you allude to is that in 'olden times'--when we had more than two significant bookstore chains (in the US)--there was no question as to whether to obtain an ISBN; however, a publisher today could make a perfectly valid decision not to acquire an ISBN and simply sell their book or eBook through Amazon . . . and they could do okay with that. But why would any publisher with a book offering legitimate sales potential want to exclude all other retailers? That would be hard to understand.

Assigning an ISBN to a book never guaranteed 'mainstream' publication - it's not clear what you mean by that. Certainly, retailers would not (do not) accept a book without an ISBN but, by the same token, B&N won't accept your book simply because it has an ISBN. There's a little bit more to it than that. I wrote about the prospects for the ISBN back in 2009 and reflected on the ASIN situation. It's not new and it was never altruistic. Here it is, if interested: http://personanondata.blogspot.com/2009/08/isbn-is-dead.html

The other identifiers you note are interesting but don't really apply or fit with the requirements of the book (e- or p-) supply chain. There's no question the industry needs to think differently about identifiers, but I don't think that's a point you end up making. Even if a book can be easily downloaded and paid for, someone still has to do the accounting and make sure the right publisher gets the right payment so they can then pay the author and contributors their share. Individuals and small publishers could possibly do without an ISBN but, in doing so, they may only be limiting their opportunities.
Commuting on the Underground: John Lanchester rides the London Underground (Guardian):
This is an academic finding that hasn't crossed over into the wider world. I've never seen a film or television programme about the importance of commuting in Londoners' lives; if it comes to that, I've never read a novel that captures it either. The centrality of London's underground to Londoners – the fact that it made the city historically, and makes it what it is today, and is woven in a detailed way into the lives of most of its citizens on a daily basis – is strangely underrepresented in fiction about the city, and especially in drama. More than 1bn underground journeys take place every year – 1.1bn in 2011, and 2012 will certainly post a larger number still. That's an average of nearly 3m journeys every day. At its busiest, there are about 600,000 people on the network simultaneously, which means that, if the network at rush hour were a city in itself, rather than an entity inside London, it would have the same population as Glasgow, the fourth biggest city in the UK. The District line alone carries about 600,000 people every day, which means that it, too, is a version of Glasgow. 
There are quite a few novels and films and TV programmes about Glasgow. Where are the equivalent fictions about the underground? New York has any number of films about its subway – The Warriors, the John Carpenter movie from 1979, is one of the best of them, and explicitly celebrates the network's geographical reach across the whole city, from Van Cortlandt Park in the Bronx to Coney Island. New York also has Joseph Sargent's The Taking of Pelham 123, an all-subway-located thriller, among many other cinematic depictions. Paris has the Luc Besson film Subway, and plenty of other movies. London has next to nothing. (Let's gloss over the Gwyneth Paltrow vehicle Sliding Doors – though not before noting that the crucial moment when she either does or doesn't catch the train is on the District line, at Fulham Broadway. Spoiler alert: in the version in which she rushes and successfully catches the train, she dies. A District line driver would call this a useful reminder that this isn't the national rail network, and there will be another one along in a minute.) There's a wonderfully bad Donald Pleasence movie from 1972 called Death Line, set entirely in Russell Square underground station; there were some episodes of Doctor Who in the 60s, which seemed scary at the time, about the tube network being taken over by robot yetis. To a remarkable extent, though, that's it. London is at the centre of innumerable works of fiction and drama and TV and cinema, but this thing at the heart of London life, which does more to create the texture of London life than any other single institution, is largely and mysteriously absent.
American Public University and their course guides is an interesting project (CampusTech)
The online course guides project is an award-winning academic technology initiative to match every one of APUS's online courses with an online library course guide, a new approach to offset the high cost of traditional print text books. Now that the project has successfully completed guides for a little over half of the university's course offerings, further practical metrics may be applied to the initial statistical analytic framework to widen the project's focus from course guide completion rates to higher levels of quality assurance and sustainability.
Analysis on data reported on the music industry indicates that some music artists can make money (Atlantic):
Last month, Northwestern University law professor Peter DiCola released the results of a fascinating survey that tried to discern exactly how much income most working musicians make off of people actually paying for their recordings (or in some cases, their compositions). His very broad answer was between 12 and 22 percent, depending on whether you counted pay from session playing (shown as "mixed" below). If that doesn't sound like real money to you, consider how you'd react if your boss suddenly said you were getting a 10 percent pay cut tomorrow.

DiCola's study isn't perfect. It analyzes answers from roughly 5,300 musicians who volunteered for the survey, meaning it lacked the element of random sampling that most social science work strives for. The participants were overwhelmingly white (88 percent), male (70 percent), and old (the largest demographic was 50-to-59-year-olds). Almost 35 percent were classical musicians, and another 16 percent were jazz artists. In short, this isn't going to offer a crystal clear financial portrait of your up-and-coming Pitchfork darling.

Nonetheless, the results do offer insight into how workaday guitarists, saxophonists, singers, songwriters, and timpani players -- 42 percent of the group earned all of their income from music-related work -- earn a living. And music sales (or streams) are usually a small but by no means insignificant piece of the picture.
Do we own our eBooks? Covering old ground at Salon:
Switching devices presents another headache for readers. Late last year, independent booksellers made a deal with Kobo, an e-book retailer that also sells its own e-reader devices. The indies now sell both the devices and Kobo e-books. People who want to support their local independent bookstore might contemplate switching from the Kindle to the Kobo, but if they do they’ll have to leave their (DRM-protected) Kindle books behind on their old device. If you are an early e-book adopter who wants to keep and reread the books you bought for your Kindle, you’re locked into the Kindle platform.

Tablets like the iPad are slightly different. The tablet’s owner can install numerous proprietary apps to read a variety of e-book formats, but the titles have to stay in their own walled gardens. You can’t move your Kindle books into your iBook library, for example. This is a minor annoyance, but annoying all the same! When I got my first iPad, I mostly bought Kindle e-books because Amazon’s app was more versatile. Since then, iBooks has outstripped the Kindle app, especially when it comes to working with books used for research, and I would much rather read and organize all my e-books in iBooks. I can’t. Given such restrictions, it’s debatable whether or not I truly own them.
From my twitter feed this week:


PressBooks Goes Open Source To Let Authors Create Book Sites In Seconds
Not the Same Old Cup of British Tea Watch. 
RR Donnelley results hit by $1bn impairment charge
OCLC and ProQuest Collaborate to Enhance Library Discovery.  
What the Library of Congress Plans to Do With All Your Tweets  

In sports:
Lancashire County Cricket sign path-breaking 10-year deal PND Senior in the news - Congrats & Great News!
 




Wednesday, June 27, 2012

Making Your Metadata Better: AAUP Panel Presentation - M is the New P

(Video of this presentation here)

The last time I was asked to speak at an AAUP meeting was in Denver in 1999 and naturally the topic was metadata. As I told the audience last week in Chicago, I don’t know what I said at that meeting but I was never asked back!  I am fairly confident most of what I did say in Denver still has relevance today, and as I thought about what I was going to say this time, it was the length of time since my last presentation that prompted me to introduce the topic from an historical perspective.

When the ISBN was established in the early 1970s, the disconnect between book metadata and the ISBN was embedded into business practice. As a result, several businesses like Books In Print were successful because they aggregated publisher information, added some of their own expertise and married all this information with the ISBN identifier. These businesses were never particularly efficient but things only became problematic when three big, interrelated market changes occurred. First, the launch of Amazon.com caused book metadata to be viewed as a commodity; second, Amazon (and the internet generally) enabled a none-too-flattering view of our industry's metadata; and last, the sheer explosion of data supporting the publishing business required many companies (including the company I was running at the time, RR Bowker) to radically change how they managed product metadata.

The ONIX standard initiative was the single most important program implemented to improve metadata, and it provided a metadata framework for publishing companies. As a standard, ONIX has been very successful, but its advent has not changed the fact that metadata problems continue to reside with the data owners.
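For readers unfamiliar with ONIX, it is an XML format in which each product record carries the ISBN alongside the descriptive metadata. The sketch below builds a deliberately minimal, illustrative record in Python; the element names (Product, ProductIdentifier, ProductIDType, IDValue) and the code "15" for ISBN-13 follow ONIX 3.0, but a real, valid feed requires many more mandatory composites (descriptive detail, publishing detail, supply detail), so treat this as a shape, not a template.

```python
import xml.etree.ElementTree as ET

# Minimal ONIX-style product record: the ISBN is the key that
# ties the publisher's metadata to the supply chain.
product = ET.Element("Product")

# RecordReference is the sender's own unique key for this record
# (the value here is a made-up example).
ET.SubElement(product, "RecordReference").text = "example.com.0001"

ident = ET.SubElement(product, "ProductIdentifier")
ET.SubElement(ident, "ProductIDType").text = "15"  # ONIX code: ISBN-13
ET.SubElement(ident, "IDValue").text = "9780306406157"

xml = ET.tostring(product, encoding="unicode")
print(xml)
```

The point of the sketch is the one made in the paragraph above: ONIX standardizes how the ISBN and its metadata travel together, but the accuracy of what goes inside those elements still rests with the data owner.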

More recently, when Google launched their book project a number of years ago, it quickly became apparent that the metadata they aggregated and used was often atrocious, proving that little had changed since Amazon.com launched ten years earlier. When I listened to Brian O'Leary provide a preview of his BISG report on the Uses of Metadata at the Making Information Pay conference in May, I recognized that little progress had been made in the way publishers are managing metadata today. When I pulled my presentation together for AAUP, I chose some slides from my 2010 BISG report on eBook metadata as well as some of Brian's slides. Despite the 2-3 year interval, the similarities are glaring.

Regrettably, the similarities are an old story, yet our market environment continues to evolve in ever more complex ways. If simple metadata management is a challenge now, it will become more so as 'metadata' replaces 'place' in the four-Ps marketing framework. In traditional marketing, 'place' is associated with something physical: a shelf, distribution center, or store. But 'place' is increasingly less a physical place and, even when a good is only available physically (such as a car), a buyer may never actually see the item until it is delivered to their driveway. The entire transaction, from marketing, to research, to comparison shopping, to purchase, is done online and is thus dependent on accurate and deep metadata. "Metadata" is the new "Place" (M is the new P): and place is no longer physical.

This has profound implications for the managers of metadata. As I wrote last year, having a corporate data strategy is increasingly vital to ensuring the viability of any company. In a 'non-physical' world, the components of your metadata are also likely to change, and without a coherent strategy to accommodate this complexity your top line will underperform. And if that's not enough, we are moving towards a unit-of-one retail environment where the product I buy is created just for me.

As I noted in the presentation last week, I work for a company whose entire focus is on creating a unique product specific to a professor's requirements. Today, I can go to the Nike shoe site and build my own running shoes, and each week there are many more similar examples. All such applications require good, clean metadata. How is yours?

As with Product and Place (metadata), the other two components of marketing's four Ps are equally dependent on accurate metadata. Promotion needs to direct a customer to the right product, and give them relevant options when they get there. Similarly, with Price, we now rely more on a presumption of change than on an environment where price changes infrequently. Obviously, in this environment metadata must be unquestioned, yet it rarely is. As Brian O'Leary found in his study this year, things continue to be inconsistent, incorrect and incomplete in the world of metadata. The opposites of these adjectives are, of course, the descriptors of good data management.

Regrettably, the metadata story is consistently the same year after year, yet there are companies that do consistently well with respect to metadata. These companies assign specific staff and resources to the metadata effort, build strong internal processes to ensure that data is managed consistently across their organization, and proactively engage the users of their data in frequent reviews and discussions about how the data is being used and where the provider (publisher) can improve what they do.

The slides incorporated in this deck from both studies fit nicely together and I have included some of Brian’s recommendations of which I expect you will hear more over the coming months.  Thanks to Brian for providing these to me and note that the full BISG report is available from their web site (here).

Wednesday, December 07, 2011

BISG Policy Statement on ISBN Usage

The Book Industry Study Group after long deliberation and incredibly astute consulting has announced its policy recommendation for the use of ISBNs for digital products (Press Release):
This BISG Policy Statement on recommendations for identifying digital products is applicable to content intended for distribution to the general public in North America but could be applied elsewhere as well. The objective of this Policy Statement is to clarify best practices and outline responsibilities in the assignment of ISBNs to digital products in order to reduce both confusion in the market place, and the possibility of errors.

Some of the organizations which have indicated support of POL-1101 include:

  • BookNet Canada
  • National Information Standards Organization (NISO)
  • IBPA, the Independent Book Publishers Association
Close readers of this blog will recall the work done by the identification committee of BISG:
In the spring of 2010, BISG's Identification Committee created a Working Group to research and gather data around the practice of assigning identifiers to digital content throughout the US supply chain. "The specific mandate of the Working Group was to gather a true picture of how the US book supply chain was handling ISBN assignments, and then formulate best practice recommendations based on this pragmatic understanding," said Angela Bole, BISG's Deputy Executive Director. "Around 60 unique individuals and 40 unique companies participated in the effort. It was a truly collaborative learning process."

Noted Phil Madans, Director of Publishing Standards and Practices for Hachette Book Group and Chair of the Committee in charge of developing the Policy Statement, "It was quite a challenge to bring some measure of consistency and clarity to what our research revealed to be so chaotic and confused that some even reported thinking ISBN assignment should be optional--a 'nice to have'. This, clearly, would not work."
The initial consulting report was discussed publicly about 12 months ago, and I summarized that presentation in this post from January 17, 2011.

These were the summary conclusions from that presentation:
There is wide interpretation and varying implementations of the ISBN eBook standard; however, all participants agree a normalized approach supported by all key participants would create significant benefits and should be a goal of all parties.

Achieving that goal will require closer and more active communication among all concerned parties and potential changes in ISBN policies and procedures. Enforcement of any eventual agreed policy will require commitment from all parties; otherwise, no solution will be effective and, to that end, it would be practical to gain this commitment in advance of defining solutions.

Any activity will ultimately prove irrelevant if the larger question regarding the identification of electronic (book) content in an online-dominated supply chain (where traditional processes and procedures mutate, fracture and are replaced) is not addressed. In short, the current inconsistency in applying standards policy to the use of ISBNs will ultimately be subsumed as books lose structure, vendors proliferate and content is atomized.