Monday, November 19, 2007

Five Questions on Global Data Synchronization

That's a title likely to induce a narcoleptic attack in all but the most ardent followers of bibliographic matters, but it is nevertheless an important topic for all managers of book information. Industries other than publishing also battle data reliability and timeliness, and over the years, led by umbrella groups such as the UCC and EAN (now combined into a single organization, GS1), they have developed programs to promote supply-chain efficiency and its corollary, data integrity. The Global Data Synchronization Network (GDSN) is one such program, which I have noted a few times in the past (Post). The objective of the GDSN is to ensure that all trading partners are working with the same set of product details, synchronized simultaneously at the network level and in transaction documents such as purchase orders and shipping notices. The benefits of synchronized data range from 'simple' efficiency improvements in the ordering and receipt process to greater effectiveness in marketing and promotions programs.

As I mentioned earlier on this topic, BookNet Canada is embarking on a test of data synchronization, and I asked Michael Tamblyn, President of BookNet Canada, my five questions.

  1. First, tell us how BookNet Canada got started and what you have achieved thus far. What are your current priorities?

    BookNet Canada came into being in 2003, when Canadian retailers, publishers and the federal government decided that there should be a central not-for-profit agency to coordinate technology and supply chain innovation for the Canadian book market. While that sounds about as thrilling as a three-day lecture on HVAC engineering, we have been able to move the industry very quickly in some very exciting directions. Canada benefits from its relatively small size, a general tendency towards collaboration, its position as a crossroads country with ties to the U.S., UK, and EU, and a community of quite forward-thinking retailers and system vendors. That lets us get things rolling quickly, gather feedback early and often, and push the envelope a bit more than larger markets can.

    Some examples: from a somewhat stagnant state in '03, B2B e-commerce now accounts for 85-90% of all business documents; EDI invoices and ASNs are fully supported throughout the publishing community; even independent retailers do EDI-based receiving. BNC SalesData, our national sales tracking service launched in 2005, tracks sales, stock position, and orders outstanding on every title, without modeling or estimation, across 70% of the book market. In a nice mouse-eats-elephant story, our Canadian Bibliographic Standard was adopted more or less in its entirety as the BISG Metadata Best Practice Guideline for the U.S. market, for both ONIX and Excel.

    Then there is the more forward-looking work: collaborative sales data mining for independents, backlist optimization and forecasting research, industry cost analysis on returns, digital publishing trends, our annual Technology Forum. And on it goes.

  2. You announced a Data Synchronization initiative in mid-summer. Can you give us some background on this project, its status, and where you see the initiative going over the next six months? What is your timetable and what is your hoped-for end result?

    We approached GDSN with a set of assumptions that looked something like this:

    1. Publishers already have a data sharing standard -- ONIX -- that they have embraced and invested in. They shouldn't have to learn another one. Make it simple!
    2. GDSN currently serves a small part of the retail sector, but if it becomes ubiquitous at some point in the future, how do we protect publishers on price?
    3. Think global, start local. The G in GDSN is there for a reason, so let's not assume that we're just making a Canadian service for Canadian publishers and retailers.

    Those first principles have guided our efforts over the past few months. We have been working with Commport, our GDSN data pool partner, on the construction of an ONIX-to-GDSN bridge that is now in testing. Timelines are very dependent on the retailers and wholesalers involved, but we hope to be well into the pilot six months from now. From our perspective, the "pilot" itself doesn't begin until we are moving active title data from a real publisher to a real retailer who is actually going to use that data in purchasing and POS systems. That's an important point: the challenge isn't getting the data into, or out of, the pool; it's getting that data into retailer systems so that it supplants the current mix of spreadsheets and paper forms. Until then, the conversation hasn't changed. Data conversion is easy, adoption is hard.
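    As a rough illustration of what such a bridge has to do, here is a minimal sketch, assuming ONIX 2.1 reference tags on the input side and placeholder attribute names on the GDSN side (the actual Commport bridge and the real Global Data Dictionary attribute names are not described in this interview):

    ```python
    # Illustrative sketch only: maps a few ONIX 2.1 reference-tag fields from one
    # <Product> record into a GDSN-style trade item. The GDSN attribute names are
    # placeholders, not actual Global Data Dictionary attributes.
    import xml.etree.ElementTree as ET

    ONIX_SAMPLE = """
    <Product>
      <RecordReference>com.example.9780000000002</RecordReference>
      <ProductIdentifier>
        <ProductIDType>15</ProductIDType><!-- 15 = ISBN-13 -->
        <IDValue>9780000000002</IDValue>
      </ProductIdentifier>
      <Title>
        <TitleText>Example Title</TitleText>
      </Title>
      <Publisher>
        <PublisherName>Example Press</PublisherName>
      </Publisher>
    </Product>
    """

    def onix_to_gdsn_item(product_xml: str) -> dict:
        """Pull the handful of fields a data pool registration needs from one ONIX record."""
        product = ET.fromstring(product_xml)
        isbn13 = product.findtext("ProductIdentifier/IDValue")
        return {
            # An ISBN-13 is already a valid GTIN-13, the key identifier in GDSN.
            "gtin": isbn13,
            "tradeItemDescription": product.findtext("Title/TitleText"),
            "informationProviderName": product.findtext("Publisher/PublisherName"),
        }

    if __name__ == "__main__":
        print(onix_to_gdsn_item(ONIX_SAMPLE))
    ```

    The conversion itself really is that mechanical; as Tamblyn says, the hard part is what happens to the data once it reaches the retailer's systems.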

    In parallel, we are preparing a draft submission to GS1 regarding additions to the GDSN Global Data Dictionary to make it more relevant to the book trade. That will certainly spend some time in the loving embrace of the BISG Metadata Committee before heading to GS1.

    From a pricing standpoint, I think we've come up with the best possible model for publishers. A free 1-year pilot, unlimited upload and publishing to the global network, with the clock starting when the data goes into production with retailers (i.e. not when testing starts, but when it ends). Then very low per-SKU fees that are capped at a shockingly low rate, just in case this breaks out of the mass market and into the trade.

  3. What issues have you encountered that were unexpected? Given that the book industry is 'fitting' into a set of standards developed for other industries, how much of an issue has it been to marry those existing data structures with our own?

    Always lots to learn, which is part of the fun. Some highlights:
    * The extent to which GDSN stands to benefit independent wholesalers, many of whom have never really grasped ONIX in their relationship with publishers, and who have to serve a retail community who couldn't care less about book-industry-specific standards.
    * We've talked to several publishers who have, because of various retail relationships, been required to submit to GDSN data pools over the past five years. None of them have seen their data make it into production systems. It's safe to say that there are some data pools out there who have been less than candid about where GDSN data really gets used and by whom.
    * Current GDSN costs per SKU or ISBN have been absolutely egregious! $25 or more per SKU per year? That's great if you are in consumer packaged goods with 100 SKUs worth $50M each, not so great if you're a mid-sized publisher with a line of DIY books selling into general retail. Time to fix that, I think...

    In terms of fitting in, there are definitely some things that need to be improved in the Global Data Dictionary if books are going to find a happy home in GDSN. GPC Product Forms aren't perfect for books. You can't pass along an author, just a title. Things like that. It's workable for mass market applications, but I think the goal should be to get the Canadian/US standard fields well-represented and then build from there.
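    To make that gap concrete, here is a quick sketch, using placeholder attribute names rather than real Global Data Dictionary fields, of what survives the trip from an ONIX record into a GDSN-style item and what currently has nowhere to go:

    ```python
    # Placeholder attribute names only; the real Global Data Dictionary differs.
    onix_record = {
        "isbn13": "9780000000002",
        "title": "Example Title",
        "contributors": [{"role": "A01", "name": "Jane Author"}],  # A01 = "By (author)" in ONIX
        "product_form": "BC",                                      # ONIX code for paperback
    }

    # What a GDSN-style item can carry today (illustrative):
    gdsn_item = {
        "gtin": onix_record["isbn13"],                # an ISBN-13 is already a valid GTIN-13
        "tradeItemDescription": onix_record["title"], # the title gets through
        "gpcCategoryCode": "BOOK_BRICK_PLACEHOLDER",  # GPC classification; imperfect for books
    }

    # The contributor composite has no destination, so the author is simply dropped --
    # exactly the kind of field a data dictionary submission would aim to add.
    dropped = onix_record["contributors"]
    ```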

  4. Is there an ongoing relationship with GS1 here? Do you anticipate the publishing industry will be exposed to best practices and perhaps learn from the GS1 community? If so, where do you see the greatest potential benefit?

    We'll be working with GS1 on the standards and data dictionary issues, but we have avoided a relationship with 1Sync, their data pool service provider. When we started watching this space, we realized that one of the great things about GDSN is that it's an open, certified standard, which makes the data pool game an excellent market for aggressive fast-followers. We selected a vendor with a strong track record in high-volume data processing who has made a name for themselves enabling whole industries on GDSN but who was also willing to toss out the rulebook on GDSN pricing to meet the needs of the book industry.

    In terms of who learns from whom, I think that GS1 has a lot to learn from the book industry. "Industry-With-Lots-of-Low-Price-Point-SKUs" is still reasonably new for GDSN, and nobody does massive numbers of discrete, non-variant SKUs like the book industry. They are working with Music and DVD now, which should help, but Music and DVD aren't nearly as sophisticated as the book industry regarding product data (much to the dismay of any retailer who has ever sold both!). I'd argue that we have spent more time and effort working out the issues related to standardizing rich product description metadata than any other industry, so I think the conversation is going to be "Here's ONIX, which we know and love. Let's figure out how much of it realistically needs to be in GDSN." With any luck, we can extend the data dictionary accordingly.

  5. Will your Data Synchronization initiative influence similar initiatives in the US and UK? Will those markets make full use of your pathfinding or, more to the point, will they have to develop their own initiatives?

    Getting GDSN off the ground is going to require the concerted effort of several national markets. The GS1 data vetting process requires broad support to propose changes to the Global Data Dictionary. We are happy to lead the charge, but we want to make sure that this meets the needs of the larger book market as well, so plenty of collaboration is required.

Michael can be reached at BookNet Canada: mtamblyn(At)booknetcanada.com
