I recently put together this short discussion document to start conversations about blockchain and the possible development of 'proof of concept' ideas. Get in touch if you are interested in a similar discussion.
Monday, July 16, 2018
Financial data is heavily transacted, and Thomson Reuters counts most financial companies of any consequence among its clients. As a result of this migration to AWS, clients will gain greater flexibility in how they use data and how they develop new products based on that data. Not only are clients spared from building and maintaining their own data centers (though they can if they want), but they can build their own applications on AWS, which in turn allows them to be more flexible in how they serve their customers. The model Thomson Reuters is establishing could be revolutionary in how users manage financial data and serve customers.
"The enhancement to the Elektron Data Platform will initially provide access to real-time data on the secure and scalable Amazon Web Service (AWS) Cloud in North America, with plans to expand to Europe and Asia later this year. With the cloud API, data can be consumed natively on AWS, directed to applications based in other cloud environments, or to an on-premise environment.
As a simplified, conflated real-time service, the real-time in the cloud service can power up to three client applications at three updates per second across 50,000 instruments at the same time, which can be selected from the full universe of over 70 million instruments covered by the Elektron Data Platform."
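To make those published service limits concrete, here is a minimal sketch of a client-side check against the 50,000-instrument watchlist cap. This is purely hypothetical code, not the actual Elektron or AWS API; the function and constant names are my own.

```python
# Hypothetical sketch: validate a watchlist against the published service
# limits before subscribing. Not the actual Elektron API.
MAX_INSTRUMENTS = 50_000   # per-application watchlist cap
MAX_UPDATES_PER_SEC = 3    # conflated update rate per instrument

def build_watchlist(requested):
    """Deduplicate requested instrument codes and enforce the cap."""
    unique = list(dict.fromkeys(requested))  # preserve order, drop duplicates
    if len(unique) > MAX_INSTRUMENTS:
        raise ValueError(
            f"watchlist of {len(unique)} exceeds the {MAX_INSTRUMENTS} cap"
        )
    return unique

watchlist = build_watchlist(["AAPL.O", "VOD.L", "AAPL.O"])
print(watchlist)  # ['AAPL.O', 'VOD.L']
```

The point of the sketch is simply that the "full universe" of 70 million instruments is addressable, but any one application selects a bounded subset from it.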
No less important in this announcement is that by using AWS, Thomson Reuters is buying into a set of standards and protocols that will encourage application development, experimentation and likely broader usage. This will lower the barriers to entry for many existing and new customers.
As the sheer amount of data increases and complexity grows, Thomson Reuters has taken the view that making data accessible can reduce complexity and help companies focus more on delivering analytics, machine learning applications and other innovations. Enabling this without a cumbersome back-end technical architecture is a strategy all data managers will begin to execute.
Friday, July 13, 2018
A report from OCLC takes a look at voter perceptions of public libraries.
In 2008, OCLC published From Awareness to Funding: A Study of Library Support in America, a national study of the awareness, attitudes, and underlying motivations among US voters for supporting library funding. The research, which was led by OCLC with funding by the Bill & Melinda Gates Foundation and conducted by Leo Burnett USA, dispelled long-held assumptions and provided eye-opening insights about who supports public library funding and for what reasons.
A decade later, OCLC has partnered with the American Library Association (ALA) and its Public Library Association (PLA) division to investigate current perceptions and support among US voters and how they may have shifted in the intervening years. The partners re-engaged Leo Burnett USA and revisited the survey instrument used in the original research
Monday, July 09, 2018
“Understand the business strategy” is frequently one of the first tasks on my project workplans, usually undertaken in the first week or weeks of an engagement. But this essential exercise can also be one item that generates push-back from clients, who see it as something a consultant should undertake on their own. Obviously, embarking on a consulting project without an understanding of the business you are engaged to help is unprofessional and signals a lack of interest (both of which are justifications for dismissal in my view). However, no amount of a consultant’s second- and third-party research can substitute for first-hand insight (on business challenges and strategies) from senior members of the management team. These inputs are critical to the development of a baseline understanding of the business, which is often one of my first deliverables and also serves to record clearly what management told the consultant.
Depending on the scope of the engagement, it may only take two or three days during the engagement’s first week to conduct senior management team interviews, review strategy documents and other proprietary materials. But I’ve also conducted engagements where almost the entire project scope consisted of examining the business strategy and its relationship with business execution and took several months to complete.
This first phase also yields other benefits that will inform the rest of the project. (At PricewaterhouseCoopers (PwC), this phase of our methodology was termed ‘Engage’ for obvious reasons, and I logged many hours with new consultants as a certified instructor in the PwC project management methodology.) As a consultant, you will have researched the business before delivering your proposal and that research will give rise to a set of initial questions for the senior team. During these interviews, you will have the opportunity to validate your research and note any changes and/or differences. You will also have a chance to ‘test’ your next stage interview questions and assess if you are focusing on the right issues and business drivers. Compiling a set of relevant questions for the detailed interviews you will undertake in the next phase is (obviously) critical to the eventual success of the project.
Most importantly, during these meetings with the senior management team, you will have the opportunity to define the ‘success criteria’ for the project on which you are about to embark. I have often found that the objectives of one or two executive team members are opposed to or prioritized differently from other executive members which, as a consultant, you need to manage from the early interview stages to the final report. It can also be the case that the CEO may be unaware of some of these project priorities and/or differences of opinion, which is why I try to arrange the CEO meeting last. During that meeting, it is important to address these conflicts head on to avoid any future project problems. As an aside, if you are told you do not need to speak to the CEO (or business unit head) at this stage it is wise to push back on this ‘advice’ to get that meeting.
While it is incumbent on the consultant to do their company research and digest what they hear during the proposal process, there is no substitute for detailed discussions with management about the business strategy and project objectives. Without exception, this team will be more receptive to you (and more open one-on-one) once you are officially retained, and their input will inform how you organize the rest of the project. Ultimately, the creation of the interview guide and/or the workshop program(s) for the detailed interview phase can sabotage the whole project if it’s not on point. That’s why I tell every client that the initial ‘engage’ phase should not be eliminated or truncated: it’s an investment in the success of the project. And it’s also an opportunity for the CEO to understand how effectively he or she has communicated the project priorities to the team. The consultant represents a ‘trusted third-party’ who often has the ability to gather intelligence not otherwise shared with the CEO.
However you term it, the initial stage of a project can frequently define the success or failure of an engagement. Economizing here can prove detrimental to the later phases of a consulting project and my advice is to push back hard if your project sponsor believes this activity to be unnecessary. As they say, penny wise and pound foolish.
Thursday, July 05, 2018
News of yet more executive turnover at Barnes & Noble reminded me of similar CEO musical chairs at Borders during its gradual, then precipitous, decline. A series of executives without book-industry experience tried to reinvigorate and rebuild the chain, but a combination of the speed of digital change and a basic lack of real book and publishing knowledge meant each executive left Borders worse off than when they arrived.
In 2007, then-new CEO George Jones outlined his strategy to shareholders. I thought his effort was vapid and penned a version of my own.
While more than 10 years have passed, there are still some points here that B&N might think about. Read the whole post here.
No telling what the new management of Borders has in mind.
Last week George Jones, the recently appointed CEO of Borders Stores, Inc. released his strategic vision for the next three years. There was little in the document to inspire, and it was replete with suggestions that the route to success for Borders was to travel the road already trod by their stronger competitors rather than develop a set of bold new ideas. Coupled with this mediocre set of objectives was a time frame that seems embarrassing given the critical issues Borders and the retail book industry are facing. Borders' sales per store and per square foot, which lag the competition, are declining; the company has embarked on a diversification program that continues to draw attention away from the core products; and it proposes to withdraw from the international market, which appears to produce 50% more revenue per store than the domestic business. What, then, might George Jones have said?
Tuesday, July 03, 2018
This is something the book retailer will not need: Another search for a CEO in a volatile retail environment.
From the Reuters report:
Barnes & Noble said Parneros will not receive any severance payment and he is no longer a member of the board. Parneros, who joined the company as chief operating officer in November 2016, became its CEO in April 2017.
The company said it would begin a search for a new chief executive, and a leadership group would share the CEO's responsibilities until a suitable candidate is found.
The company’s board was advised by law firm Paul, Weiss, Rifkind, Wharton & Garrison LLP on Parneros’ removal.
This week the European Union will vote on a set of revisions to copyright legislation which critics say could wreck the internet as we know it. Of particular concern are two clauses in the new legislation which passed through committee and will be voted on July 5th.
- Article 11 – termed the “link tax” – grew out of examples in Germany and Spain, where platforms like Google and Facebook were obliged to pay a tax to support local publishers. The ‘link tax’ would force anyone wishing to use content to first obtain a license from the publisher.
- Article 13 proposes to make gate-keepers out of platforms, requiring them to police content uploaded by users. Practically, they could do this only by building expensive and expansive filtering systems to catch inappropriately uploaded content. Only the largest platforms will be able to comply with this burden, thereby marginalizing the smaller players.
Article 11 has been presented as a way for content owners to recapture lost revenue from platform providers which some – Rupert Murdoch – have castigated for building very large revenue streams from content they don’t pay for. While there is some truth to this assertion, the manner by which the EU proposes to counter this impact via this legislation has failed in earlier iterations.
In Germany and Spain, rather than boosting local publishers' revenues, the law cost them as much as $10 million in lost revenue, according to research, and Google News pulled out of Spain entirely. It seems logical that the more important a platform is to a content owner, via the traffic it drives to the content owner, the more leverage the platform will have to negotiate any broad content license. That leaves the smaller players out in the cold, with limited resources to negotiate and a lack of standing in the benefit they provide to the content owner. This implies that the larger platforms will only gain in power as more content is concentrated on their delivery platforms.
It is Article 13 which has received the most negative reaction, since it places the already powerful platform providers in the role of internet police. While the legislation doesn’t specifically propose the implementation of filters, there is no other realistic way to fulfill the requirements of this rule. Here again, it will be the larger, most embedded platforms which will have the technical capability (and money) to build the robust filters necessary to comply. More negatively, it is likely these solutions will remove content first and then place the burden of defending the upload on the user/consumer/member; and we all know how easy it is to communicate with most of these platforms when something goes wrong.
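To illustrate why mandated filtering favors the largest platforms, here is a deliberately naive sketch of the kind of upload filter Article 13 would effectively require. Real systems (YouTube's Content ID, for example) use perceptual fingerprinting at vast cost; this toy version, with invented example data, only shows the basic shape of the problem.

```python
# Naive upload-filter sketch: compare a hash of each upload against a
# registry of rights-holder content. Example data is hypothetical.
import hashlib

rights_registry = {
    hashlib.sha256(b"licensed song master").hexdigest(): "Label X",
}

def check_upload(data: bytes):
    """Return (allowed, rights_holder). Exact-match hashing only; it is
    trivially defeated by changing a single byte, which is why real
    filters must be far more sophisticated (and expensive)."""
    digest = hashlib.sha256(data).hexdigest()
    holder = rights_registry.get(digest)
    return (holder is None, holder)

print(check_upload(b"licensed song master"))  # (False, 'Label X')
print(check_upload(b"original home video"))   # (True, None)
```

Even this trivial version presumes a registry of all protected content; building and policing a robust equivalent is exactly the burden only the biggest players can carry.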
If this legislation does go forward there may be more motivation to get the technology right but recent examples of this type of filtering technology have been mixed.
The true negative impact of this legislation is difficult to determine given the murkiness of some of the definitions embedded in the draft. It is also unclear to what extent some of the copyright protections which already exist are taken further by this legislation. For example, the link tax clause (Article 11) could apply to headlines, which goes significantly further than providing the full text of an article, something already covered by existing copyright legislation. Arguably, existing law works, but the revision will materially stymie sharing and creativity. Additionally, the definition of ‘commercial website’ is also debatable. The distinction between commercial and non-commercial is meant to identify the ‘bad guys’ who make money serving up content they don’t pay for versus those who don’t. However, in its broadest interpretation “commercial” could mean everything from Kickstarter to Facebook, which is why Wikipedia Italy went dark today in protest.
The EU parliament is scheduled to vote on this legislation July 4-5 with a final vote later in the year. Opposition is mounting against this legislation but the irony here is that only weeks after putting in place the GDPR legislation to protect individual privacy the EU is presenting copyright legislation which may empower the mighty at the expense of the individual.
UPDATE: Back to the drawing board for this legislation. From the Guardian:
Google, YouTube and Facebook could escape having to make billions in payouts to press publishers, record labels and artists after EU lawmakers voted to reject proposed changes to copyright rules that aimed to make the tech companies share more of their revenues.
The proposed new rules, which have been going through the European parliament for almost two years, have sparked an increasingly bitter battle between the internet giants and owners and creators of content, with both sides ferociously lobbying their cause.
More from The Verge
Friday, June 29, 2018
Here is my most recent selection of media and publishing articles on Flipboard.
View my Flipboard Magazine.
Tuesday, June 12, 2018
Many publishers already use natural language processing to provide semantic enrichment and content recommendations for users. For example, using AI which can read and understand full text, publishers no longer need to rely on (limited and dated) metadata to identify their titles. At the conference Yewno, a vendor in this space, demonstrated their work with MIT Press using the Yewno tool to interrogate full text book content. The resulting analysis has surfaced significant new information about subjects and concepts specific to these titles which, in turn, informs editorial staff about new ways they can describe titles and relate titles to each other. The result – in this initial, limited test – is improved search results, relevancy and cross-selling opportunities. Yewno and MIT plan to extend their collaboration into other segments of the editorial value chain including content acquisition and UI design. This work is particularly relevant to improving access to backlist titles, which often suffer from shallow descriptions and out-of-date metadata.
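For readers unfamiliar with how full-text analysis can supplement thin metadata, here is a toy sketch of the general idea using simple TF-IDF keyword scoring over a tiny, invented corpus of book descriptions. This is not Yewno's technology (which uses far richer concept extraction); it only illustrates how distinctive subject terms can be derived from text itself rather than from hand-entered metadata.

```python
# Illustrative TF-IDF sketch: surface candidate subject keywords from
# full text. Corpus, titles and stopword list are hypothetical.
import math
import re
from collections import Counter

corpus = {
    "title_a": "neural networks and deep learning for vision",
    "title_b": "a history of printing and the book trade",
    "title_c": "deep learning methods in natural language processing",
}
STOPWORDS = {"a", "and", "the", "for", "of", "in"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z]+", text.lower())
            if w not in STOPWORDS]

doc_tokens = {t: tokenize(text) for t, text in corpus.items()}
# Document frequency: in how many titles does each term appear?
df = Counter(w for toks in doc_tokens.values() for w in set(toks))
n_docs = len(corpus)

def top_keywords(title, k=3):
    """Score each term by tf * idf; terms distinctive to this title
    rank highest, terms common across the corpus rank lowest."""
    tf = Counter(doc_tokens[title])
    scores = {w: c * math.log(n_docs / df[w]) for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

print(top_keywords("title_a"))  # ['neural', 'networks', 'vision']
```

Note how "deep" and "learning" score lower for title_a because they also appear in title_c; the method automatically favors terms that distinguish a title, which is precisely what shallow backlist metadata lacks.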
My other panelists at SSP (Storyfit, Elsevier, Unsilo, Molecular Connections) also described their activities using AI/ML to improve publishing workflows and discovery. Molecular Connections demonstrated how the American Institute of Physics (AIP) has used their technology to improve the experience for users seeking specific interrelated content. Unsilo is working with Taylor and Francis and the OECD to support better content discovery and collection development. Storyfit has a short video that shows how their technology works providing a good overview for potential customers.
Each of the AI/ML presenters at SSP demonstrated how artificial intelligence, sophisticated algorithms and deep analysis can be implemented to interrogate large datasets and corpus. This technology can also be highly leverageable to extend the capabilities of existing staff and to support, strengthen and expand the company’s product portfolio. The latter is especially true in the development of article and book collections.
Additionally, as an editorial tool these AI solutions can improve the accuracy of edits while also reducing revision cycles and cutting production costs. While we didn’t see that particular aspect of AI technology in these presentations, there is no doubt that the application of AI to the full editorial process will have a significant positive impact on workflow, staffing and cycle time. MIT's experience shows there is significant value in subjecting newly acquired titles to an AI filter to structure, add metadata and concepts and improve the submission before a human editor even looks at it. And it goes without saying that these activities will take only minutes to complete, saving many hours of labor. Done correctly, the application of AI/ML tools can improve the overall productivity of existing staff.
The interest at SSP is just a hint of the general interest in AI and ML across all markets. A recent report by Pharus Advisors suggests that investment dollars are pouring into companies in the machine learning space. Here is their take:
As the Internet of Things expands and the amount of data being generated and collected continues to grow exponentially, Machine Learning is becoming a crucial part of managing and analyzing that data. Artificial intelligence can allow companies with high-volume data processes to vastly improve efficiency and productivity. However, growth in the AI / machine learning space has been slow until recently. McKinsey reported that in 2017 companies spent $39 billion on AI, three times more than in previous years. Furthermore, there has already been 70% growth in business value in AI during the first quarter of 2018, reaching $1.2 trillion (Forbes).
According to a recent survey, 61% of organizations most frequently picked machine learning / artificial intelligence as their company’s most significant data initiative for next year (Cloud-computing News). While market growth was slow for the past several years, 2018 is poised to be an explosive year for AI and machine learning investment.

We are going to see and hear a lot more about AI/ML in publishing over the coming years, and many notable companies (Elsevier, AIP, Taylor & Francis) are investing aggressively to achieve cost and efficiency benefits, support improved customer experiences and develop new products. There are already enough examples and citations to make an informed decision about your strategy: If you are not on the AI ship soon, you will be left at the pier.
Wednesday, June 06, 2018
As an advisor to Zapnito, I've been impressed by the way the company has been able to deliver innovative, technically advanced solutions to the academic and scholarly community. In this case study, they describe the application of their "expert network" community platform with one of their first customers, Springer Nature. (Case Study)
From the post:
What can we learn about the knowledge sharing by looking at the structure of the Springer Nature network?
Some structure is immediately obvious, notably the fact that there are clear, distinct modules for the different communities, and with strong links between some communities (which is discussed below). In terms of size, some communities are clearly bigger than others, which is likely due to a combination of maturity and topic popularity. Some communities are three years old, while others are only a couple of months old, and some research topics are especially popular, while others are relatively niche.
Communities tend to be dominated by a small number of individuals, who are likely to be experts and prime contributors in terms of knowledge. The bigger communities also have many “second tier” contributors with strong connections to the prime contributor.
This is particularly evident in the Microbiology and npj Science of Learning communities, and may be a result of “preferential attachment”, where newer community members prefer to connect with already well-connected individuals who are in a strong position to influence others.
The network density (that is, the level of connectedness between people) within each community is characteristic of the Zapnito “expert network” use case, where there is a moderate degree of connectedness. This is in contrast to other use cases such as content hub or peer-to-peer learning, which tend to be sparsely connected as they are focused on more traditional publishing models.
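The "network density" measure cited in the case study has a standard definition worth making explicit: for an undirected community of N members, it is the number of actual connections divided by the number of possible ones, 2E / (N(N-1)). A minimal sketch, with invented example figures:

```python
# Network density for an undirected community graph:
# density = actual edges / possible edges = E / (N * (N - 1) / 2)
def network_density(n_members: int, n_connections: int) -> float:
    """Return a value in [0, 1]; 1.0 means everyone is connected to
    everyone else, near 0 means a sparse, hub-dominated network."""
    if n_members < 2:
        return 0.0
    possible = n_members * (n_members - 1) / 2
    return n_connections / possible

# A hypothetical 10-member community with 18 connections:
print(round(network_density(10, 18), 2))  # 0.4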
Wednesday, May 30, 2018
Friday, May 25, 2018
Thursday, May 24, 2018
As we know, Amazon's ability to mix, match and smash business models has no limit, but one thing is clear: Amazon is king of the subscription model. Recent numbers revealed by Jeff Bezo's about the number of Prime subscribers were, on the one hand, astounding but also entirely believable given the power with which Amazon has asserted itself in the retail channel.
The company continues to enhance their Prime subscription model and recently announced a Children's Book Box product. Currently in test, the product sells for $22.99 per box and guarantees a 35% discount on the list price of the books included in each box. Customers can decide how many boxes they want, on what schedule and which age tier they are most interested in. There are currently four age group tiers from age 'I can't stand up yet' to 12.
Currently, the offering appears to be hard cover (no eBooks) titles and from the announcement the selections will be a combination of classics and new 'selected' titles. Amazon has kindly made a cute video:
This book box features some of the functionality of existing amazon subscription services which includes confirmation of the shipments in advance, the ability to delay or pass on a scheduled shipment and the ability to substitute selections before they are shipped. Amazon may be interested in gathering even more information about their audience via this product. Children, generally speaking, don't make purchases therefore the data on their purchasing behavior gets mixed with many other purchases. A subscription product like this could provide potentially new data to Amazon.
Aside from independent retailers which face yet another threat from Amazon, there's another market segment which may suffer if this product proves successful and that is the library market. Both public and school libraries offer curation benefits and subscribe to services such as the Junior Library Guild to supply them will curated titles. Is it possible that the suppliers and/or libraries themselves could be disintermediated by this type of Amazon service? Of course, libraries provide other benefits in addition to the curation of children's books so the question isn't so simple, but those curation services cost money (and the books are sold full price). If Amazon can provide a better service at a good price then they may take market share in the library market as well.
That said, one final point. There is a substitution factor here. It may be more likely that the target market for this Book Box product would never step foot in a library. It may be the case that subscribers to the Book Box buy more (or the same) because the subscription service is so easy and convenient. Libraries and other retailers may only suffer at the margins. Regardless, Amazon again shows its' power and scale in being able to spin up a well crafted service like this with relative ease.
Related: WAPO on the best book box subscriptions
Wednesday, May 23, 2018
One of the companies mentioned in my blockchain post is opening the development floodgates with the launch of their developer platform. From Techcrunch:
It also announced the first company to launch an app out of the lab called Inkrypt. It’s an application designed to provide a way to publish content in a distributed fashion, meaning the article doesn’t live on any particular server. That makes it nearly impossible for sensors to block it.Dicker announced the launch on their web site:
“The first project to build on Po.et is Inkrypt, a global decentralized system providing a censorship-resistant solution for journalism hosting and delivery that will render journalism content permanent and immutable,” Po.et CEO Jarrod Dicker wrote in a blog post announcing the launch of Po.et Development Labs
Po.et is building the protocol for creators. When we talk about creators, we’re not just referring to those who will use the many applications built on top of the Po.et protocol for their work. We’re also talking about those who are building the applications on top of the protocol itself. Fostering an ecosystem for creators built by creators will enable people to own and value their content beyond anything that has been achieved on the internet. Po.et’s success will not only be a technological advancement, but it will also redefine, philosophically, what it means to be a creator. The era of permanence and reputation will transform the processes of building as we know it, in both how creators produce and the tools in which they use to do it.Po.et is a platform of decentralized systems built for attribution, discovery and monetization of digital content. Today, Po.et is building a trustless, verifiable and more reputable version of the web. It’s aim is to be the platform for the world’s creative assets, enabling creators, media companies and alike to build and leverage value by using blockchain technology (immutability, reputation and access).
Monday, May 21, 2018
As one of the last session’s panelists at the Publisher’s Forum in Berlin stated, “it is easy to go mad if you allow yourself to be consumed by all the threats and opportunities to publishing and it is far better to focus on the opportunities.” And, for these panelists speaking on the challenge growth poses, there are plenty of opportunities to contemplate. The predominant theme of the conference’s discussions was that ‘change is here’ and publishers can no longer choose to ignore it. This panel on Innovation and Growth was moderated by David Worlock with panelists Fionnuala Duggan from Informa plc, Joerg Rheinboldt from Axel Springer and Joseph Evans from Enders Analysis.
As publishers strategize their growth plans, Duggan noted they are beginning to experiment with new models and market-entry opportunities. One company discussed at the forum was the education company Alison, which brought to the self-teaching learning and life-long education market a ‘freemium’ model. Alison’s model requires great scale but it’s a global business and the company is going into areas where there are large populations of ambitious, underserved people such as Nigeria. Cengage has also challenged the traditional pricing model for educational materials with the launch of their content subscription model which has upended the textbook pricing model and may well become an industry standard model. In seeking growth, new business model innovation can have a profound impact on your business.
Axel Springer took another approach by first decoupling the primary business silos into which the company had been organized for decades. As a traditional news company, they offered news, display and classified advertising and services. This ‘siloed’ structure has been broken apart over the past 10 years into over 200 companies which, in the aggregate, do the same thing but across a wider number of operating entities. This structure, they believe, allows the businesses to be more creative and innovative in their relationships with customers and in new product development. Rheinbold commented, “Basically, we do the same thing: news, advertising, classified but way bigger and more international. Running the company is now less like a symphony and more like a jazz band.”
Axel Springer is looking at derivative products which can be developed from their content. Over the past two years, the company started to think more strategically about how their network of companies could be ‘repackaged’ to provide products and services to other companies. Out of this initiative they recently established a joint venture agreement with Porsche. The JV is in its initial stage but the combined team has already spoken a lot about when cars become self-driving and how that will affect the driver experience. Will this change generate ‘new time’ blocks into which Axel can deliver content? Other strategic opportunities may arise out of these changes: For example, Rheinbold noted that when cars need to be charged each day, customers will be putting charging devices into their houses which could create ‘ride along’ opportunities to deliver content for the joint venture company. Being creative about developing derivative products could represent significant opportunities for growth.
Customer knowledge is key to delivering successful products but one of the areas where publisher actions seem to impede customer activity is in the scholarly and academic market. According to David Worlock, scholars will continue to find it difficult to maintain complete knowledge of their fields as long as the major databases they use remain siloed. Even in the face of intensive competition from Google, Apple, Facebook and Amazon (GAFA), publishers remain hesitant to collaborate with their competitors who all have the same issue. In some situations, your competitors might be your best collaborators for growth.
Publishers should think differently about themselves – not as a single publishing brand with no customer identity but as an umbrella for many brands with consumer share of mind. Axel Springer was a newspaper company, but they are now a ‘media company’. Textbook publishers are now ‘education companies’. Science publishers are now ‘knowledge companies.’ Transforming your identity like this may make defining who to partner with a different exercise and lead to partnerships you may never have thought of. There is a lot of legacy baggage in the way publishers define themselves but adopting the customer-centric view should help shed that for the better.
Another growth opportunity is to embed your products in your customers ‘workflow’. Once your products are entrenched they become ‘sticky’ and difficult to replace. Additionally, indispensable products lead to close relationships with customers which will, in turn, generate new product development opportunities (further supporting renewal rates and price increases). If you have products which are delivered to customers via third-party license, you may want to consider sunsetting these relationships and reap the benefits of direct contact with customers. When I launched our first web products at Bowker, letting these third-party agreements expire was one of the first proactive things we did.
Some other avenues to growth mentioned during the conference included:
- Leveraging audiences to build advertising models and sponsorship programs, especially around franchise products
- Sponsored marketplaces which bring together seller and buyers
- Creating communities of like-minded consumers: Podcast networks, author brands, subject specific areas (similar to Zapnito’s applications)
It’s considered a given that innovation provides growth. But true innovation is often difficult to achieve and requires that customers, prospects, internal staff and other stakeholders participate in the process. In my experience, innovative products require deep understanding of customer needs and behaviors. Technology is vital, but it is a lever of change but not the change itself. Once a solution has been identified to solve a problem the business model can be determined, and detailed work conducted to determine whether the product is competitively defensible and can be used as a platform for further enhancement.
In the end, though, this session demonstrated that the best foundational strategy for growth is hiring the best people. Your employees and culture will frequently determine the success or failure of your growth plans. I recently spoke to a new University Press director who brought a consultant onsite to help take staff through a change management seminar. This, the director recognized, must be the first step before embarking on any new growth or transformation strategies. This type of seminar will also inform the leadership team regarding the abilities of existing staff to carry out change effectively. You may find you need new staff who have the innovation mindset.