Thursday, June 24, 2010

SS United States - August 1968

SS United States, August 1968
A weekly image from my archive.

The way the trim line arcs down from prow to stern, the funnels swept backwards as though pushed back by the wind: the SS United States was indeed a stylish ship. The liner held the trans-Atlantic record for many years, and in this image you can easily imagine its sleek contour slicing through the sea. On the ship's stern, gold lettering proudly proclaims the ship's name and home port. Pretenders to the record would have caught the parting message: “Sure, we're the United States and we're from New York!”

My parents were probably on a Circle Line tour in August 1968 and the liner was docked and preparing for a return to Europe. The ship’s owner was already in financial difficulty: My parents flew over on a 707 and jet travel killed the luxury liner. The United States was effectively taken out of service a year later.

Looking at the New York skyline almost forty years later, much is still familiar despite the significant changes. On the right of this image is the Sheraton Motor Lodge West Side, which opened in 1962 and boasted a rooftop pool and “off-street parking”. Rooms in 1968 went for $21. The Sheraton is now the Chinese consulate, having first become run down as a Travelodge (I think) and then been renovated by imported Chinese workers. Just to the left of the Sheraton, and to the right of the first water tower, you can just see the New Yorker hotel sign, which is still a nighttime beacon on the skyline. There's an unobstructed view of the Empire State Building, and to the far left is the Pan Am building with the Chrysler Building in front.

The 2010 panorama has changed considerably, but I don’t think my parents, on a 1968 Circle Line tour, would have thought their child would be viewing the Empire State Building from his living room window forty years later.

Entire 1968 Set

Tuesday, June 22, 2010

The Curator and the Docent

Walking around a vast museum can be interesting and sometimes serendipitous, but often it is an incomplete experience. Items are organized into specific groups, yet not always in a manner that encourages exploration of the most important pieces. Presented with a gallery full of amphorae, it can be difficult to recognize the single important item on your own, without a guide. Surfing the web for information and knowledge can offer a similar experience: access and proximity are no guarantee you will happen on relevance.

Museums and libraries are good proxies for the concept of “curation,” which we’re hearing a lot about at the moment. Private equity (for one) has found its next buzzword, and funding vultures are lacing their presentations with references to ‘curation’ in an effort to win financial support for their new business ideas. But curation is an old concept: television networks, newspapers, magazines, journals and other media have all practiced a form of content curation for hundreds of years. We’ve just recently latched onto the idea of curation as though it were something new.

The need for curation in the old media world wasn’t as obvious as in the internet world because, on the web, ‘everything carries the same weight’ and the average user has difficulty discerning good content from bad. Indeed, as content on the web exploded over the past fifteen years, users accepted the “good enough” concept – free content was plentiful – and were content to ‘satisfice,’ either knowingly or obliviously. User behavior and expectations are changing, and investors are now chasing businesses that profess to actively curate content and communities of interest. In recent years content curation has emerged out of the wild, wild west of ‘mere’ content. Sites such as The Huffington Post, Red State and Politico all represent new attempts to build audiences around curated content. While they appear to be successful, other sites (such as Associated Content and Demand Media) are contributing to the morass of filler content that can plague the web user’s experience.

The buzzword ‘curation’ does carry some logic: as the sheer amount of information and content grows, consumers seek help parsing the good from the bad. And that’s where curation comes in. The amount of content available to consumers – much of it free of charge, but scattered across thousands of websites – is growing exponentially every day. At the same time, consumers are increasingly doing independent research and attempting on their own to source important information to support their increasingly complicated lives. Questions relating to healthcare, finances, education and leisure activities represent a small sample of the range of topics on which consumers look for accuracy and relevance, yet encounter an immense sea of specious or outdated content.

In many ways, the web - in its entirety - is the new dictionary, directory or reference encyclopedia, but users with specific interests are beginning to understand they need to spend as much time validating what they find as they do consuming it. In the old days, it was as simple as pulling the volume off the shelf; while the web offers a depth of content that far outstrips anything from those days, finding content of similar veracity can be a challenge.

For the past two years, I worked on a project with Louis Borders at Mywire.com in an attempt to build a curated news and information service we called Week’sBest. For a variety of reasons we put the project on hold in February, but the concept was simple: identify experts who can curate content on a range of specific topics, and build a community of interested subscribers around that content. Our model was to find expert ‘content producers’ who hold unique knowledge and understanding of a specific topic and would filter content from across the web specific to their area of expertise. Mywire.com built a unique editorial tool to make this process almost routine by pre-selecting topic-specific content from both brand-name sources and from across the web. Our experts - the content producers – logged on each day and selected from this pre-sorted list only those items they considered the best content.

Consumers interested in each of these topics subscribed to a free weekly email digest of the material selected. Our revenue model was based on turning a subset of our free email subscribers into paid subscribers who would gain access to high-quality content – such as content from Oxford University Press. While we were unable to execute as we expected, we did gain validation of our concept from both the publishing and the private equity communities.

Publishers, whom we were courting to be our ‘content experts,’ liked that there was a low cost of entry for their participation, and they liked the editorial platform we had invested in. The equity community liked the ‘curation’ model, the people involved in the project and the investment Mywire had made in the platform. However, we suffered from ‘prove it’ syndrome: both publishers and equity partners wanted to see the model work before they committed, and we ran out of time and resources. Mywire.com continues to invest in other curation-type models. I remain convinced that applying technology to the selection of useful, valid and appropriate content is only part of the solution.

At Mywire, we used a text-mining tool as part of the editorial process, and for simple news items – which are increasingly generic – placing content into subject/topic groupings was relatively easy. The process isn’t perfect and requires frequent fine-tuning; the tools are improving, but human intervention is still required. Earlier this month we learned that even Google was applying some human filtering to their news site. There is a real debate about whether consumers will pay for real expertise and knowledge: I believe they will, just as they paid for specialist magazines, journals, cable channels and similar media in the analog centuries.
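The general pattern – automated topic grouping plus a threshold that routes uncertain items to a human editor – can be sketched in a few lines. To be clear, this is an illustration of the technique, not Mywire's actual tool; the topics, keywords and threshold below are all invented:

```python
# A minimal keyword-overlap classifier: each item is scored against every
# topic's keyword set, and low-scoring items are flagged for human review.
TOPIC_KEYWORDS = {
    "sun protection": {"sunscreen", "spf", "uv", "skin", "lotion"},
    "personal finance": {"savings", "mortgage", "budget", "interest"},
}

def score_topics(text):
    """Count keyword hits for each topic in the item's text."""
    words = set(text.lower().split())
    return {topic: len(words & kws) for topic, kws in TOPIC_KEYWORDS.items()}

def classify(text, min_hits=2):
    """Return (best_topic, needs_review).

    Items scoring below min_hits on every topic go to a human editor
    instead of being auto-filed -- the 'fine tuning' step.
    """
    scores = score_topics(text)
    topic, hits = max(scores.items(), key=lambda kv: kv[1])
    return topic, hits < min_hits

topic, needs_review = classify(
    "New SPF ratings: how well does your sunscreen block UV rays")
```

Real text-mining tools use statistical models rather than fixed keyword lists, but the editorial workflow – machine pre-sort, human confirmation – is the same.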

The atomization of content has complicated matters in that it has taken the proverbial covers off the print limitations of the traditional magazine. While a reader or subscriber will buy into the expertise of ‘Glamour’ or ‘Men’s Health,’ they now expect all important and relevant content, not just the content prepared by the magazine’s own writers. After all, the hurdle to searching for content on one's own is low, and it is silly to ignore that ability. Acting as a ‘content producer,’ the editors of ‘Glamour’ should be able to provide their paying subscribers with a collective representation of all content that’s important and relevant to their readers, even content produced by Glamour’s competitors. This is an important service and doesn’t limit Glamour's ability to produce its own content; rather, it enhances it, because the editors can see in detail the interests of their subscribers and produce applicable content to match.

Even in the Glamour example, generic news is never going to be the basis for paid subscriptions. Take the hyped news story that suntan lotion causes skin cancer: that story would always remain in the free section of the site. Available to subscribers, however, would be a curated selection of in-depth content, including reference material added to over time, with commentary and discussion from their ‘expert’ editors and advisors about the real issues of sun protection products.

With a brand such as Glamour, the number of expert-curated topics made available to subscribers could easily exceed fifty, and over time it would be likely to grow. Strongly associated with this approach would be the development of communities around each topic, leading in turn to additional business opportunities such as ad programs, events and special publishing programs. The interest of consumers across a wide variety of subjects and topics continues unabated, and the internet has only facilitated that interest, although our expectations have been lowered by undifferentiated content.

Consumers are increasingly smart about the content they consume, and they continue to impress with their ability to seek out and absorb what, in the analog world, was considered too “advanced” for their understanding. There was always an arbitrary wall between “professional or academic” content and consumer content: increasingly, consumers are making it clear that they want to decide for themselves whether particular content is too advanced for their comprehension or enjoyment.

Recently, as I wandered around a museum with overwhelming breadth and depth of content, I was lucky to be guided in my travels by a professional. When she introduced herself to me, she used the term ‘docent’ to describe her function. A docent is a ‘knowledgeable guide’ and the function seems to me to perfectly complement the process of curation. In an online world, where more and more content appears to “carry the same weight,” we will look to and pay for the combination of curator and docent – sometimes the same person or entity – who can organize and manage a range of content and also engage with the user so they gain insight and meaning from the material. 

At Mywire.com, we intentionally approached branded media companies because they were recognized as experts in their segments. These are the companies that should be able to build revenue models around the curation of content, offering subscribers a materially different experience than a Google search query delivering up generic news and semi-relevant content.

Sunday, June 20, 2010

MediaWeek (Vol 3, No 25): Joyce, Embedded Librarians, Cloud Computing Survey

You may have heard of the iPad & Ulysses controversy - here from the Observer:
Ulysses has what the racing fraternity call "form" in this regard. In 1926, for example, four years after its publication, the Cambridge English don FR Leavis decided that he wanted to quote from the book – which was then banned in Britain – in his lectures. He therefore wrote to the Home Office seeking permission to import a copy. For his temerity, he was then summoned by the university's vice-chancellor, who handed him a note from the director of public prosecutions revealing that the Cambridge police had been monitoring Leavis's lectures, and concluding with a recommendation that he "should be suitably and firmly dealt with". The publishers of Ulysses "Seen" are no doubt feeling relaxed and contented, on the grounds that if you can get round Apple's editorial control-freakery then you can get around anything. There is, however, one further possible fly in their ointment. His name is Stephen Joyce. He is the grandson of the great man and since the 1980s has been in sole control of his grandfather's literary estate. More importantly, his desire to control the uses of his literary property makes Steve Jobs look like St Francis of Assisi.
Robert McCrum on a busman's holiday in the Northeast (Observer):

In Washington I went to one of America's great bookstores – Politics and Prose on Connecticut Avenue – a beacon of traditional bookselling, run for the past 25 years by Barbara Meade and Carla Cohen. To the dismay of the locals, they have just announced their retirement and the business is for sale. But it's age (both women are in their 70s) not recession or competition from Barnes and Noble that's driving this decision. Meade told me that their book sales are actually up 30% in 2010. The US and Canada remain an enthusiastic and sophisticated book market. Unlike in Britain, there are hardly any festivals, but book clubs and reading groups make up the deficit, and everyone is a consumer, if not always a reader. A Martian would have to conclude that the thing called "the printed word" was enjoying a bonanza.

Embedded Librarians? (Inside HigherEd):

The model Roderer and her staff are pursuing is distributed not only in the sense that every researcher’s computer can access the library’s website and its vaults of electronic journal articles and e-books, but in that library personnel are embedded in various departments to work with researchers on their own turf. These staffers are no longer called librarians; they are “informationists.” (Roderer did not invent the term, but she prefers it to “librarian,” which she says evokes envoys from a faraway building rather than information experts whose skills are applicable anywhere.) Medical students, clinicians, and professors are loath to trek across campus to the library’s physical plant now that the majority of its collections are available in electronic format through its website, Roderer says. However, that does not mean the library’s staff is no longer of use to researchers, she says — nor does it mean the staff’s interactions with researchers need to be limited to e-mail and text-messaging.
Pew Research on the Future of Cloud Computing (Pew):

The highly engaged, diverse set of respondents to an online, opt-in survey included 895 technology stakeholders and critics. The study was fielded by the Pew Research Center's Internet & American Life Project and Elon University's Imagining the Internet Center. Some 71% agreed with the statement: " By 2020, most people won't do their work with software running on a general-purpose PC. Instead, they will work in Internet-based applications such as Google Docs, and in applications run from smartphones. Aspiring application developers will develop for smartphone vendors and companies that provide Internet-based applications, because most innovative work will be done in that domain, instead of designing applications that run on a PC operating system." Some 27% agreed with the opposite statement, which posited: "By 2020, most people will still do their work with software running on a general-purpose PC. Internet-based applications like Google Docs and applications run from smartphones will have some functionality, but the most innovative and important applications will run on (and spring from) a PC operating system. Aspiring application designers will write mostly for PCs." Most of those surveyed noted that cloud computing will continue to expand and come to dominate information transactions because it offers many advantages, allowing users to have easy, instant, and individualized access to tools and information they need wherever they are, locatable from any networked device. Some experts noted that people in technology-rich environments will have access to sophisticated-yet-affordable local networks that allow them to "have the cloud in their homes."

From the twitter: A good series of notes from John Mark Ockerbloom on bibliographic data and cataloging: #rmti A Failure to Communicate http://shar.es/mysOJ. In the lawsuit against Georgia St over e-reserves, scholarly publishing faces a defining moment. Reader's Digest Moves to Mend http://bit.ly/9ZizvU – less ambition, fewer staff, but looking to rebound. And in sports, England beat Australia and the rest is not worth mentioning. Lakers beat Celtics – always good.

Friday, June 18, 2010

Metadata Everywhere

An interesting article in OCLC's NextSpace publication about the increasing importance of metadata. Music to bibliographers' and catalogers' ears. (OCLC):
“Metadata has become a stand-in for place.”

So says Richard Amelung, Associate Director at the Saint Louis University Law Library. When asked to expand on that idea he explains, “Law is almost entirely jurisdictional. You need to know where a decision occurred or a law was changed to understand if it has any relevance to your subject.

“In the old days, you would walk the stacks in the law library and look at the sections for U.S. law, international law, various state law publications, etc. Online? Without metadata, you may have no idea where something is from. Good cataloging isn’t just a ‘nice-to-have’ for legal reference online. It’s a requirement.”

Richard’s point is one example of a trend that is being felt across all aspects of information services, both on and off the Web: the increasing importance and ubiquity of metadata. In a world where more and more people, systems, places and even objects are digitally connected, the ability to differentiate “signal from noise” is fast becoming a core competency for many businesses and institutions.

Librarians—and catalogers more specifically—are deeply familiar with the role good metadata creation plays in any information system. As part of this revolution, industries are placing increasing value on the talents of librarians and the ways in which they work, extending the ever-growing sphere of interested players.

Whether we are tracing connections on LinkedIn, getting recommendations from Netflix, trying to find the right medical specialist in a particular city or monitoring a shipment online, metadata has become the structure on which we’re building information services. And no one has more experience with those structures than catalogers.

Concluding:

“It is clear that metadata is ubiquitous,” Jane continues. “Education, the arts, science, industry, government and the many humanistic, scientific and social pursuits that comprise our world have rallied to develop, implement and adhere to some form of metadata practice.

“What is important is that librarians are the experts in developing information standards, and we have the most sophisticated skills and experience in knowledge representation.”

Those skills are being put to good use not only in the library, but in nearly every discipline and societal sector coming into contact with information.

Bibliographers Shall Inherit...Data Monopolies - Repost

I recently heard Fred Wilson speak and it reminded me of this post from February 5th, 2007:


Fred Wilson is a founder of Union Square Ventures, a venture capital firm located in NYC. He was also part of Flatiron Partners until he left to start Union Square. He was the keynote speaker at Monday’s SIIA Previews meeting and spoke about content; specifically, that content "wants to be free."

He ended the session with a potentially more interesting theme relating to tagging and content descriptions. In answer to a question about the potential power of social networks and the attendant tagging possibilities, he suggested that we shouldn’t have to tag information at all; that is, content should come adequately described for us. The questioner stated that ‘publishers are good’ at describing their content. Wilson disagreed, confirming (to me) that publishers are definitely not good at tagging or classifying their content. His comments confirm my belief that intermediaries that insert descriptors, subject classifications and other metadata to improve relevance and discovery will play an increasingly important role. Personally, I do not think the battle has yet been joined that will determine a single provider of standardized metadata within specific product or content categories. (Some players have clear positioning; take, for example, Snap-on Tools' purchase of ProQuest’s Business Solutions unit, which opens many intriguing opportunities – if you like car parts.)

You may think that books are effectively categorized by Amazon.com and that Amazon is therefore the standard. This is untrue: in fact there are several bibliographic book databases, and none of them is compatible across the industry. Additionally, while Amazon allows great access to their data, they are not a good cataloguer of bibliographic information; their effort is just enough to serve their own purposes. As a seeker of books and book (e)content, I want to be able to search on a variety of data elements (publisher, format, subject, author) and find what I am looking for regardless of the tool I am using. In my view, a single source of quality bibliographic information, distributed at the element level, would solve this problem. Suppliers of content are beginning to understand that the description of the content (metadata) is as important as the content itself.
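What element-level search over a single, normalized bibliographic record might look like can be sketched briefly. The field names (publisher, format, subject, author) follow the post; the record structure and the records themselves are invented for illustration:

```python
# A toy normalized book record and an element-level search over a catalog.
from dataclasses import dataclass, field

@dataclass
class BookRecord:
    isbn: str
    title: str
    author: str
    publisher: str
    format: str                       # e.g. "hardcover", "ebook"
    subjects: list = field(default_factory=list)

CATALOG = [
    BookRecord("9780000000001", "Ulysses", "James Joyce",
               "Example House", "hardcover", ["fiction", "modernism"]),
    BookRecord("9780000000002", "Ulysses", "James Joyce",
               "Example Press", "ebook", ["fiction", "modernism"]),
]

def search(catalog, **criteria):
    """Match records on any combination of elements.

    'subject' checks membership in the record's subject list; every
    other criterion must match its field exactly.
    """
    def matches(rec):
        for key, value in criteria.items():
            if key == "subject":
                if value not in rec.subjects:
                    return False
            elif getattr(rec, key) != value:
                return False
        return True
    return [rec for rec in catalog if matches(rec)]
```

The point of the sketch is that once every supplier describes a book with the same elements, any combination of elements becomes a valid query – `search(CATALOG, author="James Joyce", format="ebook")` – regardless of which vendor's tool issues it.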

It is really quite simple: a database provider needs to spend time standardizing their deep bibliographic content, distribute it to anyone who wants it, and then figure out how to make money doing that. Historically, a vendor had to create their own product catalog because either one didn’t exist or they preferred to build it themselves. Look at office products or mattresses: it is nearly impossible to compare items across vendors. Books and other media products are slightly easier, but the legacy of multiple databases continues to reduce efficiency. Management of a product database/catalog should never be a competitive advantage unless it is your business.

Fred Wilson asked: if information wants to be free, then where is the value in information? Unsurprisingly, it is in attention. To quote, "there is a scarcity of attention and narrowing users’ data ‘experience’ to mitigate irrelevance is the future." Furthermore, the ‘leverage points’ in the attention-driven information model are discovery; navigation; trust, meaning ratings around content (PageRank is a good example); governance; values; and metadata – data about the data. The likes of Google, Yahoo and Microsoft have the first couple of these well in hand, but they will all increasingly need good metadata describing the content they are serving up. This is where aggregators/intermediaries step in, whether the content be tools, TV programs and movies, advertising or books.

He has provided a link on his web site to the presentation from this meeting.

Thursday, June 17, 2010

Photo: Tehran August 1972

PND on the tarmac, Tehran 1972
This will be a weekly series of photos from my archive and I hope you enjoy the selection.


This photo is a good point of embarkation for my series:

Pan Am Clipper Pacific Trader (N744PA) is parked on the tarmac at Mehrabad International Airport in Tehran. That's me in the foreground clutching my Air New Zealand carry-on. My father, grandfather and I were traveling back to New Zealand after I had spent the summer in the UK and my father had attended a Columbia summer business program.

The 747 jumbo was still relatively new, and in those years the upstairs section was designed as a lounge and dining room. Dad worked for the hotels division of Pan Am and we regularly traveled first class, which included the upstairs section. In our experience there was never anyone up there, but there was this wonderful U-shaped couch which stretched around the back of the cabin. If you were quick and lucky and the flight attendants allowed, it was possible to grab a section and lie prone from Hong Kong to Karachi. As long as you were strapped in, they never woke you.

The 747 was too large for many airports at that time and so passengers tended to be bussed to the terminal. As the plane came to a stop in Tehran a contingent of armed troops came and encircled the plane with each soldier 10 or so yards apart. There they stood facing outward, guarding the aircraft until we departed. To this day I'm not too sure they were ever equipped to protect us.

I never kept track of the planes I traveled on, but I was able to identify this 747 from the photo. Curiously, it was loaned by Pan Am to Iran Air about two years later. I like the two well-dressed travelers at the bottom of the stairs – a sight not often seen nowadays.

Photo: Flickr

Tuesday, June 15, 2010

BISG Revising Sales Reporting: Take a Survey

From BISG:

SHORT SURVEY ON DATA MODEL FOR COLLECTING BOOK INDUSTRY STATISTICS

Dear Book Industry Colleagues,

This is an exciting time!

As mentioned in the notice attached, for years AAP and BISG have developed data separately on the size of the market for books—in the aggregate and by market sector. We have used different models and produced different results. Now, we’re working together to develop a new data model to track book industry statistics and to dramatically improve our capacity to estimate the size of market sectors and the industry as a whole.

AAP and BISG have retained Management Practice, Inc. (MPI) to develop a prototype data model. A committee of representatives of AAP and BISG members is overseeing the process.

Now, you're invited to join us!

We expect participants from every sector of the book business will find that the improved accuracy and timeliness of data will lead to better business decisions and more effective public advocacy. We welcome your involvement in the process.

Over the next couple months, AAP, BISG and MPI will be contacting our members through interviews and surveys to determine the usefulness and accuracy of the new data model as it develops. The following brief survey stands at the beginning of this process. Please take the time to let us know what you think about the direction we're going by submitting your response no later than Friday, June 25, 2010 at 5:00 p.m. Eastern.

We look forward to working with you on this extremely important industry initiative.

LIVE SURVEY LINK: http://www.surveymonkey.com/s/P98PS2N

SURVEY DEADLINE: FRIDAY, JUNE 25, 2010 @ 5:00 p.m. EASTERN