Centralizing metadata, content and assets: Paradise Lost and Regained

Posted by Christopher Hill on Feb 23, 2011 5:46:00 PM

I've been working in content management for more than ten years, and thinking back over that time I realized that the dream I naïvely espoused in the late 90s of a true, standards-based central repository for all of an organization's assets still hasn't become a reality except in the narrowest of applications. When I used to write and teach XML classes I was sure that open markup standards were going to revolutionize the way we created and managed assets. Around 2003 I started to become a bit disillusioned with my vision of content utopia. By 2008 I had all but thrown in the towel. Despite herculean efforts, content kept worming its way into proprietary, tactical-level production systems and was often never seen or heard from again, a victim of the legacy of "fire and forget" publishing approaches common before the rise of the Internet.

Fortunately, just as I had resigned myself to living in a world of content silos, new strategic ways of managing content started to emerge that rekindled my ideals. The idea is more modest than the grandiose vision of pure standards I once embraced, but it offers a new, more practical approach that can survive in the real world.

Rather than insist that every asset be centralized in a consistent, preferably open, format, practicality may dictate that we instead work to build a centralized asset repository that shares common representations for all assets. The actual bits and bytes making up the asset (Word documents, InDesign files, photos, videos, etc.) can still be developed and stored in traditional systems where applicable, but a new system takes on the responsibility of cataloging relevant features and details about the asset in a centralized repository. So instead of insisting that every asset be physically managed in a central repository, we make the much more modest - and realistic - demand that all assets make relevant, common data and metadata available in a consistent format through a centralized system. This distinction means that rather than try to replace the tactical systems we use to create, manage or distribute content, we instead develop a parallel, complementary content management strategy that reflects data in these systems and presents a common, consistent view of the asset regardless of type.
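
To make this concrete, here is a minimal sketch of what such a common asset record might look like. It is illustrative only: the field names (asset_id, source_uri, renditions, and so on) and the example values are hypothetical rather than a prescribed schema. The point is that every asset, whatever its native format, gets the same lightweight, machine-readable entry in the central repository while the binary itself stays in its home system.

```python
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    """A common, lightweight representation of an asset in the central repository.
    The original bits stay in their native production system; we only catalog them."""
    asset_id: str            # repository-wide identifier
    asset_type: str          # "image", "story", "video", ...
    title: str
    source_uri: str          # pointer to the original file in its home system
    source_format: str       # native format of the original ("PSD", "InDesign", ...)
    metadata: dict = field(default_factory=dict)    # subjects, rights, dates, ...
    renditions: dict = field(default_factory=dict)  # standard formats any system can use

# An image from a DAM and a story from an editorial system, described the same way
cover_photo = AssetRecord(
    asset_id="img-0042",
    asset_type="image",
    title="Tahrir Square at dusk",
    source_uri="file://dam-server/photos/tahrir_dusk.psd",
    source_format="PSD",
    metadata={"subjects": ["Egypt", "Cairo"]},
    renditions={"thumbnail": "img-0042/thumb.jpg", "preview": "img-0042/preview.jpg"},
)
feature_story = AssetRecord(
    asset_id="story-0108",
    asset_type="story",
    title="Eighteen days in Cairo",
    source_uri="file://editorial/stories/cairo_feature.indd",
    source_format="InDesign",
    metadata={"subjects": ["Egypt"]},
    renditions={"xml": "story-0108/story.xml"},
)
```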


So an image file may exist as a TIFF- or PSD-formatted file in a production system or on some hard drive somewhere, but the centralized repository maintains a record for this image with all of its relevant metadata and a standard image format readily accessible to any system (e.g. PNG or JPG thumbnails and applicable preview formats). For a lot of applications, these centralized, lighter-weight representations of content are enough to create new products without returning to the original systems. For example, if I want to rapidly re-use images or stories on a new microsite, I don't have to track down all of the content in its silos; instead I rely on these common representations to collect the assets together and send them into my Web CMS for the new microsite. Formats, conversions, and so forth can either be provided to the central system through traditional manual conversion or, preferably, through automated mechanisms built into existing content workflows.
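
As one way those automated mechanisms might work, here is a minimal sketch of a rendition step, assuming the Pillow imaging library and hypothetical file locations: it takes a production TIFF and produces the JPEG preview and thumbnail that the central repository would catalog alongside the pointer to the original.

```python
from pathlib import Path
from PIL import Image  # assumes the Pillow imaging library is installed

def build_renditions(source: Path, out_dir: Path) -> dict:
    """Create web-friendly renditions of a production image (e.g. a TIFF) and
    return the paths the central repository would record for this asset."""
    out_dir.mkdir(parents=True, exist_ok=True)
    renditions = {}
    with Image.open(source) as img:
        rgb = img.convert("RGB")  # JPEG cannot carry alpha or CMYK data
        preview = out_dir / f"{source.stem}_preview.jpg"
        rgb.save(preview, "JPEG", quality=85)
        renditions["preview"] = str(preview)
        rgb.thumbnail((256, 256))  # resize in place, preserving aspect ratio
        thumb = out_dir / f"{source.stem}_thumb.jpg"
        rgb.save(thumb, "JPEG", quality=85)
        renditions["thumbnail"] = str(thumb)
    return renditions

# Hypothetical paths; in practice this step would run inside the production workflow
renditions = build_renditions(Path("photos/tahrir_dusk.tif"), Path("repo/img-0042"))
```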


This sort of approach was attempted with search technologies at one time, but it lacked the depth of content management required not just to find an asset but also to use and transform it. It gave us the ability to view the content but no tools to do anything with it once we found it. Search remains important, but a real central repository needs usable representations of content that can be managed, transformed and distributed as assets in their own right. That requires a full content management system.

So my new vision of a centralized asset repository is not the be-all, end-all "do everything" system that becomes impossible to design and build; it's a "do some things" central system that maintains a consistent, common format that can be readily transformed and transmitted, and that becomes an organization's strategic content reserve. It can quickly answer questions like "what assets do we have about Egypt?" and serve as a baseline for those assets so that, once found, they can be used in our various tactical systems.
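
As a sketch of how such a strategic content reserve might answer that question, the snippet below filters the hypothetical AssetRecord entries from the earlier sketch by their subject metadata. A real repository would use the CMS's own search and query facilities rather than an in-memory scan; this only illustrates the idea.

```python
from typing import Iterable, List

def assets_about(catalog: Iterable[AssetRecord], subject: str) -> List[AssetRecord]:
    """Return every cataloged asset whose subject metadata mentions the given term."""
    term = subject.lower()
    return [
        record for record in catalog
        if any(term == s.lower() for s in record.metadata.get("subjects", []))
    ]

# "What assets do we have about Egypt?" (the catalog reuses the records sketched above)
catalog = [cover_photo, feature_story]
for asset in assets_about(catalog, "Egypt"):
    print(asset.asset_id, asset.title, asset.renditions)
```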


To build such a system, consistent representations are needed. For data, we of course start with XML. When only a binary will do, the useful model is to maintain accurate pointers to the original assets and to create appropriate renditions of the binaries for purposes like the central repository's user interface. Even if some re-work is required later, the assets are already under active management.

The RSuite Content Management System happens to be a great foundation for building shared, managed, centralized repositories of content. The system is flexible, built on a native XML database with a metadata model that can not only leverage existing metadata but also be extended in arbitrary ways to adapt to evolving requirements. It is built on open standards and is a good corporate citizen, ready to interoperate with existing systems. The native XML database and pointer management features ensure that consistent representations are available. This approach creates a solid foundation for a strategic, centralized asset repository.

Part of my role as Product Manager for Really Strategies will be to focus on the ways that our existing clients have adopted XML-based content management. I'll be reporting here on the blog with client success stories about building these content repositories.

Does your organization have a vision for managing content strategically? It would be great to hear how others are working to address this challenge.

Topics: content management for publishers, publishing, CMS, best practices, XML, metadata

Metadata lessons from Google Books

Posted by Lisa Bos on Feb 15, 2011 9:24:00 AM

This Salon interview with Geoffrey Nunberg about Google Books' unfortunate use of metadata is fascinating as an illustration of why a publisher implementing a CMS should focus as much (maybe more) on metadata as on anything else. Bad metadata leads to all sorts of problems, and unfortunately it's self-reinforcing: bad leads to worse as users repeat mistakes, act on inaccurate search results, and ultimately come to distrust the system. By "focus on metadata" I mean publishers implementing a CMS should take care with:

  • modeling metadata
  • creating controlled lists and taxonomies
  • designing automated and manual tools for assigning metadata
  • developing automated validation tools to ensure quality (a minimal sketch follows this list)
  • developing search that leverages metadata
  • designing user interfaces that make metadata easily visible in various contexts (browse, edit, search results, ...) to encourage consistent usage and metadata correction/entry whenever it's convenient to the user
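
As a small illustration of the automated-validation bullet above, here is a minimal, hypothetical sketch: it checks a record's metadata against a controlled subject list and a set of required fields before the record is accepted, the kind of guard that keeps bad metadata from quietly accumulating. The vocabulary and field names are illustrative, not a prescribed RSuite model.

```python
# Hypothetical controlled vocabulary and required fields; a real implementation
# would load these from the taxonomy managed in the CMS.
CONTROLLED_SUBJECTS = {"Egypt", "Cairo", "Metadata", "Publishing"}
REQUIRED_FIELDS = ("title", "subjects", "rights")

def validate_metadata(record: dict) -> list:
    """Return a list of human-readable problems; an empty list means the record passes."""
    problems = []
    for field_name in REQUIRED_FIELDS:
        if not record.get(field_name):
            problems.append(f"missing required field: {field_name}")
    for subject in record.get("subjects", []):
        if subject not in CONTROLLED_SUBJECTS:
            problems.append(f"subject not in controlled list: {subject!r}")
    return problems

record = {"title": "Tahrir Square at dusk", "subjects": ["Egpyt"], "rights": "staff"}
for problem in validate_metadata(record):
    print(problem)  # flags the misspelled subject "Egpyt" for correction
```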

Here's Nunberg's original article in the Chronicle of Higher Education from August 2009 and a related blog post. The topic is fascinating at face value as well, as it relates to the usefulness of Google Books for different usages by different users with different expectations. The comments on Nunberg's article and blog post illustrate effectively that smart, well-intentioned people strongly disagree on the value of metadata, or of particular types of metadata, as compared to the benefits of "simply" making content available through full-text search.

This basic disagreement often shows up during design projects for RSuite CMS implementations. Leaders within a publisher need to reach agreement about which metadata will truly be of value internally and to readers, and about which types of usage are most important to support. They also need to determine the cost/benefit ratio (metadata is often relatively expensive to do right). If they can't reach such agreements, then it's also unlikely they will consistently and usefully build and leverage tools for metadata in the first place - thus creating a self-fulfilling prophecy on the part of the full-text-instead-of-metadata advocates.

Of course, there's also a role here for a technology vendor like Really Strategies - we need to make it as easy as possible for publishers to take the steps in the bulleted list at the top of this post, so that the human effort required to make metadata really valuable is also really efficient.

Topics: content management, publishing, metadata, Google Books
