There has been a steady stream of upbeat, expansive reports on the future of mobile marketing and advertising (most of which, if you track these sorts of predictions, tend to wildly overshoot their mark), coupled with prognostications that the internet as we know it is about to be subsumed by the mobile internet, which is being treated as a completely separate entity. Whether or not you agree with this (I don't), it does imply a significant and fundamental shift in how content is created and managed.
The numbers for mobile access to the internet are huge; even the most conservative estimates put the number of users in the billions, which means a vast number of consumers are, and will continue to be, looking for content that is packaged and delivered in bite-sized chunks. Most content consumed on-line is not consumed in its entirety; people are generally looking for specific data points (I may only need a paragraph of information that is embedded in an 800-page document), which means that content management standards such as DITA have been well ahead of the consumption curve for some time. Other technologies such as SMS were created with a haiku delivery format to begin with (a 160-character limit, which Twitter trimmed further to 140, forces pithiness), and that format has clearly resonated with the younger end of the consumer market (hence the success of Twitter, which is basically an SMS overlay).
DITA has always been about minimalism and brevity, as has Twitter. The interesting aspect of this is that DITA applied to complex content management has worked extremely well. The question becomes: what happens when the 140-character world intersects with the structured content (DITA) world? Complex content has always run in parallel to SMS/Twitter, but that also assumed two separate networks: one network for content-heavy access from a fixed-point device, the other focused on mobile access where information is tightly parameterized. With the increasing prevalence of smart devices, that whole model is being flipped, and at this point most content running across Twitter and other SMS-based networks has no standard for referential integrity the way DITA-ized content does. While this may sound wonky, it's worth considering that many media companies and consumer-facing companies are becoming increasingly focused on Twitter as a content delivery mechanism without the benefit of content management standards. In other words, the mobile internet runs the risk of becoming an information morass, similar to the way the internet looked in the mid and late 90s.
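To make the contrast concrete, here is a minimal sketch of what a single "DITA-ized" component looks like, built with Python's standard library. The element names follow DITA's concept topic type; the topic id, title, and body text are invented for illustration, and the helper function is hypothetical, not part of any DITA toolchain.

```python
# A minimal sketch of a DITA-style concept topic, assembled with the
# standard library.  Element names follow the DITA "concept" topic type;
# the content itself is illustrative only.
import xml.etree.ElementTree as ET

def build_concept(topic_id: str, title: str, body_text: str) -> str:
    """Assemble a tiny DITA concept topic and return it as an XML string."""
    concept = ET.Element("concept", id=topic_id)
    ET.SubElement(concept, "title").text = title
    conbody = ET.SubElement(concept, "conbody")
    ET.SubElement(conbody, "p").text = body_text
    return ET.tostring(concept, encoding="unicode")

xml_topic = build_concept(
    "replace_filter", "Replacing the filter",
    "Power down the unit before removing the filter housing.")
print(xml_topic)
```

The point of the structure is the addressability: every topic has an id that other content can reference, which is exactly the referential integrity that a stream of raw 140-character messages lacks.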
There appears to be a growing trend among content publishers of using targeting technologies to focus and solidify their relationships with their customers, rather than passing customer data to their ad network ecosystem. Although the benefits of this type of process shift seem obvious, you wonder what took publishers so long to figure this out.
The core driver of a relationship with a consumer is the product or service purchased (and I define this as the entirety of the relationship: the product includes advertising, the product or service itself, any supporting documentation, the website, customer service; basically any interaction between the consumer and the provider is part of the product). In the context of this trend, it refers specifically to publisher content; the parameters and content that define my search for a specific piece of information or publication speak volumes about who I am, and are one of the most valuable elements of consumer data available. The consumer's interaction with the publisher's content is effectively handing the key to the kingdom to the publisher. The publishers then, oddly enough, toss this data to the ad networks, where it immediately becomes diluted, taken out of context, and resold over and over.
There is a hierarchy of intimacy in any consumer transaction; the most direct connection is when it is just the buyer and seller, the least direct is when multi-channel or advertising networks enter the equation. The more entities involved in the transaction, the more diluted the relationship becomes, because more people are getting their little piece of the consumer. Because publishers are the source, they are in the perfect position to establish pompetus (my favorite term, explained in my post for 1.24.09). Depending on what the publisher is offering (and we'll skip the most obvious example), they have an unprecedented opportunity to genuinely connect with the customer. Why in the world would a publisher give something that valuable to an ad network? Money? They can make far more money by hanging onto the information and cultivating a profitable long-term relationship on their own. The ad networks are already picking up vast quantities of information as consumers troll the web; the additional data the publishers provide is a relative drop in the bucket to the ad network, but a relative gold mine to the publisher. It looks like publishers are finally starting to figure this out, and given the overall trends in publishing, not a moment too soon.
Mobility trends are clear and undeniable; it is not just that the global market for mobile devices is now measured in the multiple billions, but also that the next generation of mobile devices has gained traction at a remarkable rate. While smart phones account for only slightly more than 15% of mobile devices worldwide, that's 15% of several billion, which is itself a pretty respectable number. In the long run, it will be interesting to see the dynamics between terminal devices with access to SaaS applications and smart phones that provide processing services on the device itself. In both cases there is a significant content play, since the main use of a smart phone or SaaS feature phone is data-centric rather than voice-centric applications. Content not only needs to be mobilized, it needs mobilization across a broad range of media deliverables. The whole point of a content management system is to control complex information resources (including integration into back-end systems) and deliver the right information at the right time, and more often than not, to a mobile device. For field service personnel who require access to complex rich media information, this implies the content system needs to manage virtualization (it doesn't matter what the access device looks like, the information should always look and act the same) and synchronization (the fact that I've accessed and perhaps changed my files while on a mobile device should not result in multiple versions of the same file). This, of course, is in addition to baseline mobile requirements such as security, compliance with IT governance, etc.
The fact that information resources can be meta-tagged and categorized via some form of vertical ontology (that is, tagged and bagged), independent of the media type, means that when a field worker looks for information, the content management platform becomes a mobile enabler. It doesn't (and should not) matter what the media type is; video can be tagged and bagged, just like text documents, graphics, .wav files; essentially any resource that has relevant information can be organized and stored at the component level, and assembled on the fly in response to a query (e.g. "what is the proper procedure for replacing the armature platter on an MRI scanner?"). Having a medical/technology ontology that categorizes tagged and bagged information resources means the field worker receives a full-blown rich media response to this query: this can include a text description of the procedure, a video tutorial, graphics that can be exploded and rotated, a voice walk-through, and so on.
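The tag-and-assemble idea above can be sketched in a few lines. This is a toy model, not any vendor's implementation: the ontology terms, file paths, and the `assemble` function are all invented to show how components of different media types, sharing the same tags, come back together in answer to one query.

```python
# A sketch of component-level "tag and bag": each information resource
# (text, video, graphic) carries ontology terms as metadata, and a query
# is answered by assembling every matching component regardless of media
# type.  All names and paths here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    media_type: str          # "text", "video", "graphic", "audio", ...
    uri: str
    tags: set = field(default_factory=set)

repository = [
    Component("text",    "procedures/armature.dita",     {"mri", "armature", "replace"}),
    Component("video",   "media/armature_swap.mp4",      {"mri", "armature", "replace"}),
    Component("graphic", "media/armature_exploded.svg",  {"mri", "armature"}),
    Component("text",    "procedures/coil.dita",         {"mri", "coil"}),
]

def assemble(query_tags: set) -> list:
    """Return every component whose tag set contains all query terms."""
    return [c for c in repository if query_tags <= c.tags]

for c in assemble({"mri", "armature", "replace"}):
    print(c.media_type, c.uri)
```

Because matching happens on the tags rather than the media type, the query for the armature procedure pulls back both the text topic and the video tutorial in one pass, which is the whole point of categorizing at the component level.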
In this context, a component content management system becomes a mobilization platform for rich media enterprise applications, and this can also be expanded to include supply chain partners that may feed information resources into an assembled product field guide. The combination of mobility requirements and the drive towards rich media is breathing new life into the content management domain across a broad range of vertical markets.
Once a sale has been made, the customer has a limited range of interaction points with the vendor. There is the product or service itself, there is the documentation that supports the product (which is now more frequently delivered on-line than in hard copy), there is the monthly bill, and there is the customer support center.
In most companies all these elements are treated separately from a process and personnel point of view. Customer support rarely interacts with product management, product marketing, documentation groups, billing, or sales, yet all these groups revolve around the same core: the customer. Sales is overtly focused on customers; product marketing focuses on customers, but normally at an aggregate level; product management interacts with select customers to define requirements for product roadmaps (unless they've implemented crowd-sourcing, which most companies have not); billing interacts with customers on a monthly basis; documentation rarely interacts directly with the consumer (in spite of the importance of the deliverable); and customer support deals with customers on an exception basis, when something isn't working the way it's supposed to.
Every one of these interactions provides product or service vendors with an opportunity to get a step ahead of the customer and anticipate requirements (and in all fairness, most people do take their customers seriously). The problem is that all these groups are probably dealing with the exact same customer, gaining multiple perspectives on incredibly valuable information, yet a holistic and integrated view that factors in all organizational facets in the context of the customer is rarely available. The best example of this is probably the customer support center. Why? Most people are calling because they're having a problem; the product doesn't work, the documentation isn't clear, something is not working, etc. This is an ideal opportunity to gain meaningful insight because the customer is dealing with the vendor on an emotional level; as a vendor you're actually much more likely to get unambiguous feedback from someone who's upset, and of all the organizational groups, Customer Support is most likely to take the hit.
In most instances we find the core problem is ambiguity in documentation; complex products that are poorly documented can be a nightmare for the customer, which draws a direct correlation between the documentation and customer support groups, who rarely, if ever, interact. This is another push for a fully integrated rich media deliverable: pictures speak louder than words, and moving pictures speak even louder. Providing a direct feedback loop to customer support through the documentation deliverable (which is quite possible in an on-line model) closes the focal gap that keeps most companies guessing about their customers' intentions.
When mobile services were first introduced on a global level, one of the deployment issues that received a lot of attention was whether or not to charge for enhanced network services (that is, anything above basic connectivity). Most carriers at the time were looking for rapid adoption, and were adamant about giving everything away. I worked as a consultant at that time, and my consistent push back was no, you have to charge something, even if it's only a symbolic amount. If you create an initial mindset that enhanced services are free, you'll never be able to charge consumers for anything in the future. And now, several years later, the on-line and mobile content space is facing the same conundrum.
There have been a lot of business models developed and deployed over the years to try to commercialize the vast amounts of on-line content continuously generated by professional authors, the news media, and consumers. Most commercialization models fall into a few categories: ad hoc pricing (popular with the analyst communities), subscription pricing (popular with the news media), advertising-supported content delivery (media as well as User Generated Content), and, of course, free content (like this blog).
One of the core challenges content creators face is driven by the need for utility, uniqueness, and monetization. If I'm a market research professional with a large organization, and I need a report on a very specific topic in a hurry, I can trot over to an analyst site and cough up $3000 for a copy of exactly what I need. The information has high utility, it is unique, and I have the means to purchase it. However, if I'm doing research for myself, then that $3000 is coming out of my own pocket, and given the opportunity cost of $3000 (for example, a 65 inch LCD monitor), I am much more inclined to take my time and try to find a free version of the same material. If I'm lucky enough to find the free version, it doesn't matter if it's supported by advertising; clicking on the ads is optional, and the important thing is that I've found what I want at a much lower price point.
This utility/uniqueness model works when the information is a reflection of extensive research and analysis done by experienced professionals. The problem facing the news media is that most of what they cover is current events, with limited analysis. If you want a quick and dirty overview of swine flu, you can find unlimited sources of information without having to pay a penny; therefore anyone trying to charge for it is one click away from being out of luck. They have utility, but they lack uniqueness.
User generated content faces an even higher hurdle; the barriers to entry are essentially nil, the rate of content generation is staggering, and as I've mentioned in prior blogs, very little of the information being created is organized or tagged, and is therefore going to be very difficult to find or syndicate. This ecosystem is then further muddied by folks who are trying to create device-specific readers such as the Kindle, or the recent announcement by News Corp that it is looking to create a delivery device specifically for its own content. The default access device for any network-based content is going to be a smart phone; you can build devices like the Kindle, but if you can get the same content on an iPhone, why bother with a new device that only serves a single purpose?
The only area that is a solid bet for direct monetization is high value information written by experienced professionals (utility and uniqueness). User generated content and news can generate revenue through advertising, but it is a secondary effect (that is, people are not paying for the content directly). Content creators in this group will still make money, but it requires a much broader reach, since click-through rates represent a small percentage of the people who view the content. The more interesting challenge is how to apply monetization schemes to the 7th mass media channel (see the post from 3.3.09 for more detail). I will address that in my next post.
The mobile internet has been defined as the 7th mass media channel. For those unfamiliar with the expression, the prior six mass media channels are print, recordings, cinema, radio, television, and the internet, which is distinct from the mobile internet. What makes this particularly interesting are the usage numbers: 900 million personal computers in use at the end of 2007, 1.3 billion internet users, but over 3.3 billion mobile subscribers (including 798 million WAP users, the mobile version of the internet, and 2.4 billion people using their phones for SMS texting). Not only are the usage numbers for the mobile internet far larger, they are growing far faster than the numbers for the traditional (PC-centric) internet.
Why do these numbers matter? Because they indicate a permanent shift in how people receive and send information. It’s a reasonably safe assumption that if you’re reading this, you have a PC somewhere, which you access frequently. It’s an ironclad assumption you have a cell phone, which is always with you, and always on. Is your PC always with you and always on? Unlikely, even if it’s a small laptop.
In addition to the always on/always with you convenience of mobile devices, the other core influence for the mobile experience is the size of display real estate on a mobile device; the small footprint forces efficiency in visual communications. Combine that with text limitations of 140 characters per SMS message, and you have literally billions of people who are evolving to a lifestyle where they only receive information in bite-sized chunks.
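Delivering to that lifestyle ultimately means chunking: taking a passage and breaking it into pieces that fit the payload limit. A minimal sketch with the standard library (the limit and sample passage are illustrative; a real pipeline would chunk on semantic boundaries, not just word boundaries):

```python
# A sketch of "bite-sized" delivery: split a passage into chunks that fit
# an SMS-style character limit, breaking on word boundaries.
import textwrap

def chunk(text: str, limit: int = 140) -> list:
    """Split text into pieces no longer than `limit` characters."""
    return textwrap.wrap(text, width=limit)

passage = ("Most content consumed on-line is not consumed in its entirety; "
           "people are generally looking for specific data points, which "
           "means content must be packaged and delivered in small chunks.")
for i, piece in enumerate(chunk(passage), 1):
    print(f"[{i}] {piece}")
    assert len(piece) <= 140
```

The mechanical split is trivial; the hard part, which structured content standards like DITA address, is making sure each chunk still carries enough context to stand on its own.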
Because mobile devices are now the dominant information tool for the mass-market, there is also a corollary shift underway in how information is created, managed, and delivered. This is one area where rich media component content management systems are actually ahead of the curve; these systems were designed against standards that demand a minimalist efficiency (such as DITA), and are set up on the assumption that fast access and pithy delivery are the key drivers.
Similar to the social sites' need for a hierarchical rich media content management infrastructure, the mobile internet requires structured access to broad stores of information, but delivered with a more condensed payload, a faster cycle time, and lots more potential for re-use and syndication. Traditional CMS systems are going to find themselves in a world of hurt with this new model, while component content management vendors are going to be facing a near-greenfield opportunity.
One of the good news/bad news developments playing out with social networks revolves around the vast amount of data being created and uploaded every minute. On one level the model works; sites like Facebook, MySpace, Hi5, etc. are pulling in members at enviable rates, but more importantly, the members are active users of a broad range of rich media technologies. User generated content is the key driver for success for social networks, and it is being generated in staggering volumes. That’s the good news.
The not so good news is that this content is poorly organized; the vast majority of people uploading rich media files onto social networks haven’t got the slightest idea of what metadata or vertical taxonomies are, much less how to classify what is being uploaded. While taxonomies or metadata may sound like wonk-speak to most people, they are a core requirement if anyone plans to find anything on a social network website.
By comparison, most content generated in a corporate setting is created by professionals who categorize the information, either manually or using applications delivered by content management systems. This works because most corporations have a vertical taxonomy that is specific to their use of language; pharmaceutical companies, chemical manufacturers, medical device manufacturers, etc. all use language that is specific to what they do. The information is categorized according to the organizational rules for that taxonomy, on the assumption that easy access is the key deliverable for any content generated.
This model works fairly well for text-centric content in a structured corporate setting, but less so in an unstructured social setting, and even less so for rich media such as videos, audio files, and ad hoc web pages (think of anything on Facebook). The social scenario is further exacerbated by the fact that users create rich media content, upload it to their computer, then upload it again to a photo site like Flickr or Photobucket, where it is then shared far and wide across a broad range of applications and networks, and/or is subsequently syndicated.
So the challenge here is how can rich media be categorized in a semi-automatic fashion, using tools that are easy enough to use that any Facebook user will intuitively start categorizing their data, ideally without even knowing they're doing it? And this only covers the search angle within the first place the data lands after it leaves the user's computer. How about all those folks trying to syndicate videos, where there are multiple layers of use and re-use? Using distribution tools like RSS feeds to syndicate data across a broad range of integrated social networks is like firing into the dark.
And finally, who is in the best position to drive the development and implementation of a standard to define categorization of rich media? It won't be the end users; they'll just move on if things don't work the way they're supposed to. Standards bodies are a viable choice; several, like OASIS, are already driving initiatives across a broad range of content schemas like DITA, and this would be a natural fit for them. However, the sector that really has its neck stuck out is the social networks; the development of categorization standards for social networks goes beyond basic exchange of information (for example, OpenSocial), and needs to focus on core value deliverables such as search and syndication. Social networks' value is in their content; that's the whole point of the network. If millions of users can't find anything, and can't find a graceful way to distribute what's been uploaded across all the multiple social sites to which most of them belong, the entire thing will eventually collapse under its own weight.
There is a shift underway in how Facebook users communicate with each other, specifically with the increased use of one-to-one, or one-to-a-few, video communications. Video as a base concept has already received strong traction on-line, but as anyone who has killed time on YouTube knows, the model so far has been one-to-many. This is effectively entertainment video, which is not the same thing as communications-centric video. A good analogy would be the introduction of enhanced network services on the wireless network a few years back, the most obvious example being integrated voice mail that is part of the service delivery of any wireless carrier. Voice messages are not left for entertainment purposes (most of the time), but to provide specific information to the recipient. The increasing use of video within social networks is likely to follow a similar pattern, with the exception that because there is an additional, significant dimension, the overall behavior of users is likely to shift. As an example, sit down with someone and ask them a few simple questions, then do the same thing with a video camera pointed at them. People are far more self-conscious when on camera, and as a result they behave and communicate differently.
I think once people become acclimated to transactional communications in a video format, the self-consciousness will start to ease, and this will just become another evolution in network based social communications. Now of course, the real question for the folks who provide the technology and enabling infrastructure is, how do we make money at this? While YouTube has been wildly successful in terms of usage, the company is still struggling to monetize its vast content repository, and this is likely to be even more the case for one-to-one video communications, since it is not entertainment oriented (does anyone want to watch a video of my wife telling me what to pick up at the grocery store? Heck, I don’t even want to watch it.).
The value of any network based service is driven by how many people use it; Hotmail is a great example of this. The value of any one Hotmail user to generate revenue is limited, but the value of the aggregated Hotmail installed base is worth millions or billions. For social networks like Facebook that are sticking their necks out and offering video messaging, the same thing applies: don't worry about the monetization aspects yet, instead focus on delivering an intuitive, high-value service. If Facebook (or others) can create a vast network of transactional video communicators, it will become worth multiple billions within a (relatively) short period of time.
For years the content and data worlds co-existed in relative isolation from each other. Content was the province of authors, reviewers, editors, people who were responsible for communications in written form. The data analysts, architects and developers operated in their own little esoteric world, and rarely came in contact with the content folks. The sudden rise of the internet triggered a fundamental shift in the content model, which has accelerated with the expansion of integrated rich media applications driven by meta-data management. Because of the increasing prevalence of standards such as XML, the content world is finally catching up to the data world in terms of creation, distribution, and manipulation of its operational models.
Data-centric models have always had a huge advantage over content-centric models because of the level of granularity and manipulation they afforded end users. Now that content can be reduced into snippets that still maintain context and relevancy, these content elements can be stored in an object database and manipulated by ontology-driven tools. It appears the content world has finally caught up to the data world in terms of developing a fine-tuned grasp of its underlying information.
The implications of this are significant; for decades the advertising and marketing industries have been limited to a one-size-fits-all consumer outreach model, and even now the best alternative offered by behavioral targeting firms is a cluster that numbers in the thousands and still only manages a response rate of less than 2%. Content needs to be architected, just like data; this has nothing to do with the narrative or creative process, it has to do with how information will be managed so that it can be reused, repurposed, and targeted to a much finer level of execution. When the content folks finally figure out what the data folks have known for years, you'll start to see response rates on marketing initiatives climb steeply, because the customer experience has become much more relevant, or as I prefer to say, we can now target a cluster of one.
How do you deliver against a core motivation? The first step would be to define the specific motivation in the context of a delivery framework. There are a lot of companies out there that provide demographic segmentation services, grouping consumers into cute-sounding clusters for purposes of selling lists to retailers who can then target them as a whole. This is an adequate solution if your smallest acceptable level of granularity is measured in the thousands, but even with this level of analysis and filtration, most retailers are lucky to get a response rate in excess of 2% (or as I prefer, a 98% failure rate). Why does this continue to happen, and why are retailers willing to accept such a dismal response rate? One option is to limit your initiatives to the 2% that you know are going to buy, which, if you can sort them out of the larger cluster, jacks your response rate up into the 90+ percentile. You actually end up with the same new/existing customers, but you haven't wasted time and effort trying to get the attention of people who do not and are unlikely to care. It's actually possible to have a cluster of one (I've worked with start-ups who've achieved this level of granularity); the problem, at that time, was the lack of technology to create a message for that one person cost-effectively. A few years later, while working at another start-up, we developed the ability to create content at a granular enough level that micro-messages could be crafted on the assumption that the end destination was a cluster of one. So how do you link these two concepts? That's one of the things I am working on now, and so far the results look very interesting. More on this later.
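The arithmetic behind "sort the 2% out of the larger cluster" is easy to sketch. This toy example assumes you have a propensity score per prospect (the scores, the 0.9 threshold, and the tiny list are all invented): contacting only the high scorers leaves you with roughly the same buyers while the measured response rate jumps.

```python
# A sketch of the "limit your initiatives to the prospects you know will
# buy" idea: score each prospect, contact only those above a threshold,
# and compare response rates.  Scores and threshold are hypothetical.
prospects = [
    {"id": 1, "score": 0.95, "responded": True},
    {"id": 2, "score": 0.10, "responded": False},
    {"id": 3, "score": 0.92, "responded": True},
    {"id": 4, "score": 0.05, "responded": False},
    {"id": 5, "score": 0.91, "responded": False},
]

def response_rate(group: list) -> float:
    """Fraction of a contacted group that actually responded."""
    return sum(p["responded"] for p in group) / len(group) if group else 0.0

targeted = [p for p in prospects if p["score"] >= 0.9]
print(f"blast everyone: {response_rate(prospects):.0%}")
print(f"targeted only:  {response_rate(targeted):.0%}")
```

The untargeted blast reaches all five prospects for two responses; the targeted list reaches three for the same two responses, so the rate climbs while the wasted contacts disappear. The open problem the paragraph describes is producing a distinct message per prospect once the cluster shrinks toward one.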
I caught a quick snippet on the radio recently of Steve Martin's "Let's get small" comedy routine, and it triggered a thought process on applying the concept of "getting small" to content management. Just how small is small, defined in terms of being useful, and how would you use something really small? Early content management solutions focused on managing large document sets, measured in the hundreds or thousands of pages. As the underlying technology has improved, the unit of reference has continued to become more granular, to the point where we can now comfortably manage sentence fragments across a broad range of deployments. Why does this matter? Because the contextual use of "small" has become more relevant with the rise of mobile media; a two-inch screen presents a different set of challenges for delivering an advertisement to an end user. This would not have been possible a few years ago, and there would have been no pressing need for it. Now there is a need, and it is also possible to deliver micro-ads in context, driven to a great extent by the rise of Component Content Management systems. But how do you pull this together and deliver? I'll go into more detail in the next posting.
SDL's acquisition of Idiom confirms and accelerates some core, strategic shifts within the content management ecosystem. More than anything, it appears to be another in a series of market validations that Component Content Management (CCM), driven by the widespread adoption of DITA/XML, is becoming an increasingly viable strategy for both the SMB and Fortune 1000 markets. This is an area where Astoria Software has not only been a long-time advocate of both DITA and SaaS, but has been steadily pushing the market towards the concept of Component Content Management. SDL's most recent acquisition is a welcome endorsement of these core content management concepts; not only is SDL's focus on this area becoming much sharper, as evidenced by its recent string of acquisitions, the Idiom acquisition is another step towards SDL's longer-term goal of being at the top of the food chain when it comes to CCM.