Connections

M&E Journal: Smart Content Drives Smart Asset Usage and Engagement

By John Price, Product Marketing Manager, OpenText

I’m sure we’ve all had them. Those “how did they know that?” moments online, be it on a website, on social media, or in an app. One of those moments where you are presented with just the right information, or just the right image, something that means something to you personally and relates to the task at hand.

At times it may seem a little creepy, but it is something we have all come to expect. As both customers and consumers, we demand increasingly personalized, relevant, and value-driven experiences, which makes it ever more important that we see the most meaningful media assets at the right point in our digital journey.

Developing “smart content” is fundamental to delivering those experiences.

So, what is smart content?

Smart content is content that is designed to be read and consumed by humans, but is also tagged in a way that computers can organize, display, and interpret it.

Most of the content we deal with on a regular basis is not very smart. Most documents are just plain text, often without any underlying structure, and image files tend to be identified by a file name and nothing more. While such unstructured content is relatively useful for humans to consume, it is almost impossible for computers to decipher any underlying meaning in a consistent way.

When structure is applied to content, usually with some sort of mark-up schema, a degree of intelligence is added: chunks of content gain start and stop markers, making them easy to recognize, demarcate, and break up into self-contained components for reuse, rendering, or display.

Adding structure allows a degree of automation in storing and retrieving the content components.
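As a small sketch of that idea, the snippet below marks up a piece of content with explicit start and stop markers and then pulls out individual components by name. The element names are invented for illustration, not taken from any particular schema.

```python
import xml.etree.ElementTree as ET

# A small piece of structured content: each chunk has explicit
# start and stop markers, so a machine can locate it reliably.
# The element names here are illustrative, not a real schema.
doc = """
<article>
  <title>Winter Service Tips</title>
  <summary>Keeping equipment running in cold weather.</summary>
  <body>Check hydraulic fluid before the first frost.</body>
</article>
"""

root = ET.fromstring(doc)

# Because the chunks are demarcated, each one can be retrieved as a
# self-contained component for reuse, rendering, or display.
title = root.findtext("title")
summary = root.findtext("summary")

print(title)    # Winter Service Tips
print(summary)  # Keeping equipment running in cold weather.
```

The same markers that let a human author see the document’s outline let a program store, retrieve, and reassemble each component independently.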

Smart content takes the idea of structured content further by adding semantic naming to the various components, as well as the inclusion of metadata and attributes that allow it to be processed by humans and machines alike. This allows for even more powerful automation, increased reuse, and perhaps most importantly the dynamic content assembly needed to deliver a personalized experience for the content consumer.

The inclusion of metadata also enhances search results as it often includes pointers to what the content is about, where it came from, where it can be used, its intended audience, and how it relates to other content.

But the development of smart content comes at a cost. The more structure and intelligence you apply to your content, the more complicated content creation and management become, and the system development will be equally complex. More intelligence needs more investment, and should therefore be linked to real business goals. The business drivers will help define the level of intelligence to be added and the rigor needed to maintain and deliver it.

Practical applications of smart content

So, in practical terms, how does this apply to something like an image asset? How can you make the pictures or video you have more intelligent? It really falls into three areas: what you need the image to do, the semantic tagging, and the metadata. Here’s an example of how my team approached this when we were developing the content marketing strategy for a large equipment manufacturer.

First, we defined the business need. We’d had a significant amount of feedback about wrong images being displayed.

A website that was targeted at customers in arid desert regions, for instance, had photos of equipment with snow-ploughs attached. In another case we posted a nice hero-shot on a regional website without realizing that it included something that was fine for the U.S. market but was a safety violation in that country.

As we were redesigning our online presence and digital customer experience to be more geographically relevant, we needed to make sure that the images we were using were culturally and environmentally appropriate.

Then came a little bit of content engineering. Instead of tagging the image on the webpage with a generic name, we developed a schema that allowed for more informative semantic naming. This allowed us to more quickly refine, deliver, and render the right sort of image at the right place on the website or app.
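A minimal sketch of what such a naming schema might look like follows; the semantic slot names below are invented for illustration and are not the ones the team actually used.

```python
# Hypothetical semantic names for image slots -- invented for
# illustration, not the actual schema from the project.
SEMANTIC_SLOTS = {
    "hero-regional": "Large banner image tailored to the visitor's region",
    "product-in-use": "Machine shown performing a typical local task",
    "support-thumbnail": "Small image accompanying service content",
}

def describe_slot(slot: str) -> str:
    """Return the human-readable purpose of a semantic image slot."""
    return SEMANTIC_SLOTS.get(slot, "unknown slot")

# A page template can now request an image by purpose rather than by
# a generic, meaningless name.
purpose = describe_slot("hero-regional")
print(purpose)
```

The point of the design is that a template asks for an image by what it is for, which a selection system can act on, rather than by an opaque file name, which it cannot.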

Metadata was where the real magic happened. Historically, all the images in our old database had metadata from the photo shoots, such as photographer, date, location, etc. But those details related to how the image was produced, not how it would be consumed. When we implemented a new Digital Asset Management (DAM) system, we turned our thinking about metadata a full one hundred and eighty degrees.

What would we need to identify what image to use when? Remember the business driver was to make sure that images were appropriate for the market they would be seen in.

We kept the production metadata, as we still needed that for internal processes and auditing, but added information about environment, culture, the type of machine, the activity, the people in the shot, the regions where the shot could be used and, just as importantly, where it couldn’t be used. We also added a note as to which customer profile, or persona, it would best match.
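As a minimal sketch of how such consumption-side metadata might drive selection, the code below filters assets by region and environment. The field names (`regions_allowed`, `regions_blocked`, and so on) are assumptions for illustration, not the DAM’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical asset record -- the field names are assumptions for
# illustration, not the DAM's actual schema.
@dataclass
class ImageAsset:
    name: str
    environment: str                            # e.g. "arid", "snow"
    regions_allowed: set = field(default_factory=set)
    regions_blocked: set = field(default_factory=set)
    persona: str = ""

def suitable(asset: ImageAsset, region: str, environment: str) -> bool:
    """An asset fits if the region is permitted, not explicitly
    blocked, and the depicted environment matches the market."""
    if region in asset.regions_blocked:
        return False
    if asset.regions_allowed and region not in asset.regions_allowed:
        return False
    return asset.environment == environment

assets = [
    ImageAsset("loader-snowplough", "snow", {"us", "ca"}),
    ImageAsset("loader-desert", "arid", {"ae", "sa"}),
]

# A site serving an arid desert market never sees the snowplough shot.
picks = [a.name for a in assets if suitable(a, "ae", "arid")]
print(picks)  # ['loader-desert']
```

With the allow and block lists captured as metadata, the wrong-image problem becomes a filter condition instead of a manual review step.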

The result was that, with over a million assets in the new DAM, search was quicker and more efficient because it was easier to narrow down search criteria. More images could be found, pulled, and rendered through automation. Engagement on the regional websites went up, and the reports of mismatched images stopped.

Making the images into smart content components meant that we were delivering what customers needed to see at the time they needed it, in a way that was culturally and operationally relevant to them as individuals.
