DITA

Gilbane Advisor 11-15-17 — news value, implausible AI, software & CMS 2.0

Scoring news stories is hard? Frederic Filloux dives into some research and unique challenges the News Quality Scoring project faces. A worthy project to benefit producers and consumers, the NQS “is aimed at assessing the value-added deployed by a media for a given news coverage in terms of resources, expertise, thoroughness of the process, and […]

This post originally published on https://gilbane.com

Categories: DITA

Who you’ll meet at Gilbane Boston

Dear Reader: Join us in Boston in 3 weeks to network with your peers and learn how they are building successful next generation content strategies and digital experiences for customers and employees. Here is just a sample of who you’ll meet… Starwood Hotels & Resorts • Elisa Oyj • State Street Global Advisors • KrellTec • Commonwealth of MA […]

This post originally published on https://gilbane.com

Categories: DITA

How Structured Content Makes Chatbots Helpful

The Content Wrangler - Tue, 2017-11-07 08:19

Remember when context-sensitive help was the revolutionary way to deliver the right content to the right people at the right time in the right way? Just a few years ago, many technical communication teams did nothing but create context-sensitive documentation for software products. They aimed to provide contextually relevant, helpful content based on what the customer was doing in the software at any given moment.

These forward-thinking teams deconstructed large technical documents into discrete chunks, which they then hooked into the product interface. Customers no longer had to paw through a fat user manual or poke around in an online portal to seek answers to their questions. With a click of the F1 (help) key, they got the information they needed on the screen right in front of them.

Oooh. Ahhh. Contextual relevance had arrived in the digital world.

Today, savvy consumers simply expect digital content to be contextually relevant. What’s more, “context” now means more than location in a user interface. “Context” includes many factors: user-profile data, geographic location, product model, version number, preferred language, time zone, interaction history, the device’s capabilities, and so on.

Providing contextually relevant content today is no trivial matter. It’s challenging, especially for teams that have not adopted advanced practices and tools for developing and managing information.

In his recent Content Wrangler webinar, The Fifth Element: How Structured Content Makes Chatbots Helpful, Alex Masycheff, structured-content expert and co-founder and CEO of Intuillion Ltd., discussed how emerging delivery technologies can take advantage of structured technical content to deliver contextually relevant content via conversational user interfaces, such as chatbots.

Alex delved into the following:

  • How chatbots improve context-sensitive assistance
  • Five elements of a helpful chatbot
  • When chatbots bring the greatest value
  • Why structured content is critical to chatbot success

Read on for some highlights from Alex’s talk. For the details, go to his webinar and listen to the whole hour’s worth for free.

Single source publishing today

Single source publishing has evolved since the early days of context-sensitive help, adapting content to a range of channels. People might access that content through a customer portal, through a chatbot on Facebook that provides a conversational UI, or through an augmented-reality application that applies a visual layer of information over physical objects.

Content may also have to adapt to align with business rules that determine how it gets processed. Depending on the user’s goals and preferences, access rights, and other criteria, a set of business rules can be applied, on the fly, to any content to make it deliverable to the user in a way that fits the situation.

Further, we’ve broadened our notion of context sensitivity. In the early days of context-sensitive help, context meant “the user’s location in the UI.” Today, the user context has many facets. Examples:

  • Goals
  • Skills and abilities
  • Current activity
  • Profile
  • Product
  • Geographical location
  • Interaction history

Five elements of chatbot helpfulness

Alex’s webinar title starts with “The Fifth Element” in reference to the movie The Fifth Element. In that movie, four stones represent various elements in nature. A fifth stone brings them together and activates their powers.

Alex’s fifth element—structured content—brings all the others together and activates their power to create human experiences that just might qualify (depending on the human) as helpful.

Here are his five elements:

  1. User’s context
  2. User’s intent
  3. Entities of the user’s intent
  4. Knowledgebase
  5. Structured content

Element 1: User’s context

The first requisite element of chatbot helpfulness is an ability to capture info about the user’s context. The system can capture some of the contextual info (for example, the user’s location and basic profile data) automatically. The chatbot then kicks into conversation mode to “unveil” other key bits of contextual info (the user’s goal and so on).

Chatbots can gather information about people’s context by asking questions. Based on the answers they get, they can then offer advice, as shown in this conversation between a chatbot and a maintenance engineer:

You could think of this robot as a chatty version of the old F1 key.
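As a rough illustration (not from the webinar), the two-stage capture described here, automatic collection followed by conversational questions, might be modeled like this; the field names and conversation are hypothetical:

```python
# Hypothetical sketch of a chatbot's two-stage context capture.
# Field names (location, product, goal) are illustrative, not from the webinar.

def capture_context(auto_detected, ask):
    """Start from automatically captured context, then ask the user
    questions to fill in whatever is still unknown."""
    context = dict(auto_detected)          # e.g. location, basic profile data
    for field in ("product", "goal"):      # bits only the user can supply
        if field not in context:
            context[field] = ask(f"What is your {field}?")
    return context

# Simulated conversation: the "user" answers from a scripted dict.
answers = {"What is your product?": "XZ-135", "What is your goal?": "maintenance"}
ctx = capture_context({"location": "plant floor"}, answers.get)
print(ctx)  # {'location': 'plant floor', 'product': 'XZ-135', 'goal': 'maintenance'}
```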

Element 2: User’s intent

To efficiently suss out the user’s intent—the thing someone wants to know or do in a given moment—a chatbot must keep the conversation within a narrow domain of information. Here’s an example of a domain that might support conversations between a chatbot and maintenance engineers:

While chatbot designers can’t control what the human will toss out (ever amused yourself by messing with Siri?), they can and must define the scope of the machine’s side of the conversation. Presuming that the person stays within that scope—by asking something like “Do I need to lubricate the XZ-135?”—the conversation has a chance of satisfying the user’s intent.
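A minimal sketch of what keeping the conversation inside a narrow domain might look like; the intents and keywords are invented for illustration and are not from the webinar:

```python
# Hypothetical sketch: intent detection restricted to a narrow domain.
# Intents and keywords are invented for illustration.

INTENTS = {
    "lubrication": {"lubricate", "grease", "oil"},
    "inspection": {"inspect", "check", "examine"},
}

def detect_intent(utterance):
    """Return the first in-domain intent whose keywords appear,
    or None when the question falls outside the chatbot's scope."""
    words = set(utterance.lower().replace("?", "").split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return intent
    return None

print(detect_intent("Do I need to lubricate the XZ-135?"))  # lubrication
print(detect_intent("What's the meaning of life?"))          # None
```

Anything outside the defined scope returns no intent, which is the cue for the bot to say it can’t help rather than guess.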

Element 3: Entities of the user’s intent

To understand the user’s intent, chatbots need info about the parameters, or entities, that make up the user’s intent. Here’s what such entities might look like for our maintenance conversation:

To find out which entities go with each intent—to “fill all the required slots,” Alex says—chatbots must ask questions. For example, in the earlier conversation, after the chatbot learns that the first entity is the XZ-135, it asks a question to fill in the slot for the next entity:

When the chatbot has filled all the entities of the user’s intent, it can proceed to offer help.
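The slot-filling loop Alex describes might be sketched like this; the slot names and answers are hypothetical, not taken from his example:

```python
# Hypothetical slot-filling sketch: ask questions until every required
# entity of the detected intent is known. Slot names are illustrative.

REQUIRED_SLOTS = {"lubrication": ["model", "operating_hours"]}

def fill_slots(intent, known, ask):
    """Return a complete set of entities for the intent, asking the
    user for any slot that isn't already filled."""
    slots = dict(known)
    for slot in REQUIRED_SLOTS[intent]:
        if slot not in slots:
            slots[slot] = ask(f"What is the {slot}?")
    return slots

answers = {"What is the operating_hours?": "500"}
filled = fill_slots("lubrication", {"model": "XZ-135"}, answers.get)
print(filled)  # {'model': 'XZ-135', 'operating_hours': '500'}
```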

Element 4: Knowledgebase

Chatbots pull their content from a knowledge base. As content professionals, our challenge is to organize that knowledge base so that the chatbot can find and deliver the content chunks that will satisfy users’ intents.

How do we make this happen? Here’s the critical behind-the-scenes insight: Just as we have learned to structure content in standalone modules (granules), so too must we structure CONTEXT.

Aha!

Here’s how Alex illustrates a possible structure for context granules:

Creating a chatbot is a game of matching context granules with content granules. The chatbot pulls content from the knowledge base according to that matching.
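One way to picture that matching game, with invented topics and metadata (this is a sketch, not Alex’s implementation):

```python
# Hypothetical sketch of "matching context granules with content granules":
# each knowledge-base topic carries metadata, and the chatbot returns the
# topics whose metadata is satisfied by the current context. Data invented.

KNOWLEDGE_BASE = [
    {"text": "Lubricate the bearing every 500 hours.",
     "metadata": {"intent": "lubrication", "model": "XZ-135"}},
    {"text": "Inspect the belt for wear.",
     "metadata": {"intent": "inspection", "model": "XZ-135"}},
]

def match_topics(context):
    """Return topics whose metadata values all appear in the context."""
    return [t["text"] for t in KNOWLEDGE_BASE
            if all(context.get(k) == v for k, v in t["metadata"].items())]

print(match_topics({"intent": "lubrication", "model": "XZ-135"}))
# ['Lubricate the bearing every 500 hours.']
```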

Element 5: Structured content

Structured content—our fifth element—unites the other elements (context, intent, entities, and knowledge base) and, as Alex put it, “activates their powers.” Structured content is granular content. In other words, it’s made up of topics (or “chunks” or “units”) that can be “managed and processed independently,” Alex says.

Without structured content, he adds, a chatbot can’t create helpful experiences.

Here’s how Alex illustrates structured content:

To enable a chatbot to find and process the right topics at the right time, each topic must be associated with metadata that identifies applicable user contexts and user intents. Example:

You might wonder why we need to bother with structure, why we can’t “just let artificial intelligence do the work.” Here’s why in Alex’s words: “We’re not there yet. Understanding human language is still a challenge.”

Watch the full webinar

For the rest of what Alex has to say on this topic—including his insights into the role of artificial intelligence, deep learning, speech recognition, image recognition, natural language processing, machine translation, metadata auto-identification, and scalability—watch the full webinar here.

The post How Structured Content Makes Chatbots Helpful appeared first on The Content Wrangler.

Categories: DITA

Integrating Multilingual Content into Operations and Growth

Featured session: Integrating Multilingual Content into Operations and Growth As global content becomes more mainstream there is increasing pressure for broader and more efficient integration with corporate functions and strategies. Both presentations in this session address some ways to accomplish this: one focused on the multilingual content supply chain and API integration, and one focused […]

This post originally published on https://gilbane.com

Categories: DITA

Brand Content Strategies

Featured session: Brand Content Strategies “Content strategy” covers a lot of territory, within organizations, and across industries. While every business is unique, cross-pollination of ideas often leads to some of the most valuable and unpredictable insights. In this session speakers from well-known brands, Starwood and Volvo, share content strategies that have worked for them, and […]

This post originally published on https://gilbane.com

Categories: DITA

Gilbane Advisor 10-23-17 — martec orgs, aligning vectors, emotion AI, search

Martech & marketing orgs Scott Brinker looks at two surveys on how modern marketing organizations are re-structuring to manage marketing technology. In short, they have and are. Read More What Elon Musk taught me about growing a business Dharmesh Shah was inspired by Elon Musk’s advice on growing and scaling a business, “Every person in your company is […]

This post originally published on https://gilbane.com

Categories: DITA

Commerce, Content, and Conversion

Featured session: Commerce, Content, and Conversion Of all the different functions and systems that need to be integrated to provide a clean consistent customer experience, content management systems and commerce systems are the most obvious. Speakers look at three areas: e-commerce and CMS integration, why content is so critical to e-commerce success, and strategies for […]

This post originally published on https://gilbane.com

Categories: DITA

Building Chatbots with Intelligent Content

The Content Wrangler - Mon, 2017-10-16 08:12

Industry analysts predict that chatbots and intelligent personal assistants will overtake traditional web interfaces as the primary consumer touchpoint, that they will replace or augment mobile apps, and that they will completely transform customer service.

If those predictions don’t boggle your mind, read them again.

Is your content team ready for that future? Most aren’t.

The good news is that with some engineering, chatbots can employ and extend an existing content repository. Intelligent content allows us to use single-source publishing to push content out to interactive channels, including those that involve chatbots and intelligent assistants.

Possible benefits:

  • Boost the ROI of existing content
  • Shorten sales cycles
  • Improve conversions
  • Reduce customer-service costs
  • Improve satisfaction

In his August 9, 2017, webinar in The Content Wrangler series, Building Chatbots with Intelligent Content, Cruce Saunders—founder and principal at [A] and author of Content Engineering for a Multi-Channel World—discussed chatbots as a new content-distribution channel that businesses can’t afford to ignore. Cruce covered basic chatbot content requirements, components and construction, and a future-proofing model that can make your content chatbot-ready.

Why is Cruce so passionate about this topic? “I’ve been in the content structure business for twenty-plus years working across lots of media,” he says. “I’m passionate because I believe that structured content is the path to a more intelligent world.”

Read on for some highlights from Cruce’s talk. For the details, go to his webinar and listen to the whole hour’s worth for free.

Here come the new technologies

Question-and-answer (Q&A) content is everywhere in our organizations. It’s in customer documentation, in frequently asked questions (FAQs), in knowledge bases, in online help—and now, increasingly, in chatbot interfaces.

We’re all in the habit of asking robots questions already. We search every day in text and, more and more, we’re using our voices. Some 60.5 million Americans now use a virtual assistant of some kind at least once a month. According to Gartner, chatbots will power 85% of all customer-service interactions by the year 2020.

Q&A content has been around for a long time in various forms. The new forms fall into three main types:

These are simply new forms of delivering answers to questions. You might say, “The FAQ is back!”

 

Although today’s chat-related technologies are often implemented in simplistic and limited ways, they have the potential of making humans capable of doing smarter, better things, Cruce says. “Customers want an immediate way to interact with our content in a conversational way. Chatbots are answering.”

What organizations need to be doing today (and most are not)

We’re moving toward the conversational commerce of the next generation. And we’ll get there only if we can reuse the content we already have, says Cruce. Ideally, companies would publish their Q&A content out in multiple forms, including bots, from a single source. It only makes sense to set up a single repository for all Q&A content, following the principles of the unified content strategy: write it once, use it where needed.
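A toy sketch of that “write it once, use it where needed” principle, with an invented repository and two invented render targets (an FAQ page and a bot reply), not any specific product:

```python
# Hypothetical single-source Q&A sketch: one repository of question/answer
# pairs rendered for two channels. All data and functions are illustrative.

QA_REPO = [
    {"q": "How do I reset my password?",
     "a": "Use the 'Forgot password' link on the sign-in page."},
]

def render_faq(repo):
    """Render the whole repository as a plain-text FAQ page."""
    return "\n".join(f"Q: {item['q']}\nA: {item['a']}" for item in repo)

def render_bot_reply(repo, question):
    """Answer a single question from the same repository."""
    for item in repo:
        if item["q"].lower() == question.lower():
            return item["a"]
    return "Sorry, I don't know that one."

print(render_faq(QA_REPO))
print(render_bot_reply(QA_REPO, "How do I reset my password?"))
```

Because both channels read the same repository, updating an answer once updates it everywhere, which is the point of the unified content strategy.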

It’s counterproductive—and quickly becomes expensive and messy—when companies create a whole new repository of Q&A content for bots.

Yet, all too often, that’s exactly what happens. Companies take an expedient approach rather than an intelligent approach. As a result, Cruce says, “duplication between content repositories is becoming a bigger and bigger problem for organizations that are answering lots of questions in lots of ways.”

We’re asked to copy and paste existing content into new repositories or platforms all the time, Cruce says.

“Stop! If we don’t start centralizing Q&A content, we will hit the Q&A apocalypse where everything is going to be out of date in various channels, and we’ll have a mishmash of customer experiences. We’ve got to say ‘no more’ to new content silos. We can’t allow our organizations to continue hiring people to move content from one repository to the next. It’s time to put our foot down.”

The goal—which will require the help of content strategists and content engineers to achieve—is to unify your Q&A content across all delivery channels and platforms. Yes, this is a challenging goal. But shying away from this effort has big consequences for the bottom line. “If we can’t keep our content lifecycle and our publishing infrastructure up to date,” Cruce says, “we’re going to accrue technical debt in the millions of dollars.”

We need to keep evolving our Q&A content to include voice. As recently reported in Forbes, by 2020 half of all searches will be voice searches. The voice-powered bot market is expected to grow from $1.6 billion in 2015 to $15.8 billion in 2021.

If you’re going to invest in a chatbot, you’re not buying a thing. You’re investing in a process that needs to evolve our way of working. The technology is secondary or even tertiary. “It’s uncomfortable. It’s hard. It takes work. Anybody who tells you it’s easy is selling a widget, a thing. Making the widgets sing with our content requires training, innovation, and change of the culture that supports those customer interactions.”

To make the new technologies and processes work, we must move toward intelligent content, including such elements as structure, schema, metadata, microdata, taxonomy, and content modeling. “Knowledge lives in containers and can make an impact only when those containers are connected with an audience.”

All this talk of chatbots may sound daunting, but there’s no avoiding the importance of these new options and the processes they’ll require us to develop. “Organizations should not play chicken with the future,” Cruce says. “Invest in engineering content now before competitors’ robots steal customer mindshare. This is clear to executives and C suites everywhere.”


Watch the full webinar

For the rest of what Cruce has to say on this topic, including lots more detail on content engineering and designing chatbot conversations, watch the full webinar: https://www.brighttalk.com/channel/9273/270709

The post Building Chatbots with Intelligent Content appeared first on The Content Wrangler.

Categories: DITA

Enterprise Content Strategy: A Project Guide

The Content Wrangler - Wed, 2017-10-11 09:23

The following is an excerpt from Enterprise Content Strategy: A Project Guide, by Kevin P. Nichols, the fifth book in The Content Wrangler Content Strategy Series of books from XML Press (2015).

Enterprise Content Strategy: A Project Guide
Chapter 7. Publish and Measure Phases

Anyone who has written anything or aspires to be a writer knows that the word publish can bear a profound power. However, within a content strategy, publish functions as a mere step within a content lifecycle where content becomes exposed to an audience. Publish represents the culmination of several steps, and as a step itself, it lives within a larger content lifecycle. In a world where anyone can publish any content online via a blog, tweet, or personal website, the power of the term sometimes becomes lost. But make no mistake, publish does create finality in that the content will be seen, heard, read, and felt by an external audience.

The publish phase brings the content experience to life.

As soon as your content lives in the published or external realm and a consumer can access it, it travels down paths, journeys, and experiences over which you have little control. Tracking the path of your content, its use, and its exposure proves essential to its success.

An effective content strategy requires a performance-driven model, so measuring your content performance ensures a successful, sustainable content experience. By definition, successful content must resonate with a consumer and meet his or her needs. Only through constant evaluation will you know what works and what does not. An effective enterprise content strategy must include a well-defined metrics strategy. Metrics should reflect the strengths and weaknesses of the solution design and provide the impetus for content and solution optimization.

This chapter combines the publish and measure phases since the two go hand-in-hand. It defines measuring content performance, demonstrates how to create metrics, and provides information on reporting.

Definitions

Let’s define a few key concepts to frame this effort.

  • Analytics: The capture and assessment of data, particularly with performance in mind. In the case of enterprise content strategy, analytics includes the measurement of content performance and the analysis of those measurements.
  • Metrics: Units of measurement. A metric can reflect any kind of measurement. This chapter provides the common metrics used to indicate the performance of content, such as the number of consumers who download an article.
  • Key performance indicator (KPI): A metric used to evaluate the performance of an organization’s objectives, for example, the number of products sold.
  • Conversion metrics: Measurement of a specific conversion, for example, when a content consumer completes a desired task. Typical conversion activities:
    • Purchase a product
    • Add an item to a shopping cart
    • Download a white paper
    • Share a video
    • Create a profile
    • Click to make a call on a smartphone
    • Register a product
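As a small numeric illustration (not from the book), a conversion metric for one of the activities above might be computed from raw event counts like this; the numbers are invented:

```python
# Hypothetical sketch: computing a conversion rate from event counts.
# All numbers are invented for illustration.

def conversion_rate(conversions, visitors):
    """Share of visitors who completed the desired task (e.g. a download)."""
    return conversions / visitors if visitors else 0.0

visits = 4000
whitepaper_downloads = 120
rate = conversion_rate(whitepaper_downloads, visits)
print(f"{rate:.1%}")  # 3.0%
```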

A successful metrics strategy begins in the assess, define, and design phases. During those phases, identify the metrics needed to ensure a successful experience so you know exactly what to evaluate after you publish.

Identify metrics early during technology implementation, because you may need to customize your technology solution to track the metrics you need. Some systems require programming or database changes to enable measurement, so identifying metrics early will help avoid delays.

Creating performance metrics

A successful metrics strategy starts with business goals and objectives. A business goal frames a general aspiration to which you create specific, measurable objectives. You should always start with a strategic intent for your experience and a goal. Let’s use a desktop website as an example. In this case, the strategic intent, goals, and objectives of a desktop website might look like this:

  • Strategic intent: Answer the question “Why our company?” in a way that competitively differentiates us for the consumer, investor, career seeker, financial analyst, and media.
  • Goal: Become the premium website in the industry and go-to source for all products, outperforming all other competitors in purchases, traffic, and brand perception.
  • Objectives:
    • Sell X number of products within X amount of time to X audiences.
    • Generate X number of articles in (names of media) over X time due to exceptional media experience in news and media section.
    • Increase overall website traffic by X percent by X time.
    • Increase the amount of socially shared content by X by X time.
    • Increase number of consumer profiles created by X over X time.

The strategic intent provides an umbrella strategy for the experience; the goal, a lofty aspiration; and the objectives, specific and measurable desired outcomes. During the plan, assess, and define phases, identify the key criteria for success. At that point, you should identify the strategic intent, goal, and objectives at a high level. Through the design phase, hone them all so each is specific to the solutions you create, down to the page, template, or even module level. Metrics will measure whether you meet each of these objectives.

To develop metrics, first look at an objective, and then extract a metric from that objective. Then define what success or finality of the metric means (for example, through analytics applications, dashboards, consumer surveys, conversion rates, or sales reports). Example:

Objective: Increase online sales by X% over X time with X consumers.

Metric: Number of website consumers who purchase a product within a given time period, as measured by web analytics and sales data.

Make the metrics as specific as possible by asking these questions:
  • For whom is the objective targeted? Customers, potential customers, analysts, career seekers, etc. You can also include persona or segment.
  • When or how will we complete the objective? Example: within 6 months we will sell 20% more products.
  • How many consumers, products, downloads, pieces of content shared, etc., are we aiming for?
  • Where are we targeting the objective? Example: the geographical location, the channel, or a specific area on the site.
  • Why are we doing it? Example: to increase sales, to increase downloads, to increase shared content, to increase the number of content consumers.

Incorporate as many of the above points as you can within an objective to make it as specific as possible.

 

Adopt the SMART approach

You can also use the SMART approach to develop your objectives. The SMART approach generally applies to setting business goals and objectives, requiring objectives to have these characteristics:

  • Specific
  • Measurable
  • Accountable
  • Realistic
  • Timely

Example: Increase the number of new visitors to the home page by 20% within the next 6 months.
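One way to see why a SMART objective is checkable: encode it as data and compare it against measured numbers. This sketch is illustrative only; the field names and figures are invented:

```python
# Hypothetical sketch: a SMART objective ("increase new home-page visitors
# by 20% within 6 months") expressed as data and checked against analytics.
# Numbers and field names are invented.

objective = {"metric": "new_home_page_visitors",
             "target_increase": 0.20,
             "window_months": 6}

def objective_met(baseline, measured, obj):
    """True when the measured value reaches the targeted increase."""
    return measured >= baseline * (1 + obj["target_increase"])

print(objective_met(10_000, 12_500, objective))  # True  (25% increase)
print(objective_met(10_000, 11_000, objective))  # False (only 10%)
```

Because the objective names a metric, a threshold, and a window, success or failure is unambiguous, which is exactly what a vague objective ("get more traffic") can never give you.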

From your objectives, you can glean what to measure. See Table 7.1, “Common metrics” for a list of common metrics.

Table 7.1 – Common metrics

  • User/consumer path and clickstream: Measures the path a user takes to complete a task. To use this metric, assume user journeys or paths for the completion of specific tasks (for example, purchase an item or download a white paper). This metric helps you determine what a content consumer does within a journey and validates what you think your consumer journeys are versus the actual path a content consumer takes. For omnichannel experiences, measure this journey across multiple channels.

  • Length of visit: Captures how long a consumer stays within the experience. For example, how long does a content consumer stay on the website?

  • Depth of visit: Shows how far a consumer goes into an experience, such as a website. You can also look across channels to see which channels a content consumer engages, and where and when.

  • Conversion: Measures the completion of a task. Many types of conversion metrics exist. You will want to measure the number of consumers, tally bounce and exit rates prior to conversion (noting where the exit happens), and review the journey taken to convert. For each conversion metric, create one or more user/consumer journeys.

  • External keyword search terms: Identifies which terms are used in search, both within your digital experience and through organic search (for example, Google.com, Bing.com). You may want to review both mobile and desktop experiences. Google Analytics or other tools can help track this information. Stay informed regarding changes to algorithms by major search engines, which can render this task difficult.

  • Onsite search keywords: Shows which key terms are used for search within your digital experience, as opposed to an external search engine. These indicate people’s interests. Note when a consumer jumps to onsite search, which often indicates that the consumer cannot find what he or she seeks via navigation. In addition to top search keywords, look at failed searches or searches that return no results. Also note when the consumer refines the search terms, and capture facet usage, if relevant. Preferred search terms (canine over dog) are another important metric.

  • Number of visits to convert: Identifies the number of times a consumer leaves and returns before converting. Where does the consumer go (if you can track it) upon leaving the experience?

  • Point of entry: Identifies where a consumer enters the experience or content. This metric may provide a starting point for the consumer journey. How does a content consumer get to the experience: via a keyword search? via a banner ad? via a competitor’s site?

  • Value of interaction: Calculates the total revenue generated from the visit. This metric can be itemized or can account for all visits to the website by dividing the total revenue by the number of visits.

  • Cost to convert: Demonstrates how much a conversion costs a business or an organization. This metric looks at internal spending and the total number of conversions, as well as revenue of conversions when relevant.

  • Exit metrics: Measures where a content consumer exits an experience. Note the length of time spent and which device the consumer uses prior to exiting. An exit does not necessarily correlate to a cause for concern; perhaps the visitor accomplished what he or she needed to do and, thus, left your experience satisfied.

  • Bounce rates: In contrast to exit rates, bounce rates inform you that a visitor reached your experience and left immediately. In other words, a consumer might reach a product-landing page through an external site and, without spending any time there or going further into the experience, bounce out of the website by going to a different URL. Track whenever this happens, as well as point of entry, length of time of visit, where the consumer went after, etc. This metric may help you detect under-performing content.

  • User-interaction history: Indicates how often a consumer visits an experience. What does he or she do while within the experience? For consumers with profiles (users who are logged in), which features, functions, and content do they use?

In addition to the metrics in Table 7.1, “Common metrics,” you might need to capture social media metrics. Table 7.2, “Example social media metrics” provides some common social metrics:

Table 7.2 – Example social media metrics

  • Post rates: Tracks which content (for example, a product or video on Facebook, Twitter, Tumblr, Pinterest) is shared by whom and when. Look at how often a consumer re-shares the content (for example, by retweeting).

  • Share of voice: Captures how frequently social media mentions your experience, brand, or organization.

  • Referrals from social media: Indicates which social media refers visitors to your experience, for example, a link in Twitter that results in a visitor landing on an article on your website.

  • Social sentiment: Tracks what others are writing about you in social media. Sentiment can be tracked with regard to perception of a brand, an experience such as a website, specific pieces of content such as a video, or even the experience with a product or service.

  • Repeat engagement: Indicates which consumers, and how many, continue to mention your experience or content, for example, repeat likes within Twitter, repeat shares of your content on Facebook, repeat mentions of your brand or organization, etc.

The metrics in Table 7.2, “Example social media metrics,” can all be obtained in various ways, including Google Analytics, Bing Analytics, social-tracking tools, and web analytics software. Additionally, many content management systems include this functionality, and there are applications that track a variety of metrics. In many cases, you may require more than one application.

Operational Metrics

So far, I’ve covered metrics for digital experiences. Obviously, though, digital metrics do not capture all the objectives that an enterprise should measure. Let’s consider the following operational metrics, which can prove equally important for showing the value of content within your organization.
  • Reduction in cost to produce content: Measured by data supplied by business units, internal audits, and operational metrics dashboards
  • Reduction in cost associated with finding and leveraging content within an organization: Measured by user and consumer surveys, audits, and operational metrics dashboards
  • Reduction in localization cost due to improved processes and systems: Measured by audits and operational metrics dashboards
  • Cost per word (used in translation cost assessments): Measured by audits and operational metrics dashboards
  • Time saved authoring, maintaining, and optimizing content: Measured by user and consumer surveys, audits, and operational metrics dashboards
  • Increase in internal satisfaction with information and content: Measured by surveys and operational metrics dashboards
  • Decrease in content redundancy: Measured by user and consumer surveys, audits, and operational metrics dashboards
  • Reduction in cost due to content reuse: Measured by user and consumer surveys, audits, and operational metrics dashboards
  • Time saved in taking a product to market: Measured by user and consumer surveys, audits, and operational metrics dashboards
  • Decrease in employee attrition through improved employee tools, self-service tools, and resources (portals): Measured by user and consumer surveys, audits, and operational metrics dashboards

Content experience metrics

Finally, you should look at other evidence related to content experience. User/consumer/customer feedback, surveys, and user-testing tools can show how your content performs and why content consumers may or may not respond to it.

Additional content experience metrics:

  • Consistent brand experiences with all customer touchpoints (facilitated by content that is on-brand and effectively targeted across multichannel platforms): Measured by consumer surveys and audits
  • Retention of customers: Measured by customer databases, sales data, surveys, and audits
  • Acquisition of new customers: Measured by analytics, sales data, and audits
  • Optimized content quality (meaning consistent, error-free content across channels): Measured by quality standard audits, customer feedback, and time-to-publish updates and modifications
  • Up-to-date, relevant content: Measured by quality standard audits, customer feedback, and time-to-publish updates and modifications
  • Efficacy of content related to its value proposition and key selling points: Measured by analytics, testing (for example, A/B testing or multivariate testing), customer feedback, audits, and sales data
  • Improved localized content with fewer errors and revisions: Measured by quality standard audits
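The A/B and multivariate testing mentioned above can be made concrete with a small statistical check. The sketch below is a minimal, stdlib-only two-proportion z-test for comparing the conversion rates of two content variants; the sample counts are hypothetical, and a real program would pull them from an analytics export or use a statistics library.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B content experiment.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical data: landing-page variant B vs. variant A
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=165, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```

A result like this would support investing further in variant B; a p-value above your threshold would suggest the difference is noise.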

Identifying the types of metrics to capture gets you only partway; what you do with the metrics is what really matters. Let’s discuss how to analyze metrics data and report on it.

 

Analyzing and reporting metrics

Metrics provide you with data that helps you draw conclusions about your content and its performance. But metrics, by and large, do not answer the question of why. Metrics do not tell you why consumers do or do not view or share your content. To find out why, you must dig deeper.

Let’s first discuss when and where you should look to answer this question. If content performs well, that is, it meets its objectives, then perhaps you will want to produce more content like it and invest in its ongoing success.

When content fails to meet its objectives, you have a problem. Look at every place where content does not perform well. After you have a list of the problem areas – which can be anything from consumer journey to conversion to content not receiving any visitors at all – find the cause. For content not viewed at all, are consumers interested in the topic? Do they seek it out? Are issues in search or navigation preventing them from getting there in the first place? Have you received negative feedback on the content?

When something seems amiss, first check to see whether there are issues with the user experience. Then, see how the content performs elsewhere in the industry. Do competitors use the same content? If so, how does it differ from yours? Are there social metrics that indicate interest? You may need user testing to see why content fails to perform. In some cases, you might need to modify your objectives. Maybe content you consider important is not important to your audience.

As you determine the causes, build and maintain a list of resolutions.

Report any findings to the content team, perhaps using a dashboard. Present internal metrics that track efficiencies, costs, and the like quarterly. For metrics that track your content experience, determine how often you wish to review and present them. In many cases, you will want to analyze metrics monthly; in other cases, quarterly. Some larger e-commerce organizations track metrics hourly. Chapter 8, Optimize Phase, deals with how to optimize your content based on your findings.

To read more from Enterprise Content Strategy: A Project Guide, check it out on the XML Press website, or buy the book now from Amazon, Barnes & Noble, or O’Reilly Media.

The post Enterprise Content Strategy: A Project Guide appeared first on The Content Wrangler.

Categories: DITA

How Cisco Uses DevOps-friendly Publishing for Dev Docs

JustWriteClick - Sat, 2017-10-07 14:48

Cisco DevNet is our developer program for outreach, education, and tools for developers at Cisco. From the beginning, the team has had a vision for how to run a developer program. Customers are first, and the team implements what Cisco customers need for automation, configuration, and deployment of our various offerings. Plus, the DevNet team thinks learning and coding should be fun and exciting.

With the help of Mandy Whaley and the team, I wrote up how Cisco DevNet (developer.cisco.com) created a system called PubHub to publish developer docs. The docs are stored in enterprise Git-based storage, either Bitbucket or Enterprise GitHub. The source files can be Markdown, Swagger/OpenAPI, RAML, or even a Stripe-like source file. Read more on docslikecode.com in DevOps-friendly Docs Publishing for APIs.
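As a rough illustration of what a pipeline like this consumes, the sketch below flattens a minimal OpenAPI document into one doc stub per operation. This is not PubHub's actual code; the inlined spec and the field handling are purely illustrative, using only the Python standard library.

```python
import json

# A minimal OpenAPI 3.0 document, inlined for illustration; a real
# pipeline would read specs from the Git-based storage described above.
spec_json = """
{
  "openapi": "3.0.0",
  "info": {"title": "Sample API", "version": "1.0.0"},
  "paths": {
    "/devices": {
      "get": {"summary": "List managed devices"},
      "post": {"summary": "Register a new device"}
    },
    "/devices/{id}": {
      "get": {"summary": "Fetch one device"}
    }
  }
}
"""

def endpoint_stubs(spec: dict) -> list[str]:
    """Flatten an OpenAPI spec into one doc stub line per operation."""
    stubs = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            stubs.append(f"{method.upper()} {path} -- {op.get('summary', '')}")
    return stubs

spec = json.loads(spec_json)
for line in endpoint_stubs(spec):
    print(line)
```

Each stub line could then seed a generated reference page, which is the kind of transformation a docs publishing system performs on Swagger/OpenAPI sources.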

Categories: DITA

Gilbane Advisor 9-27-17 — Killing keyboards, conquering healthcare, framework churn, GDPR

Will Microsoft’s new augmented reality patent kill the keyboard? Well, there is a difference between the function of a keyboard, typing, which has legs for the foreseeable future, and its physical instantiation, which will eventually be eclipsed by something virtual. There are those who think voice will replace keyboards, and perhaps even typing, but it is […]

This post originally published on https://gilbane.com

Categories: DITA

Podcast interview talking about docs as code with Ellis Pratt of Cherryleaf

JustWriteClick - Thu, 2017-09-21 02:30

I had a great talk with Ellis Pratt of Cherryleaf Technical Writing consulting last week. Here are the show notes, full of links to all the topics we covered.

Podcasts are great fun to listen to and participate in, if a bit nerve-wracking: you have to think on your feet and answer questions succinctly without too much meandering. I think it’s difficult to determine how deep to go into docs-as-code techniques. I can go down a deep rabbit hole while explaining webhooks, continuous integration, and bash scripts, not to mention static site generators built in either Python or Ruby. Whew! Great chat, well worth the listen. I’d love to hear your thoughts.
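For readers curious about the webhook piece, here is a minimal sketch of the filtering step a docs-as-code pipeline might run when a push event arrives: rebuild only when the push hits the default branch and touches the docs directory. The payload fields follow the shape of a GitHub push event, but the branch name and docs path are assumptions for illustration, not a description of any real pipeline.

```python
def should_rebuild(payload: dict, docs_prefix: str = "docs/") -> bool:
    """Decide whether a push webhook should trigger a docs rebuild.

    Rebuild only when the push targets the default branch and at least
    one changed file lives under the docs directory.
    """
    if payload.get("ref") != "refs/heads/main":
        return False
    for commit in payload.get("commits", []):
        changed = (commit.get("added", [])
                   + commit.get("modified", [])
                   + commit.get("removed", []))
        if any(path.startswith(docs_prefix) for path in changed):
            return True
    return False

# Hypothetical payload, shaped like a GitHub push event
push = {
    "ref": "refs/heads/main",
    "commits": [{"added": [], "modified": ["docs/install.md"], "removed": []}],
}
print(should_rebuild(push))  # True
```

In practice this check sits behind a small web endpoint; when it returns True, a CI job runs the static site generator and deploys the output.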

Wow, and the last time we spoke on a podcast was in 2009 after Conversation and Community: The Social Web for Documentation was released! Thanks Ellis for another engaging chat. Updated to add: Ellis had stickers made for the Cherryleaf Podcast. Check it out.

Categories: DITA

How To Estimate The Impact of Business Decisions on Content Teams

The Content Wrangler - Fri, 2017-09-15 08:00

Every strategic business decision has an impact on content. Entering a new market creates the need for localization. Images that work well in our home market may need to be altered or replaced to avoid offending members of new audiences. Mergers and acquisitions create a need for content updating and adaptation. Content obtained from others requires rebranding, new metadata tags, and training for content creators on new tools and workflows. A new product feature may require your content team to update your content strategy.

Unfortunately, business strategists don’t always foresee the impact of their decisions on content. They can find themselves surprised by the amount of time and money required to tackle the challenges they introduced.

Who should be involved in making such strategic business decisions, and what process would they follow, ideally, to avoid such rude surprises? Ann Rockley and Charles Cooper of The Rockley Group have a model for you.

Ann and Charles have spent decades consulting with big companies and their content teams. They’ve seen processes that work and processes that don’t. At the Intelligent Content Conference, in a talk they gave called Playing Well with Others: Using Workflow and Approval Process to Smooth Communications & Content Flow in Your Organization, Charles used his turn at the mic to walk the audience through a process that works.

Recommended reading: Adopting Intelligent Content: Practical Advice

While your process may not have exactly eight stages, and while you don’t have to use exactly these labels for them, “You need everything that’s in here,” Charles says.

As you read this post, which sums up Charles’s take on this process, look for opportunities for your company to smooth out its process and to unite people in your organization around it.

Why put this kind of process in place?

The process that Charles maps out—although it may seem, to some, like a series of roadblocks—enables corporate leaders to make strategically sound decisions in a timely manner, guiding the company toward activities that forward the business and away from activities that don’t.

The process also increases the chances that strategists will take into account the full impact of their decisions on their company’s content. I’ve been involved in situations where that didn’t happen, and the scrambling that resulted wasn’t pretty.

Ideally, this process keeps a company’s whole body of content consistent and up to date. As a result, even as the business evolves, customers and content teams alike get to have the kind of satisfying content experiences you would expect from a brand you love.

“This process gives companies control without over-controlling,” Charles says. It does this by giving people in various departments a regular opportunity to talk and get aligned—a critical benefit for content teams that operate in silos.

“If you have one team doing videos and another doing podcasts, you want to keep them moving in the same direction—hitting the same talking points, delivering the same messaging—as they create the content.”

Who belongs on the decision-making team?

Charles suggests that companies appoint four or five people (referred to variously in this post as “decision makers,” “strategists,” “approvers,” and “the team”) who have a stake in both the short- and long-term repercussions of the decisions.

Examples:

  • The appropriate project manager
  • The person who manages the project managers
  • Someone who understands the relevant content systems and teams
  • Someone from the legal team
  • Someone from the brand team
  • Someone from the region in question

The people you choose to involve may vary. Not every strategic decision needs to involve legal representation. And maybe your product or service isn’t international. It may even be hyper-local, requiring no consideration of regional concerns. Don’t force these roles onto your team. Identify the roles your company needs to support its business decisions, and choose people who can fill those roles.

Subject-matter experts (SMEs) don’t need to be on this team; the team reaches out to them as appropriate.

If the team is focused on a product, the product manager might be at the appropriate level. If the team deals with a number of products within a brand, the brand manager might be at the appropriate level, calling on individual product managers, as SMEs, between meetings. In some cases, those product managers might be invited to a team meeting to provide extra input.

The goal: Create a stable team that provides strategic consistency.

How much time does this process take?

Depending on the nature of the request, this decision-making process may take a concentrated hour, or days, or weeks.

The team may want to hold regular decision meetings—maybe weekly, maybe monthly—depending on the number of requests that roll in. Organizations with a large backlog of requests may want to meet frequently at first, perhaps weekly or biweekly for three to five months, Charles says, tapering to monthly meetings after that.

The conversations that happen between meetings don’t have to take long. A quick Google search or 30-minute conversation may yield a pivotal discovery. At the same time, some requests merit in-depth research.

Companies that rush their strategic decisions or follow inconsistent methods shortchange themselves; they may suffer expensive—and avoidable—consequences.

Don’t be a slave to consistency, though, Charles says. Flex as needed to support your business requirements. If you’ve settled down to a monthly meeting cadence, for example, and an important issue comes up, don’t wait for the next monthly meeting to address it.

“Use the process—gather information, distribute it for discussion, research the situation, and come together to decide—to support your business needs. Don’t force your business needs into a defined meeting schedule if that would cause more problems than it would solve.”

In short, this process takes however long it takes. Wise leaders give each phase its due.

A walk through the process

The following sections detail the stages of the decision-making process that Charles recommends for strategic leaders of any company.

Stage 1. Someone submits a request, following a defined method

Strategic ideas may come from any number of sources: people anywhere in the organization, customers, the public. The ideas may come in via email, web forms, tweets, phone calls, hallway conversations, meetings—any way that human beings communicate.

Somebody somewhere asks somebody at some company to do something different. Charles calls this input a “request” (aka an idea, a change, a suggestion).

To smooth out the infinite variability at this stage, he suggests that companies define and streamline the ways that people submit requests. “If you’ve got 50 ways of receiving requests across all your touchpoints, see if you can cut that down to a smaller number, perhaps to five,” Charles says. “Come up with a consistent approach.”

Stages 2 & 3. A sanity checker reviews the request, rejecting it if appropriate

When a request comes in, someone must determine whether the idea merits further assessment (Stage 2). Charles calls this the sanity check.

Your company may want to establish a separate group of people who do the sanity checking. Choose people who know enough to understand the requests, the audiences, and the business needs and who can spot ideas that should be immediately rejected (Stage 3) rather than waste everyone’s time by moving them on to the decision makers (Stage 4).

Sanity checkers may reject a request for several reasons:

  • It may not make business sense.
  • It may not be described clearly or fully.
  • It may not include enough information for the decision-making team to consider.

Where appropriate, the sanity checker returns the request to the requester, asking for whatever additional information is needed. From the requester’s point of view, this feedback loop takes the guesswork out of submitting a request. From the company’s point of view, it keeps underdeveloped requests from wasting the team’s time.

The goal: Improve requests so that they are more likely to be approved (and approved efficiently).
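A simple intake validator can automate part of this sanity check. The sketch below is illustrative only: the required fields are borrowed from the information packet described later in this post (request, rationale, scope, consequences, concerns), and a real system would wrap a check like this in whatever form or workflow tool the company already uses.

```python
# Field names are illustrative, drawn from the information packet the
# sanity checker eventually distributes to the decision-making team.
REQUIRED = ("request", "rationale", "scope", "consequences", "concerns")

def sanity_check(submission: dict) -> list[str]:
    """Return the list of missing or empty fields; an empty list passes."""
    return [f for f in REQUIRED if not submission.get(f, "").strip()]

# Hypothetical incoming request, incomplete on purpose
incoming = {
    "request": "Localize the support portal for the German market",
    "rationale": "Entering the DACH region in Q2",
    "scope": "",  # left blank by the requester
}
missing = sanity_check(incoming)
if missing:
    print("Returned to requester; please supply:", ", ".join(missing))
```

Returning the request with a concrete list of gaps is exactly the feedback loop described above: it takes the guesswork out of resubmitting.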

Stage 4. The sanity checker distributes information about the request to the team

When a content request passes the sanity check, the sanity checker creates and distributes a packet of information to the members of the decision-making team. Like anything worth reading, this information must be fair, accurate, concise, and easy to understand, giving the strategists everything they need to make a good decision.

What does this packet contain? Whatever it takes to sell the idea.

Examples:

  • The request. (Exactly what is being proposed?)
  • The rationale. (Why do this? What problem would be solved or what market advantage gained?)
  • The scope. (Would this require action locally, in certain regions, or worldwide? What departments would be affected and how?)
  • The consequences of rejecting the request. (If the company doesn’t do this, what’s likely to happen?)
  • The concerns. (If the company does this, what concerns might need to be considered?)

This packet must be distributed far enough in advance of the decision meeting (Stage 7) that the team can examine it and have the necessary conversations with SMEs (Stages 5 and 6).

Stages 5 & 6. Team members examine the request, getting input from SMEs as needed

In most companies, decision makers have too little time to examine requests (Stage 5) and then to reach out to SMEs for further input if needed (Stage 6). Sometimes companies skip these “incredibly important” stages altogether.

“It’s not unusual for five people to come into a meeting to approve a bunch of requests without having looked at any of them. They’ve had no time to understand the requests, no time to ask their own questions, no time to talk to people across the organization.”

In that situation, no one can make good decisions.

Although it sounds like a lot of work, this research often doesn’t take much time. A brief conversation with an expert or knowledgeable colleague—via phone, email, desk visit, hallway encounter, or electronic exchange in a formal approval system—may be all it takes to address the questions.

“Ask SMEs what they think,” Charles says. Give them a chance to weigh in, especially on changes whose ramifications will resonate for years.

Charles uses the term “SME” to describe someone knowledgeable about anything—a product, a country’s culture, a group of customers—anything that the decision makers need to understand.

Budget sufficient time for Stages 5 and 6. To give strategic direction is to take the time required to understand and wonder about the requests you’re being asked to approve.

Stage 7. The team meets and decides whether to approve the request

At this penultimate stage, decision makers meet to decide which requests to approve and which to reject. In this meeting, no one is looking at the requests for the first time. People walk in having done their homework, ready to make informed, considered decisions.

Occasionally, in the course of the meeting, it becomes clear that more information is needed. Issues come up in the conversation. People go back, get the information, and make a decision at the next meeting or through email. No problem. But the goal of this meeting is to say yea or nay.

Stage 8. The team passes on the approved request to be implemented

After the decision makers approve a request (Stage 7), they let the appropriate teams know what they need to do to implement the request (Stage 8). Sometimes, though, especially when a process like this is just getting started, the decision makers don’t know whom to pass the approvals on to. “That education is crucial for the process to work,” Charles says.

Conclusion

Charles’s insights ring true for me. I’ve worked in situations where business decision makers underestimated the impact of their decisions on content teams. Stress abounds, and expenses soar. A process like the one described above could have made all the difference.

How about your company? To what extent do strategic decision makers anticipate the impact of their decisions on content teams across the company? What would you add to what Charles has to say?

Recommended: How Questions Drive Innovative Solutions

The post How To Estimate The Impact of Business Decisions on Content Teams appeared first on The Content Wrangler.

Categories: DITA

Meet the Gilbane Conference keynote speakers

Join us in Boston to learn how your peers and competitors in marketing, IT, business, and content across industries integrate content strategies and computing technologies to produce superior customer experiences for all stakeholders. Keynote presentations The Gilbane Digital Content Conference is focused on content and digital experience technologies and strategies for marketing, publishing, and the workplace. […]

This post originally published on https://gilbane.com

Categories: DITA

Orbis Heading to GEOINT Portfolio Conference Next Week

Really Strategies - Thu, 2017-09-14 14:31

Orbis Technologies, Inc. to Showcase REnDER Product at Key Geospatial Intelligence Industry Event

Annapolis, MD – September 14, 2017 – In less than one week, Orbis’ REnDER team will make the short trip over to Chantilly, VA, for the 2017 GEOINT Portfolio Conference. Hosted by the National Reconnaissance Office (NRO) and the National Geospatial-Intelligence Agency (NGA), this two-day classified event takes place September 20th and 21st, featuring noted speakers, industry-specific presentations, and discussion of current and future GEOINT capabilities and challenges.

Categories: DITA

The 10th Annual RSuite® User Conference is Upon Us!

Really Strategies - Thu, 2017-09-07 15:38
The 10th Annual RSuite® User Conference is Upon Us!

RSuite Enterprise Content Management System Hosts 10th Annual User Conference

Annapolis, MD – Orbis Technologies, makers of the RSuite Enterprise Content Management System, will host its 10th User Conference and Tech Day on September 18th – 19th at the Convene Cira Centre in Philadelphia, PA.  This annual event allows both current members of the global RSuite community and those considering RSuite to interact, discuss trends, and get a sneak peek at what’s next with RSuite.  

Categories: DITA

Adobe Event Focuses on DITA for Marketing and Technical Communication

The Content Wrangler - Tue, 2017-09-05 23:46

There’s been a lot of talk about convergence between marketing and technical communication over the past few years. Most of the ideas being discussed are focused on finding ways to improve customer experience by unifying content production and distribution efforts, but few companies are actually making a concerted attempt to break down the silos that prevent collaboration. Adobe aims to change that.

This October 10-12, Adobe will attempt to bridge the gap between technical communication and marketing professionals by bringing together content creators from both camps to learn about structured content. The effort is called Adobe DITA World, an online event to which the software maker expects to attract over 1,000 marketing and technical communication experts from around the globe.

The three-day virtual confab aims to showcase experts in the fields of content management, content strategy, content engineering, translation, and localization in order to help “connect the dots” between marketing and technical communication content.

The Content Wrangler is pleased to be named the official media partner of Adobe DITA World. Founder and Chief Wrangler, Scott Abel, will serve as the opening keynote presenter. He’ll discuss the impact of cognitive content, artificial intelligence, and agentive technologies on technical communication and marketing.

Abel will join a roster of outstanding guest experts including technical communication and content strategy notables Val Swisher, Rahel Anne Bailie, Tom Aldous, Andrea Ames, Robert Anderson, and Kristen James Eberlein.

Adobe DITA World offers three days of programming for free. But to attend, you’ll need to register.

Take a peek at the line-up—and if it’s a good fit for you—register today! It’s free.

If you can’t make the live event, register anyway to gain access to recorded presentations. And, make sure to follow the event on Twitter.

The post Adobe Event Focuses on DITA for Marketing and Technical Communication appeared first on The Content Wrangler.

Categories: DITA

Publishing process got you in a pickle?

Really Strategies - Fri, 2017-09-01 15:28
Publishing Process Pickle.png

Long, long ago Jeff Wood made me a t-shirt that read "Publishing Process Got You in a Pickle?" accompanied by a hokey animated pickle graphic. It came up during today's planning meeting for the upcoming 2017 RSuite User Conference and had us all roaring.

Categories: DITA

Artificial Intelligence

The Content Wrangler - Thu, 2017-08-24 21:50

The following is an excerpt from The Language of Technical Communication, the seventh book in The Content Wrangler Content Strategy Series of books from XML Press (2016).

What is it?

A branch of computer science that focuses on the development of software agents, also known as cognitive technologies, capable of performing tasks that would normally require human intelligence, such as finding, interpreting, and manipulating visual and textual information.

Why is it important?

Artificial intelligence is producing cognitive technologies that are radically changing, and even automating, many traditional communication tasks. Technical communicators need to adapt accordingly.

Why does a technical communicator need to know this term?

Artificial intelligence (AI) has been advancing rapidly, and, through cognitive technologies, its impact has been spreading. Here are some of the reasons why AI has become so important today:

  • Cognitive technologies have become much more practical, shifting the focus to performing human tasks rather than emulating human thought.
  • Massively scalable big data acquisition, storage, and processing infrastructure has become broadly accessible.
  • Decades of research and experimentation in AI, while not successful in emulating human thought, have been successful in improving problem-solving and learning algorithms.

A key area of application for AI is Natural Language Processing. Here, tasks commonly performed by people are being increasingly automated, or at the very least facilitated, by intelligent software applications. These tasks include text translation, summarization, validation, classification, interpretation, and even generation.
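To make one of these tasks concrete, here is a toy extractive summarizer in pure Python: it scores each sentence by the corpus-wide frequency of its words and keeps the top scorers. Production NLP systems use far more sophisticated models; this sketch only illustrates the kind of task being automated.

```python
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Toy extractive summarizer: score each sentence by the corpus
    frequency of its words and keep the top-scoring sentences in order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])
    return " ".join(sentences[i] for i in keep)

sample = ("Content strategy matters. "
          "Content strategy guides content creation and content governance. "
          "Lunch was good.")
print(summarize(sample))  # keeps the sentence whose words are most frequent overall
```

Even this crude frequency heuristic shows why summarization is automatable: the signal is in the text itself, not in any human judgment step.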

Another area of AI advancement is computer vision, where image and video processing automates the selection, interpretation, and manipulation of visual resources. Yet another important area of AI advancement is information discovery, where contextually aware applications help select relevant information resources for users based on real-time data.

For technical communicators, these changes could not be more significant. More and more traditional communication tasks will be subjected to automated support and even replacement. What this means is that the focus for technical communicators will shift more and more towards the human side of the equation, such as facilitating all-important cross-functional collaborations, the value of which will in fact be increased and not diminished by the advance of AI.

About Joe Gollner

Joe Gollner is the Managing Director of Gnostyx Research, which he founded to help organizations leverage content standards and technologies as the basis of scalable and sustainable content solutions. For over 25 years, he has championed content technologies as an indispensable mechanism to help organizations manage and leverage what they know.

To read more from The Language of Technical Communication, check it out on the XML Press website, or buy the book now from Amazon, Barnes & Noble, or the O’Reilly Media website.

The post Artificial Intelligence appeared first on The Content Wrangler.

Categories: DITA