DITA

The Language of Localization

The Content Wrangler - Mon, 2018-02-19 19:42

 

The following is the foreword written by Ulrich Henes from The Language of Localization, edited by Kit Brown-Hoekstra, the eighth book in The Content Wrangler Content Strategy Series of books from XML Press (2017).

Adapting content to make it more meaningful, appropriate, and effective

Like many of you, I have frequently had to explain what localization is, usually in the context of what I do for a living. I find localization easy to explain, often using the example of getting a mobile phone adapted for other countries. Because most people own one, they can imagine why adaptation makes sense. For good measure, I add that it is not only the language, but often local laws, different technical specifications, and customs that need to be considered.

As I get fired up and start explaining the difference between localization and internationalization, eyes quickly glaze over because, like in all languages, words are not just words; they represent concepts that, in turn, often rely on the comprehension of other concepts for a clear understanding. At this point, the value of a guide like The Language of Localization becomes apparent. Like any other profession, we have developed a jargon of our own. We took common words that have multiple meanings and defined them in the context of localization, or if a word did not exist, we invented one.

A clearinghouse for localization terminology

This sounds a lot easier than it is because there is no central authority to do that work, no arbiter to clarify meaning. Instead, it was, and continues to be, a crowd-sourced exercise with no central repository to use for reference.

Enter Kit Brown-Hoekstra, Scott Abel, Richard Hamilton, and their merry band of subject matter experts. For the benefit of localization professionals, global marketers, and technical communicators, they compiled a “must-know” list of terms that define the common language of our industry.

I appreciate how thoroughly they tackled the task. Not only are the terms clearly defined and their importance explained, but the accompanying essays explain why business professionals need this information.

When localization goes wrong

This brings to mind a case in which poor communication and a lack of clearly defined terms cost a pretty penny. In 1999, NASA lost a US $125 million Mars orbiter because NASA was using metric measurements, while Lockheed Martin, the contractor, relied on English units.

We can do much better

Having clarity in the way we communicate about the many tasks and processes that make up life in localization benefits every aspect of our work. It helps us achieve good quality, speeds up time-to-market, and improves the cost-effectiveness of our collaboration. Everyone benefits, from the translator to the project managers to, maybe most important of all, the end users.

We all owe thanks to the people behind this project for helping to develop a shared language for a still young industry. Along with other standards, it is a critical tool in meeting the challenges of an ever faster-paced, globalized world.

As new terms appear and existing ones change their meaning, I hope that the authors will issue new editions of this excellent guide.

Well done, and thank you on behalf of the localization community.

Ulrich Henes
Madison, Wisconsin, USA

The post The Language of Localization appeared first on The Content Wrangler.

Categories: DITA

Gilbane Advisor 2-16-18 — Marketing & AI, publishing & AR, blockchain & media, IoT

How is AI disrupting marketing? An excellent summary from Scott Brinker on the current/near-term reality of “AI” marketing applications. “…here’s the irony: as much as the hype has overstated what AI might do for marketing in the next 12-24 months, the reality of how AI is already working in marketing today is often under-recognized.” ’Tis true. Read […]

This post originally published on https://gilbane.com

Categories: DITA

Gilbane Advisor 1-30-18 — Molecular content, beyond bitcoin, ML data value, Facebook “platform”

Molecular content & the separation of concerns The creation and management of content continues to increase in complexity as we need to design for n machines in addition to n screens. Content Strategist Michael Andrews lays out why we need to move beyond single sourcing and modular content. Michael proposes an approach based on “molecular content” combined […]

This post originally published on https://gilbane.com

Categories: DITA

Review: The Science Behind Memorable Visuals

The Content Wrangler - Thu, 2018-01-18 21:42

It’s intuitive to believe that visuals are more memorable than text. To a degree, science confirms this. Research shows that visuals improve recall because they help viewers process information faster and help them pay attention by being more engaging than text.

But there is such a thing as a forgettable visual. Think of all the information you encounter in a typical week. How much of it do you remember? We forget our lives almost as quickly as we live them, and visuals can still escape our memories.

In her July 19, 2017, Content Wrangler webinar, The Science Behind Memorable Visuals, cognitive neuroscientist Dr. Carmen Simon talked about how to stay on people’s minds by applying science-based guidelines grounded in how the brain processes visuals.

A bestselling author and leading expert on using memory to influence decision-making, Carmen covered how to use visual thinking skills in four areas that are prevalent in business communication: facts, processes, data, and abstracts. She also covered how to use design elements (images, text, lines, shapes) to create interesting—and memorable—content.

Carmen provided universal visual design principles and explained how they influence viewer attention and retention, including how to create and select visuals that impact memory and how to avoid those that don’t.

Visual thinking is important because, when you use images correctly, you have the luxury of staying on your audience’s minds long-term. This helps you influence their decisions because people act in your favor based on what they remember, not on what they forget.

Her most recent book, Impossible to Ignore: Create Memorable Content to Influence Decisions, has won the acclaim of publications such as Inc.com, Forbes, and Fast Company and has been selected as one of the top international books on persuasion.

Read on for some highlights from Carmen’s talk. For the details, go to her webinar and listen to the whole hour’s worth for free.

Why does it matter which visuals are more memorable?

“All your audiences make decisions in your favor based on what they remember,” Carmen says. So all businesses need to ask themselves, What makes people remember your content where visuals are concerned? How do we “stay on people’s minds long enough for them to decide in our favor?”

While we can’t assume that people remember pictures more than text, pictures can influence someone’s memory significantly. People often find pictures more interesting than text. Also, the brain can process pictures faster. Finally, pictures generally hold our attention longer.

What are the elements of visual thinking for nondesigners?

Carmen breaks down elements of visual thinking into three groups:

  • Think in pictures
  • Visual elements
  • Universal design principles

 

Think in pictures

Facts: Most information we share with our audiences in the business world is based on facts, some objective reality we have to define. When you are sharing factual information, challenge yourself to communicate it with pictures. “The brain is always looking to conserve cognitive energy,” Carmen says. “Your brain is not like a computer; it’s looking to help you live to fight another day. It enjoys cognitive ease and will retain it.”

Processes: When you describe a process, you tell people “You need to move from point A to point B, and these are the steps.” Use arrows to indicate the order of the information in a sequence and a smooth flow to help the brain visualize and remember.

Numbers: “Challenge yourself to place data in a visual format so that you bring it to life and give it meaning.” She gives this memorable example:

Abstract terms: These days, when people are constantly multitasking, their brains are “often too cognitively lazy to go through the effort” of making sense of your abstract information (generalizations, theories, feelings, attitudes). So don’t “leave it to the audience to visualize the meaning.”

For example, Carmen asks, what image would you choose to represent the abstract meaning of “revenge”?

It makes sense to evoke emotion with a concrete image where appropriate since emotions are memorable. “The brain is mobilized by specifics.”

Note that words can build mental pictures, too. Mental pictures often come from metaphors, as in Carmen’s Johnson & Johnson example describing Band-Aids as bodyguards:

“Use metaphors to rescue text from content amnesia,” Carmen says. “What is memory but an association between two concepts?”

How much difference is there from one culture to another? Carmen says that the brain hasn’t changed much in the last 40,000 years. “Whether you’re from Portland or Pakistan,” she says, “most of our body receptors are visual. The brain is physiologically equipped to handle images.”

Visual elements

Pictures: “Attention is mandatory for memories to be built,” she says. At the same time, if the brain has too much visual stimulation, it doesn’t know what to focus on. So don’t use pictures gratuitously. Avoid visual clichés. “There is such a thing as a forgettable picture,” she says.

One of her pet peeves is generic images of people touching a tablet while a bunch of magical icons or interconnected dots stream out of the screen, implying that “good things come out of these devices.” Don’t use these just to make slides pretty. These days, those images are meaningless. “If you’re using those images, it probably means that you don’t know the subject well enough.” Another example: people wearing business suits and using a laptop on top of a mountain.

Avoid SGSs: stupid generic shots.

Text: Use vivid, concrete words that help the brain build images. For example, if you showed an otherwise forgettable photo of Mount Everest with this vivid description, the words alone would be memorable:

Lines: Lines are important in any communication, for example, in a slide or in a document. With simple lines, you can impact the way the brain processes information. Lines can create a mood, and they can separate or group information. You can tilt them, display them in multiple formations, make them wavy, or get creative in any number of ways to organize and draw attention to your content in ways that help people remember it.

Shapes: You can also “earn a spot in someone’s mind” with shapes. Here’s another of Carmen’s examples:

Watch the full webinar

For the rest of what Carmen has to say on this topic, watch the full webinar here.

The post Review: The Science Behind Memorable Visuals appeared first on The Content Wrangler.

Categories: DITA

Gilbane Advisor 1-16-18 — Open Web, Mobile Mesh, Machine Learning, AR

We’re back after our annual December break and looking forward to a year of consequential, if not yet tectonic, shifts in enterprise and consumer content strategies and applications. We’ll be closely watching how, and how fast, three major technology areas will drive these changes: 1) The tension between the Open Web and proprietary platforms; 2) […]

This post originally published on https://gilbane.com

Categories: DITA

The DITA Code Review (Content Audit) – put your content on track to success

Accelerated Authoring - Fri, 2018-01-05 16:47

A DITA Code Review is a collaborative process, building skills and keeping your DITA implementation on track towards semantic markup. Make DITA content auditing part of your team culture.

Reviewing your DITA content for semantically correct markup is referred to as a DITA code review or a DITA content audit.

What happens in a DITA content audit (DITA code review)?

The auditor(s) review DITA source files and check tagging in the content. The editors/writers whose content is reviewed get practical, hands-on feedback towards implementing DITA best practices.

Who does a DITA content audit?

Any member of the team who is well versed in semantic markup can lead a DITA code review. Ideally, the auditor(s) will be or will include at least one person from outside the team who can:

  • Benchmark against DITA best practices
  • Compare to other teams in the organization (creating healthy competition between teams)

Why review your DITA markup?

DITA has lots of promise. Semantically correct DITA markup is the key to successfully realizing that promise.

Semantic DITA markup can enable:

  • filtering content by detailed conditions,
  • displaying expandable/collapsible content,
  • applying automation to display,
  • enabling rapid identification of issues that need to be checked or modified when your product updates, and
  • much more, such as intelligent tools that link error messages or support requests to content (which require semantic tagging to work).

For example, a call to help that references a specific task or error can automatically map to the appropriate content – if the content has meaningful semantic markup.

Does your DITA separate content from formatting? If not, the presentation is likely to be inconsistent, problematic across emerging platforms, difficult to maintain over time, and time-consuming and costly across localization combinations (for different languages and markets).

When should you do a DITA code review/content audit?

DITA code reviews should start early in your DITA implementation. Content auditing helps your team make the shift from tagging for formatting to tagging for semantic meaning. Shifting to semantic markup is not easy, and the content review helps keep your team on track to successful DITA implementation.

The earlier in the process that you start content audits, the earlier writers and editors can recover from problematic tagging and acquire healthy semantic tagging habits. The earlier this happens, the smaller the scope of problematic DITA content created.

Over time your content will tend to regress. New writers come on board. Experienced writers tend to slip back into old habits. Content needs to be added in a rush and semantic markup tends to suffer. Implementing a regular cycle of code reviews helps you keep your content effective and agile. While the frequency of code review will drop over time, content auditing should become a regular part of your team culture.

Every DITA implementation needs a DITA code review. (Read Is Markup Mentoring for Me?)

Use tools whenever possible to turbo-charge your content review

Consider the <b> element in DITA. Bold has no semantic meaning. Getting writers to avoid <b> and instead use a semantically meaningful tag for the content (such as <uicontrol>) is part of the DITA mandate.

You can use CSS to focus writers and reviewers on problematic markup. For example, you can display content marked with <b> on a bright red background.

Consider using search scripts to identify known issues (such as overuse of conditions).
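
To make this concrete, here is a minimal Python sketch of such a search script (not from the original post; the file extension, attribute names, and threshold are assumptions to adapt to your own content set). It flags <b> usage and counts conditional-processing attributes, complementing the CSS highlighting described above:

    #!/usr/bin/env python3
    # Hypothetical DITA markup scanner: flags <b> elements and counts
    # conditional-processing attributes (props, audience, platform, product, otherprops).
    import re
    import sys
    from pathlib import Path

    BOLD = re.compile(r"<b[\s>]")  # matches <b> and <b ...> start tags
    COND = re.compile(r'\b(props|audience|platform|product|otherprops)\s*=')

    def scan(root: str, max_conditions: int = 10) -> None:
        for path in sorted(Path(root).rglob("*.dita")):
            text = path.read_text(encoding="utf-8", errors="replace")
            bold_hits = len(BOLD.findall(text))
            cond_hits = len(COND.findall(text))
            if bold_hits:
                print(f"{path}: {bold_hits} <b> element(s); prefer a semantic tag such as <uicontrol>")
            if cond_hits > max_conditions:
                print(f"{path}: {cond_hits} conditional attributes; possible overuse of conditions")

    if __name__ == "__main__":
        scan(sys.argv[1] if len(sys.argv) > 1 else ".")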

The content audit should be a resource for writers – not an Orwellian experience

Writers and editors don’t want to feel that their professional autonomy and competence are under attack. They do want to feel supported in a challenging environment. Part of the goal of the content audit is to provide support, in the form of positive, focused, and practical feedback that writers and editors can implement immediately.

Sample scenario for a content audit

  1. The auditor(s) and the editor/writer select a representative set of 10 topics. (Include task content as much as possible in the set.)
  2. The auditor(s) spend 15-30 minutes per topic and provide feedback on the topics.
  3. The auditor(s) meet with the writers and editors to review the feedback. Editors and writers have an opportunity to explain their approach – and get supportive feedback towards semantic markup.
  4. Writers and editors then have a chance to work on 5-10 additional topics and present the topics to the auditor(s) for review. The auditors will provide positive feedback where semantic markup has been implemented and will point out where improvements should be made.
  5. This step can be repeated for an additional round as needed.

The post The DITA Code Review (Content Audit) – put your content on track to success appeared first on Method M - The Method is in the Metrics.

Categories: DITA

SPARQL and Amazon Web Service's Neptune database

bobdc.blog - Sun, 2017-12-31 14:53
Promising news for large-scale RDF development. Bob DuCharme http://www.snee.com/bobdc.blog
Categories: DITA

Orbis Earns 6th Patent in the Realm of Semantic Technology

Really Strategies - Wed, 2017-12-13 22:15

Orbis Awarded U.S. Patent for 3-Term Semantic Search

(Annapolis, Md.) - December 13, 2017 - On November 21, 2017, Orbis Technologies, Inc. was awarded a U.S. Patent entitled “Systems and Methods for 3-Term Semantic Search” (U.S. Patent No. 9,824,138). This patent recognizes a next-generation semantic approach to information retrieval at enterprise scale. In contrast to simple keyword or phrase-based searching, this advancement yields far greater accuracy by identifying similar content and/or relationships between documents.

Categories: DITA

SPARQL queries of Beatles recording sessions

bobdc.blog - Sun, 2017-11-19 15:40
Who played what when? Bob DuCharme http://www.snee.com/bobdc.blog
Categories: DITA

Gilbane Advisor 11-15-17 — news value, implausible AI, software & CMS 2.0

Scoring news stories is hard? Frederic Filloux dives into some research and unique challenges the News Quality Scoring project faces. A worthy project to benefit producers and consumers, the NQS “is aimed at assessing the value-added deployed by a media for a given news coverage in terms of resources, expertise, thoroughness of the process, and […]

This post originally published on https://gilbane.com

Categories: DITA

Who you’ll meet at Gilbane Boston

Dear Reader: Join us in Boston in 3 weeks to network with your peers and learn how they are building successful next generation content strategies and digital experiences for customers and employees. Here is just a sample of who you’ll meet… Starwood Hotels & Resorts • Elisa Oyj • State Street Global Advisors • KrellTec • Commonwealth of MA […]

This post originally published on https://gilbane.com

Categories: DITA

How Structured Content Makes Chatbots Helpful

The Content Wrangler - Tue, 2017-11-07 08:19

Remember when context-sensitive help was the revolutionary way to deliver the right content to the right people at the right time in the right way? Just a few years ago, many technical communication teams did nothing but create context-sensitive documentation for software products. They aimed to provide contextually relevant, helpful content based on what the customer was doing in the software at any given moment.

These forward-thinking teams deconstructed large technical documents into discrete chunks, which they then hooked into the product interface. Customers no longer had to paw through a fat user manual or poke around in an online portal to seek answers to their questions. With a click of the F1 (help) key, they got the information they needed on the screen right in front of them.

Oooh. Ahhh. Contextual relevance had arrived in the digital world.

Today, savvy consumers simply expect digital content to be contextually relevant. What’s more, “context” now means more than location in a user interface. “Context” includes many factors: user-profile data, geographic location, product model, version number, preferred language, time zone, interaction history, the device’s capabilities, and so on.

Providing contextually relevant content today is no trivial matter. It’s challenging, especially for teams that have not adopted advanced practices and tools for developing and managing information.

In his recent Content Wrangler webinar, The Fifth Element: How Structured Content Makes Chatbots Helpful, Alex Masycheff, structured-content expert and co-founder and CEO of Intuillion Ltd., discussed how emerging delivery technologies can take advantage of structured technical content to deliver contextually relevant content via conversational user interfaces, such as chatbots.

Alex delved into the following:

  • How chatbots improve context-sensitive assistance
  • Five elements of a helpful chatbot
  • When chatbots bring the greatest value
  • Why structured content is critical to chatbot success

Read on for some highlights from Alex’s talk. For the details, go to his webinar and listen to the whole hour’s worth for free.

Single source publishing today

Single source publishing has evolved since the early days of context-sensitive help. Content now adapts to a range of channels. People might access it through a customer portal, through a chatbot on Facebook that provides a conversational UI, or through an augmented-reality application that applies a visual layer of information over physical objects.

Content may also have to adapt to align with business rules that determine how it gets processed. Depending on the user’s goals and preferences, access rights, and other criteria, a set of business rules can be applied, on the fly, to any content to make it deliverable to the user in a way that fits the situation.

Further, we’ve broadened our notion of context sensitivity. In the early days of context-sensitive help, context meant “the user’s location in the UI.” Today, the user context has many facets. Examples:

  • Goals
  • Skills and abilities
  • Current activity
  • Profile
  • Product
  • Geographical location
  • Interaction history

Five elements of chatbot helpfulness

Alex’s webinar title starts with “The Fifth Element” in reference to the movie The Fifth Element. In that movie, four stones represent various elements in nature. A fifth stone brings them together and activates their powers.

Alex’s fifth element—structured content—brings all the others together and activates their power to create human experiences that just might qualify (depending on the human) as helpful.

Here are his five elements:

  1. User’s context
  2. User’s intent
  3. Entities of the user’s intent
  4. Knowledgebase
  5. Structured content

Element 1: User’s context

The first requisite element of chatbot helpfulness is an ability to capture info about the user’s context. The system can capture some of the contextual info (for example, the user’s location and basic profile data) automatically. The chatbot then kicks into conversation mode to “unveil” other key bits of contextual info (the user’s goal and so on).

Chatbots can gather information about people’s context by asking questions. Based on the answers they get, they can then offer advice, as shown in this conversation between a chatbot and a maintenance engineer:

You could think of this robot as a chatty version of the old F1 key.

Element 2: User’s intent

To efficiently suss out the user’s intent—the thing someone wants to know or do in a given moment—a chatbot must keep the conversation within a narrow domain of information. Here’s an example of a domain that might support conversations between a chatbot and maintenance engineers:

While chatbot designers can’t control what the human will toss out (ever amused yourself by messing with Siri?), they can and must define the scope of the machine’s side of the conversation. Presuming that the person stays within that scope—by asking something like “Do I need to lubricate the XZ-135?”—the conversation has a chance of satisfying the user’s intent.

Element 3: Entities of the user’s intent

To understand the user’s intent, chatbots need info about the parameters, or entities, that make up the user’s intent. Here’s what such entities might look like for our maintenance conversation:

To find out which entities go with each intent—to “fill all the required slots,” Alex says—chatbots must ask questions. For example, in the earlier conversation, after the chatbot learns that the first entity is the ZX-135, it asks a question to fill in the slot for the next entity:

When the chatbot has filled all the entities of the user’s intent, it can proceed to offer help.
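
As a hypothetical sketch of that slot-filling loop (the slot names and questions below are invented for illustration and are not taken from Alex’s webinar), the chatbot can keep asking a question for each unfilled entity until the intent is complete:

    from typing import Optional

    # Hypothetical slot-filling sketch: an intent defines required entity slots,
    # and the bot asks a question for each slot that is still empty.
    QUESTIONS = {
        "model": "Which model are you maintaining?",
        "task": "What do you want to do (lubricate, inspect, replace)?",
        "operating_hours": "How many hours has the unit been running?",
    }

    def next_question(slots: dict) -> Optional[str]:
        """Return the question for the first unfilled slot, or None when the intent is complete."""
        for name, value in slots.items():
            if value is None:
                return QUESTIONS[name]
        return None

    # The user has already named the model, so the bot asks about the task next.
    slots = {"model": "XZ-135", "task": None, "operating_hours": None}
    print(next_question(slots))  # -> "What do you want to do (lubricate, inspect, replace)?"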

Element 4: Knowledgebase

Chatbots pull their content from a knowledge base. As content professionals, our challenge is to organize that knowledge base so that the chatbot can find and deliver the content chunks that will satisfy users’ intents.

How do we make this happen? Here’s the critical behind-the-scenes insight: Just as we have learned to structure content in standalone modules (granules), so too must we structure CONTEXT.

Aha!

Here’s how Alex illustrates a possible structure for context granules:

Creating a chatbot is a game of matching context granules with content granules. The chatbot pulls content from the knowledge base according to that matching.
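
A hypothetical sketch of that matching step follows (the topic IDs and metadata fields are invented for illustration). Each content granule carries metadata describing the context and intent it serves, and the chatbot returns the granule that matches everything it has learned from the user:

    from typing import Optional

    # Hypothetical matching sketch: content granules carry context/intent metadata,
    # and the bot picks the granule that matches the captured user context.
    TOPICS = [
        {"id": "lubricate-xz135", "intent": "maintenance", "model": "XZ-135", "role": "engineer"},
        {"id": "inspect-xz135",   "intent": "inspection",  "model": "XZ-135", "role": "engineer"},
        {"id": "lubricate-xz200", "intent": "maintenance", "model": "XZ-200", "role": "engineer"},
    ]

    def find_topic(context: dict) -> Optional[dict]:
        """Return the first topic whose metadata matches every key in the captured context."""
        for topic in TOPICS:
            if all(topic.get(key) == value for key, value in context.items()):
                return topic
        return None

    captured = {"intent": "maintenance", "model": "XZ-135", "role": "engineer"}
    print(find_topic(captured)["id"])  # -> "lubricate-xz135"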

Element 5: Structured content

Structured content—our fifth element—unites the other elements (context, intent, entities, and knowledge base) and, as Alex put it, “activates their powers.” Structured content is granular content. In other words, it’s made up of topics (or “chunks” or “units”) that can be “managed and processed independently,” Alex says.

Without structured content, he adds, a chatbot can’t create helpful experiences.

Here’s how Alex illustrates structured content:

To enable a chatbot to find and process the right topics at the right time, each topic must be associated with metadata that identifies applicable user contexts and user intents. Example:

You might wonder why we need to bother with structure, why we can’t “just let artificial intelligence do the work.” Here’s why in Alex’s words: “We’re not there yet. Understanding human language is still a challenge.”

Watch the full webinar

For the rest of what Alex has to say on this topic—including his insights into the role of artificial intelligence, deep learning, speech recognition, image recognition, natural language processing, machine translation, metadata auto-identification, and scalability—watch the full webinar here.

The post How Structured Content Makes Chatbots Helpful appeared first on The Content Wrangler.

Categories: DITA

An HTML form trick to add some convenience to life

bobdc.blog - Sun, 2017-10-29 14:07
With a little JavaScript as needed. Bob DuCharme http://www.snee.com/bobdc.blog
Categories: DITA

Every DITA topic should be able to fit anywhere. (Not really.)

Geekery - Sat, 2013-10-12 17:05

When I talk to writers about this, I state the case strongly: every topic should be able to fit anywhere. That always provokes some pushback, which is good. Of course it’s not really so, in practice. There are many combinations of topics that are just never going to happen. However, on a large scale, with hundreds or thousands of topics, there are many, many plausible combinations, some of them completely unexpected.

In fact, there are so many plausible combinations, you might as well not worry about the impossible ones. You might as well just go ahead and write each topic as if you had no idea what parent topic it was going to be pulled into.

That’s what we mean by “unleashing” your content with DITA. It’s the combinations of topics that bring the value, not the individual topics themselves. If you draft each individual topic so that it’s eligible for the largest possible number of combinations, you’ve multiplied the usefulness to the user (yes, and the ROI, and the technical efficiency) of the information in that topic. For any given topic, it’s true, there may be only three or four conceivable combinations in which it could make sense. For some, there might be hundreds. You’re not going to know unless you write for reuse in every case.

Once we’ve put this into action, we can go back to the managers and gurus and say, now you’ve really got ROI; now you’ve really got efficiency. Because we’ve given you something that is worth investing in, something that’s worth producing efficiently. Something that can delight readers with its usefulness and its elegance. This isn’t just content, this is writing.

Categories: DITA

DITA makes it possible for any information set, no matter how complex and huge, to be represented with a single page.

Geekery - Tue, 2013-10-08 00:33

In any information set, every component should be able to roll up into what is ultimately a single top-level summary. We know most readers don’t come in through the front door, but in principle you can provide the reader with an entry point that fully sums up what’s in the information set. From there they can drill down into more and more detailed levels. (Readers can be very easily trained to do this, because they have learned from their previous reading to scan for summary-like information and use that to judge whether it’s worth reading on for more detail.)

If each level is itself a rolled-up collection of subordinate units, and so on in turn down the ranks, what you are offering is a set of pages in which each page is itself a table of contents. The content is the navigation and the navigation is the content.

Picture this single page sitting at the apex of a pyramid. It contains (describes) everything that is included in that pyramid. Not that many people are ever going to actually read that page, but we need it to be there, because it defines the pyramid.

The bigger the pyramid, the higher level the information in its top node is going to be. So, for a very large information set, that single page is going to be very general. Each of its immediate child pages will be a level more detailed, and each successive level is going to be more detailed, until you get to the bottom “leaf” level where a topic describes only itself.
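
Here is a minimal sketch of that roll-up in Python rather than DITA markup (the titles and summaries are invented for illustration). Given a nested set of topics, any node can be rendered as a page that is itself a table of contents for the level below it:

    # Hypothetical roll-up sketch: each node summarizes itself and lists its children,
    # so every "page" doubles as a table of contents for the level below it.
    PYRAMID = {
        "title": "Widget Platform",
        "summary": "Installing, configuring, and maintaining the platform.",
        "children": [
            {"title": "Installation", "summary": "Install the platform on supported systems.", "children": []},
            {"title": "Maintenance", "summary": "Keep the platform healthy over time.", "children": [
                {"title": "Lubrication", "summary": "Lubricate moving parts on schedule.", "children": []},
            ]},
        ],
    }

    def render_page(node: dict) -> str:
        """Render one node as a page: its own summary plus one line per child."""
        lines = [node["title"], node["summary"]]
        lines += ["  - " + child["title"] + ": " + child["summary"] for child in node["children"]]
        return "\n".join(lines)

    print(render_page(PYRAMID))                 # the apex page that defines the whole pyramid
    print(render_page(PYRAMID["children"][1]))  # drill down one level into Maintenance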

 

Categories: DITA

Modularity is what makes it fun to write with DITA.

Geekery - Tue, 2013-09-24 19:13

The most disorienting thing about learning structured writing is modularity. There are a lot of things we’ve learned about writing that we have to unlearn; this is the most fundamental of them. This is way bigger than deciding it’s OK to dangle a preposition.

Modularity means, in practice, conveying meaning in free-standing chunks instead of in a unified stream. Why is it so great to be free-standing? What does that get me, from a purely authoring perspective? (Remember, we’re still excluding managerial and technical perspectives from this conversation. You folks can come on back later.)

In mature DITA writing, many topics are built up automatically from component topics. Done well, these composite topics look like you lovingly handcrafted them with sections, section titles, section detail, overview material, and so on. In fact, you threw them together on the fly from component topics that you happened to have lying around.

How good your built-up topics are depends on how good those component topics are. How good the components are is largely a function of how well each one delivers meaning on its own, without having to wait for any other component to do its job.

A composite topic that looks and reads like a composite topic is a failed composite topic. It needs to look and read like it was specifically conceived for this particular user at this particular moment. We want its component topics to match, in tone and style and scope, so well that they look like they were all written at once for this specific collection.

You’re working on a building, from the roof down and from the foundation up at the same time. You know what you need your built-up composite topic to do, which influences how you’ll define and select or draft its component topics. At the same time, as your component topics come into being, they’ll influence the scope, scale, and ultimately the effectiveness of the composite topic you’re building from them. In my experience, it’s when this process gets rolling that you really start to feel like you’re doing interesting, useful writing. This is where the fun starts.

Categories: DITA