Bayer Diabetes Care (BDC) is looking for an individual who is keen to pair their solid understanding of DITA content strategy, development and publishing with management of all the aspects of medical device labeling including people, regulatory and translation.
Bayer is a global enterprise with core competencies in the Life Science fields of health care and agriculture. Its products and services are designed to benefit people and improve their quality of life. Bayer HealthCare, an Equal Opportunity Employer, makes an important contribution to human and animal health with its innovative products and by researching new therapeutic approaches, promoting science for a better life in everything that we do. At Bayer you have the opportunity to be part of a culture where we value the passion of our employees to innovate and give them the power to change.
DITA Content Strategist / Content Manager
Responsibilities of this role include:
- Managing a team of writers responsible for creating the content required for packaging and labeling materials
- Working with the Sr. Manager, Global Content to develop content models, metadata, reuse strategies and workflow
- Setting the tone and structure of all content in accordance with DITA standards and content management system guidelines
- Enforcing consistent implementation of content across brands
- Managing project schedules and coordinating activities within the Content Department
- Coordinating with Core Team Members, including Legal, Medical, and Regulatory within BDC, to ensure that all labeling is submitted to and approved by government regulatory agencies
- Acting as liaison between Research and Development, Marketing and the Packaging Services team
- Working closely with the Translation Project Manager to facilitate localization of core content into required languages
- Working closely with the regulatory team to assure that regional/country requirements are defined within the content management system
- Coordinating proofreading and editing of all final English text for 510(k) and IVDD submissions
Qualifications for this role include:
- 8-10 years of experience as a Technical Writer or Manager in a Pharmaceutical or Medical Device environment
- 5 or more years of experience with structured writing, preferably using DITA and a component content management system
- Microsoft Office, Adobe Creative Suite, and Adobe FrameMaker knowledge a plus
- Bachelor’s degree in liberal arts (English major desirable), or Bachelor’s degree in science or medical technology
- Experience working with content in a government regulated industry
- Foreign language skills and/or experience managing translation projects
- Strong relationship building skills
- Assertive and comfortable working across all levels of the organization
- Highly organized and able to manage multiple projects at any given time
- Technical expertise to comprehend and manage complex technical information (software, instrument, and reagent systems) and the writing skills to transform this information into accurate, understandable publications
- Analytical thinker with superior problem-solving skills
- Thorough working knowledge and understanding of the FDA regulations as they apply to the preparation, distribution, and control of product labeling for in vitro diagnostics products
- Experience managing outside agencies and freelance creative talent
If this sounds like you, let us know. Send your resume and cover letter to The Content Wrangler today!
Tekom is one of, if not the, largest technical documentation conferences in
Europe. Several of us from the DITA community were invited to speak,
including Kris Eberlein, Keith Schengili-Roberts, Jang Graat, Scott
Prentice, and Sarah O'Keefe. This is the second year that Tekom has had
dedicated DITA presentations, reflecting the trend of increasing use of
and interest in DITA in Europe.
DITA vs. Not-DITA
The theme for me this year was "DITA vs. German CCMS systems".
For background, Germany has had several SGML- and XML-based component
content management system products available since the mid-1990s, of which
Schema is probably the best known. These systems use their own XML models
and are highly optimized for the needs of the German machinery industry.
They are basically walled garden CCMS systems. These are solid tools that
provide a rich set of features. But they are not necessarily generalized
XML content management systems capable of working easily with any XML
vocabulary. These products are widely deployed in Germany and other German-speaking countries.
DITA poses a problem for these products to the degree that they are not
able to directly support DITA markup internally, for whatever reason,
e.g., having been architected around a specific XML model such that
supporting other models is difficult.
So there is a clear and understandable tension between the vendors and
happy users of these products and the adoption of DITA. Evidence of this
tension is the creation of the DERCOM association
(http://www.dercom.de/en/dercom-home), which is, at least in part, a
banding together of the German CCMS vendors against DITA in general, as
evidenced by the document "Content Management and Struktured Authoring in
Technical Communication - A progress report", which says a number of
incorrect or misleading things about DITA as a technology.
The first DITA presentation of the conference was "5 Reasons Not to Use
DITA from a CCMS Perspective" by Marcus Kesseler, one of the founders of Schema.
It was an entertaining presentation with some heated discussion but the
presentation itself was a pretty transparent attempt to spread fear,
uncertainty, and doubt (FUD) about DITA by using false dichotomies and
category errors to make DITA look particularly bad. This was unfortunate
because Herr Kesseler had a valid point, which came out in the discussion
at the end of his talk, which is that consultants were insisting that if
his product (Schema, and by extension the other German CCMS systems)
could not do DITA to a fairly deep degree internally then they were
unacceptable, regardless of any other useful functionality they might offer.
This is definitely a problem in that taking this sort of purist attitude
to authoring support tools is simply not appropriate or productive. While
we might argue architectural choices or implementation design options as a
technical discussion (and goodness knows I have over the years), it is not
appropriate to reject a tool simply because it is not DITA under the
covers. In particular, if a system can take DITA in and produce DITA back
out with appropriate fidelity, it doesn't really matter what it does under
the covers. Now whether tools like Schema can, today, import and export
the DITA you require is a separate question, something that one should
evaluate as part of qualifying a system as suited to task. But certainly
there's no technical barrier to these tools doing good DITA import and
export if it is in fact true, as claimed, that what they do internally is
*functionally* equivalent to DITA, which it may very well be.
In further discussions with Marcus and others I made the point that DITA
is first and foremost about interchange and interoperation and in that
role it has clear value to everyone as a standard and common vehicle for
interchange. To the degree that DERCOM, for example, is trying to define a
standard for interoperation and interchange among CCMS systems, DITA can
offer some value there.
I also had some discussions with writers faced with DITA--some
enthusiastic about it, some not--who were frustrated by the difficulty of
doing what they needed using the usual DITA tools as compared to the
highly-polished and mature features provided by systems like Schema. This
frustration is completely understandable--we've all experienced it. But it
is clearly a real challenge that German and, more generally, European
writing teams face as they adopt or consider adopting DITA and it's
something we need to take seriously.
One aspect of this environment is that people do not separate DITA The
Standard from the tools that support the use of DITA precisely because
they've had these all-singing, all-dancing CCMS systems where the XML
details are really secondary.
A DITA-based world puts the focus on the XML details, with tools being a
secondary concern. This leads to a mismatch of expectations that naturally
leads to frustration and misunderstanding. When people say things like
"The Open Toolkit doesn't do everything my non-DITA CCMS does" you know
there is an education problem.
This aspect of the European market for DITA needs attention from the DITA
community and from DITA tool vendors. I urged the writers I talked to to
talk to the DITA CCMS vendors to help them understand their specific
requirements, the things tools like Schema provide that they really value
(for example, one writer talked about support for creating sophisticated
links from graphics, an important aspect of machinery documentation but
not a DITA-specific requirement per se). I also urged Marcus to look to
us, the DITA community, for support when DITA consultants make
unreasonable demands on their products and emphasized the use of DITA for
interchange. I want us all to get along--there's no reason for there to be
a conflict between DITA and non-DITA and maintaining that dichotomy is not
going to help anyone in the long term.
Other Talks
On Wednesday there was a two-hour "Intelligent Information" panel
consisting of me, Kris Eberlein, Marcus Kesseler from Schema, and Torsten
Kuprat of Acolada, another CCMS vendor. Up until the end this was a
friendly discussion of intelligent information/intelligent content and
what it means, what it is and isn't, etc. At the end of the session we did
touch on the DITA vs. non-DITA arguments but avoided getting too
argumentative. But Kris and I both tried to push on the
standard-for-interchange aspect of intelligent content and information.
This panel was opposite a couple of other DITA presentations so I was
unable to attend those.
Keith Schengili-Roberts presented on the trends of DITA adoption around the
world, which was pretty interesting. While his data sources are crude at
best (LinkedIn profiles and job postings as well as self-reported DITA
usage) he clearly demonstrated a positive trend in DITA adoption around
the world and in Europe. I thought it was a nice counter to the
presentations of the day before.
Frank Ralf and Constant Gordon presented NXP's use of DITA and how they've
applied it to the general challenges of semiconductor documentation
management and production. It was a nice high-level discussion of what a
DITA-based system looks like and how such a system can evolve over time,
as well as some of the practical challenges faced.
My talk was on why cross-book links in legacy formats like DocBook and
Framemaker break when you migrate those documents to DITA: "They Worked
Before, What Happened? Understanding DITA Cross-Book Links"
anding-dita-crossbook-links). (Short version: you have to use the new
cross-deliverable linking features in DITA 1.3.)
George Bina presented on using custom Java URL handlers with custom URL
schemes to seamlessly convert non-XML formats into XML (DITA or otherwise)
in the context of editors like oXygenXML and processors like the DITA Open
Toolkit. He demonstrated treating things such as spreadsheets, Java class
files, and markdown documents as XML using URL references from otherwise
normal XML markup. Because the conversion is done by the URL handlers,
which are configured at the Java system level, the tools handling the XML
documents don't need to have any knowledge of the conversion tools. The
sample materials and instructions for using the custom "convert:" URL
scheme George has defined are available at
Wednesday's DITA events ended with a panel discussion on challenges faced
when moving to DITA, moderated by Sarah O'Keefe from Scriptorium and
featuring George Bina (Syncro Soft), Magda Caloian (Pantopix), and
Nolwenn Kezreho (IXIASOFT). It was an interesting discussion and
touched on some of the same tool and expectation mismatches discussed earlier in the conference.
On Thursday, Jang Graat gave a tutorial titled "The DITA Diet": using DITA
configuration and constraints to tailor your use of DITA to eliminate the
elements you don't need. He also demonstrated a FrameMaker utility he's
developed that makes it easy to tailor DITA EDDs to reflect the
configuration and constraints you want.
Also on Thursday was the German-language version of the intelligent
content panel, with Sarah O'Keefe from Scriptorium representing the consultant
role. I was not present so can't report on what was said.
Tool and Service Vendors
One interesting new tool I saw (in fact the only new product I saw) was
the Miramo formatter Open Toolkit plugin, which is currently free for
personal use. It is a (currently) Windows-only formatter that competes
with products like Antenna House XSL Formatter and RenderX XEP. It is not
an FO implementation but offers comparable batch composition features. It
comes with a visual design tool that makes it easy to set up and modify
the composition details. This could be a nice alternative to hacking the
PDF2 transform. The server version price is comparable to the Antenna
House and XEP products. The tool is available at http://www.miramo.com. I
haven't had a chance yet to evaluate it but I plan to. I emphasized the
value of having it run on other platforms, and the Miramo representative
thought it would be possible for them to support other platforms without
too much effort.
Adobe had their usual big booth, highlighting FrameMaker 2015 with its
new DITA 1.3 features. Syncro Soft had a bigger and more prominent booth
for oXygenXML. FontoXML had their booth and I think there was another
Web-based XML/DITA editor present, but I didn't have a chance to talk to them.
Of the usual DITA CCMS vendors, IXIASOFT was the only one at the
conference (at least that I noticed). SDL had a big booth but they
appeared to be focusing on their localization and translation products,
not on their CMS system.
I think the mix of vendors reflects a settling out of the DITA technology
offerings as the DITA products mature. The same thing happened in the
early days of XML. It will be interesting to see who is also at DITA
Europe next week.
Summary
All in all, I thought Tekom was a good conference for me--I learned a lot
about the state of DITA adoption and support in Europe generally and
Germany specifically. I feel like I have a deeper understanding of the
challenges that both writers and tool vendors face as DITA gets wider
acceptance. Hopefully we can help resolve some of the DITA vs. not-DITA
tension evident at the conference. I got to talk to a lot of different
people as well as catch up with friends I only see at conferences (Kris
Eberlein and Sarah O'Keefe were joking about how, while they both live in
the same city, they only see each other at this conference).
It's clear to me that DITA is gaining traction in Europe and, slowly, in
Germany but that the DITA CCMS vendors will need to step up their game if
they want to compete head-to-head against entrenched systems like Schema
and Acolada. Likewise, the DITA community needs to do a better job of
educating both tools vendors and potential DITA users if we expect them to
be both accepting of DITA and successful in their attempts to implement
and use it.
I'm looking forward to next year. Hopefully the discussion around DITA
will be a little less contentious than this year.
Content marketers are tasked with delivering clear, concise, and relevant content to the right prospect at the right time—content designed to convert prospects into customers. And, for the most part, marketers have absolutely no idea how to do this.
Ask any marketer what a great marketing campaign looks like. You might be surprised how uninspiring the answer will be. According to 2013 stats from the Direct Marketing Association (DMA), the average successful direct marketing campaign (snail mail) has a conversion rate of about 4.4% (up to 10 to 30 times better than for email). Very successful campaigns might see 6% conversions. Seriously?
In what other industry would 94% failure be called a “success?” Only in marketing, and that’s got to change.
Marketers need to mature and move past the spray-and-pray marketing techniques that have dominated for decades. Creating personas—and aiming content at the members of those imaginary groups—is no longer sufficient. Marketers need to marry information about the individual humans they hope to convert with the power of advanced techniques designed to help deliver the right piece of content to the right prospect at the right time on the device of their choosing. Content marketers need intelligent content.
What is intelligent content? Simply put, it is content that is not limited to one purpose, technology, or output. It is content that is intentionally designed and constructed to be modular, structured, reusable, format-free and semantically-rich—and is therefore discoverable, reconfigurable, and adaptable. It’s content that is both able to be read by humans and processed by machines. When implemented correctly, it can help content marketers deliver the right pieces of content to the right prospects with the objective of driving profitable customer action.
- Read a free chapter from “Intelligent Content: A Primer” by Ann Rockley, Charles Cooper, and Scott Abel (2015 XML Press).
Intelligent content hails from the world of technical communication. It got its start in the technology sector, in which technical writers were charged with creating an increasing list of deliverables (online help, websites, customer support content, user guides, learning materials, and job aids) for several different platforms in a variety of languages.
Technical writers didn’t just wake up one day and think, “Wow, we can do better.” They were forced to adopt intelligent content approaches out of necessity. They were overwhelmed with the sheer number of output formats and channels into which they were required to provide content. They were crippled by the volume of content they needed to produce.
As more technical communication departments began to see the value of intelligent content, software and services vendors began creating the tools and technologies required to change the paradigm. Thought leaders, such as Ann Rockley, began helping companies think differently about their content. Rockley and others convinced some of the world’s largest firms to stop thinking about content as documents, modularize the content, and label each piece semantically. These individual chunks of semantically enriched content were then able to be repurposed (often automatically with the help of software designed for the purpose) in the myriad content types technical communicators were responsible for creating.
The result? Technical communication departments that adopted intelligent content became able to quickly publish content to multiple output channels from a single source without having to handcraft each deliverable. They quickly discovered creating semantically rich modular content afforded them the capability to do things they never envisioned, such as create dynamic content experiences personalized to the individual customer, offer content as a service, and build deliverables on-the-fly in response to threats or opportunities.
- Read: Multi-Channel Publishing: A Case Study by Richard Hamilton and Scott Abel (Book Business, April 2015)
Content marketers can—and should—borrow lessons learned from technical communication professionals who have adopted intelligent content. Doing so will afford marketers the opportunity to beat the competition by differentiating themselves from the pack. Imagine being able to efficiently and effectively create relevant content that can be routed to the right person at the right time in the right channel in the right language. It’s being done already, often in departments other than marketing.
It will be interesting to see which brands emerge as the leaders in intelligent content marketing. One thing is certain: The business proposition for change is an easy sell. There are 94 percentage points available, and that’s a lot of room for improvement.
Thanks to all the Gilbane Conference sponsors and exhibitors this year! Also thanks to all our Gilbane conference media sponsors!
This post originally published on http://gilbane.com
I wore my Google I/O t-shirt the other day while hiking, and realized I hadn’t posted a write-up from that developer conference back in May 2015. This year they made a big push to bring more women to the conference, and their methods proved effective.
Google has an Android developer community called Women Techmakers, led by Natalie Villalobos. She did an amazing job with the event itself, but what most impressed me was the building of community prior to the event through the use of Slack, emailed communications, and networking opportunities both online and in person. Natalie said they went from 10% female attendees in 2013 to 23% this year. On the first night, I went to a Women Techmakers dinner at an amazing Peruvian restaurant. The giveaways were lovely zippered canvas bags with Adafruit’s Gemma wearables package for hacking on later, which is awesome. I got the system working the other night, and ordered an additional soft potentiometer hoodie pull so I can make this project with the hoodie I bought at the conference.
This article offers a good overview of what they did to get more women to I/O. Here’s a short list:
- Nominations from Googlers
- Reserved invites to women coding groups
- Dinners together
- Network enablement such as a Slack channel before and after the event
I sat with three women at dinner: one an engineer at Quora in SF, about a year out of college; another a product manager at Fox News in New York City for their app work; and the third, whom I spoke with most, a developer at GreatSchools.net.
Faves and Raves
On Tap Now demo — or, how to not lose 20 minutes to your phone — was an example of a truly amazing context search. First, it brings us back to why we love Google in the first place. Second, she had a Skrillex song up and said, “OK Google, what’s his name.” I kid you not, I audibly gasped when it “just worked” and said “His name is John Moore.”
Expeditions gives Cardboard/VR to kids in classrooms. I went to this demo and it was pretty cool, we all sat on 360-degree swivel stools and turned to see what the “teacher” pointed us to in the VR screens. It was like having a fancy view finder. We went to Versailles in France, which I had very recently visited. It was impressive, but one aspect that threw me off was the cameras must have been super high off the ground. The Hall of Mirrors felt like I was floating through it rather than walking through it. But wow, what a cool experience for a classroom of kids. My second grader absolutely LOVED this.
Jump uses a rig of 16 cameras for VR recording (GoPro built one); a Google assembler then interpolates the viewpoints to produce three dimensions (depth-corrected stereo).
I just read an article that said YouTube supports Cardboard for its videos now, here’s the help page.
Developer advocacy observations
Half the dev sessions were in these “alcoves,” rooms created with cardboard tubes and boxes. That made it difficult to hear the presenters, even with mics on. And because basically no one left their comfy seating, it was extremely hard to move between sessions.
I went to a Firebase demo, which was a great example of a developer advocate identifying with the audience and telling a story. They built an example chat application in a web browser that the audience could interact with right there, and he could turn the example chat off quickly if the crowd got out of hand playing with it live. A side-by-side display made for a great demo as the NoSQL data updated before our very eyes.
Women at conferences, what’s in store?
As I reflect back on what it was like to go to a large tech conference with at least twice as many women as any I’d been to, I feel the proof of the techniques used to get us there will be in the return rate and the new signup rate. Will women feel like it was “their” conference too? Will they be annoyed with the overcrowding and not feel like it’s worth the extra effort to get on a plane? Will women keep returning because they’re there with their friends and it’s a tradition? Will women be more likely to spread the word about the conference itself to a wider group than just developers? I heard a woman explaining Google I/O to a man on the BART transit back to the airport, and thought, I haven’t seen much of that type of evangelism to the general public before. I think it’s going to take a while to see the effects of the special outreach efforts. I definitely think the diversity in networking is going to make a lasting change in the system. I sure hope so.
Here’s the code lab I worked through.
Here are talks I watched later:
To tell even more of the story, here are all my photos from Google I/O, appropriately uploaded to the announced-at-I/O Google Photos, with captions.
Employees are customers too. You want to reach them, you want them responsive and engaged with your organization and your joint customers, and you want to keep them. Today’s employees have little patience with poor workplace digital experiences. In addition, organizations need to consider the connection between engaged employees and the ultimate customer experience. Below are a selection of four conference sessions […]
This post originally published on http://gilbane.com
Writers are conditioned to use tables. We are creatures of habit, and using tables in DITA seems to come naturally if you are used to writing in Word or FrameMaker.
Bottom line: When you need to present information that will be table-like when read, your default preference should be for definition lists.
We know that you are used to using tables, and that you love merging cells and doing all the cool stuff that oXygen Author and XMetaL Author can help you do with DITA tables. Get past it. Consider using definition lists wherever possible.
What’s wrong with tables in DITA?
Telling a technical writer to try to avoid tables is a bit like telling Genghis Khan to avoid conquering. It’s just what we do. This is true — but we want to get past it. Using tables in DITA is sometimes necessary, but often we can get ahead by avoiding them.
Tables invite endless tampering with settings for column width. In most cases, get over it and get past it. Go for <dl>, definition lists.
The key point here is “most cases”. Most, but not all. If it’s absolutely critical that you display content (such as images and descriptive text) side-by-side, and you don’t want to configure <dl> so that <dt> and <dd> display side-by-side in general, or if you absolutely must have a large number of columns, or if you need to merge cells, then go for tables.
Your next steps
A definition list sample — an alternative to tables in DITA
A sample definition list with a heading:

<dl>
  <dlhead>
    <dthd>Image File View Selection</dthd>
    <ddhd>Resulting Information</ddhd>
  </dlhead>
  <dlentry>
    <dt>File Type</dt>
    <dd>Image's file extension</dd>
  </dlentry>
  <dlentry>
    <dt>Image Class</dt>
    <dd>Image is raster, vector, metafile or 3D</dd>
  </dlentry>
  <dlentry>
    <dt>Number of pages</dt>
    <dd>Number of pages in the image</dd>
  </dlentry>
  <dlentry>
    <dt>Fonts</dt>
    <dd>Names of the fonts contained within a vector image</dd>
  </dlentry>
</dl>
Why use <choices> or <choicetables> instead of <ol> or <ul> in a task topic when you need to choose what to do next? DITA markup offers different options for describing choices in a DITA task topic.
The benefit of <choices> or <choicetables> is that the markup is semantic! When you use <choices> or <choicetables>, the machine (and the writer!) understands explicitly what kind of choice we are talking about.
Decision rule – <choices> or <choicetables> for choices in a DITA task topic
Use <choices> where the customer has reached a decision point and must choose one of the options.
- Example: take Route 66 to Boston or Route 81 to Ithaca.
Use <choicetable> where the customer has different options to get to the same result.
- Example: to save, press CTRL+S or choose File > Save.
What’s wrong with using ordered or unordered lists to indicate choices in a DITA task topic?
Using <ol> or <ul> eliminates semantic markup! Using <choices> or <choicetables> explicitly indicates the kind of juncture the reader has reached — and forces the writer to state whether the end result will be the same no matter which option is chosen (<choicetable>), or whether the choice selected will lead to a different outcome (<choices>).
Ultimately, the reader will have a clearer idea of his or her options when you pick the correct, semantic markup for choices in a DITA task topic.
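The decision rule above can be sketched in markup. Here is a minimal, hypothetical example of each element (the wording is invented for illustration; the element names are standard DITA):

```xml
<!-- <choices>: the reader picks one option, and each leads to a different outcome -->
<choices>
  <choice>Take Route 66 to Boston.</choice>
  <choice>Take Route 81 to Ithaca.</choice>
</choices>

<!-- <choicetable>: different options, same end result -->
<choicetable>
  <chhead>
    <choptionhd>Option</choptionhd>
    <chdeschd>Action</chdeschd>
  </chhead>
  <chrow>
    <choption>Keyboard</choption>
    <chdesc>Press CTRL+S.</chdesc>
  </chrow>
  <chrow>
    <choption>Menu</choption>
    <chdesc>Choose File &gt; Save.</chdesc>
  </chrow>
</choicetable>
```

In a task topic, both elements appear inside a <step>, following the <cmd>.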
Your next steps
Embedding multiple topics inside one topic file is not good practice. We call these “polygamous topics”. If, in real life, polygamy tends to be a really bad idea, the same is true for DITA topics.
The golden rule for DITA topics is one topic, one idea.
What’s wrong with including multiple ideas in one topic?
Here is a short list of why you don’t want your DITA topics to include multiple ideas.
- Makes it harder to reuse topics.
- Prevents you from using your <title> and <shortdesc> to keep your DITA topics focused.
- Undercuts minimalism in your content.
- Leads to too much blather or fluff in your content.
How to identify polygamous DITA topics?
The marker for this phenomenon is the use of <title> more than once in a topic.
If you have more than one <title> in a topic, with the obvious exception of titles for images or tables, you are almost certainly embedding multiple topics inside one topic file.
What should you do to keep your DITA topics focused?
Take the content under each <title> or under each <section> and place it in a separate topic.
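For illustration, here is a minimal before-and-after sketch of splitting a polygamous topic (the file names, ids, and titles are invented):

```xml
<!-- Before: one file, two ideas, two <title> elements (anti-pattern) -->
<topic id="feeding">
  <title>Feeding your fish</title>
  <body>
    <section>
      <title>Cleaning the tank</title>
      <p>...</p>
    </section>
  </body>
</topic>

<!-- After: one topic per file, with the sequence expressed in the DITA map -->
<topicref href="feeding.dita"/>
<topicref href="cleaning.dita"/>
```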
If you are not sure about how you are using DITA elements and DITA markups in DITA topics and DITA maps, reach out to us today to arrange a content audit. We are here to help you succeed! Contact us today with any questions that you might have.
How you organize a bookmap will affect how easy or difficult it will be for you to maintain and update your content. Should you nest a DITA map under the <chapter> element in a bookmap, or should you nest topics directly under the <chapter> element?
Referencing a DITA map from the <chapter> element:
- Enables easy reuse of the collection of topics in the chapter.
- Facilitates editing of the DITA map for the chapter in parallel to editing of the bookmap.
Example where the <chapter> element references a DITA map:

<chapter href="intro.ditamap" format="ditamap"/>
Even when a <chapter> references topics directly, those topics can reference nested subtopics:

<chapter href="intro.dita">
  <topicref href="caring.dita"/>
  <topicref href="feeding.dita"/>
</chapter>
<chapter href="setup.dita">
  <topicref href="prereq.dita"/>
  <topicref href="download.dita"/>
</chapter>
What is the best way to organize a bookmap?
Bottom line: In the <chapter> element of a bookmap, should you refer to a specific topic or to a DITA map? In most cases, our vote is squarely on the side of referencing a DITA map.
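As a sketch, a bookmap organized this way might look like the following (the titles and file names are invented for illustration):

```xml
<bookmap>
  <booktitle>
    <mainbooktitle>Aquarium Care Guide</mainbooktitle>
  </booktitle>
  <chapter href="intro.ditamap" format="ditamap"/>
  <chapter href="setup.ditamap" format="ditamap"/>
  <chapter href="maintenance.ditamap" format="ditamap"/>
</bookmap>
```

Each chapter's DITA map can then be edited, versioned, and reused independently of the bookmap itself.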
The post How to organize a bookmap: nesting under the <chapter> element appeared first on Method M - The Method is in the Metrics.
By Deborah S. Bosley, Owner and Principal, The Plain Language Group
“People who lean on logic and philosophy and rational exposition end by starving the best part of the mind.”
This quote by William Butler Yeats, one of Ireland’s most famous writers, illustrates a problem we have in the content creation community. We create a multitude of written material, relying primarily on logical structures, authority building, plain-language style, and information design principles, while paying little attention to how readers respond emotionally to what they read.
We know our customers, when faced with complex text or unable to find what they’re looking for, respond with anger, frustration, fear, etc. According to a 2013 survey, 68% of US consumers experienced “customer rage.” But what can we do about that?
We need to understand emotional responses and write with empathy. Cognitive empathy is the largely conscious drive to accurately recognize and understand another’s emotions. Why is that critical for content creators? Because people make decisions with their emotions, and then use data and logic to justify those decisions. We, on the other hand, write as if the emotional response had far less power than it does.
The marketing world has long been aware of how people respond and act based on their emotions. They use fear, guilt, trust, value, belonging, competition, instant gratification, leadership, trends, and the pressure of time to motivate people to purchase products and services.
However, much of what content creators write is not marketing material. Instead, readers engage with Terms and Conditions, Help Centers, FAQs, and a multitude of information they need to solve problems. In fact, anything written in "marketingese" is often immediately dismissed as "selling," not "solving."
The emotions we should be expressing and eliciting are trust, confidence, relief, protection, and understanding. We should help readers trust what we say, have confidence in their decisions, feel relief that the text was easy and they found the solution, feel that the company or government agency has their back, and easily understand what action they should take.
If we can do that, if we can write with empathy, we will have responded to a person’s need to understand and be understood.
There are tectonic shifts underway among competing web, mobile, and social platforms that will have profound effects on digital strategies. There are too many moving parts and shifting alliances for anyone to predict outcomes with certainty. But Apple, Google, Facebook, and others are making moves that need to be considered in the context of […]
This post originally published on http://gilbane.com
When you have a collection of DITA topics to nest, or individual DITA topics to nest, you have a lot of choices. Often, too many choices. In a ditamap, you can nest DITA topics under a topic, a <topichead>, or a <topicgroup>. In a bookmap, you can nest DITA topics or a ditamap under the <part> or <chapter> element.
The best practice, in general, is to group related topics in a DITA map, and then to nest those topics by referencing the ditamap from a parent bookmap or regular map. This allows easy reuse of the collection of topics, and allows editing of the DITA map in parallel to editing of the bookmap. Click here to learn more about when to use bookmaps and when to use regular ditamaps.
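As a sketch of that best practice (file names are hypothetical), a parent map can pull in a whole grouped collection with a single reference:

```xml
<!-- parent.ditamap: reuses a grouped collection of topics -->
<map>
  <title>Hamster Care Guide</title>
  <!-- reference the child map; format="ditamap" tells the processor
       to merge its topic references into this map -->
  <topicref href="caring.ditamap" format="ditamap"/>
</map>
```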
When should you nest DITA topics?
Nested task topics are especially useful when a procedure divides into individual tasks. You can nest the subtasks under a concept topic or under a task topic.
Where should you nest DITA topics in a ditamap – under <topic> or under <topichead>?
In a standard ditamap, you can nest topics under a parent topic, or under a <topichead> or <topicgroup> element. Using a parent topic rather than a <topichead> element enables you to easily add a short description or context information to the collection of topics.
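To illustrate the two approaches side by side (topic names are hypothetical):

```xml
<!-- Option 1: a parent topic, which can carry a short description
     and context for the collection -->
<topicref href="feeding-overview.dita">
  <topicref href="feeding-schedule.dita"/>
  <topicref href="feeding-amounts.dita"/>
</topicref>

<!-- Option 2: a title-only container with no topic content of its own -->
<topichead navtitle="Feeding">
  <topicref href="feeding-schedule.dita"/>
  <topicref href="feeding-amounts.dita"/>
</topichead>
```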
Tip for when you nest DITA topics
If the subtasks need to be performed in a particular order, set the collection-type attribute of the parent topic reference to sequence.
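For instance (file names are hypothetical), a parent task whose subtasks must run in order might be marked up as:

```xml
<!-- collection-type="sequence" tells processors the children
     are ordered steps, not an unordered family of topics -->
<topicref href="install.dita" collection-type="sequence">
  <topicref href="download.dita"/>
  <topicref href="configure.dita"/>
  <topicref href="verify.dita"/>
</topicref>
```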
Your next steps
Get help right now from the DITA gurus at Method M.
The post Nest DITA topics under a topic or under a topichead? appeared first on Method M - The Method is in the Metrics.
Referring to DITA maps from a bookmap or from another DITA map provides a great way for grouping topics. You can refer to the DITA map in the bookmap, or in a parent ditamap, whenever you want to include the collection of topics in the bookmap.
You can still use a DITA map to group topics when slight variations in the map are required (such as a topic describing how to use a feature that applies to variant A of the product but not to variant B). Apply conditions to the topic references in the DITA map (such as "gram-negative"), and when the map is published, topics can be filtered out by condition (for example, excluding "gram-positive").
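As a sketch of how that filtering might look (file names and attribute values are illustrative), the condition goes on the topic reference and a DITAVAL file controls what is published:

```xml
<!-- in the DITA map: the condition carried on the audience attribute -->
<topicref href="outer-cell-wall.dita" audience="gram-negative"/>

<!-- in a DITAVAL file applied at publish time -->
<val>
  <prop att="audience" val="gram-negative" action="include"/>
  <prop att="audience" val="gram-positive" action="exclude"/>
</val>
```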
Topics, taken alone, are generally just part of larger units of information. To organize your topics meaningfully, you can:
- arrange topics into chapters and parts in a bookmap,
- group topics in DITA maps,
- nest child topics under parent topics (sometimes called super topics, super tasks, or chapter maps), and
- nest topics under topic headings.
When to use DITA Bookmaps instead of a regular ditamap to group content
Use DITA bookmaps when you need to support front and back matter, or support division into chapters or parts. Use regular DITA maps for grouping a cluster of topics. Click here to learn more about nesting topics in DITA.
Bookmaps enable users to organize their DITA information into front matter, parts, chapters, and back matter. Typically, publications are created from a bookmap, while DITA maps are used for simpler documents, or to group topics for reference from a bookmap or from another DITA map.
Some neat features of DITA Bookmaps
- Divide your publication using chapters or parts
- Parts can group chapters
- Chapters can group topics
- You can nest regular DITA maps in DITA bookmaps
- You can specify a title or a booktitle (booktitle offers more options than the title element in a regular ditamap)
- You can store book metadata (such as publisher, author, and copyright details)
- Define front matter (specify cover pages, notices, safety information, Table of Contents, and other preliminary information)
- Define back matter (such as appendices, glossary, and back cover page)
- An appendices section (similar to a part or a chapter) can be used to organize and group multiple appendices
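Putting those features together, a minimal bookmap skeleton (file names and titles are hypothetical) might look like:

```xml
<bookmap>
  <booktitle>
    <mainbooktitle>Product User Guide</mainbooktitle>
  </booktitle>
  <bookmeta>
    <!-- publisher, author, and copyright details go here -->
  </bookmeta>
  <frontmatter>
    <booklists><toc/></booklists>
  </frontmatter>
  <part href="getting-started.dita">
    <!-- chapters can reference nested DITA maps -->
    <chapter href="intro.ditamap" format="ditamap"/>
    <chapter href="setup.ditamap" format="ditamap"/>
  </part>
  <appendices>
    <appendix href="specs.dita"/>
  </appendices>
  <backmatter>
    <booklists><indexlist/></booklists>
  </backmatter>
</bookmap>
```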
When to use DITA Bookmaps
Use DITA bookmaps when you need to support front and back matter, or support division into chapters or parts. Use regular DITA maps for grouping a cluster of topics.
When not to use DITA Bookmaps
Don’t use DITA Bookmaps to group a cluster of related topics that will be nested in a parent map.
This is a break from our posts about markup, but if you’re going to do DITA, you should also be able to figure out what it’s costing your company. Forecasting under uncertainty is a fact of life for most of us.
Uncertainty about the benefits you will reap from DITA
- how many products will be documented
- frequency of updates
- percentage of reuse
- translations (if any)
Estimating DITA costs also involves uncertainty
- How long will it take your team to get up to speed?
- Will you need specializations?
- How much effort and money will be needed to get the outputs that you need from DITA?
- What will migration to DITA cost?
Using Markov models and other techniques, we have developed a proven approach to predicting DITA costs and estimating the benefits of implementing a DITA solution when you face uncertainty. And, let's face it, what tech docs team does not face uncertainty?
Ask us for our presentation.
The post Predicting costs and benefits from implementing a DITA solution appeared first on Method M - The Method is in the Metrics.
DITA enables reuse on many levels. This post discusses some tactics for reusing topics.
Referring to DITA maps from a bookmap or from another DITA map provides a great way of grouping topics. Refer to the DITA map from the bookmap whenever you want to include the collection of topics in the bookmap.
Tip: You can still use a DITA map to group topics when slight variations in the map are required (such as a topic describing how to use a feature that applies to variant A of the product but not to variant B). Apply conditions to the topic references in the DITA map (such as "gram-negative"), and when the map is published, topics can be filtered out by condition (for example, excluding "gram-positive").
Using different DITA maps or book maps you can assemble different publications from the same set of topics. Using conditions on topic references (<topicref>) from maps you can include a topic conditionally. As noted in post #12 in The DITA Project, topicrefs can be included or excluded based on conditions applied to the topicref.
Reusing a Topic with Some Variation
In many cases, however, you will want to include the same topic in different publications, but with slightly different content in each publication. Of course, if the content were very different, it would often make sense to have separate topics.
The easiest way to reuse a topic with slight variations is by applying conditions to the topic that will vary in each publication.
For example, you could vary the description of what happens when an antibiotic reaches the outer cell wall of a gram-negative organism by applying the condition "gram-negative" to that sentence. Another sentence, describing what happens when an antibiotic reaches the outer cell wall of a gram-positive organism, would carry the condition "gram-positive".
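Marked up in DITA (the wording of each variant is illustrative), those two sentences can live side by side in one topic, conditioned on the audience attribute:

```xml
<p>
  <ph audience="gram-negative">When the antibiotic reaches the cell,
    it must first cross the outer membrane.</ph>
  <ph audience="gram-positive">When the antibiotic reaches the cell,
    it penetrates the thick peptidoglycan layer directly.</ph>
</p>
```

At publish time, a DITAVAL filter includes one variant and excludes the other, so each publication reads as if only its own sentence were ever written.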
Like many great questions, the answer to "how small is too small for conditional content" is: it depends. If the content will be translated, then mitigating future translation costs is a key factor.
This post will consider just one factor that determines how small is too small: translation.
If the content will be translated, then translations will face difficulty if just one or two words are marked as conditional. This is true because adjacent words or phrases need to change when the text changes. When switching to a language where adjectives have masculine and feminine variants or when moving between languages where word order changes in a sentence, having too small a unit for conditional content will create huge problems.
Consider the following example, simplified for clarity: a product is available in separate models for boys and for girls, and the content was marked with conditions: <condition=girl>girls</condition><condition=boy>boys</condition>. When translating to French, the writer needs to change "green girls" to "les filles vertes" and "green boys" to "garçons verts", but since "green" is outside the bounds of the condition, translation tools will break down.
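A safer unit for translation (shown here in real DITA markup rather than the simplified notation above) is to condition the whole phrase, so each language is free to reorder words and inflect adjectives within it:

```xml
<!-- the entire phrase varies, so the entire phrase carries the condition -->
<ph audience="girl">green girls</ph>
<ph audience="boy">green boys</ph>
```

A French translator can now render each <ph> independently ("les filles vertes", "garçons verts") without any word being stranded outside the conditional boundary.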
The bottom line is that while good things come in small packages in jewelry, smaller is not always better in DITA.
- Use the keyref mechanism for a change that will be made globally and will appear in many locations in your content.
- For example, a product name is a great candidate for a keyref. As your product name changes over time, your content will change automatically everywhere.
- Use a condition for content that changes locally based on what the content is used for, who's using it, and how it's presented. For example:
- For what:
Put conditions on the content of a step if the instruction is different for Model A and for Model B.
- For who:
Use conditions to hide/display information if the reader is a customer or a highly skilled service technician.
- How presented:
Display thumbnail images of diagrams when presented on a browser or eReader. Show no images when presented on a smartphone.
Bottom line: When the content of your conref needs to change, as for different audiences or different products, conditions enable your conref to adapt! You should, however, consider using conkeyref (for large reusable chunks) or keyref (for smaller reusable chunks, such as a product name or other string that fits neatly in a <ph> element).
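As a sketch of the mechanisms named above (key names, file names, and IDs are hypothetical):

```xml
<!-- in the map: define the key once -->
<keydef keys="product-name">
  <topicmeta>
    <keywords><keyword>Acme Widget Pro</keyword></keywords>
  </topicmeta>
</keydef>

<!-- in a topic: the key resolves wherever the name appears,
     so renaming the product is a one-line map change -->
<p>Install <ph keyref="product-name"/> before continuing.</p>

<!-- conref: pull in a larger reusable chunk by topic ID and element ID -->
<note conref="shared-warnings.dita#warnings/static-warning"/>
```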