The thud test: Measuring OpenSearch documentation
When I first started my career as a technical writer about 20 years ago, success was measured by the volume of content produced. Back then, everything was printed and bound. When you dropped a stack of paper on a desk or the ground, it would make an audible thud. The theory went that the louder the thud, the more content you had produced, and the happier your customers were.
Fast-forward about 10 years, and it was all about simplification: Only provide the minimum amount of content that your customers need to accomplish their tasks. So we set about removing information and only documenting what was absolutely necessary.
In the years that followed, technical writers continued to face tough choices. We know that we have to write for customers first. We know that writing simply is hard. We know that we need to insist on the highest standards. We know that words are expensive to translate and resource intensive to change as the user interface changes.
And we know that product managers still want to use the thud test to measure documentation output.
So how do we strike the right balance between quantity and simplicity? The answers aren’t always straightforward, and we make judgment calls every day in a never-ending pursuit of high quality.
Where did my OpenSearch journey begin?
Earlier this year, I became the Documentation Manager for OpenSearch, a recently launched product that had 2 writers and estimated documentation coverage of 30%. My dilemma was how to build the content while growing a technical writing team and balancing all the theories of the last 20 years of technical writing. In addition, for the first time I found myself managing documentation for an open-source project, where anyone can contribute directly to the content. I’ve always loved the idea of community-sourced content, and now it was a staple in my day-to-day work. These added contributors represent another strong voice in the pool of opinions that affect the choices a technical writer makes.
The OpenSearch documentation team and I began our journey by working backwards from what our customers were asking for: more content. As the technical writing team has grown to 8 writers and we’ve learned more about how OpenSearch works, we’ve been able to make considerable progress in closing the gap. But not only are we creating a lot of content—producing a thud—we are also ensuring that our quality goals are met.
Fortunately, an advantage of open-source writing is that quality is inherently built into the process. For example, content passes through several levels of approval before it is published, from both internal and external OpenSearch contributors. We also rely on analytics to help us make informed decisions about which documentation needs the most attention.
So where are we now?
We are making headway in our pursuit. In the past nine months, we've added or enhanced documentation for the following topics:
- OpenSearch documentation home page
- Getting started with OpenSearch Dashboards
- New Dashboards visualization types
- Data Prepper 2.0
- OpenSearch core REST APIs
- Alerting API - new document-level monitors and per-document monitors for Dashboards
- Notifications - new Notifications plugin and Notifications API
- Segment replication
- Anomaly Detection
- Field types
- OpenSearch CLI
- OpenSearch Kubernetes Operator
- Installation instructions for Tarball, RPM, and Docker
- Performance Analyzer
- Index Management
- Search and query - new "Optimizing text for searches" topic
- Machine Learning (ML) Commons
- Security overview
- .NET, Ruby, and Go clients
- Snapshot management
- Custom GeoJSON - new region map visualizations
- Rollup enhancements
- SQL and PPL section refactoring and additions
- SQL Aggregate functions
What’s next for OpenSearch documentation?
We are not stopping at creating a thud with our content. In addition to providing full content coverage, we are making usability enhancements to the documentation website itself. We are restructuring the navigation pane to make it easier to use and more accessible, improving our metadata tagging (taxonomy) to optimize search, and designing a friendlier UI that lets users reach what they need in fewer clicks. We will also implement improved analytics to gain another level of feedback to drive our content decisions.
We sincerely thank you for your patience as we build the OpenSearch documentation and appreciate any input you may have. Please see our contributing guidelines if you are interested in contributing to the OpenSearch documentation, and check out our documentation site periodically to learn more about OpenSearch.
As always, the OpenSearch documentation team looks forward to your feedback and contributions!