OpenSearch Project update: A look at performance progress through version 2.17
Our commitment to enhancing OpenSearch’s performance remains unwavering, and this blog post showcases the significant progress we’ve made. Recently, we’ve focused our investments on four key areas: text querying, vector storage and querying, ingestion and indexing, and storage efficiency. Additionally, we’ve published our search and performance roadmap, reaffirming that performance continues to be our top priority. In this blog post, we’ll bring you up to date on our continuing performance improvements through OpenSearch 2.17.
OpenSearch 2.17 offers a remarkable 6x performance boost compared to OpenSearch 1.3, enhancing key operations like text queries, terms aggregations, range queries, date histograms, and sorting. Additionally, the improvements in semantic vector search now allow for highly configurable settings, enabling you to balance response time, accuracy, and cost according to your needs. These advancements are a testament to the dedicated community whose contributions and collaboration propel OpenSearch forward.
The first section focuses on key query operations, including text queries, terms aggregations, range queries, date histograms, and sorting. These improvements were evaluated using the OpenSearch Big5 workload, which represents common use cases in both search and analytics applications. The benchmarks provide a repeatable framework for measuring real-world performance enhancements. The next section reports on vector search improvements. Finally, we present our roadmap for 2025, where you’ll see that we’re making qualitative improvements in many areas, in addition to important incremental changes. We are improving query speed by processing data in real time. We are building a query planner that uses resources more efficiently. We are speeding up intra-cluster communications. And we’re adding efficient join operations to query domain-specific language (DSL), Piped Processing Language (PPL), and SQL. To follow our work in more detail, and to contribute comments or code, please participate on the OpenSearch forum as well as directly in our GitHub repos.
Performance improvements through 2.17
Since its inception, OpenSearch has consistently improved performance, and version 2.17 continues this trend. OpenSearch 2.17 achieves a 6x speed increase over OpenSearch 1.3 and reduces query latencies across a wide range of query categories. The following graph shows the relative improvements by query category as 90th percentile latencies, with a baseline of OpenSearch 1.3.
Key highlights
Based on our benchmarking, we’ve identified the following key highlights:
- Overall query performance: OpenSearch 2.17 delivers 6x better performance than OpenSearch 1.3.
- Text queries: Text search queries, fundamental to many OpenSearch use cases, are 63% faster in 2.17 compared to the baseline of OpenSearch 1.3.
- Terms aggregations: This critical query type for log analytics shows an 81% improvement compared to OpenSearch 1.3, allowing for faster and more efficient data aggregation.
- Date histograms: Date histogram performance has improved by 97% compared to OpenSearch 1.3, providing major speed improvements for time-series analysis.
- Range queries: With an 87% performance improvement compared to OpenSearch 1.3, range queries now execute more quickly while using fewer resources.
- Sorting and filtering: OpenSearch 2.17 delivers faster sorting, with a 59% improvement compared to OpenSearch 1.3, enhancing query performance for numeric and textual datasets.
The following table summarizes performance improvements for the preceding query types.
| Big5 area mean latency (ms) | OS 1.3.18 | OS 2.7 | OS 2.11 | OS 2.12 | OS 2.13 | OS 2.14 | OS 2.15 | OS 2.16 | OS 2.17 |
|---|---|---|---|---|---|---|---|---|---|
| Text queries | 59.51 | 47.91 | 41.05 | 27.29 | 27.61 | 27.85 | 27.39 | 21.7 | 21.77 |
| Sorting | 17.73 | 11.24 | 8.14 | 7.99 | 7.53 | 7.47 | 7.78 | 7.22 | 7.26 |
| Terms aggregations | 609.43 | 1351 | 1316 | 1228 | 291 | 293 | 113 | 112 | 113 |
| Range queries | 26.08 | 23.12 | 16.91 | 18.71 | 17.33 | 17.39 | 18.51 | 3.17 | 3.17 |
| Date histograms | 6068 | 5249 | 5168 | 469 | 357 | 146 | 157 | 164 | 160 |
| Aggregate (geo mean) | 159.04 | 154.59 | 130.9 | 74.85 | 51.84 | 43.44 | 37.07 | 24.66 | 24.63 |
| Speedup factor, compared to OS 1.3 (geo mean) | 1.0 | 1.03 | 1.21 | 2.12 | 3.07 | 3.66 | 4.29 | 6.45 | 6.46 |
| Relative latency, compared to OS 1.3 (geo mean) | 100% | 97.20% | 82.31% | 47.06% | 32.60% | 27.31% | 23.31% | 15.51% | 15.49% |
For a detailed benchmark analysis or to run your own benchmarks, see the Appendix.
Queries
OpenSearch now features the following query improvements.
Text queries
Text queries are fundamental to effective text search, especially in applications requiring fast and accurate document retrieval. OpenSearch 2.12 introduced the `match_only_text` field to address specific needs in analytics and applications prioritizing 100% recall or customized ranking strategies. This field type dramatically reduced index sizes and accelerated query execution by removing the complexity of relevance-based scoring. As a result, text queries performed 47% faster compared to OpenSearch 2.11 and 57% faster compared to OpenSearch 1.3.
With OpenSearch 2.17, we further amplified these performance gains. Building on the foundation of the `match_only_text` field, OpenSearch 2.17 optimizes text queries, achieving 21% faster performance compared to 2.14 and 63% faster performance compared to 1.3. These improvements stem from continued enhancements to query execution and index optimization. Applications relying on text search for analytics or high-recall use cases can now achieve faster results with reduced resource usage, making OpenSearch 2.17 an even more powerful choice for modern text search workloads.
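For example, an index opts into this field type through its mapping; a minimal sketch (the index and field names are illustrative):

```json
PUT /logs-text
{
  "mappings": {
    "properties": {
      "message": {
        "type": "match_only_text"
      }
    }
  }
}
```

Queries against a `match_only_text` field behave like queries against a regular `text` field, except that all matching documents receive the same score; phrase queries remain possible but run more slowly because positional data is not indexed.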
Terms and multi-terms aggregations
Terms aggregations are crucial for slicing large datasets based on multiple criteria, making them important query operations for data analytics use cases. Building on prior advancements, OpenSearch 2.17 enhances the efficiency of global terms aggregations, using term frequency optimizations to handle large immutable collections, such as log data, with unprecedented speed.
Benchmarks show a 61% improvement compared to OpenSearch 2.14 and an overall 81% reduction in query latency compared to OpenSearch 1.3, while multi-terms aggregation queries show up to a 20% reduction in latency. Additionally, memory efficiency is dramatically improved, with a 50–60% reduction in the memory footprint of short-lived objects because new byte array allocations for composite key storage are no longer needed.
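For reference, these gains apply to ordinary terms aggregations such as the following (the index and field names are illustrative):

```json
GET /logs/_search
{
  "size": 0,
  "aggs": {
    "top_status_codes": {
      "terms": {
        "field": "http.response.status_code"
      }
    }
  }
}
```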
OpenSearch 2.17 also introduced support for the `wildcard` field type, enabling highly efficient execution of wildcard, prefix, and regular expression queries. This new field type uses trigrams (or bigrams and individual characters) to match patterns before applying a post-filtering step to evaluate the original field, resulting in faster and more efficient query execution.
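A field opts into this trigram-based matching through its mapping, and queries then use the standard wildcard syntax; a minimal sketch (names are illustrative):

```json
PUT /logs-wildcard
{
  "mappings": {
    "properties": {
      "message": {
        "type": "wildcard"
      }
    }
  }
}

GET /logs-wildcard/_search
{
  "query": {
    "wildcard": {
      "message": {
        "value": "*error*"
      }
    }
  }
}
```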
These advancements make OpenSearch 2.17 a powerful tool for analytics use cases, from large-scale log processing to complex query scenarios, continuing the mission of delivering speed, efficiency, and scalability to your data workflows.
Date histograms
Date histograms are fundamental to time-based data analysis, underpinning OpenSearch Dashboards visualizations such as time-series charts. In OpenSearch 2.17, date histogram queries now execute 55% faster compared to OpenSearch 2.13 and 97% faster compared to OpenSearch 1.3, significantly improving the performance of time-series aggregations. This enhancement is particularly impactful for queries without subaggregations and has also been extended to range aggregations, further optimizing time-based analyses.
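These gains apply to standard date histogram aggregations, such as this hourly breakdown (the index and field names are illustrative):

```json
GET /logs/_search
{
  "size": 0,
  "aggs": {
    "events_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "hour"
      }
    }
  }
}
```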
Additionally, cardinality aggregation—a critical tool for counting distinct values, such as unique visitors, event types, or products—has received a major performance boost. OpenSearch 2.17 introduced an optimization that dynamically prunes documents containing distinct values that have already been collected, significantly reducing redundant processing. For low-cardinality requests, this leads to notable performance gains, while high-cardinality requests see improvements of up to 20%, streamlining the handling of even the most demanding datasets.
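For reference, a typical cardinality aggregation that benefits from this dynamic pruning looks like the following (names are illustrative):

```json
GET /logs/_search
{
  "size": 0,
  "aggs": {
    "unique_visitors": {
      "cardinality": {
        "field": "user_id"
      }
    }
  }
}
```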
These enhancements make OpenSearch 2.17 an essential upgrade for managing time-based or high-volume datasets, ensuring faster and more efficient query execution for diverse analytics needs.
Range queries and numeric fields
Range queries, commonly used to filter data within specific numerical or date ranges, have undergone significant performance improvements in OpenSearch 2.17. These queries are now 81% faster compared to OpenSearch 2.14 and 87% faster compared to OpenSearch 1.3, thanks to optimizations in range filter processing.
At search time, OpenSearch evaluates whether a query can be rewritten from its original query to an approximate query, which executes faster and uses fewer resources. This approximate range optimization ensures that most queries deliver equivalent results with reduced computational overhead, maintaining accuracy while improving performance. These enhancements make OpenSearch 2.17 an excellent choice for applications requiring high-performance range filtering, such as analytics dashboards, monitoring systems, and time-series data exploration.
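No query changes are required to benefit: an ordinary range query like the following is transparently considered for the approximate execution path when it is eligible (the index and field names are illustrative):

```json
GET /logs/_search
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2023-01-01",
        "lt": "2023-02-01"
      }
    }
  }
}
```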
Sorting and filtering
Sorting performance has been a focus throughout the OpenSearch 2.x series, with OpenSearch 2.17 showing minor improvements compared to OpenSearch 2.14 but a 59% performance improvement compared to OpenSearch 1.3. These refinements enable faster query results, particularly for datasets requiring extensive sorting by numeric or textual fields.
Filtering has also seen a significant advancement with the introduction of roaring bitmap encoding for handling large filter lists. This approach minimizes memory and network overhead by compressing filter data into efficient bitmap structures. It is particularly effective in high-cardinality scenarios, such as when filtering a product catalog by items owned by a specific customer. The bitmap-based filters can be stored and seamlessly joined with the main index at query time. Tests demonstrate that this method maintains low latency even with numerous filters, making it a scalable and high-performing alternative to traditional terms queries or lookup strategies. These improvements ensure that OpenSearch 2.17 delivers faster and more efficient sorting and filtering for diverse use cases, from search to large-scale analytics.
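A minimal sketch of this pattern, based on the 2.17 bitmap filtering feature of the terms query: a stored binary field on a lookup index holds a base64-encoded roaring bitmap of the product IDs a customer owns, and the main query references it through a terms lookup with `value_type` set to `bitmap`. The index, field, and document names here are illustrative; consult the terms query documentation for the exact parameters.

```json
GET /products/_search
{
  "query": {
    "terms": {
      "product_id": {
        "index": "customer_filters",
        "id": "customer_123",
        "path": "customer_filter"
      },
      "value_type": "bitmap"
    }
  }
}
```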
Vector search
Disk-optimized vector search: The OpenSearch vector engine continues to prioritize cost savings in the 2.17 release. This release introduced disk-optimized vector search, letting you realize the full potential of vector workloads even in low-memory environments. Disk-optimized vector search provides out-of-the-box 32x compression using binary quantization, a powerful compression technique. Additionally, you have the flexibility to fine-tune costs, response time, and accuracy to your unique needs through configurable parameters such as compression rate, sampling, and rescoring. According to internal benchmarks, OpenSearch’s disk-optimized vector search can deliver cost savings of up to 70% while maintaining p90 latencies of around 200 ms and recall of over 0.9. For more information, see Disk-based vector search.
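A minimal mapping sketch for enabling this mode (the index name, field name, and dimension are illustrative): setting `mode` to `on_disk` opts into the disk-optimized defaults, and `compression_level` (shown here at its 32x default) is one of the tunable parameters mentioned above.

```json
PUT /my-vector-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "embedding": {
        "type": "knn_vector",
        "dimension": 768,
        "mode": "on_disk",
        "compression_level": "32x"
      }
    }
  }
}
```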
Cost improvements by reducing memory footprint: Vector search capabilities in native engines (Faiss and NMSLIB) received a significant boost in OpenSearch 2.17. In this version, OpenSearch’s byte compression technique is extended to the Faiss engine’s HNSW and IVF algorithms to further reduce the memory footprint by up to 75% for vectors within byte range ([-128, 127]). These optimizations provide an additional 25% memory footprint savings compared to OpenSearch 2.14 with FP16 quantization and an overall savings of up to 85% compared to OpenSearch 1.3.
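As a sketch, byte vectors are enabled through the `data_type` mapping parameter on a Faiss-backed `knn_vector` field (names and dimension are illustrative); each vector dimension must already fall within [-128, 127]:

```json
PUT /my-byte-vector-index
{
  "settings": {
    "index.knn": true
  },
  "mappings": {
    "properties": {
      "embedding": {
        "type": "knn_vector",
        "dimension": 768,
        "data_type": "byte",
        "method": {
          "name": "hnsw",
          "engine": "faiss"
        }
      }
    }
  }
}
```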
Vector index build improvements: In 2024, the Vector Engine team made significant investments in improving the performance of the OpenSearch vector engine. These included adding AVX512 SIMD support, fixing bugs related to segment replication with vector indexes, transitioning to the more efficient KNNVectorsFormat, and employing incremental graph builds during merges to reduce memory footprint. With incremental graph builds, the native memory footprint during indexing has been significantly reduced because the full dataset is no longer loaded into memory all at once. This improvement supports HNSW graph builds in low-memory environments and reduces overall build time by approximately 30% compared to OpenSearch 1.3.
Exact search improvements: In OpenSearch 2.15, SIMD optimizations were added to the k-NN plugin’s script scoring, resulting in significant performance gains for CPUs with SIMD support, such as AVX2 or AVX512 on x86 or NEON on ARM. Further improvements in OpenSearch 2.17 introduced Lucene’s new vector format, which includes optimized memory-mapped file access. Together, these enhancements significantly reduce search latency for exact k-NN searches on supported hardware.
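For reference, exact k-NN search is expressed through the k-NN plugin’s script scoring, which these optimizations accelerate; a minimal sketch (the index, field, and toy three-dimensional query vector are illustrative):

```json
GET /my-vector-index/_search
{
  "size": 10,
  "query": {
    "script_score": {
      "query": {
        "match_all": {}
      },
      "script": {
        "source": "knn_score",
        "lang": "knn",
        "params": {
          "field": "embedding",
          "query_value": [0.1, 0.2, 0.3],
          "space_type": "l2"
        }
      }
    }
  }
}
```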
Roadmap for 2025
The following improvements are included in the OpenSearch roadmap for 2025.
Core search engine
In 2025, we will push the boundaries of OpenSearch’s query engine with several key initiatives aimed at improving performance, scalability, and efficiency:
- Streaming architecture: We’re moving from a request/response model to a streaming model that processes and delivers data in real time, reducing memory overhead and improving query speed.
- Native join support: We’re introducing efficient join operations across indexes that will be natively supported and fully integrated with OpenSearch’s query DSL, PPL, and SQL.
- Native vectorized processing: By using modern CPU SIMD operations and native code, we’re optimizing the processing of data streams to eliminate Java’s garbage collection bottlenecks.
- Smarter query planning: Optimizing where and how computations run will reduce unnecessary data transfer and improve performance for parallel query execution.
- gRPC-based Search API: We’re enhancing client-server communication with Protobuf and gRPC, accelerating search by reducing overhead.
- Query performance optimization: Improving performance remains our consistent priority, and several key initiatives, such as docID encoding and query approximation, will reduce index size and enhance the performance of large-range queries.
- Star-tree indexing: Precomputing aggregations using star-tree indexing will ensure faster, more predictable performance for aggregation-heavy queries.
Vector search
In 2025, we will continue to invest in the following key initiatives aimed at performance improvements and cost savings:
- Index build acceleration with GPUs and SIMD: k-NN performance can be enhanced by using libraries with GPU support. Because vector distance calculations are compute-heavy, GPUs can speed up computations and reduce index build and search query times.
- Autotuning k-NN indexes: OpenSearch’s vector database offers a toolkit of algorithms tailored for diverse workloads. In 2025, our goal is to enhance the out-of-the-box experience by autotuning hyperparameters and settings based on access patterns and hardware resources.
- Cold-warm tiering: In version 2.18, we added support for enabling vector search on remote snapshots. We will continue focusing on decoupling index read/write operations to extend vector indexes to different storage systems in order to reduce storage and compute costs.
- Memory footprint reduction: We will continue to aggressively reduce the memory footprint of vector indexes. One of our goals is to support the ability to partially load HNSW indexes into native engines. This complements our disk-optimized search and helps further reduce the operating costs of OpenSearch clusters.
- Reduced disk storage using derived source: Currently, vector data is stored both in a doc-values-like format and in the stored `_source` field. The stored `_source` field can contribute more than 60% of the overall vector storage requirement. We plan to create a custom stored field format that will inject the vector fields into the source from the doc-values-like format, creating a derived source field. In addition to storage savings, this approach will improve indexing throughput, reduce shard size, and even accelerate search.
Neural search
Neural search uses machine learning models to understand the semantic meaning of search queries, going beyond traditional keyword matching. It encompasses dense vector search, sparse vector search, and hybrid search approaches that combine semantic understanding with lexical search. Since introducing neural search capabilities in OpenSearch 2.9, we’ve expanded the functionality to include text embedding, cross-encoder reranking, sparse encoding, and hybrid search.
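For context, a hybrid query combines a lexical subquery and a neural subquery in a single request, with scores combined by a search pipeline. A minimal sketch, assuming a deployed text-embedding model whose ID is `<model_id>` and a score-normalization pipeline named `nlp-pipeline` (index and field names are illustrative):

```json
GET /my-index/_search?search_pipeline=nlp-pipeline
{
  "query": {
    "hybrid": {
      "queries": [
        {
          "match": {
            "text": "wind power"
          }
        },
        {
          "neural": {
            "passage_embedding": {
              "query_text": "wind power",
              "model_id": "<model_id>",
              "k": 10
            }
          }
        }
      ]
    }
  }
}
```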
Our 2025 roadmap emphasizes optimizing performance, enhancing functionality, and simplifying adoption. Key initiatives include:
- Improving hybrid query performance: Reduce latency by up to 25%.
- Introducing explainability for hybrid queries: Provide insights into how each subquery result contributes to the final hybrid query result, enabling better debugging and performance analysis.
- Supporting additional algorithms for combining hybrid query results: Support algorithms like reciprocal rank fusion (RRF), which improves hybrid search latency by avoiding costly score normalization because the scores are rank based.
- Enhancing neural sparse pruning strategies: Apply techniques such as pruning by weight, by ratio with max weight, by top-k, and by alpha-mass to improve performance by 20%.
- Optimizing inference calls during updates and reindexing: Reduce the number of inference calls required for neural and sparse ingestion pipelines, increasing throughput by 20% for these operations.
- Consolidating multifield inference calls: Combine multiple field inference calls into a single operation for dense and sparse vector semantic search, reducing inference latency by 15% for multifield dense-vector-based semantic queries.
- Reducing memory usage for resource-constrained systems: Introduce a new quantization processor to decrease memory usage by 20%, improving efficiency in environments with limited resources or connectivity.
These advancements aim to enhance query performance, streamline operations, and expand usability across diverse workloads.
Conclusion
OpenSearch continues to evolve, not only by expanding functionality but also by significantly enhancing performance, efficiency, and scalability across diverse workloads. OpenSearch 2.17 exemplifies the community’s commitment, delivering improvements in query speed, resource utilization, and memory efficiency across text queries, aggregations, range queries, and time-series analytics. These advancements underscore our dedication to optimizing OpenSearch for real-world use cases.
Key innovations like disk-optimized vector search and enhancements to terms and multi-terms aggregations demonstrate our focus on staying at the forefront of vector search and analytics technology. Additionally, OpenSearch 2.17’s improvements to hybrid and vector search, combined with roadmap plans for streaming architecture, gRPC APIs, and smarter query planning, highlight our forward-looking strategy for meeting the demands of modern workloads.
These achievements are made possible through collaboration with the broader OpenSearch community, whose contributions to testing, feedback, and development have been invaluable. Together, we are building a robust and efficient search and analytics engine capable of addressing current and future challenges.
Stay connected to our blog and GitHub repos for ongoing updates and insights as we continue this journey of innovation and share future plans.
Appendix: Benchmarking tests and results
This section outlines the performance benchmarks and results achieved using OpenSearch Benchmark to evaluate the Big5 workload across various OpenSearch versions and configurations.
Benchmark setup
- Benchmarking tool: OpenSearch Benchmark, our standard benchmarking tool used in prior evaluations.
- Instance type: c5.2xlarge (8 vCPU, 16 GB RAM), chosen as a mid-tier option to avoid masking resource efficiency gains with oversized instances.
- Cluster configuration: A single-node cluster for reproducibility and ease of setup.
- Index setup: A Big5 index configured with one shard and no replicas (`--workload-params="number_of_replicas:0"`).
- Corpus details: A 100 GB dataset containing 116 million documents. The storage size after ingestion was 24 GB for the primary shard. After excluding overheads like doc values, the resident set size (RSS) is expected to be around 8 GB, matching the JVM heap size for the instance. Keeping most of the data in memory ensures a more accurate evaluation of performance improvements.
- Ingestion: Conducted with a single bulk indexing client to ensure that data is ingested in chronological order.
- Merge policy:
  - LogByteSizeMergePolicy for OpenSearch 2.11.0 and later.
  - TieredMergePolicy for OpenSearch 1.3.18 and 2.7.0.
- Test mode: Tests were run in benchmarking mode (`target-throughput` disabled) so that the OpenSearch client sent requests to the OpenSearch cluster as fast as possible. A sample benchmark invocation is shown after this list.
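For reference, a single-node run similar to this setup can be launched with OpenSearch Benchmark as follows (the host and workload parameters are illustrative; see the OpenSearch Benchmark documentation for the full set of options):

```bash
opensearch-benchmark execute-test \
  --workload=big5 \
  --target-hosts=localhost:9200 \
  --pipeline=benchmark-only \
  --workload-params="number_of_replicas:0"
```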
Results
The following table provides benchmarking test results.
All query values are p90 service times in milliseconds (ms).

| Buckets | Query | Order | OS 1.3.18 | OS 2.7 | OS 2.11.1 | OS 2.12.0 | OS 2.13.0 | OS 2.14 | OS 2.15 | OS 2.16 | OS 2.17 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Text queries | query-string-on-message | 1 | 332.75 | 280 | 276 | 78.25 | 80 | 77.75 | 77.25 | 77.75 | 78 |
| | query-string-on-message-filtered | 2 | 67.25 | 47 | 30.25 | 46.5 | 47.5 | 46 | 46.75 | 29.5 | 30 |
| | query-string-on-message-filtered-sorted-num | 3 | 125.25 | 102 | 85.5 | 41 | 41.25 | 41 | 40.75 | 24 | 24.5 |
| | term | 4 | 4 | 3.75 | 4 | 4 | 4 | 4 | 4 | 4 | 4 |
| Sorting | asc_sort_timestamp | 5 | 9.75 | 15.75 | 7.5 | 7 | 7 | 7 | 7 | 7 | 7 |
| | asc_sort_timestamp_can_match_shortcut | 6 | 13.75 | 7 | 7 | 6.75 | 6 | 6.25 | 6.5 | 6 | 6.25 |
| | asc_sort_timestamp_no_can_match_shortcut | 7 | 13.5 | 7 | 7 | 6.5 | 6 | 6 | 6.5 | 6 | 6.25 |
| | asc_sort_with_after_timestamp | 8 | 35 | 33.75 | 238 | 212 | 197.5 | 213.5 | 204.25 | 160.5 | 185.25 |
| | desc_sort_timestamp | 9 | 12.25 | 39.25 | 6 | 7 | 5.75 | 5.75 | 5.75 | 6 | 6 |
| | desc_sort_timestamp_can_match_shortcut | 10 | 7 | 120.5 | 5 | 5.5 | 5 | 4.75 | 5 | 5 | 5 |
| | desc_sort_timestamp_no_can_match_shortcut | 11 | 6.75 | 117 | 5 | 5 | 4.75 | 4.5 | 4.75 | 5 | 5 |
| | desc_sort_with_after_timestamp | 12 | 487 | 33.75 | 325.75 | 358 | 361.5 | 385.25 | 378.25 | 320.25 | 329.5 |
| | sort_keyword_can_match_shortcut | 13 | 291 | 3 | 3 | 3.25 | 3.5 | 3 | 3 | 3 | 3 |
| | sort_keyword_no_can_match_shortcut | 14 | 290.75 | 3.25 | 3 | 3.5 | 3.25 | 3 | 3.75 | 3 | 3.25 |
| | sort_numeric_asc | 15 | 7.5 | 4.5 | 4.5 | 4 | 4 | 4 | 4 | 4 | 4 |
| | sort_numeric_asc_with_match | 16 | 2 | 1.75 | 2 | 2 | 2 | 2 | 1.75 | 2 | 2 |
| | sort_numeric_desc | 17 | 8 | 6 | 6 | 5.5 | 4.75 | 5 | 4.75 | 4.25 | 4.5 |
| | sort_numeric_desc_with_match | 18 | 2 | 2 | 2 | 2 | 2 | 2 | 1.75 | 2 | 2 |
| Terms aggregations | cardinality-agg-high | 19 | 3075.75 | 2432.25 | 2506.25 | 2246 | 2284.5 | 2202.25 | 2323.75 | 2337.25 | 2408.75 |
| | cardinality-agg-low | 20 | 2925.5 | 2295.5 | 2383 | 2126 | 2245.25 | 2159 | 3 | 3 | 3 |
| | composite_terms-keyword | 21 | 466.75 | 378.5 | 407.75 | 394.5 | 353.5 | 366 | 350 | 346.5 | 350.25 |
| | composite-terms | 22 | 290 | 242 | 263 | 252 | 233 | 228.75 | 229 | 223.75 | 226 |
| | keyword-terms | 23 | 4695.25 | 3478.75 | 3557.5 | 3220 | 29.5 | 26 | 25.75 | 26.25 | 26.25 |
| | keyword-terms-low-cardinality | 24 | 4699.5 | 3383 | 3477.25 | 3249.75 | 25 | 22 | 21.75 | 21.75 | 21.75 |
| | multi_terms-keyword | 25 | 0* | 0* | 854.75 | 817.25 | 796.5 | 748 | 768.5 | 746.75 | 770 |
| Range queries | keyword-in-range | 26 | 101.5 | 100 | 18 | 22 | 23.25 | 26 | 27.25 | 18 | 17.75 |
| | range | 27 | 85 | 77 | 14.5 | 18.25 | 20.25 | 22.75 | 24.25 | 13.75 | 14.25 |
| | range_field_conjunction_big_range_big_term_query | 28 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| | range_field_conjunction_small_range_big_term_query | 29 | 2 | 1.75 | 2 | 2 | 2 | 2 | 1.5 | 2 | 2 |
| | range_field_conjunction_small_range_small_term_query | 30 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| | range_field_disjunction_big_range_small_term_query | 31 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2.25 |
| | range-agg-1 | 32 | 4641.25 | 3810.75 | 3745.75 | 3578.75 | 3477.5 | 3328.75 | 3318.75 | 2 | 2.25 |
| | range-agg-2 | 33 | 4568 | 3717.25 | 3669.75 | 3492.75 | 3403.5 | 3243.5 | 3235 | 2 | 2.25 |
| | range-numeric | 34 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2 |
| Date histograms | composite-date_histogram-daily | 35 | 4828.75 | 4055.5 | 4051.25 | 9 | 3 | 2.5 | 3 | 2.75 | 2.75 |
| | date_histogram_hourly_agg | 36 | 4790.25 | 4361 | 4363.25 | 12.5 | 12.75 | 6.25 | 6 | 6.25 | 6.5 |
| | date_histogram_minute_agg | 37 | 1404.5 | 1340.25 | 1113.75 | 1001.25 | 923 | 36 | 32.75 | 35.25 | 39.75 |
| | range-auto-date-histo | 38 | 10373 | 8686.75 | 9940.25 | 8696.75 | 8199.75 | 8214.75 | 8278.75 | 8306 | 8293.75 |
| | range-auto-date-histo-with-metrics | 39 | 22988.5 | 20438 | 20108.25 | 20392.75 | 20117.25 | 19656.5 | 19959.25 | 20364.75 | 20147.5 |
Notes and considerations
- Additional queries: The Big5 workload was recently updated to include additional queries. These queries have been included in the results of this blog post.
- *`multi_terms-keyword` support: OpenSearch 1.3.18 and 2.7.0 recorded `0` ms service time for `multi_terms-keyword`. This is because `multi_terms-keyword` was not supported until OpenSearch 2.11.0. Mean latency calculations account for this by excluding `multi_terms-keyword` from the geometric mean computation for OpenSearch 1.3.18 and 2.7.0.