Elasticsearch Performance Basics Made Easy

 

One of Elasticsearch’s strengths is just how easy it is to get a cluster up and running. It’s also easy to work around any early performance problems by just adding a couple more nodes to the cluster. However, at some point you need to address the underlying causes of those performance problems to get the most out of your compute resources.

In this article, we will cover a small number of performance optimization basics. We will focus on cluster- and node-level settings, plus discuss an important index-level setting, skipping mapping and query optimizations for now.

Before starting to change any cluster settings, always ensure you have a complete backup of your data. You should also measure the important performance characteristics of your system both before and after applying optimizations: generally-applicable optimizations can occasionally worsen the performance for a specific combination of hardware, topology, configuration, data, and use case.

First: use a recent version of Elasticsearch. The latest version of the software is continually improved by the engineers at Elastic, with regular releases improving performance, security, and data integrity. 

There are those who dislike the rate of breaking changes, but regardless it is incumbent on all of us to keep our servers up to date, at least for security patches. If upgrading your Elasticsearch cluster is very painful, it might be a signal to expend more engineering effort on infrastructure automation.

Next, get your topology basics right. Once you scale past a single Elasticsearch node, you need to consider the unavoidable implications of the CAP theorem. In short, plan for failure. Your cluster will lose nodes. Your cluster will probably go RED from time to time. That is all fine as long as you expect it and plan for it. Rule number one: have an odd number of master (or master-eligible) nodes. With 2n+1 master-eligible nodes, your cluster can sustain the loss of n of them and remain fully functional.

We always suggest having at least as many data nodes in your cluster as the total number of copies (primary plus replicas) of any single shard in your indices. If an index has a replica count of 2, each shard will have one primary and two replicas; for that replica count to be meaningful in terms of fault tolerance, you need a minimum of three data nodes.
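
As an illustration, bumping the replica count is a one-line settings change. The sketch below assumes a hypothetical index called my-index, a cluster reachable on localhost:9200, and the Python requests library; adjust the names for your own setup.

    import requests

    # Ask for two replica copies of every primary shard. Elasticsearch never places
    # a replica on the same node as its primary, so with fewer than three data
    # nodes some of these copies will simply stay unassigned.
    resp = requests.put(
        "http://localhost:9200/my-index/_settings",
        json={"index": {"number_of_replicas": 2}},
    )
    resp.raise_for_status()
    print(resp.json())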

Think about your servers’ shared resources and shared points of failure. If you have pizza boxes in racks with shared power and networking per rack, a single fault can knock multiple nodes offline at once. If you run Elasticsearch in containers, the problem may be compounded, since several nodes can end up sharing one physical host. Use shard allocation awareness to mitigate this risk.
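
Allocation awareness is driven by custom node attributes. In the sketch below, the attribute name (rack_id) and its values are our own illustrative choices: the node-side half lives in each node’s elasticsearch.yml, while the cluster-side half is a settings call.

    import requests

    # Each node declares its location in elasticsearch.yml, for example:
    #   node.attr.rack_id: rack_one   (recent versions; older releases used node.rack_id)
    # Then ask the master to spread copies of each shard across different rack_id values.
    resp = requests.put(
        "http://localhost:9200/_cluster/settings",
        json={"persistent": {"cluster.routing.allocation.awareness.attributes": "rack_id"}},
    )
    resp.raise_for_status()
    print(resp.json())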

Elasticsearch node roles are important for making the most effective use of your limited compute resources. Multiple master-eligible nodes are necessary for fault tolerance, but they generally have low resource utilization. Adding data nodes generally improves cluster performance, but more nodes in the cluster means more overhead and more work for the master. A few dedicated coordinating nodes can reduce the load on read-heavy clusters. For clusters that are both read- and write-heavy, specialized ingest nodes let you segregate the ingest and query responsibilities.

Three-node clusters are fine with three standalone nodes (i.e. nodes fulfilling all roles). For larger clusters, carve out three dedicated master nodes. If you have 10+ nodes, a fairly heavy bulk ingest regime, and slow writes, see whether a dedicated ingest node or two helps. Likewise, with 10+ nodes and a heavy concurrent query load, see whether a couple of dedicated coordinating nodes improves performance.
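
To see how roles are currently spread across your cluster, and whether your master-eligible count is odd as recommended above, the _cat/nodes API is enough. A minimal sketch, again assuming localhost:9200 and the requests library:

    import collections
    import requests

    nodes = requests.get(
        "http://localhost:9200/_cat/nodes",
        params={"format": "json", "h": "name,node.role"},
    ).json()

    # "node.role" is a string of abbreviations: m = master-eligible, d = data, i = ingest.
    counts = collections.Counter()
    for node in nodes:
        roles = node.get("node.role", "")
        counts["master-eligible"] += "m" in roles
        counts["data"] += "d" in roles
        counts["ingest"] += "i" in roles

    for role, count in counts.items():
        print(f"{role}: {count} node(s)")
    if counts["master-eligible"] % 2 == 0:
        print("Consider running an odd number of master-eligible nodes.")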

Elasticsearch requires you to use your cluster’s RAM wisely. Several factors need balancing: more heap space for your nodes’ JVMs is generally better; Lucene (the library Elasticsearch relies on for its core search and indexing capabilities) has performance optimizations that depend on plenty of free RAM for the operating system’s disk caches; and JVMs with heaps small enough (below ~32GB) to use compressed memory pointers are generally faster than JVMs running with uncompressed pointers, although other edge cases apply.

Balancing and optimizing these can be daunting, but there are some good rules to get you started:

  • Size each machine’s RAM so that memory is split roughly evenly between the Elasticsearch heap and unallocated memory that the operating system will use for disk caching.
  • Keep your Elasticsearch JVMs using compressed pointers, i.e. don’t allocate heaps larger than ~31GB. 

It follows from these heuristics that 64GB of RAM per server is a sensible maximum, and since the performance of Elasticsearch clusters is often limited by memory, 64GB per machine is also a good starting point.
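
You can check both heap size and pointer compression from the nodes info API. The sketch below assumes localhost:9200; the compressed-pointers field is reported by recent Elasticsearch versions, so treat its exact name as an assumption and fall back to the startup logs if it is missing.

    import requests

    info = requests.get("http://localhost:9200/_nodes/jvm").json()

    for node_id, node in info["nodes"].items():
        jvm = node["jvm"]
        heap_gb = jvm["mem"]["heap_max_in_bytes"] / 2**30
        # Recent versions report whether compressed ordinary object pointers are in use.
        compressed = jvm.get("using_compressed_ordinary_object_pointers", "unknown")
        print(f"{node['name']}: heap {heap_gb:.1f} GB, compressed oops: {compressed}")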

JVM garbage collection (GC) can cripple Elasticsearch, and disabling swap is a highly recommended tactic for keeping GC pauses short. As with any JVM memory management issue the details get technical fast, but use one of the three mechanisms (disabling swap entirely, locking Elasticsearch’s memory, or reducing swappiness) to disable or minimize memory-to-disk swapping on every node in your cluster. Even one data node swapping to disk and experiencing stop-the-world GC pauses can trigger shard reallocation, causing ripples through the entire cluster and even dropping it to a RED state.
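
If you take the memory-locking route (bootstrap.memory_lock: true in elasticsearch.yml on recent versions), it is worth confirming the lock actually took effect, because missing OS limits can silently prevent it. A minimal sketch against localhost:9200:

    import requests

    # The nodes info API reports whether each node successfully locked its memory.
    resp = requests.get(
        "http://localhost:9200/_nodes",
        params={"filter_path": "**.mlockall,**.name"},
    )
    for node_id, node in resp.json()["nodes"].items():
        locked = node.get("process", {}).get("mlockall", False)
        print(f"{node.get('name', node_id)}: mlockall = {locked}")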

Our last recommendation is to reduce shard counts as much as you can. The out-of-the-box default (before Elasticsearch 7.0) is 5 shards and 1 replica per index. There are valid reasons for this being a sensible default, but our experience tells us that 1 shard per index would in many cases be a better default. Shards are not free, and oversharding (having too many shards for the amount of data in an index) can be very expensive. A good rule is to keep shards between a few GB and a few tens of GB. If you know an index will hold hundreds of GB, plan for more shards. In our experience, many indices aren’t that big, don’t have the kind of write load where more shards might help, and would be stored most efficiently with just one shard.

The main downside of large shards is that reallocation has much more data to move, so recovering from the loss of a data node takes longer. Reducing your index default to 1 shard may be overly simplistic, but you will likely find that 5 shards per index is too many.
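
To find out whether an existing cluster is oversharded, look at how big its shards actually are. The sketch below uses the _cat/shards API (assuming localhost:9200); the 1GB threshold is an arbitrary illustration of “much smaller than the sweet spot”, not a hard rule.

    import requests

    shards = requests.get(
        "http://localhost:9200/_cat/shards",
        params={"format": "json", "h": "index,shard,prirep,store", "bytes": "mb"},
    ).json()

    # Flag primary shards far below the "few GB to a few tens of GB" sweet spot.
    for s in shards:
        if s["prirep"] == "p" and s["store"] is not None and float(s["store"]) < 1024:
            print(f"{s['index']} shard {s['shard']}: primary is only {s['store']} MB")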

Altering shard counts can (sort of) be achieved through either the Shrink API or the Reindex API. It’s not necessarily a fast operation, but you aren’t stuck with the defaults, nor with your index creation settings.
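
For example, the Shrink API can collapse an oversharded index down to fewer primary shards; the source index must first be made read-only and have a complete copy of every shard on one node. A sketch with hypothetical index and node names:

    import requests

    ES = "http://localhost:9200"

    # 1. Prepare the source index: route a copy of every shard onto one node
    #    (the hypothetical "node-1") and block writes.
    requests.put(
        f"{ES}/logs-oversharded/_settings",
        json={
            "index.routing.allocation.require._name": "node-1",
            "index.blocks.write": True,
        },
    ).raise_for_status()

    # 2. Once relocation has finished (poll cluster health in a real run),
    #    shrink into a new single-shard index.
    requests.post(
        f"{ES}/logs-oversharded/_shrink/logs-single-shard",
        json={"settings": {"index.number_of_shards": 1, "index.number_of_replicas": 1}},
    ).raise_for_status()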


We have created a free tool that automatically generates customized recommendations based on your cluster topology. Simply paste in the results of a couple of your cluster’s diagnostic API endpoints, and instantly see a breakdown of how your cluster is set up and what you can do to improve it. We don’t harvest your data; it’s like free hugs for your Elasticsearch. Check it out!



Ian Truslove is a co-founder of Cambium Consulting. He specializes in building large-scale resilient data processing systems using tools like Clojure and Elasticsearch. When not hunched over an Emacs terminal, you might find him on a bike in the wilds of Colorado.