Part Two: How Many innodb_buffer_pool_instances Do You Need in MySQL 8 With a CPU-Bound Workload?

Following up on my recent MySQL and MariaDB benchmark posts, MySQL and MariaDB on Enterprise SSD Storage and How MySQL and MariaDB Perform on NVMe Storage, I wanted to dig a little deeper and understand how different MySQL parameters affect performance.

One of the more obscure MySQL parameters (in my opinion) is innodb_buffer_pool_instances. In particular, I do not have a good feel for how many you need, or for how to tell whether a chosen value is good or bad.

In our articles MySQL 5.7 Performance Tuning After Installation and MySQL 101: Parameters to Tune for MySQL Performance we recommended using a value of “8”, but I can’t really say whether that is a good enough value. This post is a follow-up to my previous article, which covered the results we got with different innodb_buffer_pool_instances values in an IO-bound scenario.
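
As a quick aside (not part of the original benchmark), you can check what your own server is currently running with a one-liner like the following; the connection options are omitted and need to match your setup:

# show the current setting and the buffer pool size in GB
mysql -e "SELECT @@innodb_buffer_pool_instances AS instances, @@innodb_buffer_pool_size/1024/1024/1024 AS buffer_pool_gb"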

So let’s see what results we get with different innodb_buffer_pool_instances values under the following scenario: I will use the sysbench oltp_read_write benchmark with a pareto distribution and set innodb_buffer_pool_size=140GB for a database of about 100GB in size, so this will be a CPU-bound workload.

Benchmark

The hardware I use is:

System | Supermicro; SYS-F619P2-RTN; v0123456789 (Other)
   Platform | Linux
    Release | Ubuntu 18.04.4 LTS (bionic)
     Kernel | 5.3.0-42-generic
Architecture | CPU = 64-bit, OS = 64-bit
  Threading | NPTL 2.27
    SELinux | No SELinux detected
Virtualized | No virtualization detected
# Processor ##################################################
 Processors | physical = 2, cores = 40, virtual = 80, hyperthreading = yes
     Models | 80xIntel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz
     Caches | 80x28160 KB
# Memory #####################################################
      Total | 187.6G

The storage is a SATA SSD, INTEL SSDSC2KB960G8 (an Intel enterprise-grade D3-S4510 SSD).

A short settings overview:

  • Data fits into memory (the data size is ~100GB, the server has 188GB of memory, and we allocate 140GB for the MySQL innodb_buffer_pool_size).
  • The workload on storage will be mostly write-intensive (reads are served from memory), with fully ACID-compliant and data-safe settings in MySQL.
  • For SATA SSD storage I will use innodb_io_capacity=2000 and innodb_io_capacity_max=4000 (see the configuration sketch after this list).
  • I will test the following values for innodb_buffer_pool_instances: 1, 2, 4, 8, 16, 32, and 64.
  • innodb_buffer_pool_instances=64 is the maximum value allowed by MySQL.
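
For reference, here is a minimal sketch of how these settings could be applied on a MySQL 8 server as an alternative to editing my.cnf by hand; the post does not show the exact configuration file, so treat this as an assumption. Note that innodb_buffer_pool_instances is not dynamic, so it only takes effect after a restart (SET PERSIST_ONLY also requires the PERSIST_RO_VARIABLES_ADMIN privilege):

# dynamic variables: applied immediately and persisted across restarts
mysql -e "SET PERSIST innodb_buffer_pool_size = 140 * 1024 * 1024 * 1024"
mysql -e "SET PERSIST innodb_io_capacity = 2000"
mysql -e "SET PERSIST innodb_io_capacity_max = 4000"
# innodb_buffer_pool_instances is read-only at runtime; PERSIST_ONLY writes it
# to mysqld-auto.cnf so it is picked up on the next restart
mysql -e "SET PERSIST_ONLY innodb_buffer_pool_instances = 8"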

The benchmark command line is:

sysbench oltp_read_write --threads=150 --time=10000 --tables=40 --table_size=10000000 --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest --max-requests=0 --report-interval=1 --mysql-db=sbtest --mysql-ssl=off --create_table_options="DEFAULT CHARSET=utf8mb4" --report_csv=yes --rand-type=pareto run

This means that the benchmark will run for about three hours (10,000 seconds), reporting throughput every second. I use a three-hour time frame for a reason that will become apparent later from the results.
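
The post only shows the run phase; to reproduce it, the dataset has to be created first with a prepare step roughly like the one below (a sketch reusing the same connection options; the thread count for loading is my own choice):

sysbench oltp_read_write --threads=40 --tables=40 --table_size=10000000 --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest --mysql-ssl=off --create_table_options="DEFAULT CHARSET=utf8mb4" prepare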

Results on SATA SSD

Let’s see what results I got for each value of innodb_buffer_pool_instances individually:

[Chart: innodb_buffer_pool_instances=1]

[Chart: innodb_buffer_pool_instances=2]

[Chart: innodb_buffer_pool_instances=4]

[Chart: innodb_buffer_pool_instances=8]

[Chart: innodb_buffer_pool_instances=16]

[Chart: innodb_buffer_pool_instances=32]

[Chart: innodb_buffer_pool_instances=64]

What seems apparent is that increasing innodb_buffer_pool_instances has a positive effect on throughput variation (the throughput fluctuates less). We can condense the results into a single chart to see this more closely:

[Chart: combined throughput for all innodb_buffer_pool_instances values]

And to compare throughput and deviation directly, let’s look at the results for the last 2500 seconds:

[Chart: throughput comparison over the last 2500 seconds]

So innodb_buffer_pool_instances=64 actually showed the best throughput and the least variability, while even with innodb_buffer_pool_instances=32 there are still noticeable stalls.

With innodb_buffer_pool_instances from 1 to 8, there are regular stalls, including drops to 0 tps for a few seconds.

Going beyond innodb_buffer_pool_instances=64 cannot be tested, as MySQL caps the setting at 64.
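
If you want to see how the buffer pool is actually split and how busy each instance is during a run (not something shown in the original post), MySQL exposes per-instance counters that can be polled from the command line:

# one row per buffer pool instance; POOL_SIZE and FREE_BUFFERS are in pages
mysql -e "SELECT POOL_ID, POOL_SIZE, FREE_BUFFERS, DATABASE_PAGES, MODIFIED_DATABASE_PAGES FROM information_schema.INNODB_BUFFER_POOL_STATS"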

Final Thoughts

For this particular case, innodb_buffer_pool_instances=64 was the best choice, but I still can’t recommend a reliable way to find the optimal value. Keep in mind that this setting depends on the hardware (I tested on a server with 40 cores/80 threads) and on the workload.


by Vadim Tkachenko via Percona Database Performance Blog
