ProxySQL Overhead — Explained and Measured

ProxySQL brings a lot of value to your MySQL infrastructure, such as Caching or Connection Multiplexing, but it does not come for free — your database traffic needs to go through additional processing, which adds some overhead. In this blog post, we're going to discuss where this overhead comes from and measure it.

Types of Overhead and Where it Comes From 

There are two main types of overhead to consider when it comes to ProxySQL — Network Overhead and Processing Overhead. 

Network Overhead largely depends on where you place ProxySQL. For example, if you deploy ProxySQL on a separate host (or hosts), as in this diagram:

The application will incur added network latency on every request, compared to accessing the MySQL Servers directly. This latency can range from a fraction of a millisecond, if ProxySQL is deployed on the same local network, to much more than that if you make poor choices about ProxySQL placement.

I have seen exceptionally poor deployments with ProxySQL placed in a different region from both MySQL and the application, causing a delay of tens of milliseconds (and more than 100% overhead for many queries).

Processing Overhead

The second kind of overhead is Processing Overhead — every request ProxySQL receives undergoes additional processing on the ProxySQL side (compared to talking to MySQL directly). If you have enough CPU power available (the CPU is not saturated), the main drivers of the cost of such processing will be the size of the query, the size of its result set, and your ProxySQL configuration. The more query rules you have, and the more complicated they are, the more processing overhead you should expect.

In the worst-case scenario, I've seen thousands of regular-expression-based query rules, which can add very high overhead.

Another cause of high Processing Overhead can be improper ProxySQL configuration. ProxySQL as of version 2.0.10 defaults to a maximum of 4 processing threads (see the mysql-threads global variable), which limits it to using no more than 4 CPU cores. If you're running ProxySQL on a server with a much larger number of CPU cores and see ProxySQL pegged on CPU, you may increase this number up to the number of your CPU cores.
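As a minimal sketch, you could check and raise this setting through the ProxySQL admin interface (the default admin port 6032 and admin/admin credentials are assumptions about your setup; note that mysql-threads is only read at startup, so a restart is required):

    # Check the current value via the ProxySQL admin interface
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e \
      "SELECT variable_value FROM global_variables WHERE variable_name='mysql-threads';"

    # Raise it to match your core count and persist it to disk;
    # mysql-threads takes effect only at startup, hence the restart
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e \
      "UPDATE global_variables SET variable_value='16' WHERE variable_name='mysql-threads';
       SAVE MYSQL VARIABLES TO DISK;"
    systemctl restart proxysql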

The Linux "top" tool is a good way to see if ProxySQL is starved for resources — if you have mysql-threads set to 4 and top shows ProxySQL at 400% CPU usage, that is the problem.

Also watch overall CPU utilization, especially if something else is running on the system besides ProxySQL – an oversubscribed CPU will cause additional processing delays.

Reducing Overhead 

In this blog post we look at the additional overhead ProxySQL introduces, though it can also reduce overhead — the cost of establishing a network connection (especially with TLS) can be drastically lower if you run ProxySQL local to the application instance and have it maintain persistent connections to a MySQL Server.

Let’s Measure It!

I decided not to measure Network Overhead, because it is far too environment-specific, and instead look at the Processing Overhead in the case where we run MySQL, ProxySQL, and the benchmark client on the same box. We will try both TCP/IP and a Unix domain socket to connect to ProxySQL, because it makes quite a difference, and we will also look at Prepared Statements versus standard Non-Prepared Statements. A Google Spreadsheet with all results and benchmark parameters is available here.

We use a plain ProxySQL setup with no query rules and only one MySQL Server configured, so overhead is minimal in this regard; the sketch below shows what such a setup amounts to.
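For reference, a minimal configuration along these lines is all it takes (the hostnames, ports, and the sbtest user are assumptions):

    # Register a single backend MySQL server in hostgroup 0, plus one user
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
      INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (0, '127.0.0.1', 3306);
      LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
      INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('sbtest', 'sbtest', 0);
      LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK;"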

To get stable results with single-thread tests, we had to set up CPU affinity as described in this blog post.
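On Linux this comes down to taskset; a sketch, with core numbers purely illustrative:

    # Pin the already-running server processes to disjoint sets of cores
    taskset -pc 0-13  $(pidof mysqld)
    taskset -pc 14-21 $(pidof proxysql)
    # Launch the benchmark client on its own cores in the same way,
    # e.g. prefix the sysbench command with: taskset -c 22-27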

MySQLDump

Let's start with the most non-scientific test — running mysqldump on a large table (some 2GB) and measuring how long it takes. This test exposes how expensive result processing is in ProxySQL, as the query routing work in this case is negligible.
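In shell terms the comparison is as simple as timing the same dump twice (ProxySQL's default application-facing port 6033, the sbtest schema, and the sbtest credentials are assumptions):

    # Dump the table directly from MySQL, then through ProxySQL, timing both
    time mysqldump -h127.0.0.1 -P3306 -usbtest -psbtest sbtest sbtest1 > /dev/null
    time mysqldump -h127.0.0.1 -P6033 -usbtest -psbtest sbtest sbtest1 > /dev/null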

We can see 20% longer times with ProxySQL (though, considering mysqldump itself spends time processing results, the actual query execution time difference is likely higher).

Another interesting way to think about it is this — we added 4.75 seconds to process 10 million rows, meaning the ProxySQL overhead is 475ns per roughly 200-byte row, which is actually pretty good.

64 Concurrent Connections Workload 

For this workload, I'm using a server with 28 cores and 56 logical CPU threads, and I had to raise mysql-threads to 32 to make sure ProxySQL is not keeping itself on a diet.
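A run along these lines is what I mean (the table count, table size, and duration are assumptions; the sysbench --db-ps-mode=disable flag is what turns prepared statements off for the comparison that follows):

    # 64-connection point-select workload going through ProxySQL (port 6033)
    sysbench oltp_point_select \
      --mysql-host=127.0.0.1 --mysql-port=6033 \
      --mysql-user=sbtest --mysql-password=sbtest \
      --tables=1 --table-size=10000000 \
      --threads=64 --time=300 run
    # Repeat with --db-ps-mode=disable to run without prepared statements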

There is a lot of interesting data here. First, we can see that disabling Prepared Statements gives us a 15% slowdown with a direct MySQL connection and about 13.5% when going through ProxySQL, which makes sense, as the processing overhead on the ProxySQL side should not increase as much when Prepared Statements are disabled.

The performance difference between a direct connection and going through ProxySQL is significant, though: going direct is almost 80% faster when Prepared Statements are in use and over 75% faster when Prepared Statements are disabled.

If you think about these numbers — considering sysbench itself is taking some resources — for trivial primary key lookup queries, the amount of resources ProxySQL requires is comparable to what the MySQL Server itself needs to serve the query.

Single Connection Workload

Let's now take a look at the performance of the same simple point-lookup queries, but using only a single thread. We also pin MySQL, sysbench, and ProxySQL to different CPU cores, so there is no significant contention for CPU resources and we can look at efficiency. In this test, all connections are made using a Unix socket, so we're looking at the best-case scenario, and Prepared Statements are enabled.

A direct connection gives some 55% better throughput than going through ProxySQL.

The other way we can do the math is to see how long it takes to serve a query directly versus with ProxySQL in the middle — it is 46 microseconds with MySQL directly and 71 microseconds when going through ProxySQL, meaning ProxySQL adds around 25 microseconds.
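To make the arithmetic explicit: with a single connection there is no concurrency, so per-query latency is simply the inverse of throughput, and the difference between the two latencies is the added overhead:

    direct:    46 µs/query  ->  1 / 0.000046 s  =  ~21,700 queries/sec
    ProxySQL:  71 µs/query  ->  1 / 0.000071 s  =  ~14,100 queries/sec
    added:     71 µs - 46 µs = 25 µs per query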

While 25 microseconds is a large portion of the total query execution time in this single-host environment with trivial queries, it may be a lot less significant for more complicated queries and network-based deployments.

Unix Socket vs TCP/IP

As I recently wrote, there is quite a performance difference between using TCP/IP and a Unix socket for a local MySQL connection. It is reasonable to assume the same applies to a ProxySQL deployment, only with ProxySQL we have two connections to take care of — the connection between ProxySQL and the MySQL Server, and the one between ProxySQL and the application. In our single-host test, we can use a Unix socket in both places. If you deploy ProxySQL as your application sidecar or on the MySQL Server itself, you will be able to use a Unix socket for at least one of these connections.
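As a sketch, both sides can be switched to sockets through the admin interface (the socket paths are assumptions; like mysql-threads, the mysql-interfaces variable is only read at startup):

    # Application-facing side: listen on TCP and on a Unix socket
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
      UPDATE global_variables SET variable_value='0.0.0.0:6033;/tmp/proxysql.sock'
        WHERE variable_name='mysql-interfaces';
      SAVE MYSQL VARIABLES TO DISK;"
    # Backend side: port 0 with a socket path as the hostname tells ProxySQL
    # to connect to MySQL over the Unix socket
    mysql -h127.0.0.1 -P6032 -uadmin -padmin -e "
      UPDATE mysql_servers SET hostname='/var/run/mysqld/mysqld.sock', port=0;
      LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;"
    systemctl restart proxysql   # needed for mysql-interfaces to take effect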

The letters "U" and "T" correspond to the connection type — "UU" means a Unix socket was used for both connections and "TT" means TCP/IP was used in both places.

The results are quite expected — for the best performance you should use a Unix socket for both connections, but even using a socket for one of them improves performance.

Using TCP/IP for both connection types instead of Unix Socket reduces performance by more than 20%.

If we do the same math to compute how much latency going through TCP/IP adds, it is 20 microseconds, meaning ProxySQL over TCP/IP adds almost double the processing latency compared to ProxySQL via a Unix socket.

Summary

ProxySQL is quite efficient — 25-45 microseconds of added latency per request and hundreds of nanoseconds per row of the result set will be acceptable for most workloads and can more than pay for themselves with the features ProxySQL brings to the table. A poor ProxySQL configuration, though, can yield much higher overhead. Want to be confident? Perform similar tests on your real ProxySQL deployment, with its full rules configuration, within its real deployment topology.


by Peter Zaitsev via Percona Database Performance Blog
