Need to Connect to a Local MySQL Server? Use Unix Domain Socket!

When connecting to a local MySQL instance, you have two commonly used methods: use the TCP/IP protocol to connect to the local address ("localhost" or 127.0.0.1) or use a Unix Domain Socket.

If you have a choice (if your application supports both methods), use Unix Domain Socket as this is both more secure and more efficient.
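For reference, the mysql command-line client lets you force either connection method explicitly. (The socket path below is an assumption; check the `socket` variable on your server, e.g. with SHOW VARIABLES LIKE 'socket'. Note that the hostname "localhost" makes the client default to the socket, which is why 127.0.0.1 is used to force TCP/IP.)

```shell
# Connect over the Unix Domain Socket (socket path is an assumption):
mysql --protocol=SOCKET --socket=/var/run/mysqld/mysqld.sock -u sbtest -p

# Connect over TCP/IP through the loopback interface:
mysql --protocol=TCP --host=127.0.0.1 --port=3306 -u sbtest -p
```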

How much more efficient, though?  I have not looked at this topic in years, so let’s see how a modern MySQL version does on relatively modern hardware and modern Linux.

Benchmarking TCP/IP Connection vs Unix Domain Socket for MySQL

I’m testing Percona Server for MySQL 8.0.19 running on Ubuntu 18.04 on a dual-socket 28 core/56 thread server. (Though I have validated the results on a 4-core server too, and they are comparable.)

In this test, we run sysbench doing a simple primary key lookup on a small table using prepared statements. This benchmark has one of the shortest execution paths in the MySQL Server code, hence stressing the TCP/IP stack. As such, I would expect these results to show something closer to a worst-case scenario for TCP/IP overhead versus Unix Domain Socket.
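A sketch of the kind of sysbench invocation this describes (the exact command lines are in the linked spreadsheet; the socket path, credentials, and table size here are assumptions):

```shell
# Point-select workload over the Unix Domain Socket:
sysbench oltp_point_select \
  --mysql-socket=/var/run/mysqld/mysqld.sock \
  --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
  --tables=1 --table-size=10000000 \
  --threads=1 --time=60 run

# Same workload over TCP/IP -- only the connection options change:
sysbench oltp_point_select \
  --mysql-host=127.0.0.1 --mysql-port=3306 \
  --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
  --tables=1 --table-size=10000000 \
  --threads=1 --time=60 run
```

sysbench uses prepared statements by default (--db-ps-mode=auto), which keeps the per-query work on the server side minimal.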

I also run mysqldump on the sysbench table, which shows the “streaming” overhead and should be closer to a best-case scenario.

Single Thread and 64 Thread Benchmark Run

A single thread run allows us to see the difference in the throughput when there is no contention in MySQL Server or Linux Kernel, while a 64 thread run shows what happens when there is significant contention.

For a single thread, we’re seeing a massive 35 percent reduction in throughput when going through TCP/IP instead of the more efficient Unix Domain Socket. For 64 threads, the difference is similar (33 percent), perhaps highlighting the less efficient execution outside of the communication between sysbench and MySQL.

Running MySQLDump

MySQLDump on a 10 million row sysbench table (about 2GB in size) should be dominated by the overhead of streaming data through the socket versus the TCP/IP connection, rather than the overhead of passing small packets, which we see with short queries.

time mysqldump -u sbtest -psbtest --host=127.0.0.1 --port=3306 sbtest sbtest1 > /dev/null
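The Unix Domain Socket counterpart changes only how the connection is made (the socket path is an assumption; adjust it to your server's `socket` variable):

```shell
# Same dump, but over the Unix Domain Socket instead of TCP/IP:
time mysqldump -u sbtest -psbtest --socket=/var/run/mysqld/mysqld.sock sbtest sbtest1 > /dev/null
```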

We can see the operation over TCP/IP takes 11 percent longer to complete, so there is overhead for streaming, too. Note that mysqldump also spends time on data conversion and on pushing the output to /dev/null, so the actual overhead on running the query itself is higher than 11 percent.

100K QPS Injection Benchmark

In this benchmark and the next one, instead of pushing the system to the limit, we put a certain load on it and see how it performs.  Such a workload tends to be closer to many real-world applications, and it stresses the kernel differently.

With the system capable of handling some 250K queries/sec at full load, 100K QPS corresponds to a light load of about 40 percent of full capacity.
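sysbench can inject a fixed request rate rather than running flat out, using its --rate option. A sketch of such a run (socket path, thread count, and duration are assumptions):

```shell
# Inject a fixed load of ~100K events/sec instead of pushing to the limit
# (for oltp_point_select, one event is one query, so this is ~100K QPS):
sysbench oltp_point_select \
  --mysql-socket=/var/run/mysqld/mysqld.sock \
  --mysql-user=sbtest --mysql-password=sbtest --mysql-db=sbtest \
  --rate=100000 --threads=64 --time=300 run
```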

In this case, we see a 54 percent increase in average latency and a 50 percent increase in 99th percentile latency for TCP/IP, which is pretty much in line with the throughput tests above at peak load.

200K QPS Injection Benchmark

At 200K QPS, we’re driving the system much closer to capacity. With some 250K queries/sec capacity at full load, we’re driving 80 percent of the load for the TCP/IP connection. Because the system can handle some 380K queries/sec with a Unix Domain Socket, this corresponds to only 53 percent of the full load.

We can see that for this workload, the average latency is 3.7 times better for the Unix Domain Socket, and the 99th percentile latency is almost 12 times better.

These may well be unusual results, but they serve as a great illustration of the following effect: when a system is operating close to its capacity, latency can be very sensitive to load, and reducing the load through performance optimization can have outsized gains on latency, especially worst-case latencies.
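As a back-of-the-envelope illustration (a textbook M/M/1 queueing model, not a model of this exact benchmark), mean response time is 1/(mu - lambda) for service capacity mu and arrival rate lambda. Plugging in the roughly 250K queries/sec capacity from the TCP/IP runs shows how sharply latency grows as load approaches capacity:

```shell
# Mean response time of an M/M/1 queue: R = 1/(mu - lambda).
# mu = 250K queries/sec (approximate TCP/IP capacity from the runs above).
awk 'BEGIN {
  mu = 250000;
  n = split("100000 200000 240000", load, " ");
  for (i = 1; i <= n; i++) {
    lambda = load[i];
    printf "load %d qps -> mean latency %.1f us\n", lambda, 1e6 / (mu - lambda);
  }
}'
```

Going from 40 percent to 80 percent load triples the model’s mean latency, and the last 20K QPS before saturation costs far more than that, which matches the shape of the measured results.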

If you want to run the test on your system, raw results and sysbench command line for runs are in this public Google spreadsheet.

Summary

As Unix Domain Sockets are much simpler and tuned for local process communication, you would expect them to perform better than TCP/IP over the loopback interface. Indeed, they perform significantly better! So if you have a choice, use Unix Domain Sockets to connect to your local MySQL server!


by Peter Zaitsev via Percona Database Performance Blog
