Showing posts with label Benchmarks.

Tuesday, December 3, 2024

InnoDB Tablespace Duplicate Check Threads (and EBS Volumes for MySQL Startup with Many Tables)

In the last weeks / months, I have been working on understanding / improving MySQL startup with many tables.  I have already written five posts on the subject; they are listed below.  In this post, I use the knowledge gained in the previous two posts to show the benefit of tuning InnoDB Tablespace Duplicate Check Threads, making startup 30% faster in one case (2:28 vs. 3:33) and 5% faster in another (5:33 vs. 5:53).

Tuesday, November 26, 2024

The Light MySQL Startup Optimization on EBS Volumes

In the last weeks / months, I have been working on understanding / improving MySQL startup with many tables.  I have already written four posts on the subject; they are listed below.  In this post, I use the system analysis of the previous post to revisit the light optimization on EBS volumes.  With this analysis, I determine why the previous tests did not show improvements, and I provide an example of a faster startup.

Monday, November 11, 2024

Long and Silent / Stressful MySQL Startup with Many Tables

In the last weeks / months, I have been working on understanding / improving MySQL startup with many tables.  I have already written two posts on the subject; the links are below.  So far, I have not shared what brought my attention to this, and it is the subject of this post.  Also, because it is related, I come back to the optimization / contribution I already made on the subject, looking at it with the new information from this post.

Tuesday, September 3, 2024

Faster MySQL Startup with Many Tables (1M+)

I have been scratching my head about MySQL startup for some time.  There is much to say about this, and many other posts will probably follow.  For now, it is enough to know that with many tables (millions), the startup of MySQL 8.0+ (including 8.0, 8.4 and 9.0) is suboptimal (to say the least).  With very small changes, I was able to speed it up from 2:39 to 1:09 (1 minute and 9 seconds).  This result is obtained with 1 million tables on an m6id.xlarge AWS instance (4 vCPUs and a local SSD).  It does not translate directly to EBS volumes, even though I think there are still things that can be done there.  I describe all the details of my optimizations in the rest of this post.

Monday, December 13, 2021

Trick to Simulate a Linux Server with less RAM

I created the first draft of this post many years ago.  At that time, I was working with physical servers having 192 GB of RAM or more.  On such systems, doing memory pressure tests with MySQL is complicated.  I used a trick to simulate a Linux server with less RAM (it also works with VMs, but probably not with Kubernetes or containers).  I recently needed the trick again, and as I will refer to it in a future post, now is a good time to complete and publish this.  TL;DR: huge pages...
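
To make the trick concrete, here is a minimal sketch of the mechanism as I understand it: reserving huge pages removes memory from the normal page pool, so MySQL and the page cache only see what is left.  The sizes, the 2 MiB huge page size and the hide_ram helper are my own illustration, not the exact commands from the post.

GIB = 1024 ** 3
HUGE_PAGE_SIZE = 2 * 1024 * 1024  # common default on x86_64; check /proc/meminfo

def hide_ram(bytes_to_hide: int) -> None:
    # Reserve enough huge pages to make bytes_to_hide unavailable to normal
    # allocations; must run as root, and the kernel might reserve fewer pages
    # than requested if memory is fragmented (read the file back to verify).
    nr_pages = bytes_to_hide // HUGE_PAGE_SIZE
    with open("/proc/sys/vm/nr_hugepages", "w") as f:
        f.write(str(nr_pages))

# Example: on a 192 GB server, hide 160 GiB to end up with roughly 32 GiB usable.
hide_ram(160 * GIB)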

Tuesday, July 16, 2019

MySQL Master Replication Crash Safety Part #5a: making things faster without reducing durability - using better hardware

This is a follow-up post in the MySQL Master Replication Crash Safety series.  In the previous posts, we explored the consequences of reducing durability on masters (different data inconsistencies after an OS crash depending on the replication type) and the performance boost associated with this configuration (benchmark results done on Google Cloud Platform / GCP).  The consequences are summarised in the introduction of Part #4, and the tests are the subject of that post.  In that post, I also mentioned that my results for high durability are limited by the sync latencies of GCP persistent disks.  As I found a system with better latencies (a VM in Amazon Web Services / AWS with a local SSD), I am able to present new results.

MySQL Master Replication Crash Safety Part #5: faster without reducing durability (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #5: making things faster without reducing durability.  There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended.  You should start with the main post and come back here for more details.

Tuesday, July 9, 2019

MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability

This is a follow-up post in the MySQL Master Replication Crash Safety series.  In the three previous posts, we explored the consequences of reducing durability on masters (including setting sync_binlog to a value different from 1).  But so far, I only quickly presented why a DBA would run MySQL with such a configuration.  In this post, I present actual benchmark results.  I also present a fundamental difference between on-premises servers and cloud virtual machines, as my tests are done in Google Cloud Platform (GCP).  But before going further, let's summarise the previous posts.
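
For context, the low durability configuration discussed in this series revolves around sync_binlog (and, as an assumption on my side, innodb_flush_log_at_trx_commit).  The sketch below is my own illustration, not the benchmark harness from the post; it assumes the mysql-connector-python package and placeholder credentials.

import mysql.connector  # assumed client library; any MySQL client would do

HIGH_DURABILITY = {"sync_binlog": 1, "innodb_flush_log_at_trx_commit": 1}
LOW_DURABILITY = {"sync_binlog": 0, "innodb_flush_log_at_trx_commit": 2}

def apply_settings(settings: dict) -> None:
    # Both variables are dynamic, so they can be toggled between benchmark runs
    # without restarting mysqld.
    conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
    cur = conn.cursor()
    for name, value in settings.items():
        cur.execute(f"SET GLOBAL {name} = {value}")
    cur.close()
    conn.close()

apply_settings(LOW_DURABILITY)    # trades crash safety for commit throughput
# apply_settings(HIGH_DURABILITY)  # restores fully durable commits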

MySQL Master Replication Crash Safety part #4: benchmarks (under the hood)

This post is a sister post to MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability.  There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended.  You should start with the main post and come back here for more details.