I am starting a blog post series on using indexes (or tables) as queues. I have had this series in the back of my mind for some time : it started a few years back when I worked on optimizing a row deletion job (I do not call this a purge job, to avoid confusion with the InnoDB Purge). Such jobs can be generalized to using indexes (or tables) as queues (this is fairly cryptic, I come back to it later). In this post, I explain why queries that are expected to be fast might become slow, and as the title of this post implies, it is related to the InnoDB Purge.
Let's take the very simple query below, and let's run it on a table without an index...
SELECT * FROM t1 LIMIT 1
This query should be fast because it just returns the first row of the table (there is a catch, I come back to this below). In a normal situation, this query runs in 0.008 seconds. But in a bad situation, it takes 3.2 seconds ! It is not related to locking, and it can take even longer in a more convoluted situation (46 seconds). It is because of the InnoDB Purge !
As I covered in a previous post about the InnoDB Undo Logs, deleted rows still live in a table (and in its indexes) for some time. A DELETE only marks the rows as deleted, and the InnoDB Purge garbage-collects these rows when the time is right. So when the above query runs for longer than expected, it is because it is scanning many delete-marked rows. And this is why it is not accurate to say that this query returns the first row of the table : it returns the first row which is not marked as deleted.
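To make the mechanism concrete, here is a small Python sketch (a toy model, not InnoDB code) of why a LIMIT 1 query over a table full of delete-marked rows examines far more rows than it returns :

```python
# A toy model of a table where many rows are delete-marked (tombstones).
# This is NOT InnoDB code, just an illustration of why "SELECT ... LIMIT 1"
# can examine far more rows than it returns.
rows = [{"id": i, "delete_marked": True} for i in range(1_000_000)]
rows.append({"id": 1_000_000, "delete_marked": False})

def select_limit_1(rows):
    """Return the first non-delete-marked row and the number of rows examined."""
    examined = 0
    for row in rows:
        examined += 1
        if not row["delete_marked"]:
            return row, examined
    return None, examined

row, examined = select_limit_1(rows)
# One row is returned, but a million tombstones were stepped over to find it.
```

The query still "sends" a single row (which is why rows-examined counters look innocent, as shown below), but the scan cost is proportional to the number of tombstones at the head of the table.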
This is all shown in annex #1. After initializing a large-enough table and showing the query is fast, I block the InnoDB Purge by opening a transaction, and I remove a large number of rows from the table. After that, the query is slow because it is scanning many delete-marked rows. We can also see that, while removing rows, the DELETEs are getting slower and slower, for exactly the same reason (they are scanning delete-marked rows) :
- the first 10% delete took 25.7 seconds (it already looks degraded : the second took 20.7), and the last 29.7;
- the first 0.1% delete took 0.22 seconds, and the last 11.8.
Going from 20 to 29 seconds might not be such a big problem because it is only a 50% increase in query time. But these 10% deletes are not a normal use-case : I only ran them for convenience (I wanted the demonstration to be short). Normally, such long / big transactions (taking more than 1 to 5 seconds) should be avoided, because they make a lot of things more complicated / painful at scale, especially in database operations (schema changes, switchover, etc...). Normally, there should only be small / quick transactions in a production database, like the 0.1% delete above taking 0.22 seconds. And it is for this reason that this small transaction becoming slow (11.8 seconds) is a problem at scale.
I do not know of any good way of observing queries scanning delete-marked rows. I am sure that one WebScale Player has a patch in their fork to observe this, but it looks like this did not make its way into Upstream. Also shown in annex #1, none of ROWS_EXAMINED (events_statements_summary_by_digest), Innodb_rows_read (Global Status) or dml_reads (InnoDB Metric) show scanning delete-marked rows. In some ways, this is another rows-examined blindspot (the two others I know of are with Index Condition Pushdown, which is very similar to delete-marked rows as it happens in the Storage Engine, and with querying non-existent rows). Only the InnoDB Metric buffer_pool_read_requests can give us a hint that something is wrong (by being unusually high), combined with a higher-than-usual CPU usage and a high trx_rseg_history_len (even though it is small in the example shown in the annex, because only a few transactions were run). The Percona Extended Slow Log with log_slow_verbosity set to innodb would show an unusually high innodb_pages_distinct in this case (not shown in the annexes, left as an exercise to the reader).
Also shown in annex #1, once the InnoDB Purge is unblocked, things stay slow for some time until the purge has finished garbage-collecting all the delete-marked rows (more than 6 minutes in my tests).
You might think this is not such a big problem because you do not care about query execution times of less than 30 seconds, but annex #2 might change your mind. In it, I show the impact of scanning delete-marked rows when IOs are involved. The SELECT query now takes 47 seconds (from 3.2 seconds without IOs, and 0.008 seconds without scanning delete-marked rows), and the 0.1% delete goes up to one minute (from 11.8 seconds and 0.22 seconds) !
Extending this to indexes...
And all of the above was only an appetizer. Such DELETEs on a table without indexes might not be a very compelling use-case. Things become more interesting with an index...
In annex #3, I show that the same behavior affects scanning an index (a fast query becoming slow after removing many rows, with the DELETEs via the index becoming slower and slower). This is the use-case of a row deletion job. In this specific case, when deleting 0.1% of the table at a time (100K rows per batch), the query execution time via the index goes from 0.9 seconds to 12.8 seconds. Doing some extrapolation, if we had run this query 900 times (instead of nine 10% deletes), the total time taken would have been in the order of 89 minutes (900 * (12.8-0.9) seconds / 2 = 89 minutes, instead of 9 times ~85 seconds ≈ 12.7 minutes). From an algorithm complexity point of view, executing a row deletion job via an index while scanning delete-marked rows is O(n^2).
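The extrapolation is simple arithmetic : the per-batch time grows roughly linearly, so the total is the area of a triangle. A quick Python check of the numbers above :

```python
# Numbers from annex #3: the 0.1% DELETE via the index grows roughly
# linearly from 0.9 seconds (first batch) to 12.8 seconds (last batch).
first_s, last_s, batches = 0.9, 12.8, 900

# Summing a linear ramp: batches * (last - first) / 2, ignoring the base cost.
total_minutes = batches * (last_s - first_s) / 2 / 60  # about 89 minutes

# Versus nine big 10% deletes at roughly 85 seconds each.
big_deletes_minutes = 9 * 85 / 60  # about 12.7 minutes
```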
And this brings us to what I call using an index as a queue. In annex #4, instead of deleting rows via an index, I process them (process means updating the indexed value to a new value, as opposed to deleting the row for a row deletion job). As an update of a row in an index is a DELETE of the old row followed by an INSERT of the new one, we have the same problem : the query might scan delete-marked rows. Such processing using an index as a queue can also suffer from degraded throughput because we did not Mind the InnoDB Purge. And this is why I consider these two very similar : a row deletion job being a special case of using an index as a queue (deletion of the row at the head of the queue / index instead of updating the row).
But what is the solution ? It is just to be clever, and to craft SQL statements in such a way that delete-marked rows are not scanned. The trick is to determine a range to process / delete, and then, after processing / deleting it, to start the next range at the end of the previous one (not from the beginning of the index / queue). This is what I show in annex #5 for the use-case of the row deletion job. As you can see, the SELECT and DELETE are not getting slower as the job advances, even though there are a lot of delete-marked rows at the head of the index / queue.
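As a back-of-the-envelope model of why this works, here is a Python sketch (a simplified cost model, not InnoDB internals) : tombstones left by previous batches are re-scanned on every batch when always starting from the head of the index, but never when resuming from a cursor at the end of the previous range.

```python
def scanned_from_head(n_rows, batch):
    """Each batch restarts at the head of the index and re-scans all
    tombstones left by previous batches: O(n^2 / batch) rows scanned."""
    scanned, deleted = 0, 0
    while deleted < n_rows:
        scanned += deleted + batch  # previous tombstones + this batch's rows
        deleted += batch
    return scanned

def scanned_with_cursor(n_rows, batch):
    """Each batch resumes at the end of the previous range: every row is
    scanned exactly once, O(n) in total."""
    scanned, deleted = 0, 0
    while deleted < n_rows:
        scanned += batch
        deleted += batch
    return scanned

# Deleting 1M rows in batches of 10K: 50.5M rows scanned from the head,
# versus 1M rows scanned with the cursor.
head = scanned_from_head(1_000_000, 10_000)
cursor = scanned_with_cursor(1_000_000, 10_000)
```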
This is a good use-case for OFFSET !
One thing I find interesting about this trick is that it is a good use-case for the OFFSET SQL construct. This construct is usually disliked — sometimes even boycotted — because it is a bad way to implement pagination queries (also an O(n^2) algorithm). But in our case, OFFSET allows finding the upper bound of our range in an elegant way (without fetching all the rows), avoiding the O(n^2) complexity of scanning delete-marked rows. I find it ironic that an SQL construct hated for causing an O(n^2) algorithm allows avoiding another O(n^2) algorithm.
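To illustrate how OFFSET locates the upper bound of the next range, here is a Python analogue on a sorted list (the function name is mine, not a MySQL API; bisect plays the role of the index seek) :

```python
import bisect

def next_range_upper_bound(keys, start, batch):
    """Mimics: SELECT key FROM t WHERE key >= start
               ORDER BY key LIMIT 1 OFFSET batch
    on a sorted list: return the key 'batch' positions past 'start',
    without fetching the rows in between; None means we reached the tail."""
    i = bisect.bisect_left(keys, start)  # the index seek to 'start'
    j = i + batch                        # the OFFSET
    return keys[j] if j < len(keys) else None

keys = list(range(100))
# The next batch covers the range [10, 35): 25 rows.
upper = next_range_upper_bound(keys, 10, 25)
```

In SQL, OFFSET still walks the batch-sized range of index entries, but that is a constant per-batch cost; the point is that it never walks the tombstones before the cursor, which is what makes the head-scanning version quadratic.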
(note that even without scanning delete-marked rows, the algorithm for the row deletion job via an index is still suboptimal in some / most cases; a better one will be the subject of a follow-up post, with another similar use-case for OFFSET)
About this trick to avoid scanning delete-marked rows... yes, it is a leaky abstraction ! I would prefer to not have to mind the inner workings of InnoDB, but they are sometimes brought to the surface, causing performance problems, and we have to deal with them. PostgreSQL or MyRocks will have different problems, so optimizing for them is different. And remember that premature optimization is the root of all evil : only optimize when solving an actual problem !
Before closing this post, I need to mention query complexity. Because scanning composite indexes in index order involves SQL queries with lengthy where-clauses like below (it would be even more complex if 2, 3 or more columns were indexed)...
((v = $v1 AND id >= $id1 OR v > $v1) AND (v < $v2 OR v = $v2 AND id < $id2))
...it is tempting to use row constructors like below...
(($v1,$id1) <= (v,id) AND (v,id) < ($v2,$id2))
...however, until Bug #111952 is fixed, it is probably better to avoid this (all the details are in the bug report). I would really like this bug to get more attention, because it prevents query simplification.
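The two forms are supposed to be equivalent, which is easy to sanity-check outside of MySQL : Python tuples compare lexicographically, just like SQL row constructors should.

```python
from itertools import product

def long_form(v, i, v1, i1, v2, i2):
    # The expanded where-clause from the post:
    # ((v = v1 AND id >= i1 OR v > v1) AND (v < v2 OR v = v2 AND id < i2))
    return ((v == v1 and i >= i1 or v > v1)
            and (v < v2 or v == v2 and i < i2))

def row_constructor_form(v, i, v1, i1, v2, i2):
    # The row-constructor form: ((v1,i1) <= (v,id) AND (v,id) < (v2,i2))
    return (v1, i1) <= (v, i) < (v2, i2)

# Exhaustively check that both forms agree on a small domain.
vals = range(4)
forms_agree = all(
    long_form(*args) == row_constructor_form(*args)
    for args in product(vals, repeat=6)
)
```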
Annex #1 : Slow Query Scanning Delete-Marked Rows
The commands below are run on a m6i.xlarge AWS instance (16 GiB RAM) with a gp3 EBS volume running Debian 12. Similar results can be obtained with local SSDs (either on a m6id.xlarge or on my MacBook), but slow-enough disks and Linux are needed for annex #2.
# Create a sandbox for our tests.
# (no need for binlogs, so let's skip them for not having to mind disk space)
# (the pv command is a trick to time command execution)
{
v=mysql_8.4.8; d=${v//./_}
dbda="-c skip-log-bin"
dbdeployer deploy single $v $dbda | pv -tN dbdepl. > /dev/null
cd ~/sandboxes/msb_$d
}
dbdepl.: 0:00:16

# Create a table for our tests, fill it, and save it.
# (aiming at a large table fitting in the InnoDB Buffer Pool)
# (large enough so a SELECT COUNT(*) is not too fast)
# (saving the table for restoring it after destructive tests)
# (two FLUSH TABLE for the second to be quick)
# (CREATE TABLE t1_bak for keeping a copy of the table structure)
# (the function will be useful in the next annexes)
n=$((100 * 1000 * 1000)); {
./use <<< "
  SET GLOBAL innodb_buffer_pool_size = 12*1024*1024*1024;
  CREATE DATABASE test_jfg;
  CREATE TABLE test_jfg.t1(id INT PRIMARY KEY, v INT)"
seq 1 $n |
  awk '{printf "(%d,%d)\n", $1, $1 % 10}' |
  paste -s -d "$(printf ',%.0s' {1..1000})\n" |
  sed -e 's/.*/INSERT INTO t1 values &;/' |
  ./use test_jfg | pv -tN insert
./use test_jfg <<< "FLUSH TABLE t1 FOR EXPORT" | pv -tN export
./use test_jfg <<< "
  CREATE TABLE t1_bak LIKE t1;
  FLUSH TABLE t1 FOR EXPORT;
  system cp data/test_jfg/t1.cfg data/test_jfg/t1.cfg.bak
  system pv -btrae data/test_jfg/t1.ibd > data/test_jfg/t1.ibd.bak"
{ time ./use test_jfg <<< "SELECT COUNT(*) FROM t1"; } 2>&1 | grep real
function import_no_index() {
  ./use test_jfg <<< "
    DROP TABLE IF EXISTS t1;
    CREATE TABLE t1 like t1_bak;
    ALTER TABLE t1 DISCARD TABLESPACE"
  ( cd data/test_jfg; pv -btrae t1.ibd.bak > t1.ibd; cp t1.cfg.bak t1.cfg; )
  ./use test_jfg <<< "ALTER TABLE t1 IMPORT TABLESPACE" | pv -tN import
}
}
insert: 0:08:34
export: 0:00:00
2.80GiB 0:00:36 [79.6MiB/s] [79.6MiB/s]
real 0m2.716s

# SELECT LIMIT 1 is fast as expected.
# (running three times to account for cache effects)
{
function query_no_index() {
  { time ./use test_jfg <<< "SELECT * FROM t1 LIMIT 1"; } 2>&1 | grep real
}
function query_no_index3() {
  for i in {0..2}; do query_no_index; done
}
query_no_index3
}
real 0m0.008s
real 0m0.007s
real 0m0.007s

# Blocks the InnoDB Purge, then DELETE 0.1% and 10% of the table,
# all these 10 times skipping the last 10% (to keep some rows in there).
# We see that the DELETEs take longer and longer, because of scanning delete-marked rows.
{
function block_purge() {
  (( ./use test_jfg <<< "BEGIN; SELECT * FROM t1 LIMIT 1; DO SLEEP(60*60*60)" > /dev/null &
    touch purge_blocked
    while sleep 1; do test -e purge_blocked || break; done
    kill %1 )&)
}
function unblock_purge() {
  rm purge_blocked
}
function delete_1000_10() {
  for j in 1000 10; do
    test $i -ne 9 -o $j -ne 10 || break
    local sql="DELETE FROM test_jfg.t1 LIMIT $(($n/$j))"
    { time ./use <<< "$sql"; } 2>&1 |
      sed -ne "s/real/$i DELETE $(printf "%4d" $j) /p"
  done
}
block_purge
for i in {0..9}; do echo; delete_1000_10; done | tail -n +2
}
0 DELETE 1000  0m0.228s
0 DELETE   10  0m25.714s

1 DELETE 1000  0m1.493s
1 DELETE   10  0m20.763s

2 DELETE 1000  0m2.745s
2 DELETE   10  0m21.843s

3 DELETE 1000  0m4.039s
3 DELETE   10  0m23.391s

4 DELETE 1000  0m5.328s
4 DELETE   10  0m24.694s

5 DELETE 1000  0m6.635s
5 DELETE   10  0m25.832s

6 DELETE 1000  0m7.983s
6 DELETE   10  0m27.034s

7 DELETE 1000  0m9.232s
7 DELETE   10  0m28.484s

8 DELETE 1000  0m10.507s
8 DELETE   10  0m29.580s

9 DELETE 1000  0m11.804s

# SELECT LIMIT 1 is now slow, because scanning delete-marked rows.
query_no_index3
real 0m3.256s
real 0m3.254s
real 0m3.249s

# InnoDB History is not even that large, because few transactions were run.
./use -N information_schema <<< "
  select COUNT from INNODB_METRICS where NAME = 'trx_rseg_history_len'"
36

# Rows Examined does not show scanning delete-marked rows.
{
./use -N performance_schema <<< "TRUNCATE events_statements_summary_by_digest"
query_no_index > /dev/null
./use performance_schema <<< "
  SELECT QUERY_SAMPLE_TEXT, COUNT_STAR, SUM_ROWS_SENT, SUM_ROWS_EXAMINED
  FROM events_statements_summary_by_digest
  WHERE DIGEST_TEXT like '%t1%'\G"
}
*************************** 1. row ***************************
QUERY_SAMPLE_TEXT: SELECT * FROM t1 LIMIT 1
       COUNT_STAR: 1
    SUM_ROWS_SENT: 1
SUM_ROWS_EXAMINED: 1

# Innodb_rows_read is not showing scanning delete-marked rows either.
{
sql_status="SHOW GLOBAL STATUS LIKE 'Innodb_rows_read'"
c1="$(./use -N <<< "$sql_status" | awk '{print $2}')"
query_no_index > /dev/null
c2="$(./use -N <<< "$sql_status" | awk '{print $2}')"
echo "Innodb_rows_read: $c2 - $c1 = $(($c2 - $c1))"
}
Innodb_rows_read: 91000009 - 91000008 = 1

# dml_reads does not show scanning delete-marked rows either.
{
function init_metric() {
  local c; metric=$1
  for c in disable reset_all enable; do
    ./use <<< "SET GLOBAL innodb_monitor_$c = $metric"
  done
  sql_metric="SELECT COUNT FROM INNODB_METRICS WHERE NAME = '$metric'"
  c1="$(./use -N information_schema <<< "$sql_metric")"
}
function show_metric() {
  local c2="$(./use -N information_schema <<< "$sql_metric")"
  echo "$metric: $c2 - $c1 = $(($c2 - $c1))"
  unset metric sql_metric c1
}
init_metric dml_reads
query_no_index > /dev/null
show_metric
}
dml_reads: 1 - 0 = 1

# The only hint we have at scanning delete-marked rows is the number of InnoDB Pages that are accessed.
# (below is a very large number for that query)
{
init_metric buffer_pool_read_requests
query_no_index > /dev/null
show_metric
}
buffer_pool_read_requests: 165007 - 0 = 165007

# Once the InnoDB Purge is unblocked,
# it takes a while to clean up the table and for the query to be fast again.
{
unblock_purge
while sleep 1; do
  echo $(date) $({ time ./use test_jfg <<< "SELECT * FROM t1 LIMIT 1"; } 2>&1 | grep real)
  trx_hist=$(./use -N information_schema <<< "
    select COUNT from INNODB_METRICS where NAME = 'trx_rseg_history_len'")
  test $trx_hist -eq 0 && break
done
}
Mon Mar 2 16:02:49 UTC 2026 real 0m3.254s
[...1 minute...]
Mon Mar 2 16:03:52 UTC 2026 real 0m2.857s
[...1 minute...]
Mon Mar 2 16:04:50 UTC 2026 real 0m2.353s
[...1 minute...]
Mon Mar 2 16:05:49 UTC 2026 real 0m1.808s
[...1 minute...]
Mon Mar 2 16:06:50 UTC 2026 real 0m1.274s
[...1 minute...]
Mon Mar 2 16:07:50 UTC 2026 real 0m0.717s
[...1 minute...]
Mon Mar 2 16:08:49 UTC 2026 real 0m0.192s
[...10 seconds...]
Mon Mar 2 16:08:59 UTC 2026 real 0m0.122s
[...10 seconds...]
Mon Mar 2 16:09:09 UTC 2026 real 0m0.007s
[...10 seconds...]
Mon Mar 2 16:09:19 UTC 2026 real 0m0.007s
[...]
Annex #2 : Slower Query Fetching Delete-Marked Rows from Disk
Below needs to be run after Annex #1.
Reminder : these are run on a m6i.xlarge AWS instance with a gp3 EBS volume.
For the commands below to give matching timings (longer SELECT and DELETE from iterations #3 and #4), reading the 2.8 GiB table not fitting in the InnoDB Buffer Pool must generate IOs with high-enough latency. On a m6id.xlarge AWS instance (with local SSDs), IOs are too quick. Also, if changing 8.4 to 8.0, or on my MacBook, the results are not the same because of the different value of innodb_flush_method (I cover this subject in a previous post : More than Flushing for innodb_flush_method). For having similar results on 8.0 and on Linux, either innodb_flush_method must be changed, or my trick for simulating a server with less RAM must be used (I have not tested these two, I am making an educated guess). On macOS, and because of caching, I was not able to replicate these results (I did not try hard; I might have been able to reproduce with a table larger than my 24 GB of RAM, but this would have taken too much time).
# Redo the DELETEs of annex #1 with a smaller InnoDB Buffer Pool.
{
import_no_index
./use <<< "SET GLOBAL innodb_buffer_pool_size = 1024*1024*1024"
while sleep 1; do
  sql="SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status'"
  ./use -N <<< "$sql" | grep -q -e "Completed resizing" -e "Size did not change" && break
done | pv -tN resize
block_purge
for i in {0..9}; do
  echo
  query_no_index | sed -ne "s/real/$i SELECT /p"
  delete_1000_10
done
unblock_purge
}
2.80GiB 0:00:11 [ 249MiB/s] [ 249MiB/s]
import: 0:01:33
resize: 0:00:02

0 SELECT  0m0.019s
0 DELETE 1000  0m0.200s
0 DELETE   10  0m21.970s

1 SELECT  0m4.142s
1 DELETE 1000  0m1.528s
1 DELETE   10  0m22.769s

2 SELECT  0m4.210s
2 DELETE 1000  0m2.775s
2 DELETE   10  0m24.160s

3 SELECT  0m8.561s
3 DELETE 1000  0m9.148s
3 DELETE   10  0m29.222s

4 SELECT  0m16.747s
4 DELETE 1000  0m16.181s
4 DELETE   10  0m38.464s

5 SELECT  0m23.264s
5 DELETE 1000  0m22.713s
5 DELETE   10  0m44.859s

6 SELECT  0m28.583s
6 DELETE 1000  0m28.348s
6 DELETE   10  0m50.855s

7 SELECT  0m34.600s
7 DELETE 1000  0m34.435s
7 DELETE   10  0m56.723s

8 SELECT  0m40.719s
8 DELETE 1000  0m40.533s
8 DELETE   10  1m2.711s

9 SELECT  0m46.433s
9 DELETE 1000  0m46.491s
Annex #3 : Inefficient Row Deletion Job (via an Index)
Below needs to be run after Annex #1 (or #2).
# Start back from the saved table, add an index, and save it again.
{
./use <<< "SET GLOBAL innodb_buffer_pool_size = 12*1024*1024*1024"
import_no_index
./use test_jfg <<< "ALTER TABLE t1 ADD INDEX v (v)" | pv -tN index
./use test_jfg <<< "FLUSH TABLE t1 FOR EXPORT" | pv -tN export
./use test_jfg <<< "
  DROP TABLE IF EXISTS t1_bak2;
  CREATE TABLE t1_bak2 LIKE t1;
  FLUSH TABLE t1 FOR EXPORT;
  system cp data/test_jfg/t1.cfg data/test_jfg/t1.cfg.bak2
  system pv -btrae data/test_jfg/t1.ibd > data/test_jfg/t1.ibd.bak2"
function import_index() {
  ./use test_jfg <<< "
    DROP TABLE IF EXISTS t1;
    CREATE TABLE t1 like t1_bak2;
    ALTER TABLE t1 DISCARD TABLESPACE"
  ( cd data/test_jfg; pv -btrae t1.ibd.bak2 > t1.ibd; cp t1.cfg.bak2 t1.cfg; )
  ./use test_jfg <<< "ALTER TABLE t1 IMPORT TABLESPACE" | pv -tN import
}
}
2.80GiB 0:00:14 [ 197MiB/s] [ 197MiB/s]
import: 0:01:45
index: 0:03:34
export: 0:00:00
4.38GiB 0:00:45 [97.8MiB/s] [97.8MiB/s]

# SELECT LIMIT 1 via the index is fast as expected.
{
function query_index() {
  { time ./use test_jfg <<< "
      SELECT * FROM t1 FORCE INDEX (v) WHERE v < 9 ORDER BY v, id LIMIT 1"
  } 2>&1 | grep real
}
function query_index3() {
  for i in {0..2}; do query_index; done
}
query_index3
}
real 0m0.764s
real 0m0.007s
real 0m0.018s

# Do similar DELETEs as in annex #1, but via the index.
{
block_purge
for i in {0..9}; do
  echo
  for j in 1000 10; do
    test $i -ne 9 -o $j -ne 10 || break
    sql="DELETE FROM test_jfg.t1 WHERE v < 9 ORDER BY v,id LIMIT $(($n/$j))"
    { time ./use <<< "$sql"; } 2>&1 |
      sed -ne "s/real/$i DELETE $(printf "%4d" $j) /p"
  done
done | tail -n +2
}
0 DELETE 1000  0m0.869s
0 DELETE   10  1m19.220s

1 DELETE 1000  0m2.130s
1 DELETE   10  1m28.115s

2 DELETE 1000  0m3.502s
2 DELETE   10  1m22.060s

3 DELETE 1000  0m4.930s
3 DELETE   10  1m24.297s

4 DELETE 1000  0m6.371s
4 DELETE   10  1m22.513s

5 DELETE 1000  0m7.831s
5 DELETE   10  1m21.671s

6 DELETE 1000  0m9.267s
6 DELETE   10  1m24.000s

7 DELETE 1000  0m10.592s
7 DELETE   10  1m25.798s

8 DELETE 1000  0m12.087s
8 DELETE   10  1m20.632s

9 DELETE 1000  0m12.884s

# SELECT LIMIT 1 via the index is now slow.
query_index3; unblock_purge
real 0m2.573s
real 0m2.574s
real 0m2.575s
Annex #4 : Inefficient Batch Job using an Index as a Queue
Below needs to be run after Annex #3.
# Start back from the saved table, show fast query and blocks the InnoDB Purge,
# then process / UPDATE all rows WHERE v < 8, show slow query and unblock.
# (processing is done in a single transaction for simplicity)
# (if doing it via an UPDATE LIMIT, we would see queries slowing down)
{
import_index
query_index3; block_purge
./use test_jfg <<< "UPDATE t1 SET v = v + 10 WHERE v < 8" | pv -tN UPDATE
query_index3; unblock_purge
}
4.38GiB 0:01:33 [47.8MiB/s] [47.8MiB/s]
import: 0:03:15
real 0m0.170s
real 0m0.006s
real 0m0.006s
UPDATE: 0:08:53
real 0m2.304s
real 0m2.291s
real 0m2.292s
Annex #5 : Efficient Row Deletion Job (via an Index)
Below needs to be run after Annex #3.
# Start back from the saved table, show fast query and blocks the InnoDB Purge,
# then delete via the index without scanning delete-marked rows,
# and finally show slow query to prove the delete-marked rows are there.
# (this is still a suboptimal row deletion job, a follow-up post explains a better way)
# (the hint "FORCE INDEX(v)" is important for the query to fail if the index is dropped)
# (and it is important to deal with the case where the SELECT OFFSET returns no row)
{
import_index; query_index3; block_purge; echo
sql1="SELECT id, v FROM test_jfg.t1 FORCE INDEX(v) WHERE v < 9"
sql2="ORDER BY v, id LIMIT 1"
res="$(./use -N <<< "$sql1 $sql2")"
id=$(awk '{print $1}' <<< "$res"); v=$(awk '{print $2}' <<< "$res")
for i in {0..9}; do
  echo
  for j in 1000 10; do
    test $i -ne 9 -o $j -ne 10 || break
    where="(v = $v AND id >= $id OR v > $v)"
    sql="$sql1 AND $where $sql2 OFFSET $(($n/$j))"
    { time ./use <<< "$sql" > res.txt; } 2>&1 |
      sed -ne "s/real/$i SELECT $(printf "%4d" $j) /p"
    res="$(tail -n +2 res.txt)"; rm res.txt
    id=$(awk '{print $1}' <<< "$res"); v=$(awk '{print $2}' <<< "$res")
    sql="DELETE FROM test_jfg.t1 WHERE v < 9"
    test "$res" == "" || sql="$sql AND ($where AND (v < $v OR v = $v AND id < $id))"
    test "$res" != "" || sql="$sql AND ($where)"
    { time ./use <<< "$sql"; } 2>&1 |
      sed -ne "s/real/$i DELETE $(printf "%4d" $j) /p"
  done
  test "$res" != "" || break
done | tail -n +2
echo; query_index3; unblock_purge
}
4.38GiB 0:02:07 [35.1MiB/s] [35.1MiB/s]
import: 0:03:15
real 0m0.184s
real 0m0.006s
real 0m0.006s

0 SELECT 1000  0m0.055s
0 DELETE 1000  0m0.606s
0 SELECT   10  0m3.153s
0 DELETE   10  1m29.189s

1 SELECT 1000  0m0.694s
1 DELETE 1000  0m0.669s
1 SELECT   10  0m11.638s
1 DELETE   10  1m15.231s

2 SELECT 1000  0m0.719s
2 DELETE 1000  0m0.952s
2 SELECT   10  0m12.411s
2 DELETE   10  1m15.090s

3 SELECT 1000  0m0.099s
3 DELETE 1000  0m0.727s
3 SELECT   10  0m13.101s
3 DELETE   10  1m16.836s

4 SELECT 1000  0m0.776s
4 DELETE 1000  0m0.651s
4 SELECT   10  0m11.816s
4 DELETE   10  1m17.654s

5 SELECT 1000  0m0.736s
5 DELETE 1000  0m0.648s
5 SELECT   10  0m10.695s
5 DELETE   10  1m14.648s

6 SELECT 1000  0m0.723s
6 DELETE 1000  0m0.678s
6 SELECT   10  0m12.112s
6 DELETE   10  1m13.690s

7 SELECT 1000  0m0.236s
7 DELETE 1000  0m0.696s
7 SELECT   10  0m13.254s
7 DELETE   10  1m13.730s

8 SELECT 1000  0m0.758s
8 DELETE 1000  0m0.888s
8 SELECT   10  0m11.691s
8 DELETE   10  1m7.675s

real 0m3.161s
real 0m2.640s
real 0m2.601s