Thursday, June 04, 2009

Results of shootout on split page hash in InnoDB

I have now tried out the split page hash patches for the InnoDB
buffer pool on both a Linux/x86 box and a SPARC/Solaris server
(the SPARC tests were done by Dimitri).

In short, the three variants are:
1) The Google v3 derived patch. This introduces a new array
of mutexes that protect only the buffer page hash, so some
extra checking is needed to ensure the page hasn't been
removed from the hash before using it. From that point of view
it is a very simple and attractive patch. It uses an array of
64 mutexes. The lookup paths of the variants are sketched in
code after this list.

2) A variant I developed with some inspiration from the Percona
patches. This patch uses an array of page hashes, each with
its own read-write lock. I've tried this with 1, 4 and 16 page
hashes, and 4 is the optimum number. The rw-lock is held on the
page hash long enough to guarantee that the block cannot be
removed from the hash before its block mutex is acquired, so no
re-check is needed.

3) The last variant is a mix of the first two: it keeps the
simplicity of the Google patch but uses rw-locks instead of
mutexes, together with separate page hashes (so that read-ahead
doesn't have to acquire all the mutexes). It uses an array of
4 page hashes.
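
To make the difference between the locking schemes concrete, here
is a minimal sketch in C++ of the lookup paths. This is illustrative
code and not the actual patches: buf_page_t is stripped down to the
parts the sketch needs, fold stands for the page's hash value, and
the in_hash flag stands in for the real checks InnoDB does.

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <mutex>
    #include <shared_mutex>
    #include <unordered_map>

    // Stand-in for InnoDB's buffer page with only what the sketch needs.
    struct buf_page_t {
        std::mutex block_mutex;  // per-block mutex
        bool in_hash = true;     // cleared when the page leaves the hash
    };

    // Variant 1 (Google v3 style): one page hash protected by an array
    // of 64 mutexes. The hash mutex is released before the block mutex
    // is taken, so the caller must re-check that the page is still
    // hashed. Eviction locks the same stripe mutex, removes the entry
    // and clears in_hash under the block mutex.
    class PageHashV1 {
        static constexpr std::size_t N_MUTEXES = 64;
        std::array<std::mutex, N_MUTEXES> mutexes;
        std::unordered_map<std::uint64_t, buf_page_t*> hash;
    public:
        buf_page_t* get(std::uint64_t fold) {
            buf_page_t* bpage;
            {
                std::lock_guard<std::mutex> guard(mutexes[fold % N_MUTEXES]);
                auto it = hash.find(fold);
                if (it == hash.end()) return nullptr;
                bpage = it->second;
            }                                 // hash mutex released here
            bpage->block_mutex.lock();
            if (!bpage->in_hash) {            // extra check: the page may have
                bpage->block_mutex.unlock();  // been evicted in the window above
                return nullptr;               // caller retries the lookup
            }
            return bpage;                     // returned with block_mutex held
        }
    };

    // Variant 2: the hash itself is split into partitions, each with
    // its own rw-lock. Removal takes the write lock first and then the
    // block mutex, so a reader holding the read lock knows the block
    // cannot be removed before its mutex is acquired; no re-check.
    class PageHashV2 {
        static constexpr std::size_t N_PARTS = 4;  // 4 was the optimum found
        struct Partition {
            std::shared_mutex lock;
            std::unordered_map<std::uint64_t, buf_page_t*> hash;
        };
        std::array<Partition, N_PARTS> parts;
    public:
        buf_page_t* get(std::uint64_t fold) {
            Partition& part = parts[fold % N_PARTS];
            std::shared_lock<std::shared_mutex> guard(part.lock);
            auto it = part.hash.find(fold);
            if (it == part.hash.end()) return nullptr;
            buf_page_t* bpage = it->second;
            bpage->block_mutex.lock();  // safe while the read lock is held
            return bpage;               // read lock released on scope exit
        }
    };

    // Variant 3 combines the two: partitioned hashes with rw-locks as
    // in PageHashV2, but the lookup releases the rw-lock before taking
    // the block mutex and then re-checks, as in PageHashV1.

The trade-off shows up directly in the sketch: variant 1 has the
simpler locking structure but pays with the extra check and a retry
path, while variant 2 avoids the re-check by holding the rw-lock
across the acquisition of the block mutex.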

The conclusion is that the only version that consistently
improves on the MySQL 5.4.0 numbers is the variant I originally
developed (2 above).

On sysbench read-write, all variants improve on MySQL 5.4.0:
2) and 3) improve by 2%, whereas the original Google patch
improves by 1%.

On sysbench read-only on Linux it was much harder to beat
MySQL 5.4.0. Only 2) did so, and only by 0.5%. This is not so
surprising, since the page hash mutex is not a bottleneck for
read-only workloads. 1) gave -1% and 3) gave -0.3%.

On a write-intensive workload on Linux, 1) and 3) performed 0.5%
better than MySQL 5.4.0, whereas 2) gave a 2% improvement.

Finally, on a sysbench read-write run with fewer reads on Linux,
all variants lost to MySQL 5.4.0: 1) by 2%, 2) by 0.1% and 3) by
1%.

The numbers from SPARC/Solaris show a similar pattern. The major
difference is that the positive impact on the SPARC server is
much bigger, up to 30% improvements in some cases. The most
likely reason is that SPARC servers have bigger CPU caches and
are thus held back more by lack of concurrency than by an
increased working set. The x86 box had 512kB of cache per core
and a 2MB L3 cache, and is likely to be very sensitive to any
increase in the working set.

So the likely explanation for the worse numbers in some cases is
that more mutexes or rw-locks give more cache misses.

Given this outcome, I will continue to see whether I can keep the
simplicity of the Google patch while maintaining the improved
performance of my patch.

2 comments:

  1. Great work. I can't wait to try out your patches.

  2. In the last result, where all versions lost to 5.4.0, you should use negative numbers for the percentages. It would be consistent with the rest of the post.
