In February I added a new feature to my Sysbench version that I use in
my MySQL Cluster testing. This feature adds a new column, called filter, to the
table; it contains the same value as the primary key.
With this new column I can easily change the range scan queries in
sysbench from returning 100 rows to scanning 100 rows and
returning 1 row. This means that sysbench can benchmark the filtering
performance of the underlying database engine.
Next I ran tests where I set the number of rows in the range to
10,000. This new test was a perfect vehicle for improving the performance
of scan filtering in NDB.
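To make this concrete, here is a sketch of the two query variants; the sbtest1 table and column names follow the standard sysbench schema, but the exact query text in my modified sysbench may differ:

```cpp
// Illustrative only: sbtest1 with primary key id follows the standard
// sysbench schema; filter is the added column that mirrors the key.
const char *plain_range_scan =
    "SELECT c FROM sbtest1 WHERE id BETWEEN 1 AND 100";

// With the extra predicate the engine still scans all 100 rows in the
// range, but only the single row where filter matches is returned, so
// the benchmark measures filtering cost rather than row-return cost.
const char *filtered_range_scan =
    "SELECT c FROM sbtest1 WHERE id BETWEEN 1 AND 100 AND filter = 1";
```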
Filtering one row in 7.6.7 in this sysbench test costs about 750 ns.
When I started out optimising these 750 ns I didn't expect so much
improvement to be possible, but using perf it was possible to pinpoint
the wasted CPU time very precisely. One interesting find was a bitmask
class that zeroed its storage in the constructor; it turned out that
this constructor was called twice while filtering a row, and neither
call was required. Fixing this simple thing removed about 20 ns of CPU
usage, in this case a 3-4% performance improvement.
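To illustrate the kind of waste involved, here is a minimal hypothetical sketch; the real NDB bitmask class looks different, but the principle is the same:

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch, not the actual NDB class: a bitmask whose
// constructor zeroes its storage on every construction.
struct ZeroingBitmask {
  uint32_t data[4];
  ZeroingBitmask() { std::memset(data, 0, sizeof(data)); }
};

// If such an object is constructed on the hot path only to be fully
// overwritten before it is read, the memset is pure overhead.  A
// trivial default constructor removes it and leaves initialisation to
// the caller that actually needs it.
struct TrivialBitmask {
  uint32_t data[4];
  TrivialBitmask() = default;  // no zeroing on construction
};
```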
As you can see, these are micro-optimisations, and for those perf is a
splendid tool.
One of the biggest reasons for bad performance in modern software
applications is instruction cache misses. Most modern software
is packed with features, and handling them all requires a lot of code.
The compiler has a hard time knowing which code is the common
path and which path is the error handling path.
In the MySQL code we have two macros, likely and unlikely, that
hint to the compiler which code path to optimise for.
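These macros are built on the compiler's __builtin_expect. A sketch of how they are typically defined and used on a hot path follows; the exact definitions in the MySQL headers may differ slightly:

```cpp
// Branch prediction hints of the kind used in the MySQL source, built
// on GCC/Clang's __builtin_expect.
#define likely(x)   __builtin_expect(((x) != 0), 1)
#define unlikely(x) __builtin_expect(((x) != 0), 0)

// Hypothetical filtering hot path: marking the error branch unlikely
// lets the compiler lay out the common path as straight-line code,
// which keeps the hot instructions together in the instruction cache.
int filter_row(const long *row_filter, long key) {
  if (unlikely(row_filter == nullptr))
    return -1;                   // error path, moved out of line
  if (likely(*row_filter != key))
    return 0;                    // common case: row is filtered out
  return 1;                      // rare case: row matches the filter
}
```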
In the code path I was optimising, I found roughly 1 billion
instruction cache misses over a short period (20 seconds if I remember
correctly). I managed with numerous changes to decrease the number
of instruction cache misses to 100 million in the same amount of time.
I also found some simple fixes that cut away a third of the processing time.
In the end the cost had been brought down to around 250 ns. So comparing
the performance of this scan filtering with 7.5.10, we have optimised
this particular code path by 240%.
While developing these scan filtering improvements, I discovered
that some of the optimisations could also be applied to searching in our
ordered indexes. This means that the index rebuild phase of a restart
will go faster, although I haven't measured the exact impact of this yet.
It also means that any application using ordered indexes will go a lot faster.
For example, the performance of a standard Sysbench OLTP RW benchmark
with one added ordered index column improves by 70% in 7.6.8
compared to earlier versions of 7.6 and 7.5.