Using MySQL to benchmark OS performance


Author: Tony Bourke

It seems to be an exciting time for *nix operating systems, with a number of them recently releasing new versions that bring expanded features and claims of improved performance. If you’re using GNU/Linux, OpenBSD, NetBSD, FreeBSD, or Solaris as a database server, you’ve probably considered an upgrade, or a switch to another OS on that list, due to marketing hype and hearsay. This article will show you how to benchmark operating system performance using MySQL on these OSes, so you can find out for yourself whether you’re missing out. While the results may not be indicative of overall system performance, or even of overall database application performance, they will tell you specifically how well MySQL performs on your platform.

The following operating systems were used for the comparison testing:

  • FreeBSD 4.11
  • FreeBSD 5.3
  • NetBSD 2.0
  • Linux 2.6
  • Linux 2.4
  • Solaris 10 x86 (build 69)
  • OpenBSD 3.6

To test each operating system, I used a single hardware configuration:

  • (2) 1 GHz Pentium III processors, 256 KB cache
  • 512 MB ECC Registered RAM (2×256 MB)
  • (1) 9 GB SCSI-160 (7200 RPM)
  • SuperMicro 370 Motherboard
  • AIC-7899 SCSI Bus

As with any experiment, this started out with a simple question: Given a common hardware configuration, how would the same database software perform on a variety of operating systems? Since I’ve got a dual-CPU system, I thought I might as well revise the question: Given the same hardware, how would the same database software perform on a variety of operating systems running in single and dual-CPU modes?

Cautionary note

Performance benchmarks involving operating systems are among the most controversial and contentious articles that get published, and there are a number of reasons for this. Benchmarks have often been used, both blatantly and deceptively, to push various agendas. They are also easy to criticize because they are, by nature, limited in scope, and that limited scope can be portrayed as a weakness by anyone trying to discredit the results.

These MySQL tests cannot possibly encompass all the possible combinations of performance tweaks and real-world variables, nor can they encompass every potential real-world workload. MySQL is used in many environments for a variety of tasks, from web content management systems and online message boards to non-web data warehouses.

Benchmarks, if done well, give an indication of how you can expect a system to perform in your own installation, but unless the tests are an exact duplicate of your infrastructure, your results will probably differ.

Choosing a Benchmark

Initially I had intended to benchmark both PostgreSQL and MySQL in order to have a variety of database applications. The goal was not to compare the two in terms of performance, but to give the operating systems a more diverse application load and so gain more insight into the performance characteristics of each. However, once I weighed the time it would take to find, implement, and run appropriate benchmarks, multiplied across six different operating systems, I quickly realized that time constraints would limit me to MySQL only.

A fundamental challenge to all benchmarks is figuring out how to appropriately replicate real-world scenarios. A logistical challenge was to find tests that would work for a variety of operating systems and environments, and provide consistent results.

PostgreSQL’s pgbench

For testing PostgreSQL, I took a look at pgbench. It only works with PostgreSQL, and while I’ve been able to get it to compile on multiple platforms, the results I got varied by 30-100% or more from run to run, which gives me very little confidence in them. It’s not just me; other pgbench users and published benchmarks have noted this as well. Because of the wide variation, I decided this benchmark was unsuitable.

MySQL’s sql-bench

The sql-bench utility is Perl-based and generally included with MySQL. I’ve had little trouble getting sql-bench to run on any operating system I’ve tried so far. It provides a comprehensive collection of database workloads, although not a particularly realistic one. As Peter Zaitsev from MySQL put it when I contacted him regarding this article and my choice of sql-bench: “Who would run 1,000 ALTER TABLE commands for the same table in a loop?”
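
For reference, the whole suite can be kicked off from the sql-bench directory with something along these lines. The directory location depends on how MySQL was installed, the user name and password are placeholders, and run-all-tests accepts a number of other switches for limiting and logging the tests:

cd /usr/local/mysql/sql-bench
perl run-all-tests --server=mysql --user=bench --password=secret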

OSDB

The Open Source Database Benchmark was originally designed to test the I/O throughput and general processing power of Linux on Alpha processors, but it is now being extended to other architectures. Unfortunately, I’ve not been able to get OSDB to run without crashing on most of the tests under Linux, and I’ve had little luck getting it to compile for other operating systems. OSDB is currently in the alpha stage (as stated on its SourceForge page), and it really shows. It looks quite promising, but it’s unusable for benchmarking purposes at this time.

Super Smack

Super Smack provides a great complement to sql-bench, and performs some fairly intensive select and update operations, which would relate well to Web-based database usage. Getting it to compile can be a bit of a challenge, though. I’ve been able to get binaries compiled for FreeBSD 4.10 and 5.3, OpenBSD 3.6, NetBSD 2.0 and Linux 2.6 and 2.4. In each section below, I’ve documented the changes I needed to make in order to get it to run on each operating system.

SysBench

I had been put in contact with Peter Zaitsev of MySQL while doing research for this article, and he recommended a couple of other benchmarks that hadn’t turned up in my initial search. One of them, SysBench, made it into this comparison. Among other tests, it includes an OLTP benchmark that works with MySQL (as well as other databases, although I did not test them). It’s capable of producing a CPU-bound workload (as with Super Smack) as well as an I/O-bound workload.

Remote versus Local

Another consideration is whether to run a benchmark remotely (over TCP) or locally. Running remotely has the benefit of generating a database workload without the benchmark tool itself competing with MySQL for resources on the server. However, MySQL runs faster through the local socket than it does over TCP (TCP was roughly 25% slower in the Super Smack tests, although those generate a very high rate of queries).

When I tried running Super Smack remotely, I noticed that the results were nearly uniform across the various operating systems. This struck me as strange, because running the tests locally had produced a wide variety of results. Even stranger, the local tests in some cases produced double the performance of the remote ones. Even taking into account the slower access through TCP versus the socket, the remote tests should have done better than they did.

Upon investigation, I found that the packet rate was simply too high: around 8,500 Ethernet frames per second. This caused an extremely high rate of interrupts, around 16,800 per second, regardless of which NIC I used (I tried the built-in Intel 10/100 as well as a SysKonnect-based Gigabit card). The rapid query rate meant small payloads (around 125 bytes per frame) and thus a high rate of packets; at that size, 8,500 frames per second amounts to only about 1 MB/s of traffic, so the bottleneck was interrupt handling rather than bandwidth.

I was able, however, to get SysBench running remotely, and its workload produced a much more manageable packet rate. This is important, as I could not get Super Smack to compile for Solaris, so the SysBench results are the only ones I have for that operating system.

Methodology

For each operating system, I erased the hard drive from the previous installation and installed fresh; the same hard drive was used for each install. The advantage of wiping clean is that each operating system had the full run of the hard drive, rather than certain operating systems getting “sweet spots” on the disk and thereby a slight advantage. By laying out the file systems in a similar manner for every operating system (swap first at 512 MB, then 1.5 GB for root, and /usr on the rest), I could further uphold consistency between the operating systems. The disadvantage was that if I needed to go back and run additional tests, I had to re-install, which added to the time this evaluation took.

I installed each operating system “stock” — with default options — and recompiled a kernel here and there when necessary, to add, for instance, SMP support in FreeBSD or increase memory allocation limits. For the most part, I didn’t do any special OS tweaking beyond what was specifically recommended for MySQL. Any changes to the OS I made are documented in their respective sections below.

Every operating system had 512 MB of swap (which was used very little). All of the BSDs had soft updates enabled, and Solaris had UFS logging enabled.

MySQL

I used the most recent (at the time I started this evaluation) MySQL 4.0.22 tarball, downloaded from MySQL’s site. While some of the operating systems had precompiled packages or ready-to-compile BSD ports of MySQL available, many were slightly older versions (4.0.20, for example) compiled with a variety of different options, which presented consistency problems. After all, I’m testing the operating systems, not their pre-packaged MySQL distributions or source builds.

I used my-large.cnf as the basic MySQL configuration file, which is what’s recommended for systems with 512 MB of RAM. I also added a few options for InnoDB for the SysBench test as recommended by Peter Zaitsev:

innodb_buffer_pool_size=256M
innodb_log_file_size=128M
innodb_log_buffer_size=8M
innodb_flush_log_at_trx_commit=1

I also disabled log-bin, since I’m not doing replication.
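
In my-large.cnf that just means commenting out (or removing) the binary-log directive; the snippet below is illustrative of what the relevant line ends up looking like in the 4.0-era sample files:

# Binary logging is only useful for replication and point-in-time recovery,
# so it was turned off for these tests:
#log-bin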

Compiling MySQL from source for each platform allowed me to carefully control compilation options so that the operating systems alone were the variable in this experiment.
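
Nothing exotic is required for the build itself. A rough sketch of a tarball build is below; the install prefix is arbitrary, gmake may be needed in place of make on the BSDs, the mysql_install_db step assumes a mysql user already exists, and the platform-specific wrinkles are covered in their sections later on:

tar xzf mysql-4.0.22.tar.gz
cd mysql-4.0.22
./configure --prefix=/usr/local/mysql
make                                    # gmake on the BSDs if the system make complains
make install
scripts/mysql_install_db --user=mysql   # initialize the grant tables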

Benchmarking procedure

In the end, I chose two benchmarking utilities: SysBench and Super Smack.

For Super Smack, I used the two included smack files update-select.smack and select-key.smack. I used the settings of 10 clients with 10,000 queries, for a total of 200,000 queries per run. For each operating system I ran each smack file 3 times, and averaged out the results.
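
For reference, Super Smack takes the smack file, the number of clients, and the number of rounds per client as its arguments. A run with the settings above looks roughly like this; the -d mysql switch follows the examples in the Super Smack documentation, and flags can vary between versions, so check the usage output of your build:

super-smack -d mysql update-select.smack 10 10000
super-smack -d mysql select-key.smack 10 10000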

For SysBench, I ran two different sets of runs as recommended to me by Peter Zaitsev. The first set, with 1 million rows for the table size, would give a CPU-bound result because of caching. These were run 3 times with the results averaged.

The second test used 10 million rows, and produced an I/O-bound result. Disk operations were particularly intense for this test so, as expected, the transaction rate was much lower. Because it took up to an hour for some operating systems to perform each run of these tests, I only ran them twice and averaged the results.
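
For those who want to reproduce these runs, the OLTP test in 0.4-era SysBench releases is driven roughly as follows. The credentials, host name, and thread count are placeholders, option names may differ in other SysBench versions, and only the --oltp-table-size value changes between the CPU-bound (1 million row) and I/O-bound (10 million row) sets:

# Create and populate the test table (use 10000000 for the I/O-bound set)
sysbench --test=oltp --mysql-user=bench --mysql-password=secret \
    --mysql-table-engine=innodb --oltp-table-size=1000000 prepare

# Run the benchmark; add --mysql-host=dbserver to drive it over TCP
# from a separate client machine instead of the local socket
sysbench --test=oltp --mysql-user=bench --mysql-password=secret \
    --oltp-table-size=1000000 --num-threads=8 run

# Drop the test table when finished
sysbench --test=oltp --mysql-user=bench --mysql-password=secret cleanup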

For all tests on all operating systems, the individual results were remarkably consistent (which is why I chose those tests), so the delta between individual runs was very low. All in all, this produced over 100 OpenOffice pages of notes.

Operating System Setup

For NetBSD and the two FreeBSDs, I had to recompile the kernels in order to increase the amount of memory available to the MySQL processes. I was getting several “error 12” failures, which indicated that MySQL was trying to malloc more memory but was unable to. The reason is a limit the BSDs all place on process size, with a hard default of 512 MB; increasing it requires a kernel recompile. When running the SysBench 10-million-row tests, MySQL was getting into the 600 MB range: not a lot above 512 MB, but enough to cause problems.
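
If you want to confirm the ceiling before rebuilding a kernel, the current data-segment limit is easy to check from a shell; which command applies depends on the shell you use:

ulimit -d        # sh/bash: maximum data segment size, in kilobytes
limit datasize   # csh/tcsh equivalent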

I used these kernel options:

options MAXDSIZ="(896*1024*1024)"
options MAXSSIZ="(896*1024*1024)"
options DFLDSIZ="(896*1024*1024)"

OpenBSD 3.6 has a default limit of 1 GB, so I didn’t need to make those changes there. For each BSD, I used two different kernels: one compiled with SMP support and one without. I used the GENERIC configurations unmodified, except for the above-mentioned changes and the addition of SMP support.
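
For the FreeBSD kernels, adding SMP support to a copy of GENERIC (alongside the memory-limit options above) is only a couple of lines. The sketch below is for 4.11; 5.3 should need only the SMP line since its GENERIC already includes apic support, and NetBSD instead ships a GENERIC.MP configuration to start from:

options         SMP             # Symmetric MultiProcessor kernel
options         APIC_IO         # Symmetric (APIC) I/O; required with SMP on 4.x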

OpenBSD 3.6

For single-processor tests I used the stock bsd kernel, and for SMP tests I used the stock bsd.mp kernel. The file system is OpenBSD’s FFS with soft updates enabled.

Super Smack won’t compile by default under OpenBSD since it lacks an OpenBSD definition, but it can be fixed with a few adjustments to client.cc.

On line 37 of client.cc, change __FreeBSD__ to __OpenBSD__. On line 51 of the same file, make the same __FreeBSD__ to __OpenBSD__ change. From there, Super Smack should compile fine under OpenBSD 3.4, 3.5, and 3.6 on both the x86 and SPARC64 architectures (I didn’t test other arches).
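
If you’d rather script the change than edit by hand, a blanket substitution accomplishes the same thing; check the resulting diff to make sure nothing unintended was touched:

# from the top of the Super Smack source tree
perl -pi.orig -e 's/__FreeBSD__/__OpenBSD__/g' src/client.cc
diff -u src/client.cc.orig src/client.cc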

NetBSD 2.0

Getting MySQL to compile on NetBSD 2.0 was easy and required no adjustments. Super Smack was a little trickier: it required the OpenBSD hack mentioned above (changing __FreeBSD__ to __NetBSD__ this time), plus adding a few header includes to src/client.cc and creating a tiny new header file containing this bit from Linux’s /usr/include/linux/sem.h:

union semun {
    int val;                  /* value for SETVAL */
    struct semid_ds *buf;     /* buffer for IPC_STAT & IPC_SET */
    unsigned short *array;    /* array for GETALL & SETALL */
    struct seminfo *__buf;    /* buffer for IPC_INFO */
    void *__pad;
};

With that adjustment, Super Smack compiled cleanly and I was able to run tests.

The FFS2 file system was used for all of the NetBSD 2.0 partitions, with soft updates enabled. I built two separate kernels, one for single-CPU and one for dual-CPU. They were based on GENERIC and included the process size increases I mentioned above.

The FreeBSDs

FreeBSD is the operating system I spent the most time on, because of the various options available for running MySQL on this platform. To explain why, it helps to have a little background on the somewhat problematic history FreeBSD has had with MySQL. It’s best summed up by this excerpt from a famous September 2002 blog entry by Jeremy Zawodny (the maintainer of Super Smack and co-author of High Performance MySQL): “FreeBSD is a great operating system, but it has one important weakness that MySQL is very good at highlighting: threading support. FreeBSD’s threads implementation isn’t very good. I won’t say that it sucks, because it could be a lot worse.”

That was in reference to FreeBSD as it existed in 2002; Zawodny has since posted a follow-up showing how those severe threading problems can be resolved by using a separate threading library known as linuxthreads. Still, the debate about FreeBSD performance occasionally flares up on the FreeBSD mailing lists, especially when FreeBSD is compared to Linux.

Today there are a few different threading libraries available for FreeBSD 5.3: KSE, which as of 5.3 is the default threading library in the FreeBSD 5 series; linuxthreads, an implementation of Linux threading on FreeBSD; and libc_r, which is still the native threading library for FreeBSD 4.11. That’s not even all of the available threading models, but those three are the ones I considered for this benchmarking project. There are also options for preemption and for process-scope versus system-scope threading in FreeBSD 5.

As you can see, the combinations of threading libraries and other environmental factors under which MySQL can be run are numerous; an entire article could easily be devoted to just testing those combinations. But, as these tests can take several hours, I limited combinations to those that were mainstream and seemed to have worked in tests run by others. For FreeBSD 5.3, I used KSE (the default) and linuxthreads. For FreeBSD 4.11, I used libc_r (the default) and linuxthreads.

I compiled my own MySQL 4.0.22 version with both the default KSE threading (easy) and linuxthreads (a little tricky). FreeBSD does have several versions of the MySQL server in Ports, but again I’m comparing operating systems using the same database, not the operating system’s packaged software.

Compiling MySQL using the standard KSE threading library was simple: I ran the configure script included with the MySQL 4.0.22 tarball to set everything up, and it compiled cleanly with a make command.

For linuxthreads, the Ports version of MySQL 4.0 gives you the option of building a linuxthreads server. That’s a fairly easy process, but again it builds a slightly older version of MySQL 4.0, and I wanted to keep things consistent, so I chose the more difficult route of compiling from my source tarball. I did use the Ports system to install the linuxthreads library itself (located in /usr/ports/devel/linuxthreads). From there, I peeked at the Ports configuration for a linuxthreads build, read the linuxthreads documentation, and with a little tweaking was able to build a linuxthreads MySQL binary from my source tarball. Alternatively, I could have modified /etc/libmap.conf to map linuxthreads to my KSE-built binary (that approach didn’t work in FreeBSD 4.10, and I didn’t try it in 4.11).
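
The linuxthreads build essentially boils down to pointing MySQL’s configure at the port’s headers and libraries via --with-named-thread-libs. The sketch below is pieced together from the linuxthreads notes in the MySQL 4.0 documentation and the port’s files rather than a drop-in recipe, so verify the paths, defines, and library names against your installed devel/linuxthreads port before using it:

./configure --prefix=/usr/local/mysql \
    --with-named-thread-libs="-DHAVE_GLIBC2_STYLE_GETHOSTBYNAME_R -D_THREAD_SAFE -I/usr/local/include/pthread/linuxthreads -L/usr/local/lib -llthread -llgcc_r"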

If you do use the Ports build, make sure to set the build option BUILD_OPTIMIZED=yes. When I checked the build environment, I saw that the Makefile in /usr/ports/databases/mysql40-server/ used -O as its default GCC optimization level, which is a very low level of optimization and results in a slower MySQL server; specifying BUILD_OPTIMIZED=yes turns on more aggressive GCC optimization. The MySQL configure script turns on -O3 by default, which is the maximum optimization level for GCC, so that’s the optimization I used.
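
If the Ports route suits you, the option is simply passed on the make command line in the usual Ports fashion:

cd /usr/ports/databases/mysql40-server
make BUILD_OPTIMIZED=yes install clean
# (the port also has a knob for the linuxthreads flavor discussed above;
#  check its Makefile for the exact variable name)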

For both FreeBSDs, I included the results for native threading (KSE for 5.3 and libc_r for 4.11) as well as linuxthreads.

Super Smack compiled cleanly under FreeBSD 5.3 with a simple fix to client.cc. FreeBSD 4.11 required the same fix, and it was also looking for getopt.h, so I installed libgnugetopt from the Ports collection. Once that was installed and the adjustments made to the Makefile, Super Smack compiled cleanly.

While running certain benchmarks (which aren’t included in this article), I ran into a “Too many connections” error that the other operating systems didn’t hit with the same test. It seemed to be related to a problem Jeremy Zawodny had posted about on the MySQL lists, but no fix was mentioned in the thread. Searching Google, I found a MySQL manual entry that includes a fix for running 1,000+ concurrent sessions on Linux. Since it deals with linuxthreads, it also applies to FreeBSD running linuxthreads, so I applied the fix to /usr/ports/devel/linuxthreads and was then able to run the connect test successfully. If you run 1,000+ concurrent connections on FreeBSD 4, you may want to keep that in mind.

Solaris Express (build 69)

For Solaris 10, the same kernel was used for both tests. The file system used was UFS with logging enabled (ZFS will not be available for a few more months).

Solaris does not ship with a compiler, and the GCC 3.3.2 package I pulled off the Sun Freeware site had been built under an older beta release of Solaris Express; it had several header issues and wouldn’t compile anything. I found and used a fix from the Sun forums (it’s also now posted on the Freeware site), and after that, MySQL compiled cleanly.

Getting Super Smack to compile, however, was not a productive endeavor. Super Smack uses flock(), which isn’t well supported in Solaris (and then only through BSD compatibility). Attempts to use libucb failed, along with everything else I tried, so I currently do not have any Super Smack results for Solaris 10.

Linux

For Linux, I used both the 2.4 and 2.6 kernels. Among the multitude of Linux distributions available, I ended up going with Gentoo 2004.3. Some of the other distributions I tried (such as Fedora Core) had trouble with the AGP interface on my SuperMicro motherboard and wouldn’t boot with the installed kernel. The kernel source I used for 2.4 was 2.4.29-gentoo-r5, and for 2.6 I used 2.6.10-gentoo-r6. With Gentoo it was also relatively easy to install NPTL for 2.6, which I used in the 2.6 tests (although it made no measurable difference compared to non-NPTL 2.6 results). MySQL 4.0.22 had no trouble compiling from source.

With Solaris and the BSDs, I had essentially one file system (UFS/FFS) available. Linux offers a multitude of file systems, including ext2, ext3, ReiserFS, JFS, XFS, and more. For these tests, I chose ReiserFS (version 3, not Reiser4).

Conclusion

I’d like to thank Peter Zaitsev of MySQL.com for his excellent input on this article. In fact, I had fully completed the article by the time I got in touch with him, and he made several recommendations on testing methodology that altered my procedures and results.

Assembling all the pieces took quite a bit of work and quite a bit of time (install the OS, set up MySQL, run tests, repeat, and repeat, and repeat…), but all in all I think this represents a comprehensive performance-testing scenario in which the same hardware and the same database are used across various Unix and Unix-like operating systems, both free and commercial. The process took several weeks (including two instances of starting from scratch) and about 30 blank CD-Rs.

After performing all of the tests and compiling the results, I found quite a few surprises that I think will challenge some tightly held assumptions about which operating systems are fast and which aren’t. In the next article, I’ll present the results for all six operating systems.