Archive for the ‘instrumentation’ tag
I like to write tools that make hard things easy, when possible. By and large, MySQL is easy and simple. But some simple things are too hard with MySQL. I want to change that, at least for the things that matter the most to me, and which I think I know how to fix.
I will probably write a lot about this. I have already written a number of rants (er, blog posts) about the lack of instrumentation in MySQL, and that is where I’ll probably continue to put most of my energy.
To begin with, imagine this simple scenario. You are a remote DBA. Your client says “New Relic is showing periods of slow response time from the database.” You connect to MySQL at the command line and try to troubleshoot. How do you catch the problem in action, from within the database itself? The following are no good:
- It doesn’t count to see the problem two minutes later by observing the application tier, as New Relic does. That’s too late, and it’s not from within the database.
- It’s also not good enough to catch just after the slow queries finish executing, which you can get from the slow query log if you have a realtime log analyzer running; that is a) also after the fact, and b) again not possible from within the database itself.
- Watching TCP traffic or using a proxy is similarly off limits.
This is a database, a complex piece of software — comparable to an operating system in some respects. It should be possible to know that response time is spiking while the slow queries are executing slowly, before they even finish and return their results to the application. This is a simple question that should be easy to answer.
It’s not currently possible inside MySQL, because MySQL doesn’t tell you how much time queries spend executing. It’s that simple. Fortunately the fix is equally simple: measure how long queries spend executing.
We can look at the lowly iostat (or /proc/diskstats in Linux) for an example of how to solve this. We need a counter that shows the sum of execution time, including currently executing queries. I saw that the Facebook patch adds counters similar to this. Search for “Query timing” on that page. I want something slightly more complex, the way Linux’s IO counters work, because they have memory. Simple is really beautiful; a counter that has memory is an amazing thing, and you can apply Little’s Law to derive a surprising amount of information from it.
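To make that concrete, here is a sketch of how a tool could derive utilization, throughput, concurrency, and latency from two snapshots of cumulative counters, the way iostat does with /proc/diskstats. The counter names are illustrative, not real MySQL status variables; the key one is the “counter with memory,” a running sum of (operations in progress) over time, which is what makes Little’s Law applicable.

```python
# Sketch: deriving metrics from two samples of cumulative counters,
# iostat-style. Counter names are hypothetical, not real MySQL variables.

def derive(sample1, sample2, interval_s):
    """Apply Little's Law to two snapshots of cumulative counters.

    Each sample is a dict with:
      ops         - total operations completed (count)
      busy_us     - total time at least one op was in progress (microseconds)
      weighted_us - time-weighted sum of ops in progress: the counter "with memory"
    """
    ops = sample2["ops"] - sample1["ops"]
    busy_s = (sample2["busy_us"] - sample1["busy_us"]) / 1e6
    weighted_s = (sample2["weighted_us"] - sample1["weighted_us"]) / 1e6

    throughput = ops / interval_s               # lambda: completions per second
    utilization = busy_s / interval_s           # fraction of time busy
    concurrency = weighted_s / interval_s       # L: average ops in progress
    latency = weighted_s / ops if ops else 0.0  # W = L / lambda (Little's Law)
    return throughput, utilization, concurrency, latency

# Two snapshots taken 60 seconds apart:
s1 = {"ops": 1000, "busy_us": 40_000_000, "weighted_us": 90_000_000}
s2 = {"ops": 1600, "busy_us": 70_000_000, "weighted_us": 180_000_000}
tput, util, conc, lat = derive(s1, s2, 60.0)
print(tput, util, conc, lat)  # 10.0 ops/s, 0.5 busy, 1.5 concurrent, 0.15 s each
```

Note that because the weighted counter keeps growing while queries are still running, a spike in response time shows up in the very next sample, before the slow queries finish: exactly the property the remote-DBA scenario above needs.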
I might change my opinion about some of the more complex things that are being added to MySQL; much smarter people have, and I’m probably a hold-out because I’m not that smart. But I still see the need for simple combinations of counters and timers for essential metrics, which do not need complex queries or tools to use. I have a few more things on my wishlist, which I’ll write about later.
In an application such as a database server, instrumentation is like sex: it’s not enough to know how often things happen. You also care about how long they took, and in many cases you want to know how big they were.
“Things” are the things you want to optimize. Want to optimize queries? Then you need to know what activities that query causes to happen. Most systems have at least some of this kind of instrumentation. If you look around at… let’s not pick on the usual targets… oh, say Sphinx, Redis, and memcached. What metrics do they provide? They provide counters that say how often various things happened. (Most of these systems provide very few and coarse-grained counters.) That’s not very helpful. So I read from disk N times, and I read from memory N times, and I compared rows N times… so what? I still don’t know anything relevant to execution time.
That’s why we need to measure how long things took. It’d be great if, for every Handler_X counter variable in MySQL’s SHOW STATUS, we also had a Handler_X_time in microseconds. True, better instrumentation could be designed, but that’d be a huge step forward already. Instead of guessing at the significance of Sort_rows, we could look at Sort_rows_time and see if a lot of time is being consumed sorting rows!
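Here is a small sketch of what such paired counters would buy you. The _time variables are hypothetical — SHOW STATUS really does have Sort_rows, but there is no Sort_rows_time; the point is only that dividing a time sum by its matching count turns a meaningless tally into an average cost per operation.

```python
# Hypothetical paired count/time status variables. The *_time entries
# do not exist in real MySQL; they illustrate the proposal above.

status = {
    "Sort_rows": 1_200_000,                   # real counter: rows sorted
    "Sort_rows_time": 4_800_000,              # hypothetical: total microseconds sorting
    "Handler_read_rnd_next": 9_500_000,       # real counter: sequential row reads
    "Handler_read_rnd_next_time": 1_900_000,  # hypothetical: total microseconds reading
}

def time_per_op(status, name):
    """Average microseconds per operation for a Name / Name_time pair."""
    count = status[name]
    total_us = status[name + "_time"]
    return total_us / count if count else 0.0

for name in ("Sort_rows", "Handler_read_rnd_next"):
    total_s = status[name + "_time"] / 1e6
    print(f"{name}: {total_s:.1f}s total, {time_per_op(status, name):.2f}us each")
```

With these numbers you can see at a glance that sorting consumed 4.8 seconds in total while row reads consumed only 1.9, which is the question the raw counters cannot answer.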
Next, we need to know how big things are. This is a rather generic term, but in a lot of cases, the size, difficulty, or some other metric of an operation is important. “I wrote a message to the network socket” is okay; “I wrote and it took N microseconds” is better; and “I wrote N bytes and it consumed M microseconds” is best. Made a temp table on disk? Nice — how big was it? Mine’s smaller than yours!
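The three levels of detail above can be sketched as one small structure. This is an illustration of the idea, not any real server’s instrumentation; the names are made up, and the example uses network writes as the instrumented operation.

```python
# Sketch: counting how often, how long, and how big for one kind of
# operation (network writes here). All names are illustrative.

from dataclasses import dataclass

@dataclass
class WriteStats:
    count: int = 0     # how often: writes performed
    total_us: int = 0  # how long: microseconds spent writing
    nbytes: int = 0    # how big: bytes written

    def record(self, elapsed_us, size):
        """Called once per completed write with its duration and size."""
        self.count += 1
        self.total_us += elapsed_us
        self.nbytes += size

stats = WriteStats()
stats.record(elapsed_us=120, size=4096)
stats.record(elapsed_us=300, size=16384)

print(stats.count)                   # how often: 2 writes
print(stats.total_us / stats.count)  # how long on average: 210.0 us
print(stats.nbytes / stats.count)    # how big on average: 10240.0 bytes
```

Each extra field is one addition per operation, which is why the “best” form costs barely more than the “okay” form to collect.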
The final way that instrumentation is like sex: more is better, to an extent. You can get too much, but how often has that happened to you?
To program is human, to instrument is divine. Complex systems that will support a heavy workload will eventually have to be tuned for it. There are two prerequisites for tuning: tunability, and measurability.
Tunability generally means configuration settings. Adding configuration settings is a sign of a humble and wise programmer. It means that the programmer acknowledges “I don’t understand how this system will be used, what environment it will run in, or even what my code really does.” Sometimes things are hard-coded. InnoDB is notorious for this, although don’t take that to mean that I think Heikki Tuuri isn’t humble and wise — nobody’s perfect. Sometimes programmers set out to create systems that are self-tuning. I’m not aware of any success stories I can point to in this regard, but I can point to plenty of failures. Perhaps I can’t think of any successes because I don’t need to.
Measurability (instrumentation) is the next sign of a wise and humble programmer. If your system must be tuned, then it needs to be measured to enable wise decisions. There are at least two important kinds of metrics — a subject for another blog post. Most large systems I’ve worked with (primarily database systems, but operating systems too) are seriously lacking in measurability. A programmer who makes the system measurable acknowledges “I might be wrong, and if I am, it’s a good thing to enable people to prove it,” and realizes that “you cannot improve what you cannot measure.”
Complex, high-load systems get micro-optimized, making them even more opaque. By the time an I/O operation in InnoDB reaches the disk, it’s often impossible to blame it on a specific query. That’s not just because of a lack of instrumentation: even with perfect instrumentation, I/O operations wouldn’t map one-to-one onto user actions. Optimization does that, because a lot of optimizations are about deferring, anticipating, or combining work. That makes instrumentation even more important.
This weekend, I heard conflicting stories about instrumentation in Postgres. Someone claimed to have offered patches with a detailed set of instrumentation (I’d also heard this story from someone else at the same company, six months ago in a different place). He told me that the maintainers had declined it on the basis of the added overhead. Someone else told me that no such offer had been made, at least not in public where the decision could be taken to the mailing lists. I don’t know what’s true. I do know that stock Postgres is virtually un-instrumented in ways that matter a lot. The same can be said of MySQL, although interestingly the Venn diagram of the ways these two projects are instrumented doesn’t overlap all that much.
The performance and maintenance cost of adding instrumentation to an application pales in comparison to the benefits. There’s a famous quote from Oracle guru Tom Kyte, who, when asked about the cost of Oracle’s performance instrumentation, estimated it at negative ten percent. That is, without the ability to measure Oracle and thus improve it, it’d be at least ten percent slower. I think ten percent is a modest estimate for most systems I work with.