Archive for the ‘column store’ tag
I spent the day Thursday with some of Kickfire’s engineers at their headquarters. In this article, I’d like to go over a little of the system’s architecture and some other details.
Everything in quotation marks in this article is a quote. (I don’t use quotes when I’m glossing over a technical point — at least, not in this article.)
Even though I saw one of Kickfire’s engineers running queries on the system, they didn’t let me actually take the keyboard and type into it myself. So everything I’m writing here is still second-hand knowledge. It’s an unreleased product that’s in very rapid development, so this is understandable.
Kickfire’s TPC-H benchmarks are now published, so you can see the results of the work I’ve been watching them do. They are now #1 in the world in two categories. Visit them at their booth in the exhibition area at the conference, and you will be able to see more for yourself.
The big picture
At a high level, Kickfire is an appliance consisting of two or more commodity rack-mountable 1U pizza-box units.
One unit contains the Kickfire chip and a lot of standard, high-speed, server-grade ECC memory. This unit is what executes the queries at high speed.
The other unit is connected to the Kickfire chip unit via a standard PCIe interconnect. It runs stock CentOS 5, with MySQL 5.1. Kickfire has their own storage engine, which uses fairly well-known techniques such as column storage and compression.
To the outside world, the unit behaves just like an ordinary MySQL server. You connect to it in the same manner, you issue the same kinds of queries, you manage users and privileges the same way, and so on. However, when you run a query, it doesn’t get executed in the traditional MySQL manner (nested-loop joins with calls to the storage engine via the storage engine API). Instead, the query goes to the Kickfire chip and executes there. The chip is designed to execute queries very fast, through a variety of techniques that either a) I’m not allowed to tell you about yet, or b) remain unclear to me, because Kickfire was a little protective about some of my technical questions.
I met with quite a few people at Kickfire, but I’ll just mention one: Ravi Krishnamurthy. Before Kickfire approached me, I had not heard of him. Anyway, I’ll just link to Ravi Krishnamurthy on Google Scholar, and let you read up on his papers if you want. It’s enough to say that I really enjoyed speaking with him and the other people at Kickfire.
One of the overall impressions I got was that the Kickfire engineers aren’t the type to do something halfway. When complete, this is not intended to be a system that has only some of the features you’d expect.
The Kickfire chip has no registers. Instead, the Kickfire chip addresses a very large amount of memory directly. Remember, registers are a bottleneck. As I said in my first article on Kickfire, using registers to process large amounts of data is like using a paper cup to fill your bathtub. Allowing the chip to address this memory directly removes a huge bottleneck.
There is still on-disk storage, though. (And no, it’s not SSD.) The interconnect between the on-disk storage and the memory is a standard PCIe connection. Nothing exotic or proprietary. But the system is apparently capable of moving a very large amount of data at very high speed from the disks to the Kickfire chip’s memory, where it can be addressed in O(1) speed like an array lookup.
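With fixed-width values, that O(1) access is nothing more than offset arithmetic over a flat buffer. Here is a minimal Python sketch of the idea (my own illustration, not Kickfire’s implementation; the 4-byte width is an arbitrary choice):

```python
# Sketch: pack fixed-width column values into one flat buffer; row i
# then lives at byte offset i * width, so a lookup is plain arithmetic,
# with no search and no pointer chasing.

WIDTH = 4  # bytes per value in this hypothetical column

def pack_column(values, width=WIDTH):
    """Pack non-negative integers into a flat fixed-width byte buffer."""
    return b"".join(v.to_bytes(width, "big") for v in values)

def lookup(buf, i, width=WIDTH):
    """O(1) array-style access into the packed column."""
    off = i * width
    return int.from_bytes(buf[off:off + width], "big")

col = pack_column([10, 20, 30, 40])
print(lookup(col, 2))  # -> 30
```

The same arithmetic works whether the buffer lives on disk or in the chip’s directly addressed memory, which is presumably why the fixed width matters so much.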
Another interesting technique is that the system does not decompress the data to operate on it. According to the engineers, the queries run on the data in its compressed form. As Ravi told me, implementing this is “not for the faint of heart.”
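I can’t say how Kickfire does this, but one well-known way to run predicates on compressed data is dictionary encoding: encode the constant in the predicate once, then compare small integer codes instead of the original values. A sketch of that general technique (an assumption on my part, not Kickfire’s method):

```python
# Sketch of dictionary compression: values are replaced by small integer
# codes, and an equality filter runs on the codes directly, never
# decompressing a single row.

def dict_encode(values):
    """Build a sorted dictionary and encode each value as its index."""
    dictionary = sorted(set(values))
    code = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [code[v] for v in values]

def filter_eq(codes, dictionary, constant):
    """Return matching row ids without decompressing any row."""
    try:
        target = dictionary.index(constant)  # encode the constant once
    except ValueError:
        return []                            # constant not in the column
    return [i for i, c in enumerate(codes) if c == target]

dictionary, codes = dict_encode(["us", "uk", "us", "de", "us"])
print(filter_eq(codes, dictionary, "us"))  # -> [0, 2, 4]
```

Because the dictionary is sorted, order-preserving range predicates (`<`, `>`) can run on the codes the same way; doing this for joins and aggregation is where it stops being easy, which is presumably what Ravi meant.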
Kickfire seems to have really worked hard at removing bottlenecks wherever possible. For example, they’ve rewritten the out-of-the-box drivers for key pieces of the commodity hardware they’re using.
If you know how MySQL executes queries, the statement “Kickfire executes joins directly in the Kickfire chip” implies that the Kickfire system isn’t just a storage engine, because MySQL currently processes many of the most costly parts of queries at the server level, not in the storage engine. Obviously Kickfire is not going to perform well unless it changes that. Kickfire has in fact built their own optimizer, which replaces the MySQL optimizer. It compiles the incoming query into a series of macro-operations, which apparently are very similar to the basic relational operators (project, join, etc). This is then sent to the chip for execution, and as the chip produces results it injects them back into the stream of bytes that the server normally uses to send results back to the client.
The Kickfire chip doesn’t implement everything in hardware. For example, there is no MD5() function in the chip. When it encounters an operation it can’t do in hardware, it makes a call back to the MySQL server to fill in the gaps in its functionality.
The rewritten optimizer sounds like an interesting piece of engineering. Ravi told me with pride that the optimizer is “world-class” and “can stand toe-to-toe with the best optimizers in the database industry.” It is a cost-based optimizer with rewrites (i.e. it transforms the operator tree into the most efficient equivalent structure) and it is exhaustive (i.e. it tries all possible combinations to find the best execution plan, unlike MySQL’s optimizer, which by default switches to a greedy search when the number of tables to be joined becomes large [correction: as Timour pointed out to me today, I made it sound like MySQL's optimizer isn't exhaustive; I neglected to mention that you can configure this behavior with the optimizer_search_depth variable]).
I asked whether they had benchmarked the optimizer’s performance. (I mean how fast it can find an optimal query plan, not the performance of its results.) Of course, there is no standard benchmark for this, but I think it’s interesting just to compare it against the MySQL optimizer. They had not done this, but I think they will now that I have mentioned it. I think it’s relevant because if you use Kickfire for short queries, a slow-performing optimizer could actually become noticeable.
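To see why exhaustive versus greedy matters, here is a toy join-ordering example with a made-up cost model (mine, not Kickfire’s or MySQL’s): greedy search grabs the locally cheapest next table and can miss the globally best plan, while exhaustive search pays n! enumeration to guarantee it.

```python
# Toy illustration: exhaustive join-order search vs. greedy search.
# The pairwise costs below are invented purely to show the difference.
from itertools import permutations

COST = {("a", "b"): 1,   ("b", "a"): 1,
        ("b", "c"): 2,   ("c", "b"): 2,
        ("a", "c"): 100, ("c", "a"): 100}

def plan_cost(order):
    """Sum the invented cost of each adjacent join in the order."""
    return sum(COST[(order[i], order[i + 1])] for i in range(len(order) - 1))

def exhaustive(tables):
    """Try every permutation; guaranteed optimal, O(n!) plans."""
    return min(permutations(tables), key=plan_cost)

def greedy(tables, start):
    """Always pick the cheapest next table; fast, but can be fooled."""
    order, rest = [start], set(tables) - {start}
    while rest:
        nxt = min(rest, key=lambda t: COST[(order[-1], t)])
        order.append(nxt)
        rest.remove(nxt)
    return tuple(order)

print(exhaustive(["a", "b", "c"]))   # -> ('a', 'b', 'c'), cost 3
print(greedy(["a", "b", "c"], "b"))  # -> ('b', 'a', 'c'), cost 101
```

Greedy starting from `b` takes the cheap `b`→`a` join first and then has no choice but the expensive `a`→`c` join, which is exactly the trap an exhaustive search avoids.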
Is it really stream processing?
I wanted to know whether the chip really does stream processing, or whether it is only conceptually stream processing that’s really implemented some other way. It sounds to me like it’s the genuine article. I asked some pointed questions to this effect, such as “is there a way to interrupt a partially completed query.” As it turns out there is, but only because the stream processor apparently does time-slicing like a standard chip, and when it comes up for air it can check to see if a query should be aborted. In general, I was told, there is no interruption once the data stream starts flowing. That lets the query literally “run at the speed of electrons.”
But what about subqueries, you ask? That’s what I asked too. Stream processing is all very well for joins, but what about a correlated subquery, for example?
It turns out that if you’re clever, you can figure out ways to decorrelate them and then execute them in streaming fashion. The same holds for aggregation over data that’s not in the order needed for streamed aggregation. Pretty interesting ideas; I can’t go into them, because those are proprietary, but Ravi and I talked about them for quite a while.
And very large IN() lists can be turned into a relation and treated like any other.
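Kickfire’s actual rewrites are proprietary, but the textbook version of decorrelation is easy to show: replace a per-row correlated subquery with a grouped aggregate joined back to the outer table, so the whole thing becomes ordinary (streamable) operators. A small demonstration using Python’s sqlite3, with a made-up table:

```python
# Textbook decorrelation (the generic technique, not Kickfire's rewrite):
# the correlated form conceptually re-runs the subquery per outer row;
# the decorrelated form computes one aggregate per group and joins it.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (cust TEXT, amount INT);
    INSERT INTO orders VALUES
        ('ann', 10), ('ann', 30), ('bob', 5), ('bob', 25);
""")

correlated = """
    SELECT cust, amount FROM orders o
    WHERE amount > (SELECT AVG(amount) FROM orders i WHERE i.cust = o.cust)
"""
decorrelated = """
    SELECT o.cust, o.amount
    FROM orders o
    JOIN (SELECT cust, AVG(amount) AS avg_amt
          FROM orders GROUP BY cust) a ON a.cust = o.cust
    WHERE o.amount > a.avg_amt
"""
print(db.execute(correlated).fetchall())    # ann 30 and bob 25 qualify
print(db.execute(decorrelated).fetchall())  # same rows, join-shaped plan
```

The IN()-list trick is the same move in miniature: materialize the list as a small relation and join against it instead of testing membership row by row.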
Storage is obviously crucial to processing extremely large amounts of data very fast. A few of the things I noted about the storage:
- Each column is stored in a fixed width. This is how Kickfire can look up a row as though it’s doing an array access.
- The internal representation is chosen automatically and may not match what you think. Kickfire can profile data as it’s loaded, and choose the type as it goes.
- If you tell Kickfire you’ll only store values that are X large in a column, and it builds its column storage space to hold that large a value, what happens when you then start adding larger values later? Ravi explained how it works, and it’s proprietary right now, but suffice to say that Kickfire does not need to rewrite all the data you’ve already stored if you suddenly start storing values you didn’t anticipate. Yet, it can still maintain O(1) array-lookup performance on the compressed data.
- You can pass the storage engine special comments in the CREATE TABLE statement to tell it what kinds of data each column will get. These comments are part of MySQL’s standard syntax — Kickfire has not changed the MySQL parser, so it should be 100% syntax-compatible with a standard MySQL server.
- Kickfire has a very Oracle-like set of features around tablespaces, extents, and so on. You can have multiple tablespaces, and you can add devices to tablespaces, etc.
- Storage is transactional and ACID-compliant, with logging and ARIES recovery much like Oracle, InnoDB, etc. If it surprises you that a system built for large data warehouses would be transactional and ACID-compliant, welcome to the club. I was expecting the usual special-case behavior, you know, you can load data but you can’t update it, or something like that. But as I said, Kickfire isn’t doing this halfway. Plus, TPC-H requires ACID properties.
Loading, ETL, and star schemas
Loading data is also important to accelerate: executing queries on large amounts of data isn’t good if it takes forever to get the data into the server. Kickfire has their own suite of tools, including one for loading data that accelerates the load process with the SQL chip itself.
Kickfire’s attitude towards star schemas is that you shouldn’t need to build a special schema for your data warehouse. They think their system will be so fast that you can keep your data in the same schema you use for OLTP. If that turns out to be true, that will save a lot of work. (How much effort have you put into building a separate schema for your data warehouse?)
Other notes of interest
Here are some other tidbits I thought I’d share with you:
- The system has support for foreign keys. It automatically creates indexes on foreign keys and primary keys.
- The standard types of indexes don’t really apply. Instead, the indexes are “hardware-friendly” (the other term they used was “impedance-matched to the hardware”). There are special features for indexing ranges of dates and indexing words inside a string (this is not a full-text index; I’m unclear on how it really works, but it helps accelerate LIKE queries, which is important for the TPC-H benchmarks).
- The deadlock detection is via cycle detection in the waits-for graph, not timeout-based. As a result, it should be fast.
- The system I saw was running in debug mode, and wrote its optimized query plan to a file for every query. I talked with them about making this available via SQL. The plan is much more detailed and informative than MySQL’s EXPLAIN. They asked me whether it would be a good idea to wedge this information into EXPLAIN, and I told them I wouldn’t do that; EXPLAIN is a tabular output that doesn’t make much sense unless you really know how to read it. When you’re trying to understand a query plan, which is generally a tree of relational operators, you need a hierarchical view of it.
- They told me that they use the INFORMATION_SCHEMA extensively, but I did not get a chance to look at it myself.
- They also told me that they use UDFs extensively for system management, but again I can’t confirm.
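The waits-for-graph approach mentioned in the list above is a standard technique, and it’s worth a quick sketch (mine, not Kickfire’s code): an edge t1 → t2 means “transaction t1 is waiting for a lock t2 holds,” and a cycle in that graph is a deadlock, found by depth-first search instead of by waiting out a timeout.

```python
# Deadlock detection as cycle detection in a waits-for graph.
# waits_for maps each transaction to the transactions it waits on.

def has_deadlock(waits_for):
    visiting, done = set(), set()

    def dfs(txn):
        if txn in visiting:      # back edge: a cycle, hence a deadlock
            return True
        if txn in done:
            return False
        visiting.add(txn)
        if any(dfs(nxt) for nxt in waits_for.get(txn, ())):
            return True
        visiting.remove(txn)
        done.add(txn)
        return False

    return any(dfs(t) for t in waits_for)

print(has_deadlock({"t1": ["t2"], "t2": ["t3"]}))                # False
print(has_deadlock({"t1": ["t2"], "t2": ["t3"], "t3": ["t1"]}))  # True
```

This runs in time linear in the size of the graph, which is why cycle detection reports deadlocks immediately while timeout-based schemes make a victim wait out the clock.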
As you probably know, I’m a strong believer in Free Software. I am not aware of any plans for Kickfire to release the source code for their modified version of MySQL or their storage engine or optimizer. These are the satellite diamonds that surround the crown jewels: open-sourcing them would make it easier to reverse engineer the chip, which they don’t want. However, they’ve promised me that they will open-source some of the migration tools and other utilities; not initially, but as time permits. Later they’ll look at open-sourcing other parts.
I have made sure that they know where I stand on this: I think the ethical thing to do is GPL all the code that they ship, and I think everyone I talked to heard me say that at least once. If you’re going to buy their magical hardware, you deserve to have the source code for everything that runs on it, too. And they need to release the interface specs for their hardware so people can use it in new and surprising ways. Who knows — someone could use it to find a cure for cancer.
My two days with Kickfire left me with a lot more questions, not surprisingly, and I don’t think that will change until I actually get access to a machine and start testing it myself. I saw a lot of slideshows; I saw some demos; I walked into the server rooms and saw the pretty blinking lights; but I’m not going to tell you that Kickfire will do X or Y, because I don’t know a heck of a lot yet. I was hoping for more hands-on experience and in-depth technical details, but that wasn’t the way it worked out. However, based on what I’ve seen, I have no reason to doubt that Kickfire’s system will do what they claim: run large, complex queries on very large datasets extremely quickly.
Some of you have noticed Kickfire, a new sponsor at this year’s MySQL Conference and Expo. Like Keith Murphy, I have been involved with them for a while now. This article explains the basics of how their technology is different from the current state of the art in complex queries on large amounts of data.
Kickfire is developing a MySQL appliance that combines a pluggable storage engine (for MySQL 5.1) with a new kind of chip. On the surface, the storage engine is not that revolutionary: it is a column-store engine with data compression and some other techniques to reduce disk I/O, which is kind of par for the course in data warehousing today. The chip is the really exciting part of the technology.
The simplest description of their chip is that it runs SQL natively.
OK, but now you need to do something: get “SQL chip” out of your mind. It doesn’t work the way you think it does, and your preconceived ideas may prevent you from understanding how different this really is. (Everyone says their technology is a paradigm shift, so I expect you to be numb to this phrase.)
I can’t explain all of the technology in this post, partially because of NDA, but I want to prepare you for when you do hear the details. If you’re like me, you’ll miss a lot of stuff because you have tunnel vision, and then you’ll say “wait, I get it now! Can you please repeat everything you’ve been saying for the last hour so I can think about it all over again?”
An important note
Very important: I have not seen this technology, tasted it, smelled it, or benchmarked it. This information is based on discussions with their engineering and other staff. I will not pretend to know anything I don’t. I will be spending two days in the lab with the engineers next week, and then I will be able to write in greater detail with more confidence.
How your computer currently works
To understand how Kickfire’s chip works, you need to understand something you probably take for granted: how most chips work. Most computers today use the same architecture they always have: there’s data that is held in the CPU, and data that is not. The CPU has registers, which hold a minuscule amount of data – the data it is currently working with. When the CPU processes an instruction that asks for some more data it doesn’t have, the CPU has to go fetch it. In the meantime, the instruction can’t complete.
As you might imagine, this is not terribly efficient. Fetching data that’s not in the CPU can take hundreds of CPU cycles (or more). To work around this, computer architects have developed a hierarchy of caches: the on-chip cache, the main memory, and the hard drive, to name a few. The caches make it faster to get data when it’s not already on hand. And modern chips have a pipeline, too. The pipeline looks at the instructions as they flow towards the CPU, tries to predict which data they’re going to need, then pre-fetches it.
In the best case, this works okay. Not always — for example, the Pentium 4 has a very long pipeline, so the cost of a wrong branch prediction is very high. Another case is when you simply need a lot of data, such as tens of gigabytes. Suppose for your 10GB operation, you’re only going to look at each byte once (a common occurrence in data warehousing queries). This renders your caches useless, because caches work on the principle that you’re likely to look at recently accessed data again soon.
In these cases, the speed of the computation is constrained by the Von Neumann bottleneck: the inefficient fetch-compute-wait cycle of constantly going to the memory (or disk) for more data, a teeny bit at a time. Remember, even in-memory data is very slow compared to data that’s in the registers. Having a lot of fast memory is not a solution to the Von Neumann bottleneck. It’s a workaround to reduce the cost.
Kickfire is designed to work well where today’s general-purpose computing architectures run queries slowly because they’re sitting on their thumbs much of the time. Think data warehousing: complex queries with lots of data.
What is the industry’s answer to this? So-called massively parallel processing, or MPP. Current MPP data-warehousing solutions are special-purpose database software that runs queries on dozens or hundreds of CPUs, which occupy a lot of floor space and require lots of power, hardware, and cooling. “If you throw enough Von Neumann machines at the problem simultaneously, they can answer your questions faster,” or so the thinking goes. In other words, the current state of the art is to arrange conventional computers in new ways.
Kickfire takes the opposite approach: stream processing. This is a fundamentally different computing architecture. Stream processing is to Von Neumann machines as LISP is to C.
For those of you who aren’t LISP programmers, here’s another analogy: In stream processing, you take a bunch of data and you shove it through the chip without stopping. Rather than the chip asking for data from the storage subsystem as needed, the data actually gets pushed at the chip. That is, it’s push-processing instead of the conventional pull-processing.
Conventional processing is like trying to fill your bathtub from the sink with a paper cup. Stream processing is like putting your tub under the sink and opening the drain.
I’m taking some liberties here, to illustrate the differences. As I said, I haven’t seen the wiring diagrams of the Kickfire chip. But hopefully you get the concept.
This is not a new idea. If you’ve worked with modern graphics cards, you’ve seen this in action. Programming languages like Cg express the stream-processing concepts elegantly. If you’ve ever been in a classroom full of C++ programmers trying to learn Cg, you’ve seen how hard it is to grasp this different approach. Essentially, graphics programming on one of these chips is a series of transformations, not a series of instructions. You input some vertexes at one end of the processor, and you tell the chip to do some matrix multiplies and so on. Out pops the result at the other end.
If this doesn’t sound much different from instructions… well, meditate on it. It’s like an assembly line, but nobody leaves their station along the conveyor belt. In a traditional CPU, the “person” at the conveyor constantly leaves to go get the materials he needs.
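You can get a feel for the operator-pipeline shape of this with plain Python generators. (A caveat for the analogy: generators are pull-based iterators, the opposite direction from the push model described above, but the “query as a chain of transformations” idea is the same. The table and operators here are my own toy, not Kickfire’s.)

```python
# A SQL query as a chain of stream operators: each stage transforms
# rows flowing through it rather than issuing instructions over data.

def scan(rows):                      # FROM: emit rows one at a time
    yield from rows

def select(pred, stream):            # WHERE: pass matching rows through
    return (row for row in stream if pred(row))

def project(cols, stream):           # SELECT: keep only the named columns
    return ({c: row[c] for c in cols} for row in stream)

rows = [{"id": 1, "amt": 5}, {"id": 2, "amt": 50}, {"id": 3, "amt": 500}]

# SELECT id FROM rows WHERE amt > 10, as a pipeline:
query = project(["id"], select(lambda r: r["amt"] > 10, scan(rows)))
print(list(query))  # -> [{'id': 2}, {'id': 3}]
```

Each row flows through the whole pipeline without any stage “leaving its station” to fetch something, which is the assembly-line picture in miniature.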
Kickfire runs on commodity hardware, and it is just one or two servers, not racks full. Like many other systems designed for large amounts of data, it uses a column data store. Unlike many other systems, it uses an industry standard interconnect and a custom pluggable MySQL storage engine.
What took so long?
Stream processing is the obvious way to run SQL queries. Some readers may never have thought about it this way, but my guess is that a lot of you already think of SQL in a stream-processing way, even though you might know that computers today really implement it in conventional ways. I have always tried to think of it this way, and I always try to explain SQL as a stream, too.
So when I was on a call with the Kickfire engineers and it finally sunk in, I felt really silly. Why didn’t I think of that? It’s so obvious.
But then again, most breakthroughs are really obvious in hindsight.
I have seen initial benchmark results, but I’m under NDA about them. I can’t say any more yet. And I haven’t run any benchmarks myself yet, nor have I had access to the hardware. So this is all theoretical until I get my hands on the system. Caveat emptor, your mileage may vary, etc etc.
One thing I’m interested in is how well the system performs for general-purpose queries. When you take it away from complex queries on lots of data, does it still have an advantage? I’ll be trying to get an answer to that question next week.
They are still in stealth mode and my NDA prevents me from being able to tell you a lot or answer all your questions yet. But someday they will no longer be in stealth mode, and you’ll find out everything you want to then.
Hint: they are going to be giving a keynote address on their technology, but there’s not much detail in the description. Come to the keynote and find out.
Why am I writing this?
Well, they promised me chocolate…
Seriously: I do have an agenda, but there are actually several motivations here. The first is that they initially contacted me because of my involvement with the MySQL community. Of course they’re hoping to gain publicity through me, but they also wanted to let the community have some input. I’ve been sort of a secret liaison for you, representing your interests to Kickfire. I’ve advocated pretty strongly for certain things I’ll go into in a later post.
The other reason I’m working with them is that I’m excited about their technology, even though I don’t have hard evidence about their claims and benchmarks yet. If what they’re saying is true, their product will be very good for the environment. It will let people save a lot of energy (power, cooling, the need to build data centers), and it will reduce the number of servers that have to be built. Computers are extremely toxic to manufacture.
I’m also interested in seeing them succeed because I anticipate that even if this product isn’t what it claims to be, they’ll prove the concept and there will be a competitive rush into this space. That is guaranteed to produce a lot of changes in how people build computers, probably in more areas than just data warehousing. So I’m happy that they’re starting this, because others will finish it whether they do or not. And that’s good news for the environment, too.
Stay tuned. More details are forthcoming.
PS: if you have questions you’d like me to look into while I’m onsite with the engineers, feel free to post them in the comments. But I probably can’t answer them yet.