I’m attending the Google Test Automation Conference (GTAC 2007) in Manhattan, New York right now. It’s a two-day event hosted by Google, with mostly non-Google speakers.
The conference is by invitation only and quite small; to get in, we all either had to propose a talk Google’s panel of judges thought was good enough, or had to impress them with an essay (I’m not bragging about getting in, I’m explaining why I expected the conference to be great). Unfortunately, I have to say I’m a little underwhelmed. Several of the talks have been very good, especially Allen Hutchison’s GTAC keynote on first principles and Simon Stewart’s informative and very fun talk on WebDriver, but some of the others didn’t zing me very much.
You can view the GTAC YouTube Playlist to see the talks yourself. The first day is up already – amazing! The Google multimedia folks really have it down to a science.
Perhaps I’m a little unimpressed because I think the discipline of test automation, or at least some of those speaking, is too heavily influenced by Extreme/Agile methodologies, which often take a very narrow view of testing and have invented some seriously damaging techniques like Mock Objects. Perhaps it’s because there’s a lot of Java in the room, and Java’s insistence that Everything Shalt Be An Object has twisted natural concepts into very awkward implementations, which other programming languages blindly imitate even when they have first-class support for the thing (such as a test) that Java represents so clumsily. And perhaps it’s because there’s so much focus on auto-generated tests, which I think are about as useful as auto-generated documentation. They’re often un-tests, just as documentation generated by inspecting and listing class names, method names, and parameter types is un-documentation. Not that auto-generated tests don’t have a place in the world – they do – but it’s a limited one.
The most interesting talk to me was Adam Porter and Atif Memon’s Skoll project (here’s the Skoll homepage), which is developing a distributed means of building and running test suites in different configurations, very smartly. There’s real computer science going into this. And guess what one of their big test projects is? (Perhaps the only big test project, I’m not sure). Building MySQL source code. Yep, they’re finding real bugs by smartly building different configurations and finding test failures, then iterating to find related configurations that fail. Watch the video for the details of how intelligently they’re doing this.
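To make the idea concrete, here’s a toy sketch of that kind of configuration-space exploration: sweep combinations of build options, and when one fails, probe the neighboring configurations (those differing in a single option) to see which changes make the failure go away. This is my own illustration, not Skoll’s actual algorithm – the option names and the failing combination are invented, and `build_and_test` is a stand-in for a real build-and-test run.

```python
from itertools import product

# Toy configuration space: each option has a few possible settings.
# These options and the failure rule below are invented for illustration;
# they are not Skoll's real model.
OPTIONS = {
    "compiler": ["gcc", "icc"],
    "threads": ["on", "off"],
    "charset": ["latin1", "utf8"],
}

def build_and_test(config):
    """Stand-in for a real build+test run. Pretend the combination
    of icc with threads enabled is broken."""
    return not (config["compiler"] == "icc" and config["threads"] == "on")

def all_configs():
    """Enumerate every combination of option settings."""
    keys = list(OPTIONS)
    for values in product(*(OPTIONS[k] for k in keys)):
        yield dict(zip(keys, values))

def neighbors(config):
    """Configurations differing from `config` in exactly one option."""
    for key, settings in OPTIONS.items():
        for setting in settings:
            if setting != config[key]:
                yield {**config, key: setting}

def localize_failure(failing):
    """Given one failing configuration, test its one-option neighbors
    and return the ones that pass -- hints at which option matters."""
    return [n for n in neighbors(failing) if build_and_test(n)]

if __name__ == "__main__":
    # Sweep the space; when a configuration fails, probe its neighbors.
    for cfg in all_configs():
        if not build_and_test(cfg):
            print("FAIL:", cfg)
            for fix in localize_failure(cfg):
                print("  passes when changed to:", fix)
```

The real system is of course far more sophisticated (it distributes the work and samples the space intelligently rather than exhaustively), but the iterate-toward-related-configurations idea is the part that struck me.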
I decided to skip the last talk and the evening’s socializing, and instead headed over to the MySQL Camp, which is happening just a few miles away in Brooklyn. I spent the evening mooching Japanese food and catching up with friends I met at MySQL Camp 2006. I went to bed late, but it was worth it.
Today I’m also going to skip the Google Test Automation Conference and focus on MySQL Camp. I tried to find out more about today’s GTAC talks, but it’s tough. Google has kind of made it a black box – I didn’t even find out in advance who was going to speak, or get any chance to offer a talk myself. A few days ago they sent an email with the schedule, which listed speaker names and talk titles but no other information; that was the first I knew of the schedule. There are certain things that are great about how they’re running this, such as having just one track (no tough choices of which talk to attend), but not knowing who was speaking on what made it hard to judge whether and why I wanted to attend. Abstracts would have helped a lot. As far as I can see, today’s talks are going to include more of the mildly promotional material. I’d be interested in the Lightning Talks and maybe a couple of the others, but time is precious, and since I know MySQL Camp is going to be good, I’m not willing to take the risk that today’s GTAC talks will be uninspiring.