Recap: Galactic Puzzle Hunt 2017

(This is a recap/review of the 2017 Galactic Puzzle Hunt, which happened earlier this month. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

Back when I started solving Australian puzzlehunts, I usually played with The Sons of Tamarkin; we were perennial contenders against plugh, The Elite, and [pi] (aka Galactic Trendsetters, the team that generously brought us this hunt). A few years ago, I decided to join Killer Chicken Bones for SUMS, figuring that would be my annual break from my usual team… but in the meantime, some of the Sons of Tamarkin have gotten too busy to solve and others have found better teammates (for this hunt they totally outclassed us as Brown Herrings). So I’m with KCB for the immediate future, until we hit a hunt where enough original members are available to hit the team size cap and I get bumped.

The GPH is the second American “Australian rules” puzzlehunt I’m aware of, and the first intended for the public. (Last year some friends at Brown ran an event called CRUMS, which was aimed at the Brown community and doesn’t appear to be online anymore.) Maybe the American “flavor” made it more appealing, but it was definitely my favorite Aussie-style hunt I’ve participated in. Puzzles were generally solid and polished, and there was a very elegant meta structure at the end (and the meta was actually worth points… what a concept!).

On Days 1 and 2, we solved all of the puzzles before I went to sleep; on Day 3 we knew what we were doing on our last puzzle when I turned in, and everything was solved when I woke up. Things took a turn on Day 4; we only solved two puzzles before I went to sleep, and we still only had two the next morning. We hammered away at the rest and solved two of them before I had to leave for a wedding, but we went 24 hours without solving A Basic Puzzle, knocking ourselves out of contention for first place. We failed to solve one additional puzzle on Day 5 before overtime, but we were able to solve both puzzles fairly quickly with hints, so we finished first among the non-perfect teams (fifth overall).

Speaking of hints, being on this end of the yes/no system has changed my opinion of the oracular hint system, which we used in the Mystery Hunt, and which very few teams took advantage of. Galactic Puzzlesetters (sorry guys, I’m too lazy to figure out how to insert the airplanes, but rest assured I made the noises while typing) have posted a great wrap-up dissecting their own hunt, and they have some interesting thoughts on yes/no hints, but here’s mine: One yes/no hint is rarely useful, but a series of them (where you can ask follow-up questions) is potentially very useful. On both of the puzzles we solved in overtime, we figured out what we needed to do not by asking one question, but by asking several and narrowing down what we shouldn’t be thinking about. So in retrospect, I think we should probably have given out batches of questions in the Mystery Hunt, rather than singletons. (Though I again want to emphasize that earned hints were only intended to sort things out as needed for contender teams… we always intended to be generous to more casual teams, and that was instituted here with the “infinite hint” system, which is much tougher to execute in an online hunt!)

= = = = =

Puzzles I especially liked:

* Zero Space (Day 1)

Took a long time to get the last aha here (exactly how to interpret the first phrase you extract) but once we did, it was hard to believe we didn’t see it for so long. Very clever and elegant solve path.

* Very Fun Logic Puzzle (Day 2)

As advertised.

* How to Best Write an Essay (Day 2)

Completely missed the hidden message in this puzzle. We guessed the interpretation of W quickly (though we were surprised there was no accompanying S), and then recognizing what Y should be without the message was an extremely satisfying aha. I’m almost a little sad that we weren’t supposed to figure it out the way we did, though I get that someone unfamiliar with the artist would have no way in in that case.

* Drive (Day 5)

This mostly got solved while I was at Jenn’s wedding. When I got back, I found a mostly solved puzzle on the spreadsheet, at which point I said to myself, “What a cool data set. And what a cool way to put it into a grid! And it would also be neat if you extracted like this… Yup. Solved.”

* Everything on Day 6 (Day 6)

There was no advertised meta in this set, but given the weirdness of the Day 1 answers and the sixth day that usually doesn’t exist, I predicted the five puzzles on Day 6 would be five five-puzzle metas. I suppose it makes much more sense in a March Madness hunt to have a 2^n-puzzle final meta, but you can’t blame us for failing to predict that 25 puzzles would lead to a 16-answer bracket!

For what it’s worth, I do think the “put these answers in a line and modify them step by step” mechanic, as seen in Duck Quonundrum, may have jumped the shark. I’ve seen it used in a few places, and I don’t think it’s ever been as cool as it was in MH2015. But given that these answers each had to work in two metas, I get that this is a flexible mechanism to achieve that goal.

The main mechanism for the final meta was something we immediately considered when we saw it… and dismissed, even after solving two conferences. The rest of the team put two and two together while Jackie and I were asleep, but looking over it after it was already solved, I thought everything was fair and satisfying.

Puzzles I especially disliked:

* Famous by Association (Day 4)

I was delighted by the puzzle quality for most of this hunt, but I thought there was a downturn in quality on Day 4. Famous by Association was probably the clunkiest puzzle for me; the matches weren’t clean enough for us to be confident about how the mechanic was intended to work, and even after we sort of knew what we were doing, we frequently had multiple options for some of the items, and we eventually chose things based on giving us good letters. It’s never good when solving individual pieces of a puzzle leads less to an “Aha!” and more to a “Well, maybe? I guess?”

* The Treasure of Apollo (Day 4)

Most of the issues I had with this puzzle are acknowledged in the posted solution. The overall gimmick of the puzzle is neat (though once again the matching wasn’t tremendously clean… we had our third and ninth characters switched for a long time), but there was a lot of extraneous data, and the way you’re expected to parse the data was inconsistent. I also see why they added the enumeration after the fact… given all of that ambiguity, that’s an incredibly indirect phrase to parse without spaces.

* A Basic Puzzle (Day 4)

This was one of our late solves. We immediately figured out what to do with the first line, but we tried a lot of similar approaches to the other lines with no success. Personally, I think the solution space was just too open here… you could do a lot of almost-right things without confirmation if you didn’t know what you were looking for, whereas with some narrowing via hinting, the rest of the team was able to polish this off while I was at the wedding.

* Unaligned (Day 5)

This is the other puzzle where we couldn’t figure out what to do without narrowing the solution space (though we should have, since Kristy suggested doing the right thing and I advised against pursuing it). My complaints about this are that (a) having two identical grids really makes you want to combine them in some way, and (b) confirmation here really relies on seeing that a bunch of three-letter strings are all words, which is something that could easily happen by accident. Our hint requests helped us determine which parts of the completed grids we could completely ignore, which helped a lot.

This was another puzzle where the posted solution revealed a hidden message we never found (two of them, in fact!).

Puzzles I simultaneously reluctantly loved and lovingly despised:

* Scramble for the Stars (Day 3)

This will be a bit more spoilery than the responses above, because I want to get into details. Somebody counted the number of clues and suggested constellations, and after trying a few things, I suggested the puzzle might work exactly how it turned out it did work, and said I really hoped it didn’t, because that sounded like it would be a pain in the ass. Then it occurred to me that this would only work if there was an even number of letters… I added them up and got an odd number, and I breathed a sigh of relief.

We had STRAIGHTLINE instead of STRAIGHTLINES.

Once we made that correction, I feared again that the puzzle would work the way I thought it did. I remembered from my ceiling growing up that Draco was the largest constellation (my incredibly awesome parents not only put a bunch of glow-in-the-dark stars on my ceiling one day while I was at school, they took the time to make an accurate star map with constellations), and Draco didn’t have the right number of neighbors. But then I looked up the actual largest constellation and smallest constellation, and looked up their numbers of neighbors. They matched. Crap.

So, finally believing my original idea was right, four of us spent a looong time scraping the adjacency data from Wikipedia by hand (I considered doing it on a printed-out star map, but it was way too small to see anything). Once we actually got that data down and figured out how to keep track of assignments, the logic puzzle portion was exquisitely elegant. As the posted solution suggested, we started with rare letters and the necessary degrees of their neighboring nodes, and then things got easier and easier. This would have been a nightmare if we weren’t sure of most of the answers, but the clues were clear enough that we had all but two right. One of them, LOAD, was our fault… but having a clue for NOSTALGIA that could just as easily clue NOSTALGIC was just mean.

I went to sleep while Brent and Kristy were still assigning letters, and my only idea for extraction was to alphabetize the constellations and read all the letters in that order… but constraining all 88 letters seemed like an impossible construction feat. Putting the answer phrase on the zodiac was genius.

So as it turns out, this was a magnificent puzzle, and I have tremendous respect for the constructors. But at the same time, I stand by my initial prediction… solving it was a pain in the ass. 🙂

* X-Ray Fish (Day 3)

My only hate for this puzzle came from the sound clip being really annoying after 50 times, and from it being very difficult to count stuff in the video (even pausing it). I did like the overall mechanism, and knowing which part of the song to focus on due to playing it too many times in Rock Band. (Me: “I think those sounds are in the actual song, aren’t they?” Someone I will not name: “No, definitely not.”) And if you haven’t already, be sure to check out the posted solution for this puzzle.

= = = = =

As it happens, part of the reason the Galactic Puzzle Hunt happened (according to the wrap-up) is that the constructors had some downtime after finishing the Mystery Hunt early… So as a member of Setec, I take partial credit for miscalibrating the difficulty of our Hunt! In any case, this was a great addition to the online huntscape. Given that it was free, the constructors have no obligation to give us another one next year. But I hope they will, and I hope they start writing now just in case they don’t have quite as much free time next MLK weekend.

(Note to self: If I’m going to keep writing these recaps regularly, I’m going to have to start making them shorter.)

By Request: All-Time Top Ten Puzzlehunts (#5-#1)

And we’re back! (Sorry this resumed about two weeks later than I intended.)

5. NPL Convention Extravaganza – Small Town News (July 2003)

The annual National Puzzlers’ League convention has three nights of official program activities, culminating in the “extravaganza,” a puzzlehunt that usually runs most teams about 2-4 hours. (I should say that it’s almost always a puzzlehunt; the first convention I attended, in Newark, instead had a puzzle carnival with various competitive midway games. I actually missed the extravaganza that year because I was dealing with a personal crisis, but from what I’m told, I didn’t miss much.) Given the time frame and the audience (many of whom are more into casual individual puzzle solving than interwoven puzzle experiences), extravaganzas don’t tend to have much in the way of sophisticated structure; when they do, there are often complaints. As a result, while I always look forward to the extravaganza, I rarely find them very memorable, with one notable exception.

The 2003 extravaganza, at a convention held in Indianapolis, was written by Rick Rubenstein, Andrew Murdoch, and Andrew Hertz. Teams were given all the puzzles at once, which is not my favorite puzzlehunt structure, but in this case, “all the puzzles” meant a newspaper. The entire hunt consisted of an 8-page custom newspaper in which every element of the paper, from the comics to the photos to the horoscope to the bridge column to every article, contained puzzle content. Furthermore, the puzzle answers all fit together in a logical way; rather than having a metapuzzle that just used the answers as inputs, the goal was to help the police department stop a sinister plot, and chunks of the paper combined to reveal different elements of the plot. At the end, rather than giving a final answer, we were required to explain the plot to the moderators, justifying our deductions with proof from the paper. (In fact, if I remember correctly, we had subverted one of the puzzles and were asked to go back and figure out the puzzle we skipped when our explanation wasn’t complete… we still finished first in about ninety minutes, because for some reason, every time Rick co-writes the extravaganza, my team wins.) I’m a big fan of puzzles embedded in other media when they work, and in this case, everything was assembled in a very elegant and satisfying manner.

So far, I have co-written two NPL Con extravaganzas: an award-show-themed one in Los Angeles with Francis Heaney and Dave Tuller, and an auction-themed one in Seattle with Todd McClary, Kevin Wald, and Mike Selinker. Check with me again in five months and the count will be up to three.

4. MIT Mystery Hunt – 20,000 Puzzles Under the Sea (January 2015)

2015 was my first year returning to Setec Astronomy after a nine-year hiatus. I wrote the 2005 Hunt (Normalville) with them, and they decided to become the Mystery Hunt Writer’s Retirement Home or the Mystery Hunt Tavern, depending on who you ask, while I went off to win a few Hunts with Evil Midnight and then join a bunch of my college friends on the Tetazoo team (whose name changes every year) until we ran the Hunt in 2014. I was ready for a change of pace after that, and it turned out that most of my best friends had settled on Setec, so Jackie and I joined them once I was assured that, while not everyone on the team was ready to win, if we did finish first we would not run away from the coin screaming.

I didn’t care for the 2013 Mystery Hunt and helped write 2014, so in 2015 I was looking for my first enjoyable Mystery Hunt solve in a while. After an initial group of puzzles that looked like a traditional round structure, we assembled our submarine and started moving downward, with a super-long linear Hunt web page in which every puzzle solve helped us dive deeper, and we encountered new puzzle links as we approached them. I think this was a great example of structure matching theme; not every Hunt story lends itself to traveling further and further along a linear path, but diving to the bottom of the sea obviously does. This also meant that you wouldn’t know what was going to unlock next, but you could sometimes see the next thing coming… Some of these we could identify by silhouette, and some were exciting to reveal.

There was also a very novel round of physical object puzzles that were given to us in a locked treasure chest. As it turned out, we secured this chest at a time when few people were awake, and when I showed up early in the morning I was not ready to process a batch of items no one else had made progress on. I didn’t love the late portions of the story of this Hunt, and I thought the endgame was waaaaay too long (I actually slept through it due to a delay, but I’m going by conversations with people on my team and on others), but it’s one of the smoother and more satisfying Hunts I’ve solved in recent years.

3. The Haystack (August 2006)

Once upon a time, Eric Berlin contacted me and asked if I wanted to come to New York City to do a puzzlehunt with him. I had heard of The Haystack (presumably named after the idea that you’re looking for a needle in one) but had never really considered playing, since this was a decade ago when my threshold for puzzle travel was higher (and my salary was lower).

I don’t remember a ton of details about the puzzle structure; I remember there were nine pairs of puzzles, and in each pair, you needed to be in a particular Manhattan location to solve the puzzle. I think solving the first gave you the location, which potentially helped you make progress on the second, but I won’t commit to that being right. What I do remember is finding the location tie-ins much more satisfying than in other walkaround hunts. New York City is nothing if not data-rich, and the author(s) found really creative ways to require information from the surrounding environment to make the puzzles solvable. The final metapuzzle somehow involved filling in a sudoku grid with data from the nine criminals and crimes we’d identified over the course of the day… or in our case, seven or so of those criminals, and at the bar where we were meeting at the end of the line, I was struggling to short-circuit the final puzzle with partial information. I was convinced I was in a race against time, until with a minute or so left, one of the people who had solved the meta confirmed I wasn’t doing close to the right thing. (I’m not sure I ever actually figured out what to do. It’s sad that these puzzles aren’t archived anywhere, as far as I know.)

I really enjoyed The Haystack, and after it ended, I was very excited to participate again in the next one. So of course, 2006 was the last Haystack.

2. The Famine Game (September 2013)

When Scott asked me to list my top ten puzzlehunts, I knew the top two within seconds. The questions that remained were (a) what are the other eight, and (b) what order would the top two go in? After some reflection, I’m declaring The Famine Game second by a razor-thin margin, even though it was one of my most exciting puzzle experiences.

The Famine Game was the first and only first-run Game I’ve done; it’s also, to my knowledge, the only one so far on the east coast. The event had a Hunger Games theme and thus took place in The Capitol (Washington, DC and the surrounding area). Our team was called Apetitius Giganticus (one of the various scientific names for Wile E. Coyote), and we rented a van that was much, much too large, which made driving and parking very challenging at times, though thankfully my awesome teammates never made me drive.

I could go on for hours about all the features I loved about the Famine Game: The consistently great puzzles. The creative thematic locations. The “kill” videos our app played every time we defeated another team (puzzles yielded methods of murder, and when you solved a puzzle the game app told you which team you’d defeated… naturally, our app claimed every team except ours was eventually knocked out). The weird hallucinogenic effect on our app when we were stung by tracker jackers. The simulation of the second book’s “clock”-structured Games that stuffed twelve rotating mini-puzzle challenges in an elementary school after hours. The team evaluation challenges the night before the Game officially began. The XBox, which remains the most technically dazzling physical puzzle I’ve ever solved. The fantastic improv performances from several parodies of Hunger Games characters. I don’t remember sleeping, and yet I don’t remember getting very tired… most of it was just that damn good.

The reason I say “most” is the same reason I decided to rank this as #2; the first half to two-thirds of the event, with the goal of eliminating the opposition and then navigating the Clock, was really enthralling, with heavy puzzle variety and compelling immersion. Once we got to the part of the plot where we were assaulting the Capitol, it felt like the puzzles got a little more average and the story felt less exciting. The Famine Game came in like a lion and went out like a lamb, but it was a freaking awesome lion. It was Mufasa. (There was also another negative that wasn’t the organizers’ fault… ours was one of multiple vans broken into when we parked in DC for the last phase. Another team had all their computers stolen… I believe we lost a computer, a tablet, and a power cord. I lugged all my electronics around for much of the last part of the event, thinking we’d be returning to the van soon. I felt awful for the people who were robbed but I’ve gotten over it. If my computer had been stolen, I would have still been holding a grudge.)

It occurs to me that Eric Berlin was on my team for #2 and #3. Maybe puzzles are just more fun when he’s around.

1. MIT Mystery Hunt – Video Games (January 2011)

When we wrote the Escape From Zyzzlvaria Mystery Hunt (2009), there were a lot of elements we incorporated that I was very excited about. Opening a new round is one of the most exciting parts of a Mystery Hunt, and because of that, I really like distinctly themed rounds (which were one of the strong elements of the 2004 Time Bandits Hunt). With Zyzzlvaria, we wanted those rounds to feel distinct in both theme and structure. We also liked the idea of advancing based on a point system, so that we could eventually grant point boosts that would give larger benefits to the teams in the back that needed them than to the teams in contention.

I think we did a decent job with these elements in Zyzzlvaria. But the 2011 constructors had a lot of the same goals and showed their true potential with the video game Hunt. First of all, they utilized a more sophisticated point system that also accounted for continuous passage of time (the other Hunt in my top ten, 2015, used a variation on that function). As for the rounds, the Hunt opened with a Super Mario Brothers theme with no indication that there were any other video games coming, so the first time we opened the Mega Man round, it was super-exciting. I don’t remember many individual puzzles from this Hunt (these days that’s a good sign, because with so many Hunt puzzles in the modern era, I remember the lowlights more than the highlights) but I remember the metapuzzles vividly as creative constructions that reflected the unique structures of their rounds. Yet unlike the Zyzzlvaria round structures, which we cooked up without constraints, these structures perfectly suited the video games they were based on. I was floored by the Mega Man round structure and meta, and had I actually played Civilization before this Hunt (I have since) I would have gone crazy over that round as well. And I should note that the beautiful website really brought all these varied themes to life.

The year before this Hunt was my first with the team that would become Alice Shrugged, and it was the dreaded year where someone on my team scoffed at me when I wanted to keep solving after another team found the coin. That year all but about a dozen of our team members abandoned ship early, but the rest of us pressed on, reached the end of the Hunt… and were told there were no plans to run the endgame for us because the people involved had gone to sleep. (Craig Kasper came and described it to us, which was nice of him, but it felt like a serious bait and switch.) We weren’t the first team to finish the 2011 Hunt, but the organizers were ready to give us the same rich endgame experience that the winners got, including a very high-production-value GLaDOS confrontation. I’m grateful to them for that, and I’ve tried to give back by making sure the Hunts I’ve co-run since had endgames that could be reproduced for everybody who earned them.

Podcast: Room Escape Divas

While you’re waiting on the edge of your seat for my top five puzzlehunts post, an interview I did a few weeks ago with Room Escape Divas has just hit the internet. I haven’t listened to it yet, but we talked for about two hours, and from the episode length, it looks like they didn’t cut very much. Topics may or may not include:

  • The Mystery Hunt!
  • The Cambridge Puzzle Hunt!
  • BAPHL!
  • Duck Konundrums!
  • Pet peeves about puzzlehunts and escape rooms!
  • My puzzle competition archnemeses!
  • The World Puzzle Championships!
  • How great my wife is!

Also, I recorded a karaoke Radiohead parody for the opening, so show up for that at minimum. And I don’t remember much of what I said, so if I said anything offensive, let me know so I can begin damage control.

Upcoming: Galactic Puzzle Hunt 2017

Are you ready for some puzzleball?

Perennial puzzlehunt contenders [three airplanes] Galactic Trendsetters [three more airplanes] are bestowing upon us a six-day Aussie-style puzzlehunt starting on March 14. It has a year on it, which makes it sound like the event might be recurring, and the debut theme is the “Puzzleball Championships.”

There are some interesting rules innovations, such as replacing the canned hints with yes/no questions (potentially an improvement, but this sounds like a potential bear for the organizers) and breaking ties via adjusted average solve time, where the adjustment is that anything in the first day counts as a full day (I actually hate this idea, because it means ties will likely be broken by teams’ times on the most flawed puzzles, but we’ll see what happens!).

I enjoyed BAPHL 11, which I believe came from some of the constructors involved in this project, so I have high hopes. Now if it had just been a week later to coincide with my spring break…

By Request: All-Time Top Ten Puzzlehunts (#10-#6)

In a thread on Facebook where I asked what sort of posts other than recaps readers would like to see, Scott Weiss asked for my top ten puzzlehunts of all time, which apparently I proposed doing at some point in the past. I’m a bit obsessive about ranking things (though not as much as I was when I was younger, and also not as much as Craig Cackowski is), so I couldn’t turn this suggestion down.

Below are #10 through #6 on my list at the moment; ask me in two weeks and the ranking could be totally different. It’s also a bit hard to separate the objective quality of a puzzlehunt from my personal experience… for example, I loved some of my earliest MIT Mystery Hunt experiences because they were novel and exciting at the time, but compared to modern hunts, the puzzles in many of them are a bit flat. I’m disqualifying any puzzlehunt I helped write for obvious reasons, and there are probably many hunts that were great that won’t make it because I didn’t participate. I also admit to leaning toward options that created a good variety of hunt sources. I’ll note one example of that in the first entry.

The top half of my list is etched in stone and will appear in a follow-up post in the near future. Feel free to post your own all-time top 5 or 10 or 100 in the comments, and if you want to try to guess my top five, that might be fun too. (I’ll throw in a hint about those at the end.)

= = = = =

10. BAPHL 13 – Monkey Island (July 2015)

I’m going to start the list by immediately cheating, because there are almost certainly some MIT Mystery Hunts that yielded more total enjoyment than BAPHL 13 did; that’s inevitable when you compare a 40-hour puzzling experience to a 3-hour puzzle experience. But I’m giving BAPHL 13 the nod for a number of reasons. First, BAPHL, the Boston area’s series of walk-around puzzlehunts, is otherwise unrepresented on this list (probably due to the aforementioned shortness), and I love BAPHL enough that I wanted it to appear. Second, most BAPHLs occur in Boston/Cambridge/Somerville, with stops on the Red Line used particularly frequently, and I really like when the organizers think outside the box and take me to an unfamiliar location (as we did when we made everybody trek to Providence for BAPHL 9: Forbidden Rhode Island). BAPHL 13 was held on one of the harbor islands, which I’d never visited, and the ferry trip to the hunt site made the whole thing feel more like an adventure. And finally, I grew up with Sierra and LucasArts adventure games and especially enjoyed the latter, and Monkey Island is one of my favorite computer game series. So this was a theme that hit my nostalgia button, and it made the experience even more fun. The puzzles here were solid if not tremendously memorable, but the overall staging made this my favorite BAPHL I’ve solved (slightly edging out 12), and it squeaks onto the list.

9. The Puzzle Boat 2 (March 2014)

I am a devoted solver of P&A Magazine (you should be too… see the sidebar for a link), and I enjoyed solving the first Puzzle Boat (Foggy Brume’s more Mystery Hunt-sized epics hosted on the P&A website) with Chris Morse, though it felt a little bit unpolished. The next two, which I’ve solved with Mystik Spiral, have been a lot sleeker, with intriguing meta-structures and high production values. PB2 stands out as having a rather subtle theme that emerged over the course of solving, and as a result it’s one of the few hunts where I distinctly remember what our solving spreadsheet looked like; solving (and sometimes backsolving) the last handful of puzzles felt very much like clicking the last pieces into a jigsaw puzzle. The puzzles themselves were the usual quality I expect from P&A: fairly clued, elegant, and not always groundbreaking or super-challenging, but almost always entertaining. As a side note, I was the main proponent behind having SHORT thematic flavortext on every puzzle in the 2017 Mystery Hunt; Foggy’s style on P&A was a big influence on that feeling right to me.

8. MIT Mystery Hunt – SPIES (January 2006)

I’ve been participating in the Mystery Hunt since 1998, and there have been a lot of innovations since then, some of which have been good, some bad, and some well-intentioned but not perfected until later. SPIES stands out as a Hunt that set out to “ground” things… It came after two Hunts that were way too hard for different reasons (2003’s Matrix being way too long for the era, and 2004’s Time Bandits suffering from some poor testing/editing choices that made a lot of the puzzles unfair) and then 2005’s Normalville, which was mostly better-tuned puzzlewise, but which suffered from a particularly nasty meta that bottlenecked front-runner teams for uncomfortable amounts of time. The SPIES Hunt didn’t re-invent the wheel, but it featured consistently clean puzzles and metas, a very pretty and cleanly designed website, and a fun theme and character interactions. In addition, while the round structure was not incredibly novel, there was a nice feature referred to as “antepuzzles,” in which new rounds were not opened by solving the standard metapuzzles, but rather by solving separate metas based on environmental information that became available as you solved round puzzles. It’s a simple mechanic, and it’s not one that has become a mainstay in Hunt design, but for this Hunt it was great.

I’m also naturally biased toward my experience solving this Hunt, because it was my first year solving with the Evil Midnight Bombers What Bomb At Midnight, the Hunt team I co-founded with Jenn Braun. A lot of people complained at the time that we had put together a super-team due to my being very competitive, but honestly, the primary recruitment goals were to solve with people we’d enjoy writing with if we won, and to keep the team size fairly lean and mean (so that we wouldn’t need to track things on a wiki or spreadsheet… if you wanted to know something about a puzzle, we were small enough that you could just ask the room). I was excited but not sure how it would go, but it turned out we had really good chemistry, and there’s a reason we won both the Hunts we competed in (2006 and 2008) before we went our separate ways. I had won the Hunt three previous times with Setec, but that felt like a group I latched onto, whereas Evil Midnight felt like something we had built from the ground up. 2006 is the only year I cried when we found the coin.

(This is as good a time as any to address what you’ve probably already noticed in this blog… I capitalize Hunt when referring to the Mystery Hunt, and I usually leave it lowercase otherwise. It comes from years of referring to the Mystery Hunt for short as just “Hunt,” and it’s an idiosyncrasy I fully embrace.)

7. The Eleventh Hour (published in 1988)

Out-of-left-field pick! I’m counting a book as a puzzlehunt. Graeme Base’s The Eleventh Hour: A Curious Mystery was, along with the Usborne Puzzle Adventures series, among the coolest books I stumbled upon in my youth. The latter were a series of illustrated stories in which there was a puzzle to solve after every two pages; most of these puzzles were self-contained, so the books weren’t really puzzlehunts, though occasionally there was a puzzle that would require you to pay attention to details from earlier in the story (I remember Escape From Blood Castle being particularly cohesive). There was also a spinoff series called “Superpuzzles” that were much more puzzlehuntesque, and I remember these being much more intriguing and challenging. They’re out of print and I can’t say for sure whether I’d still be excited by them as a seasoned solver, but if you can get your hands on any of the Superpuzzles volumes I recommend them.

The Eleventh Hour is a gorgeously illustrated story of an elephant’s birthday party, during which one of a plethora of animal guests eats the birthday feast, and the reader is invited to figure out who it was. The pictures are dense with secrets, with tons of coded messages and also traditional mystery clues. One of the nice features of the book is that you can solve the mystery either as a traditional whodunnit, based on visual cues, or by combining all of the hidden messages, which is enough for me to qualify this as a puzzlehunt and put it on this list. (Though the high position on the list is undoubtedly nostalgia-based.) There is also a fun bonus challenge presented in the back even after you have the final answer.

As a warning, if you’re thinking of buying this book on Amazon and you have a tendency to use the “Look Inside” option before purchasing, don’t do it! Part of the book consists of detailed spoilers (which were actually sealed by a sticker in the hardback edition I had as a child). Also, there is a code in the back of the book that you’re supposed to use to confirm your answer (the name of the guilty party is used to decrypt the message). If you’re adept at puzzles, steer clear of the code until you have a legitimate guess… even as a kid, the code was simple enough for me to accidentally solve, which spoiled the ending (though I still enjoyed trying to figure out why the final answer was correct). Having said all this, the book is worth a look for any puzzle enthusiasts who haven’t seen it, and if you have kids who like puzzles, you should buy this yesterday.

6. WarTron Boston (June 2013)

One of the oldest puzzlehunt traditions is The Game, the sporadic series of west coast drive-around puzzlehunts that was mostly developed at Stanford (though Wikipedia says it originated earlier). As someone who has lived on the east coast my entire life and went to MIT as an undergrad, the Mystery Hunt was always the gold standard of puzzlehunting for me, but I know many Californians whose puzzling worlds revolved around The Game. (It also doesn’t help that the significant entry fees associated with typical Games had too many digits for my blood when I was growing up, even if I’d had the connections to find a team.) Now I’ve participated in one-and-a-half Games, and man, do I want to do more (but there haven’t been any since the ones I’ve done!). I’d also like to help run one someday, because helping run a Mystery Hunt apparently isn’t enough masochism for me, but I’d like to solve a few more first.

Wait, did he say one-and-a-half? Sort of. WarTron was originally run in August 2012 in Portland, Oregon, and a group of wonderful people volunteered to organize a second run of the content (with some changes) in the Boston area. When I first heard it was running in Boston, I wasn’t that interested in doing a second-run event, but a few teammates from the Mystery Hunt invited me to join a team; they actually hadn’t completed the application process, but the organizers asked them to play anyway because they were short on participants. That’s another reason I haven’t played as many Games as I’d like; there’s usually a limited capacity and an application process to get the slots. The first Game I wanted to play was Ghost Patrol, and the team that invited me to join was rejected, which was a lousy experience.

So anyway, I’m counting this as half a Game because (a) I didn’t participate in the real version, and some of the content in WarTron Boston was retrofitted to a new setting (and the main electronic devices the event revolved around didn’t work properly), (b) since we didn’t do much planning, we decided to squeeze five team members into a regular-sized car rather than the traditional van, which I can tell you is a TERRIBLE idea, and (c) I started experiencing cold symptoms about six hours in, which made about 12 hours of the event hellish, including one part where I took a nap in the car during what would otherwise have been the coolest and most thematic location (Funspot in Laconia, New Hampshire). Eventually, after I got a little sleep, which was harder in a five-person car than it would have been in a more appropriate vehicle, adrenaline overcame whatever virus I had, and I felt more myself on the second day.

But despite the health issues I was grappling with, WarTron Boston helped me get what is so neat about the whole Game concept. Walk-around puzzlehunts are good for a change in scenery, and it’s neat when the puzzles are embedded in the surroundings in some way, but when you literally have to drive miles to the next location and you have no idea what it’s going to look like and what you’re going to have to do when you arrive… that’s an adventure. And while I structurally prefer the Puzzle Boat/Mystery Hunt model where you can work on things in parallel and put a puzzle down if it’s annoying you, the Amazing Race aspects really make up for the linearity of the puzzles. More, please!

= = = = =

So for anybody who wants to guess #5 through #1, I’ll give you the additional info that the remaining five hunts are all from different years, and no two of those years are consecutive. Have fun, and I’ll post the rest early next week.

Recap: Cambridge Puzzle Hunt 2017

(This is a recap/review of the 2017 Cambridge Puzzle Hunt, which happened in January. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

I want to emphasize that this was a first-time event, and that a lot of the things I didn’t like about it commonly occur with first-time constructors. I consider it part of my job here to complain about those things, but it’s not intended to hurt the constructors’ feelings… Hopefully they, along with other future constructors, can learn from the discussion here.

You can probably guess from that last paragraph that I didn’t care for this event very much. In my last post, I talked about how I’m often unclear how much Australian puzzlehunts go through testing and editing. This was one of the first “Australian-style” puzzlehunts hosted by a school outside Australia, and I’m pretty confident that testing and editing was minimal (registration only opened a few days before the event, and after an initial announcement, the event was pushed back a few days and had fewer puzzles, so I suspect that a lot of what did come in did so at the last minute). Unfortunately, that last-minute nature was reflected in a lot of the puzzles.

Incidentally, due to the short notice and the fact that they advertised a “lone wolf” division, I didn’t bother to join a team and instead competed solo as Mystereo Cantos. I was actually announced as winning the division, despite the fact that I didn’t register as a UK student. (To those who actually placed in the main division, if you were actually eligible, did the organizers contact you about prizes? I don’t want anything, but I’m curious about how much follow-through there was.)

In addition to puzzle issues, there were some aesthetic/logistical issues that made it hard to get too engaged in the competition:

* There was very little consistency in puzzle format: Different fonts, title in different places, some puzzles with the puzzle number and some with just the title, one puzzle made in TeX, one appearing as an image file rather than a PDF, and so forth. This might not seem like a big deal, but it’s a bit like the brown M&M’s in the Van Halen rider… when an experienced solver sees that the constructors haven’t taken the time to give the puzzles a look that’s at least minimally consistent, it immediately makes them suspicious about whether there’s been attention to detail in other places.

* I found the website pretty clunky, especially the fact that the scoreboard listed tied teams in seemingly random order, rather than breaking ties by last solve time as specified in the rules. This means, for example, that if you were tied with a team on points, there was no way to see which team was actually in the lead, and since the top two teams did tie on points, that seems problematic. As a bit of a stats nut, one of the things I like about Aussie hunts is looking at the statuses of other teams and figuring out what we have to solve and when to pass or stay ahead of Team X.

* Also missing from the website compared to other Aussie hunts: Information on which puzzles have been solved and how often. Not all puzzlehunts have this feature, but Aussie hunts do, and it’s often important because sometimes a puzzle is flawed and unsolvable without hints… When that happens, it’s nice to know it’s not just your team that’s stuck. It’s probably pretty difficult to build a website from scratch with these features, but there are at least three organizations that already have a functioning one… why not ask them to share their code? (I hope they’d be willing to do so, for the good of the community.)

* It was also a little weird that the links to puzzles themselves were marked “Question”… That seemed nonstandard, and that and other idiosyncrasies in the website text suggest English might not be the first language of some of the designers, not that there’s anything inherently wrong with that.

* Several corrections were made during the hunt (to puzzles or hints), and no notification of them was sent to solvers. So unless you happened to randomly reload puzzles and notice the changes, the constructors were content to let you keep working on puzzles with errors in them.

* Finally, as is sometimes the case with Aussie-style hunts, the predetermined hints were sometimes helpful and sometimes staggeringly unhelpful. More frequently the latter, and I suspect that was due to the constructor guessing where solvers would get stuck, rather than actually having people solve the puzzle in advance and give feedback.

Puzzles I especially liked:

* A Martian Makes a Phone Call Home: This used an interesting data set in which some bits were easier to process than others, and I like puzzles where you have to gradually hack away with partial information. The answer was admittedly kind of random.

* Lips, Secure: Simple, well-executed. It’s a shame the right column had to have so many repeats, but I get that that’s a constraint of the puzzle mechanism.

* Colour: The first step of this was a bit awkward, but once you knew what mechanism to use at the beginning (which was one I was previously unfamiliar with), the rest of the puzzle worked in a really elegant way. I needed a hint to interpret the hint already given in the puzzle, but after that point, this was my favorite puzzle in the hunt.

* Lingo: Here I got stuck on the last step rather than the first step, and I think the use of the numbers 1 to 7 appearing in the grid in order is pretty misleading (since they’re ultimately used in a way where they could have been almost any numbers, and they’re not used in 1-7 order). But I thought the picture cluing was a lot of fun, so this gets a B+ from me.

Puzzles I especially disliked:

* Metathesis: I did lots of work on this puzzle, including the tedious part, which involved looking up a lot of dates. I then tried to do what the puzzle told me to do (in a few different ways) and got gibberish. I then decided that if I had a mistake in my ordering, it would just give me a different substitution cipher, and so I threw the encoded sentence into a cryptogram solver… which spit out the clue phrase without any need to look at the rest of the puzzle.

To quote the posted solution: “There was also a mistake here (which no solvers seemed to be bothered by) where the writer mixed up the dates, so the final phrase obtained is something else. However, the impact is minimal, and it’s easily deduced what the phrase should be.” I was actually extremely bothered by it, and the only reason it’s “easily deduced” is that you can bypass the entire puzzle by using a cryptogram solver. Here’s a tip for both puzzlehunt constructors and escape room operators: when your creation has errors and you’re defensive about it afterwards, it makes a bad solving experience much worse.

* Th Ilid: After solving the mostly de-voweled clues, I pretty quickly got the phrase COUNT VOWEL. Putting aside the fact that “COUNT VOWEL” doesn’t make any grammatical sense, as the solution acknowledges, there are many ways to interpret that phrase: counting the given vowels, the removed vowels, the vowels in the answers, the vowels in the answers that match the given vowel, the vowels in the answer that don’t, et cetera. With that big a solve space, this becomes a “guess what I’m thinking” puzzle; you only know you’ve done the right thing once it turns into something (deciding you want an ISBN based on the formatting helps, but that just tells you you want numbers less than ten). If anything, as a solver, you’re drawn to the extractions that would involve the answers, because that was the part of the puzzle you actually had to solve, and the words generated seem way too random for only their first letters to matter… But in fact, every letter in the answers except the first one is just noise.

According to the solution, the constructor thinks the clue “COUNT VOWEL” perhaps shouldn’t have been there (in favor of BOOK NUMBER). I think this shows a fundamental misunderstanding of what made the puzzle hard; having a hint that you wanted an ISBN could help narrow the search space, but the search space is only narrowed in the first place by telling solvers to count the vowels. There’s also no reason the answers couldn’t point at more than one phrase, since they’re otherwise unconstrained.

* Dionysia: First of all, I’m not sure how many people loaded Round 4 right when it was released, but the PDF that went up appeared to be a solution rather than the puzzle itself (it had some anagrams with their solutions also given in a different color, and while the final answer wasn’t given, I figured out what it was supposed to be by applying one more step). This was then taken down due to “technical difficulties” and replaced shortly after by Dionysia. I’m not sure if the latter was a backup, or if it got written in a hurry. At least not having a metapuzzle (or any other constraints on answers) makes it a lot easier to throw in a replacement puzzle. A similar production error happened in a Mystery Hunt puzzle around the turn of the century (1998, maybe?), where instead of being given a grid of 49 clues that would resolve to 49 of the states, with the answer being the missing one, we were given a list of the 49 states. This was very confusing for a moment, but then very easy to solve.

Solving this puzzle required you to completely disregard most of the data the puzzle gave you (Oscar years, the number that was removed from one film in each group, which movie or pair of movies was missing from each list, which one won), ignore the fact that the “number” film jumped from the first position to the last position in the last set, and most egregiously, interpret the opposite of “sense” as “sensibility.” Reading the solution, there was a very meandering path you were intended to follow to justify this last step, but it’s inconsistent with everything else in the puzzle. Boo.

* Trojan Hippo, Archimedes’ Calculus: In a set of sixteen puzzles, there’s no need to have two different puzzles that both revolve around the Greek alphabet.

* Calligraphy: I came nowhere close to solving this, and I think few if any teams ended up getting points for it (if the website gave those stats, I’d tell you for sure). Looking at the solution, I would say the very last step makes an already-difficult puzzle much much harder for no good reason.

Despite my complaints, I would still love to see this event become an annual mainstay on the puzzlehunt calendar; we can always use more puzzle competitions! But for it to be successful, the people in charge have to make sure the puzzles are written well in advance, and then spend time editing and testing to make sure they’re fair and reasonable. Consistent puzzle formatting and a more robust website will also help make this a more user-friendly puzzlehunt, and there’s a year to work on that starting now, but the puzzles need to be clean or the rest won’t matter.