By Request: All-Time Top Ten Puzzlehunts (#5-#1)

And we’re back! (Sorry this resumed about two weeks later than I intended.)

5. NPL Convention Extravaganza – Small Town News (July 2003)

The annual National Puzzlers’ League convention has three nights of official program activities, culminating in the “extravaganza,” a puzzlehunt that usually runs most teams about 2-4 hours. (I should say that it’s almost always a puzzlehunt; the first convention I attended, in Newark, instead had a puzzle carnival with various competitive midway games. I actually missed the extravaganza that year because I was dealing with a personal crisis, but from what I’m told, I didn’t miss much.) Given the time frame and the audience (many of whom are more into casual individual puzzle solving than interwoven puzzle experiences), extravaganzas don’t tend to have much in the way of sophisticated structure; when they do, there are often complaints. As a result, while I always look forward to the extravaganza, I rarely find one very memorable, with one notable exception.

The 2003 extravaganza, at a convention held in Indianapolis, was written by Rick Rubenstein, Andrew Murdoch, and Andrew Hertz. Teams were given all the puzzles at once, which is not my favorite puzzlehunt structure, but in this case, “all the puzzles” meant a newspaper. The entire hunt consisted of an 8-page custom newspaper in which every element of the paper, from the comics to the photos to the horoscope to the bridge column to every article, contained puzzle content. Furthermore, the puzzle answers all fit together in a logical way; rather than having a metapuzzle that just used the answers as inputs, the goal was to help the police department stop a sinister plot, and chunks of the paper combined to reveal different elements of the plot. At the end, rather than giving a final answer, we were required to explain the plot to the moderators, justifying our deductions with proof from the paper. (In fact, if I remember correctly, we had subverted one of the puzzles and were asked to go back and figure out the puzzle we skipped when our explanation wasn’t complete… we still finished first in about ninety minutes, because for some reason, every time Rick co-writes the extravaganza, my team wins.) I’m a big fan of puzzles embedded in other media when they work, and in this case, everything was assembled in a very elegant and satisfying manner.

So far, I have co-written two NPL Con extravaganzas: an award-show-themed one in Los Angeles with Francis Heaney and Dave Tuller, and an auction-themed one in Seattle with Todd McClary, Kevin Wald, and Mike Selinker. Check with me again in five months and the count will be up to three.

4. MIT Mystery Hunt – 20,000 Puzzles Under the Sea (January 2015)

2015 was my first year returning to Setec Astronomy after a nine-year hiatus. I wrote the 2005 Hunt (Normalville) with them, and they decided to become the Mystery Hunt Writer’s Retirement Home or the Mystery Hunt Tavern, depending on who you ask, while I went off to win a few Hunts with Evil Midnight and then join a bunch of my college friends on the Tetazoo team (whose name changes every year) until we ran the Hunt in 2014. I was ready for a change in pace after that, and it turned out that most of my best friends had settled on Setec, so Jackie and I joined them once I was assured that, while not everyone on the team was ready to win, if we did finish first we would not run away from the coin screaming.

I didn’t care for the 2013 Mystery Hunt and helped write 2014, so in 2015 I was looking for my first enjoyable Mystery Hunt solve in a while. After an initial group of puzzles that looked like a traditional round structure, we assembled our submarine and started moving downward, with a super-long linear Hunt web page in which every puzzle solve helped us dive deeper, and we encountered new puzzle links as we approached them. I think this was a great example of structure matching theme; not every Hunt story lends itself to traveling further and further along a linear path, but diving to the bottom of the sea obviously does. This also meant that while you wouldn’t know what was going to unlock next, you could sometimes see the next thing coming… some of these we could identify by silhouette, and some were exciting to reveal.

There was also a very novel round of physical object puzzles that were given to us in a locked treasure chest. As it turned out, we secured this chest at a time when few people were awake, and when I showed up early in the morning I was not ready to process a batch of items no one else had made progress on. I didn’t love the late portions of this Hunt’s story, and I thought the endgame was waaaaay too long (I actually slept through it due to a delay, but I’m going by conversations with people on my team and on others), but it’s one of the smoothest and most satisfying Hunts I’ve solved in recent years.

3. The Haystack (August 2006)

Once upon a time, Eric Berlin contacted me and asked if I wanted to come to New York City to do a puzzlehunt with him. I had heard of The Haystack (presumably named after the idea that you’re looking for a needle in one) but had never really considered playing, since this was a decade ago, when my threshold for puzzle travel was higher (as my salary was lower).

I don’t remember a ton of details about the puzzle structure; I remember there were nine pairs of puzzles, and in each pair, you needed to be in a particular Manhattan location to solve the puzzle. I think solving the first gave you the location, which potentially helped you make progress on the second, but I won’t commit to that being right. What I do remember is finding the location tie-ins much more satisfying than in other walkaround hunts. New York City is nothing if not data-rich, and the author(s) found really creative ways to require information from the surrounding environment to make the puzzles solvable. The final metapuzzle somehow involved filling in a sudoku grid with data from the nine criminals and crimes we’d identified over the course of the day… or in our case, seven or so of those criminals, and at the bar where we were meeting at the end of the line, I was struggling to short-circuit the final puzzle with partial information. I was convinced I was in a race against time, until, with a minute or so left, one of the people who had solved the meta confirmed I wasn’t doing anything close to the right thing. (I’m not sure I ever actually figured out what to do. It’s sad that these puzzles aren’t archived anywhere, as far as I know.)

I really enjoyed The Haystack, and after it ended, I was very excited to participate again in the next one. So of course, 2006 was the last Haystack.

2. The Famine Game (September 2013)

When Scott asked me to list my top ten puzzlehunts, I knew the top two within seconds. The questions that remained were (a) what are the other eight, and (b) what order would the top two go in? After some reflection, I’m declaring The Famine Game second by a razor-thin margin, even though it was one of my most exciting puzzle experiences.

The Famine Game was the first and only first-run Game I’ve done; it’s also, to my knowledge, the only one so far on the east coast. The event had a Hunger Games theme and thus took place in the Capitol (Washington, DC and the surrounding area). Our team was called Apetitius Giganticus (one of the various scientific names for Wile E. Coyote), and we rented a van that was much, much too large, which made driving and parking very challenging at times, though thankfully my awesome teammates never made me drive.

I could go on for hours about all the features I loved about the Famine Game: The consistently great puzzles. The creative thematic locations. The “kill” videos our app played every time we defeated another team (puzzles yielded methods of murder, and when you solved a puzzle the game app told you which team you’d defeated… naturally, our app claimed every team except ours was eventually knocked out). The weird hallucinogenic effect on our app when we were stung by tracker jackers. The simulation of the second book’s “clock”-structured Games that stuffed twelve rotating mini-puzzle challenges into an elementary school after hours. The team evaluation challenges the night before the Game officially began. The XBox, which remains the most technically dazzling physical puzzle I’ve ever solved. The fantastic improv performances from several parodies of Hunger Games characters. I don’t remember sleeping, and yet I don’t remember getting very tired… most of it was just that damn good.

The reason I say “most” is the same reason I decided to rank this as #2; the first half to two-thirds of the event, with the goal of eliminating the opposition and then navigating the Clock, was really enthralling, with heavy puzzle variety and compelling immersion. Once we got to the part of the plot where we were assaulting the Capitol, the puzzles got a little more average and the story felt less exciting. The Famine Game came in like a lion and went out like a lamb, but it was a freaking awesome lion. It was Mufasa. (There was also another negative that wasn’t the organizers’ fault… ours was one of multiple vans that were broken into when we parked in DC for the last phase. Another team had all their computers stolen… I believe we lost a computer, a tablet, and a power cord. I lugged all my electronics around for much of the last part of the event, thinking we’d be returning to the van soon. I felt awful for the people who were robbed, but I’ve gotten over it. If my computer had been stolen, I would still be holding a grudge.)

It occurs to me that Eric Berlin was on my team for #2 and #3. Maybe puzzles are just more fun when he’s around.

1. MIT Mystery Hunt – Video Games (January 2011)

When we wrote the Escape From Zyzzlvaria Mystery Hunt (2009), we incorporated a lot of elements I was very excited about. Opening a new round is one of the most exciting parts of a Mystery Hunt, and because of that, I really like distinctly themed rounds (which were one of the strong elements of the 2004 Time Bandits Hunt). With Zyzzlvaria, we wanted those rounds to feel distinct in terms of both theme and structure. We also liked the idea of advancing based on a point system, so that we could eventually grant point boosts that would give larger benefits to the teams in the back that needed them than to the teams in contention.

I think we did a decent job with these elements in Zyzzlvaria. But the 2011 constructors had a lot of the same goals and realized their full potential with the video game Hunt. First of all, they utilized a more sophisticated point system that also accounted for the continuous passage of time (the other Hunt in my top ten, 2015, used a variation on that system). As for the rounds, the Hunt opened with a Super Mario Brothers theme with no indication that there were any other video games coming, so the first time we opened the Mega Man round, it was super-exciting. I don’t remember many individual puzzles from this Hunt (these days that’s a good sign, because with so many Hunt puzzles in the modern era, I remember the lowlights more than the highlights), but I remember the metapuzzles vividly as creative constructions that reflected the unique structures of their rounds. Yet unlike the Zyzzlvaria round structures, which we cooked up without constraints, these structures perfectly suited the video games they were based on. I was floored by the Mega Man round structure and meta, and had I actually played Civilization before this Hunt (I have since), I would have gone crazy over that round as well. And I should note that the beautiful website really brought all these varied themes to life.

The year before this Hunt was my first with the team that would become Alice Shrugged, and it was the dreaded year where someone on my team scoffed at me when I wanted to keep solving after another team found the coin. That year all but about a dozen of our team members abandoned ship early, but the rest of us pressed on, reached the end of the Hunt… and were told there were no plans to run the endgame for us because the people involved had gone to sleep. (Craig Kasper came and described it to us, which was nice of him, but it felt like a serious bait and switch.) We weren’t the first team to finish the 2011 Hunt, but the organizers were ready to give us the same rich endgame experience that the winners got, including a very high-production-value GLaDOS confrontation. I’m grateful to them for that, and I’ve tried to give back by making sure the Hunts I’ve co-run since had endgames that could be reproduced for everybody who earned them.

Podcast: Room Escape Divas

While you’re waiting on the edge of your seat for my top five puzzlehunts post, an interview I did a few weeks ago with Room Escape Divas has just hit the internet. I haven’t listened to it yet, but we talked for about two hours, and from the episode length, it looks like they didn’t cut very much. Topics may or may not include:

  • The Mystery Hunt!
  • The Cambridge Puzzle Hunt!
  • BAPHL!
  • Duck Konundrums!
  • Pet peeves about puzzlehunts and escape rooms!
  • My puzzle competition archnemeses!
  • The World Puzzle Championships!
  • How great my wife is!

Also, I recorded a karaoke Radiohead parody for the opening, so show up for that at minimum. And I don’t remember much of what I said, so if I said anything offensive, let me know so I can begin damage control.

Upcoming: Galactic Puzzle Hunt 2017

Are you ready for some puzzleball?

Perennial puzzlehunt contenders [three airplanes] Galactic Trendsetters [three more airplanes] are bestowing upon us a six-day Aussie-style puzzlehunt starting on March 14. The name has a year in it, which makes it sound like the event might be recurring, and the debut theme is the “Puzzleball Championships.”

There are some interesting rules innovations, such as replacing the canned hints with yes/no questions (potentially an improvement, but it sounds like it could be a real bear for the organizers) and breaking ties via adjusted average solve time, where the adjustment is that anything solved in the first day counts as a full day (I actually hate this idea, because it means ties will likely be broken by teams’ times on the most flawed puzzles, but we’ll see what happens!).
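
If I’m reading the tiebreaker right, the computation looks something like the sketch below (the kickoff time and the exact form of the first-day floor are my guesses from the announcement, not the organizers’ actual code):

    from datetime import datetime, timedelta

    HUNT_START = datetime(2017, 3, 14, 12, 0)  # hypothetical kickoff time
    ONE_DAY = timedelta(days=1)

    def adjusted_average_solve_time(solve_times):
        # Each solve is measured from the hunt start, but any solve within
        # the first day is rounded up to a full day.
        adjusted = [max(t - HUNT_START, ONE_DAY) for t in solve_times]
        return sum(adjusted, timedelta()) / len(adjusted)

Note what the floor does: every first-day solve contributes the identical value, so any difference between tied teams comes entirely from the puzzles that took longest… which, in practice, are often the flawed ones.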

I enjoyed BAPHL 11, which I believe came from some of the constructors involved in this project, so I have high hopes. Now if it had just been a week later to coincide with my spring break…

By Request: All-Time Top Ten Puzzlehunts (#10-#6)

In a thread on Facebook where I asked what sort of posts other than recaps readers would like to see, Scott Weiss asked for my top ten puzzlehunts of all time, which apparently I proposed doing at some point in the past. I’m a bit obsessive about ranking things (though not as much as I was when I was younger, and also not as much as Craig Cackowski is), so I couldn’t turn this suggestion down.

Below are #10 through #6 on my list at the moment; ask me in two weeks and the ranking could be totally different. It’s also a bit hard to separate the objective quality of a puzzlehunt from my personal experience… for example, I loved some of my earliest MIT Mystery Hunt experiences because they were novel and exciting at the time, but compared to modern hunts, the puzzles in many of them are a bit flat. I’m disqualifying any puzzlehunt I helped write for obvious reasons, and there are probably many hunts that were great that won’t make it because I didn’t participate. I also admit to leaning toward options that created a good variety of hunt sources. I’ll note one example of that in the first entry.

The top half of my list is etched in stone and will appear in a follow-up post in the near future. Feel free to post your own all-time top 5 or 10 or 100 in the comments, and if you want to try to guess my top five, that might be fun too. (I’ll throw in a hint about those at the end.)

= = = = =

10. BAPHL 13 – Monkey Island (July 2015)

I’m going to start the list by immediately cheating, because there are almost certainly some MIT Mystery Hunts that yielded more total enjoyment than BAPHL 13 did; that’s inevitable when you compare a 40-hour puzzling experience to a 3-hour one. But I’m giving BAPHL 13 a spot for a number of reasons. First, BAPHL, the Boston area’s series of walk-around puzzlehunts, is otherwise unrepresented on this list (probably due to the aforementioned shortness), and I love BAPHL enough that I wanted it to appear. Second, most BAPHLs occur in Boston/Cambridge/Somerville, with stops on the Red Line used particularly frequently, and I really like when the organizers think outside the box and take me to an unfamiliar location (as we did when we made everybody trek to Providence for BAPHL 9: Forbidden Rhode Island). BAPHL 13 was held on one of the Boston Harbor Islands, which I’d never visited, and the ferry trip to the hunt site made the whole thing feel more like an adventure. And finally, I grew up with Sierra and LucasArts adventure games and especially enjoyed the latter, and Monkey Island is one of my favorite computer game series. So this was a theme that hit my nostalgia button, and it made the experience even more fun. The puzzles here were solid if not tremendously memorable, but the overall staging made this my favorite BAPHL I’ve solved (slightly edging out 12), and it squeaks onto the list.

9. The Puzzle Boat 2 (March 2014)

I am a devoted solver of P&A Magazine (you should be too… see the sidebar for a link), and I enjoyed solving the first Puzzle Boat (the first of Foggy Brume’s more Mystery Hunt-sized epics hosted on the P&A website) with Chris Morse, though it felt a little bit unpolished. The next two, which I’ve solved with Mystik Spiral, have been a lot sleeker, with intriguing meta-structures and high production values. PB2 stands out as having a rather subtle theme that emerged over the course of solving, and as a result it’s one of the few hunts where I distinctly remember what our solving spreadsheet looked like; solving (and sometimes backsolving) the last handful of puzzles felt very much like clicking the last pieces into a jigsaw puzzle. The puzzles themselves were the usual quality I expect from P&A: fairly clued, elegant, and not always groundbreaking or super-challenging, but almost always entertaining. As a side note, I was the main proponent behind having SHORT thematic flavortext on every puzzle in the 2017 Mystery Hunt; Foggy’s style on P&A was a big influence on that feeling right to me.

8. MIT Mystery Hunt – SPIES (January 2006)

I’ve been participating in the Mystery Hunt since 1998, and there have been a lot of innovations since then, some of which have been good, some bad, and some well-intentioned but not perfected until later. SPIES stands out as a Hunt that set out to “ground” things… It came after two Hunts that were way too hard for different reasons (2003’s Matrix being way too long for the era, and 2004’s Time Bandits suffering from some poor testing/editing choices that made a lot of the puzzles unfair) and then 2005’s Normalville, which was mostly better-tuned puzzlewise, but which suffered from a particularly nasty meta that bottlenecked front-runner teams for uncomfortable amounts of time. The SPIES Hunt didn’t re-invent the wheel, but it featured consistently clean puzzles and metas, a very pretty and cleanly designed website, and a fun theme and character interactions. In addition, while the round structure was not incredibly novel, there was a nice feature referred to as “antepuzzles,” in which new rounds were opened not by solving the standard metapuzzles, but rather by solving separate metas based on environmental information that became available as you solved round puzzles. It’s a simple mechanic, and it’s not one that has become a mainstay in Hunt design, but for this Hunt it was great.

I’m also naturally biased toward my experience solving this Hunt, because it was my first year solving with the Evil Midnight Bombers What Bomb At Midnight, the Hunt team I co-founded with Jenn Braun. A lot of people complained at the time that we had put together a super-team due to my being very competitive, but honestly, the primary recruitment goals were to solve with people we’d enjoy writing with if we won, and to keep the team size fairly lean and mean (so that we wouldn’t need to track things on a wiki or spreadsheet… if you wanted to know something about a puzzle, we were small enough that you could just ask the room). I was excited but not sure how it would go; it turned out we had really good chemistry, and there’s a reason we won both the Hunts we competed in (2006 and 2008) before we went our separate ways. I had won the Hunt three previous times with Setec, but that felt like a group I latched onto, whereas Evil Midnight felt like something we had built from the ground up. 2006 is the only year I cried when we found the coin.

(This is as good a time as any to address what you’ve probably already noticed in this blog… I capitalize Hunt when referring to the Mystery Hunt, and I usually leave it lowercase otherwise. It comes from years of referring to the Mystery Hunt for short as just “Hunt,” and it’s an idiosyncrasy I fully embrace.)

7. The Eleventh Hour (published in 1988)

Out-of-left-field pick! I’m counting a book as a puzzlehunt. Graeme Base’s The Eleventh Hour: A Curious Mystery and the Usborne Puzzle Adventures series were the coolest books I stumbled upon in my youth. The latter were a series of illustrated stories in which there was a puzzle to solve after every two pages; most of these puzzles were self-contained, so the books weren’t really puzzlehunts, though occasionally there was a puzzle that would require you to pay attention to details from earlier in the story (I remember Escape From Blood Castle being particularly cohesive). There was also a spinoff series called “Superpuzzles” that was much more puzzlehuntesque, and I remember those books being considerably more intriguing and challenging. They’re out of print and I can’t say for sure whether I’d still be excited by them as a seasoned solver, but if you can get your hands on any of the Superpuzzles volumes, I recommend them.

The Eleventh Hour is a gorgeously illustrated story of an elephant’s birthday party, during which one of a plethora of animal guests eats the birthday feast, and the reader is invited to figure out who it was. The pictures are dense with secrets, with tons of coded messages and also traditional mystery clues. One of the nice features of the book is that you can solve the mystery either as a traditional whodunnit, based on visual cues, or by combining all of the hidden messages, which is enough for me to qualify this as a puzzlehunt and put it on this list. (Though the high position on the list is undoubtedly nostalgia-based.) There is also a fun bonus challenge presented in the back even after you have the final answer.

As a warning, if you’re thinking of buying this book on Amazon and you have a tendency to use the “Look Inside” option before purchasing, don’t do it! Part of the book consists of detailed spoilers (which were actually sealed by a sticker in the hardback edition I had as a child). Also, there is a code in the back of the book that you’re supposed to use to confirm your answer (the name of the guilty party is used to decrypt the message). If you’re adept at puzzles, steer clear of the code until you have a legitimate guess… the code was simple enough that I accidentally solved it as a kid, which spoiled the ending (though I still enjoyed trying to figure out why the final answer was correct). Having said all this, the book is worth a look for any puzzle enthusiasts who haven’t seen it, and if you have kids who like puzzles, you should buy this yesterday.

6. WarTron Boston (June 2013)

One of the oldest puzzlehunt traditions is The Game, the sporadic series of west coast drive-around puzzlehunts that was mostly developed at Stanford (though Wikipedia says it originated earlier). As someone who has lived on the east coast my entire life and went to MIT as an undergrad, the Mystery Hunt was always the gold standard of puzzlehunting for me, but I know many Californians whose puzzling worlds revolved around The Game. (It also doesn’t help that the significant entry fees associated with typical Games had too many digits for my blood when I was growing up, even if I’d had the connections to find a team.) Now I’ve participated in one-and-a-half Games, and man, do I want to do more (but there haven’t been any since the ones I’ve done!). I’d also like to help run one someday, because helping run a Mystery Hunt apparently isn’t enough masochism for me, but I’d like to solve a few more first.

Wait, did he say one-and-a-half? Sort of. WarTron was originally run in August 2012 in Portland, Oregon, and a group of wonderful people volunteered to organize a second run of the content (with some changes) in the Boston area. When I first heard it was running in Boston, I wasn’t that interested in doing a second-run event, but a few teammates from the Mystery Hunt invited me to join a team; they actually hadn’t completed the application process, but the organizers asked them to play anyway because they were short on participants. That’s another reason I haven’t played as many Games as I’d like; there’s usually a limited capacity and an application process to get the slots. The first Game I wanted to play was Ghost Patrol, and the team that invited me to join was rejected, which was a lousy experience.

So anyway, I’m counting this as half a Game because (a) I didn’t participate in the real version, and some of the content in WarTron Boston was retrofitted to go in a new setting (and the main electronic devices the event revolved around didn’t work properly), (b) since we didn’t do much planning, we decided to squeeze five team members into a regular-sized car rather than the traditional van, which I can tell you is a TERRIBLE idea, and (c) I started experiencing cold symptoms about six hours in, which made about 12 hours of the event hellish, including one part where I took a nap in the car during what would otherwise have been the coolest and most thematic location (Funspot in Laconia, New Hampshire). Eventually, after I got a little sleep (harder to come by in a five-person car than in a more appropriate vehicle), adrenaline overcame whatever virus I had, and I felt more like myself on the second day.

But despite the health issues I was grappling with, WarTron Boston helped me understand what is so neat about the whole Game concept. Walk-around puzzlehunts are good for a change in scenery, and it’s neat when the puzzles are embedded in the surroundings in some way, but when you literally have to drive miles to the next location and you have no idea what it’s going to look like and what you’re going to have to do when you arrive… that’s an adventure. And while I structurally prefer the Puzzle Boat/Mystery Hunt model, where you can work on things in parallel and put a puzzle down if it’s annoying you, the Amazing Race aspects really make up for the linearity of the puzzles. More, please!

= = = = =

So for anybody who wants to guess #5 through #1, I’ll give you the additional info that the remaining five hunts are all from different years, and no two of those years are consecutive. Have fun, and I’ll post the rest early next week.

Recap: Cambridge Puzzle Hunt 2017

(This is a recap/review of the 2017 Cambridge Puzzle Hunt, which happened in January. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

I want to emphasize that this was a first-time event, and that a lot of the things I didn’t like about it commonly occur with first-time constructors. I consider it part of my job here to complain about those things, but it’s not intended to hurt the constructors’ feelings… Hopefully they, along with other future constructors, can learn from the discussion here.

You can probably guess from that last paragraph that I didn’t care for this event very much. In my last post, I talked about how it’s often unclear to me how much testing and editing Australian puzzlehunts go through. This was one of the first “Australian-style” puzzlehunts hosted by a school outside Australia, and I’m pretty confident that testing and editing were minimal (registration only opened a few days before the event, and after an initial announcement, the event was pushed back a few days and had fewer puzzles than originally planned, so I suspect that a lot of what did come in did so at the last minute). Unfortunately, that last-minute nature was reflected in a lot of the puzzles.

Incidentally, due to the short notice and the fact that they advertised a “lone wolf” division, I didn’t bother to join a team and instead competed solo as Mystereo Cantos. I was actually announced as the winner of that division, despite the fact that I didn’t register as a UK student. (To those who placed in the main division, if you were eligible: did the organizers contact you about prizes? I don’t want anything, but I’m curious about how much follow-through there was.)

In addition to puzzle issues, there were some aesthetic/logistical issues that made it hard to get too engaged in the competition:

* There was very little consistency in puzzle format: different fonts, titles in different places, some puzzles with the puzzle number and some with just the title, one puzzle made in TeX, one appearing as an image file rather than a PDF, and so forth. This might not seem like a big deal, but it’s a bit like the brown M&M’s in the Van Halen rider… when an experienced solver sees that the constructors haven’t taken the time to give the puzzles a look that’s at least minimally consistent, it immediately makes them suspicious about whether there’s been attention to detail in other places.

* I found the website pretty clunky, especially the fact that the scoreboard listed tied teams in seemingly random order, rather than breaking ties by last solve time as specified in the rules (see the sketch after this list). This meant, for example, that if you were tied with a team on points, there was no way to see which team was actually in the lead, and since the top two teams did tie on points, that seems problematic. As a bit of a stats nut, one of the things I like about Aussie hunts is looking at the statuses of other teams and figuring out what we have to solve, and when, to pass or stay ahead of Team X.

* Also missing from the website compared to other Aussie hunts: Information on which puzzles have been solved and how often. Not all puzzlehunts have this feature, but Aussie hunts do, and it’s often important because sometimes a puzzle is flawed and unsolvable without hints… When that happens, it’s nice to know it’s not just your team that’s stuck. It’s probably pretty difficult to build a website from scratch with these features, but there are at least three organizations that already have a functioning one… why not ask them to share their code? (I hope they’d be willing to do so, for the good of the community.)

* It was also a little weird that the links to the puzzles themselves were marked “Question”… That seemed nonstandard, and that and other idiosyncrasies in the website text suggest English might not be the first language of some of the designers, not that there’s anything inherently wrong with that.

* Several corrections were made during the hunt (to puzzles or hints), and no notice of them was sent to solvers. So unless you happened to randomly reload puzzles and notice the change, the constructors were content to let you keep working on puzzles with errors in them.

* Finally, as is sometimes the case with Aussie-style hunts, the predetermined hints were sometimes helpful and sometimes staggeringly unhelpful. More frequently the latter, and I suspect that was due to the constructor guessing where solvers would get stuck, rather than actually having people solve the puzzle in advance and give feedback.
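
(Coming back to the scoreboard gripe above: the ordering the rules describe is nearly a one-liner. Here’s a minimal sketch; the TeamRow shape is hypothetical, not the actual site’s data model.)

    from datetime import datetime
    from typing import List, NamedTuple

    class TeamRow(NamedTuple):
        name: str
        points: int
        last_solve: datetime  # timestamp of the team's most recent scoring solve

    def scoreboard_order(rows: List[TeamRow]) -> List[TeamRow]:
        # Higher points first; ties on points broken by earlier last solve.
        return sorted(rows, key=lambda r: (-r.points, r.last_solve))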

Puzzles I especially liked:

* A Martian Makes a Phone Call Home: This used an interesting data set in which some bits were easier to process than others, and I like puzzles where you have to gradually hack away with partial information. The answer was admittedly kind of random.

* Lips, Secure: Simple, well-executed. It’s a shame the right column had to have so many repeats, but I get that that’s a constraint of the puzzle mechanism.

* Colour: The first step of this was a bit awkward, but once you knew what mechanism to use at the beginning (which was one I was previously unfamiliar with), the rest of the puzzle worked in a really elegant way. I needed a hint to interpret the hint already given in the puzzle, but after that point, this was my favorite puzzle in the hunt.

* Lingo: Here I got stuck on the last step rather than the first step, and I think the use of the numbers 1 to 7 appearing in the grid in order is pretty misleading (since they’re ultimately used in a way where they could have been almost any numbers, and they’re not used in 1-7 order). But I thought the picture cluing was a lot of fun, so this gets a B+ from me.

Puzzles I especially disliked:

* Metathesis: I did lots of work on this puzzle, including the tedious part, which involved looking up a lot of dates. I then tried to do what the puzzle told me to do (in a few different ways) and got gibberish. I then decided that if I had a mistake in my ordering, it would just give me a different substitution cipher, and so I threw the encoded sentence into a cryptogram solver… which spit out the clue phrase without any need to look at the rest of the puzzle.

To quote the posted solution: “There was also a mistake here (which no solvers seemed to be bothered by) where the writer mixed up the dates, so the final phrase obtained is something else. However, the impact is minimal, and it’s easily deduced what the phrase should be.” I was actually extremely bothered by it, and the only reason it’s “easily deduced” is that you can bypass the entire puzzle by using a cryptogram solver. Here’s a tip for both puzzlehunt constructors and escape room operators: when your creation has errors and you’re defensive about them afterwards, it makes a bad solving experience much worse.

* Th Ilid: After solving the mostly de-voweled clues, I pretty quickly got the phrase COUNT VOWEL. Putting aside the fact that “COUNT VOWEL” doesn’t make any grammatical sense, as the solution acknowledges, there are many ways to interpret that phrase: counting the given vowels, the removed vowels, the vowels in the answers, the vowels in the answers that match the given vowel, the vowels in the answers that don’t, et cetera. With that big a solve space, this becomes a “guess what I’m thinking” puzzle; you only know you’ve done the right thing once it turns into something (deciding you want an ISBN based on the formatting helps, but that just tells you you want numbers less than ten). If anything, as a solver, you’re drawn to the extractions that would involve the answers, because the answers were the part of the puzzle you actually had to solve, and the words generated seem way too random for only their first letters to matter… But in fact, every letter in the answers except the first one is just noise.

According to the solution, the constructor thinks the clue “COUNT VOWEL” perhaps shouldn’t have been there (in favor of BOOK NUMBER). I think this shows a fundamental misunderstanding of what made the puzzle hard; having a hint that you wanted an ISBN could help narrow the search space, but the search space is only narrowed in the first place by telling solvers to count the vowels. There’s also no reason the answers couldn’t point at more than one phrase, since they’re otherwise unconstrained.

* Dionysia: First of all, I’m not sure how many people loaded Round 4 right when it was released, but the PDF that went up appeared to be a solution rather than the puzzle itself (it had some anagrams with their solutions also given in a different color, and while the final answer wasn’t given, I figured out what it was supposed to be by applying one more step). This was then taken down due to “technical difficulties” and replaced shortly after by Dionysia. I’m not sure if the latter was a backup, or if it got written in a hurry. At least not having a metapuzzle (or any other constraints on answers) makes it a lot easier to throw in a replacement puzzle. A similar production error happened in a Mystery Hunt puzzle around the turn of the century (1998, maybe?): instead of a grid of 49 clues that would resolve to 49 of the states (with the answer being the missing one), we were given a list of the 49 states themselves. This was very confusing for a moment, but then very easy to solve.

Solving this puzzle required you to completely disregard most of the data the puzzle gave you (Oscar years, the number that was removed from one film in each group, which movie or pair of movies was missing from each list, which one won), ignore the fact that the “number” film jumped from the first position to the last position in the last set, and most egregiously, interpret the opposite of “sense” as “sensibility.” Reading the solution, there was a very meandering path you were intended to follow to justify this last step, but it’s inconsistent with everything else in the puzzle. Boo.

* Trojan Hippo, Archimedes’ Calculus: In a set of sixteen puzzles, there’s no need to have two different puzzles that both revolve around the Greek alphabet.

* Calligraphy: I came nowhere close to solving this, and I think few if any teams ended up getting points for it (if the website gave those stats, I’d tell you for sure). Looking at the solution, I would say the very last step makes an already-difficult puzzle much much harder for no good reason.

Despite my complaints, I would still love to see this event become an annual mainstay on the puzzlehunt calendar; we can always use more puzzle competitions! But for it to be successful, the people in charge have to make sure the puzzles are written well in advance, and then spend time editing and testing to make sure they’re fair and reasonable. Consistent puzzle formatting and a more robust website will also help make this a more user-friendly puzzlehunt, and there’s a year to work on that starting now, but the puzzles need to be clean or the rest won’t matter.

Recap: SUMS 2016

(This is a recap/review of the 2016 SUMS Puzzle Hunt, which happened in late December. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

Once upon a time, there was a yearly triumvirate of Australian puzzlehunts: MUMS (based at the University of Melbourne), SUMS (based at the University of Sydney), and CiSRA (sponsored by a research company in Australia). CiSRA stopped running their event in 2014, which brought the yearly number down to two, but then some CiSRA Puzzle Competition alums created the mezzacotta Puzzle Competition, which ran for the first time this year. However, in mid-December, it was still looking like this would be a two-Aussie-hunt year, because SUMS had not occurred. Then just before Christmas, there was an announcement that there would be a 2016 SUMS just under the wire between Christmas and New Year’s, on less than a week’s notice.

Daily puzzle releases for Aussie hunts have traditionally been at noon in Australia, but the most recent mezzacotta and SUMS both released in the evening. This is pretty awful for Americans, especially if you’re on the east coast; for me, it’s gone from releases at 9pm or 10pm, meaning I can get a few hours of solving in before I need to sleep, to releases around 3am. That almost certainly means waiting to solve until the next day (after any west coast teammates have picked off the low-hanging fruit), though mezzacotta happened in the summer when my schedule is flexible and I was crazy enough to go to sleep early and wake up for the release. In any case, just as I think the MIT Mystery Hunt should be designed for students, and anybody from outside the MIT community should be an afterthought, I feel the same way here… if the new release time is better for Australians, Americans (including myself) should suck it up. But I won’t be sad if MUMS sticks with noon releases this year.

I solved SUMS 2016 with Killer Chicken Bones, a team that usually consists of some subset of Brent Holman, Rich Bragg, John Owens, Kenny Young, Todd Etter, Ian Tullis, and myself (if space allows, since I’m the most recent addition). This time around Todd and Ian sat out, and the five of us remaining came in seventh, solving 15 out of 20 puzzles. That’s pretty low for us, as we usually solve most if not all of the puzzles, and often we solve them all with no hints; this year, even with three hints, five of the puzzles eluded us. In fact, only one team, one of the usual plugh subteams [obligatory fist shake at plugh] solved all twenty puzzles, so I don’t think it’s a stretch to say this year was hard.

I am always curious how much testing is done with the Australian hunts… My guess is not a lot. The posted solutions often have commentary about what the constructors expected and what actually happened, and when there is, it never mentions what happened in testing. If any constructors from SUMS/MUMS/mezzacotta are reading, I’d love to hear about your puzzle testing process, and if there isn’t any internal testing, I’d strongly encourage you to introduce it… I can tell you that virtually every American puzzlehunt gets solved by someone unspoiled (either as a unit or in bits and pieces) before it’s released to the public.

Aussie hunts have a static hint system (everybody gets the same hint after 24 hours, then another, and then another), and the helpfulness of these hints varies from hunt to hunt (and even year to year within the same hunt). In my personal opinion, the best possible first hint is one that both helps teams get started and helps teams that have the aha but are stuck on extraction (since both those groups are sad for different reasons), and the third hint should pretty much tell teams how to solve the puzzle. There were several puzzles that were solved by very few teams even with three hints… in my opinion, that’s very unfortunate, and if that wasn’t intentional, testing should have shown that it was likely to be the case.
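
(For anyone unfamiliar with the format, the mechanics are simple enough to sketch; this is just my reading of how the static schedule works, not anyone’s actual code.)

    from datetime import datetime, timedelta

    HINT_INTERVAL = timedelta(hours=24)  # one new static hint per day

    def unlocked_hints(puzzle_release, now, hints):
        # Every team sees the same hints: the first unlocks 24 hours after
        # the puzzle's release, the second after 48 hours, and so on.
        elapsed = now - puzzle_release
        count = min(len(hints), max(0, int(elapsed / HINT_INTERVAL)))
        return hints[:count]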

We also didn’t solve the metapuzzle, though I suspect we could have (at least with hints) if we’d tried… but by the time the metapuzzle was out, particularly since it was delayed due to tech difficulties, we had been worn down by the difficulty of the puzzles and pretty much threw in the towel. SUMS, like mezzacotta, has a prize for the first meta solve but doesn’t incorporate the meta in team rankings, which really minimizes motivation to solve it.

Puzzles I especially liked:

* 1.3 (Big Break), 4.4 (Knit-Picking): On a recent podcast interview (coming soon to a device near you) I mentioned I don’t tend to like puzzles where there’s nothing to do immediately. Aussie puzzles often have this issue, in that you’re given some kind of abstracted form of data, and there’s not much to do but look through it until you have an idea. But the best examples of these puzzles have clear enough patterns/repetitions that you’re immediately drawn to something you can look at, which then draws your attention to other patterns, and so you make gradual but satisfying progress. I’d put both of these puzzles in that category.

I won’t spoil either of them further because I found both very satisfying. I will say that if you solve Knit-Picking alone, you will have some fairly tedious work to do once you know what you’re doing, and the final answer might not be easy to identify.

* 5.3 (A Way Out!): This puzzle is based on a pop culture property I first encountered in the Mystery Hunt and then ran into on various websites since. That said, the puzzle only relies on that property to a small degree; the meat of the puzzle is a set of subpuzzles riffing on a very specific theme, and the puzzle uses that theme on multiple levels in many creative ways. I think this was the most satisfying puzzle I encountered in this Hunt.

* 1.2 (Daze): This got solved while I was asleep, but I think it’s nifty.

* 3.4 (xxXWord): I also wasn’t involved in solving this, but the constraints satisfied are dazzling (although the final step is generally considered a no-no in puzzle construction).

Puzzles I especially disliked (sorry):

* 2.2 (Schoenflies When You’re Having Fun), 3.2 (Die Wohltemperierte Sequenz): These were solved by the fewest teams (three and two respectively despite three hints) and both were intensely frustrating because they were what I usually refer to as “Guess what I’m thinking” puzzles (which I’ll abbreviate as GWIT here, since it’ll come up in the future). These are puzzles where the puzzle itself gives you a lot of information, and the answer extraction is achieved by doing one of many possible things with that data, with no indication of what you should do. Rookie constructors often create puzzles like this innocently, because the extraction is the first thing they thought of, and it doesn’t occur to them there are lots of other reasonable ways to proceed. An elegant puzzle, in my opinion, will give you motivation to do the right thing.

For 3.2 in particular, I did a lot of legwork determining which WTC segments were transposed, and by how many steps; that data took a long time to collect, and once you had it, I feel that what you were expected to do with it required some pretty big leaps. (Similarly, we knew we needed crystal structure symmetry groups in 2.2, but it wasn’t at all clear how to use the “tags.”) It also didn’t help that the hints for these two puzzles were overstuffed; they contained a whole bunch of nouns that were clearly intended to be useful, but if you already knew to manipulate these things, it wasn’t clear how. Again, good playtesting will help bring these things to light in advance.

* 4.3 (Hexiamonds): I wanted to like this puzzle much better than I did, but again, once you had the twelve grids filled (and I’m impressed that they could all be filled uniquely), there were many things you could do with the completed boards. The intersection letters you were supposed to extract read as garbage until you performed a further action on them, and that to me is the essence of a GWIT puzzle. If there are ten things you can do, and one of them yields an answer in one step, that might be okay. If there’s only one thing you can do (or one thing most clearly clued) that yields an answer in multiple steps, that’s okay too. But in this case, the solver is expected to take one of many possible paths, and then look at one of those paths much more closely, and that’s not a reasonable expectation.

The hints were admittedly better on this one, and fourteen teams (including KCB) eventually solved it. But nobody solved it without at least two hints, which probably means the information in those two hints needed to be in the puzzle itself.

* 3.3 (Transmutation): This might not belong here, because I actually liked most of this puzzle… But I think it was severely underclued. If there had been a reference to chemistry (the title is one, but it’s very subtle) or especially to making three changes instead of just one (which was given in Hint 1), I think the aha would have been more reasonable, and once we had the aha, the part after that was super-fun. It makes me sad that the fun part was hidden behind a wall, and then the author didn’t really give you the right tools to break through the wall. Props to the four teams that still did so without the hint.

I’ll add that SUMS has a very user-friendly website, well-presented puzzles, and a good scoreboard that tells you which puzzles are causing teams trouble, and when teams are solving things (allowing all teams to see the “competitive story” of the hunt). These are nice features I’ve come to take for granted in Aussie hunts, and having just completed a similar event that lacked these bells and whistles, I appreciate them much more. Polish really makes a difference… more on that in an upcoming post.

Some of the puzzles were frustrating, and the scheduling was not great for my American team, but I certainly enjoyed aspects of SUMS 2016. It was not my favorite Aussie hunt ever, and I think it was certainly pitched on the hard side (possibly unintentionally due to the rush to get it up in 2016), but I thank the constructors for their hard work in putting it together.