(This is a recap/review of the 2017 Cambridge Puzzle Hunt, which happened in January. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)
I want to emphasize that this was a first-time event, and that a lot of the things I didn’t like about it commonly occur with first-time constructors. I consider it part of my job here to complain about those things, but it’s not intended to hurt the constructors’ feelings… Hopefully they, along with other future constructors, can learn from the discussion here.
You can probably guess from that last paragraph that I didn’t care for this event very much. In my last post, I talked about how I’m often unclear on how much testing and editing Australian puzzlehunts go through. This was one of the first “Australian-style” puzzlehunts hosted by a school outside Australia, and I’m pretty confident that testing and editing were minimal (registration only opened a few days before the event, and after an initial announcement, the event was pushed back a few days and had fewer puzzles, so I suspect that a lot of what did come in did so at the last minute). Unfortunately, that last-minute nature was reflected in a lot of the puzzles.
Incidentally, due to the short notice and the fact that they advertised a “lone wolf” division, I didn’t bother to join a team and instead competed solo as Mystereo Cantos. I was actually announced as winning the division, despite the fact that I didn’t register as a UK student. (To those who actually placed in the main division, if you were actually eligible, did the organizers contact you about prizes? I don’t want anything, but I’m curious about how much follow-through there was.)
In addition to puzzle issues, there were some aesthetic/logistical issues that made it hard to get too engaged in the competition:
* There was very little consistency in puzzle format: Different fonts, title in different places, some puzzles with the puzzle number and some with just the title, one puzzle made in TeX, one appearing as an image file rather than a PDF, and so forth. This might not seem like a big deal, but it’s a bit like the brown M&M’s in the Van Halen rider… when an experienced solver sees that the constructors haven’t taken the time to give the puzzles a look that’s at least minimally consistent, it immediately makes them suspicious about whether there’s been attention to detail in other places.
* I found the website pretty clunky, especially the fact that the scoreboard listed tied teams in seemingly random order, rather than breaking ties by last solve time as specified in the rules. This means, for example, that if you were tied with a team on points, there was no way to see which team was actually in the lead, and since the top two teams did tie on points, that seems problematic. As a bit of a stats nut, one of the things I like about Aussie hunts is looking at the statuses of other teams and figuring out what we have to solve and when to pass or stay ahead of Team X.
* Also missing from the website compared to other Aussie hunts: Information on which puzzles have been solved and how often. Not all puzzlehunts have this feature, but Aussie hunts do, and it’s often important because sometimes a puzzle is flawed and unsolvable without hints… When that happens, it’s nice to know it’s not just your team that’s stuck. It’s probably pretty difficult to build a website from scratch with these features, but there are at least three organizations that already have a functioning one… why not ask them to share their code? (I hope they’d be willing to do so, for the good of the community.)
* It was also a little weird that the links to puzzles themselves were marked “Question”… That seemed nonstandard, and that and other idiosyncrasies in the website text suggest English might not be the first language of some of the designers, not that there’s anything inherently wrong with that.
* Several corrections were made during the hunt (to puzzles or hints), and no notifications of those changes were sent to solvers. So unless you happened to randomly reload puzzles and notice the change, the constructors were content to let you keep working on puzzles with errors in them.
* Finally, as is sometimes the case with Aussie-style hunts, the predetermined hints were sometimes helpful and sometimes staggeringly unhelpful. More frequently the latter, and I suspect that was due to the constructor guessing where solvers would get stuck, rather than actually having people solve the puzzle in advance and give feedback.
Puzzles I especially liked:
* A Martian Makes a Phone Call Home: This used an interesting data set in which some bits were easier to process than others, and I like puzzles where you have to gradually hack away with partial information. The answer was admittedly kind of random.
* Lips, Secure: Simple, well-executed. It’s a shame the right column had to have so many repeats, but I get that that’s a constraint of the puzzle mechanism.
* Colour: The first step of this was a bit awkward, but once you knew what mechanism to use at the beginning (which was one I was previously unfamiliar with), the rest of the puzzle worked in a really elegant way. I needed a hint to interpret the hint already given in the puzzle, but after that point, this was my favorite puzzle in the hunt.
* Lingo: Here I got stuck on the last step rather than the first step, and I think the use of the numbers 1 to 7 appearing in the grid in order is pretty misleading (since they’re ultimately used in a way where they could have been almost any numbers, and they’re not used in 1-7 order). But I thought the picture cluing was a lot of fun, so this gets a B+ from me.
Puzzles I especially disliked:
* Metathesis: I did lots of work on this puzzle, including the tedious part, which involved looking up a lot of dates. I then tried to do what the puzzle told me to do (in a few different ways) and got gibberish. I then decided that if I had a mistake in my ordering, it would just give me a different substitution cipher, and so I threw the encoded sentence into a cryptogram solver… which spit out the clue phrase without any need to look at the rest of the puzzle.
To quote the posted solution: “There was also a mistake here (which no solvers seemed to be bothered by) where the writer mixed up the dates, so the final phrase obtained is something else. However, the impact is minimal, and it’s easily deduced what the phrase should be.” I was actually extremely bothered by it, and the only reason it’s “easily deduced” is that you can bypass the entire puzzle by using a cryptogram solver. Here’s a tip for both puzzlehunt constructors and escape room operators: when your creation has errors and you’re defensive about it afterwards, it makes a bad solving experience much worse.
* Th Ilid: After solving the mostly de-voweled clues, I pretty quickly got the phrase COUNT VOWEL. Putting aside the fact that “COUNT VOWEL” doesn’t make any grammatical sense, as the solution acknowledges, there are many ways to interpret that phrase: counting the given vowels, the removed vowels, the vowels in the answers, the vowels in the answers that match the given vowel, the vowels in the answer that don’t, et cetera. With that big a solve space, this becomes a “guess what I’m thinking” puzzle; you only know you’ve done the right thing once it turns into something (deciding you want an ISBN based on the formatting helps, but that just tells you you want numbers less than ten). If anything, as a solver, you’re drawn to the extractions that would involve the answers, because that was the part of the puzzle you actually had to solve, and the words generated seem way too random for only their first letters to matter… But in fact, every letter in the answers except the first one is just noise.
According to the solution, the constructor thinks the clue “COUNT VOWEL” possibly shouldn’t have been there (in favor of BOOK NUMBER). I think this shows a fundamental misunderstanding of what made the puzzle hard; having a hint that you wanted an ISBN could help narrow the search space, but the search space is only narrowed in the first place by telling solvers to count the vowels. There’s also no reason the answers couldn’t point at more than one phrase, since they’re otherwise unconstrained.
* Dionysia: First of all, I’m not sure how many people loaded Round 4 right when it was released, but the PDF that went up appeared to be a solution rather than the puzzle itself (it had some anagrams with their solutions also given in a different color, and while the final answer wasn’t given, I figured out what it was supposed to be by applying one more step). This was then taken down due to “technical difficulties” and replaced shortly after by Dionysia. I’m not sure if the latter was a backup, or if it got written in a hurry. At least not having a metapuzzle (or any other constraints on answers) makes it a lot easier to throw in a replacement puzzle. A similar production error happened in a Mystery Hunt puzzle around the turn of the century (1998, maybe?): instead of a grid of 49 clues that would resolve to 49 of the states, with the missing state as the answer, we were given a list of the 49 states. This was very confusing for a moment, but then very easy to solve.
Solving this puzzle required you to completely disregard most of the data the puzzle gave you (Oscar years, the number that was removed from one film in each group, which movie or pair of movies was missing from each list, which one won), ignore the fact that the “number” film jumped from the first position to the last position in the last set, and most egregiously, interpret the opposite of “sense” as “sensibility.” Reading the solution, there was a very meandering path you were intended to follow to justify this last step, but it’s inconsistent with everything else in the puzzle. Boo.
* Trojan Hippo, Archimedes’ Calculus: In a set of sixteen puzzles, there’s no need to have two different puzzles that both revolve around the Greek alphabet.
* Calligraphy: I came nowhere close to solving this, and I think few if any teams ended up getting points for it (if the website gave those stats, I’d tell you for sure). Looking at the solution, I would say the very last step makes an already-difficult puzzle much much harder for no good reason.
Despite my complaints, I would still love to see this event become an annual mainstay on the puzzlehunt calendar; we can always use more puzzle competitions! But for it to be successful, the people in charge have to make sure the puzzles are written well in advance, and then spend time editing and testing to make sure they’re fair and reasonable. Consistent puzzle formatting and a more robust website will also help make this a more user-friendly puzzlehunt, and there’s a year to work on that starting now, but the puzzles need to be clean or the rest won’t matter.