Recap: Cambridge Puzzle Hunt 2017

(This is a recap/review of the 2017 Cambridge Puzzle Hunt, which happened in January. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

I want to emphasize that this was a first-time event, and that a lot of the things I didn’t like about it commonly occur with first-time constructors. I consider it part of my job here to complain about those things, but it’s not intended to hurt the constructors’ feelings… Hopefully they, along with other future constructors, can learn from the discussion here.

You can probably guess from that last paragraph that I didn’t care for this event very much. In my last post, I talked about how it’s often unclear to me how much testing and editing Australian puzzlehunts go through. This was one of the first “Australian-style” puzzlehunts hosted by a school outside Australia, and I’m pretty confident that testing and editing were minimal (registration only opened a few days before the event, and after an initial announcement, the event was pushed back a few days and had fewer puzzles, so I suspect that a lot of what did come in did so at the last minute). Unfortunately, that last-minute nature was reflected in a lot of the puzzles.

Incidentally, due to the short notice and the fact that they advertised a “lone wolf” division, I didn’t bother to join a team and instead competed solo as Mystereo Cantos. I was actually announced as winning the division, despite the fact that I didn’t register as a UK student. (To those who actually placed in the main division, if you were actually eligible, did the organizers contact you about prizes? I don’t want anything, but I’m curious about how much follow-through there was.)

In addition to puzzle issues, there were some aesthetic/logistical issues that made it hard to get too engaged in the competition:

* There was very little consistency in puzzle format: Different fonts, title in different places, some puzzles with the puzzle number and some with just the title, one puzzle made in TeX, one appearing as an image file rather than a PDF, and so forth. This might not seem like a big deal, but it’s a bit like the brown M&M’s in the Van Halen rider… when an experienced solver sees that the constructors haven’t taken the time to give the puzzles a look that’s at least minimally consistent, it immediately makes them suspicious about whether there’s been attention to detail in other places.

* I found the website pretty clunky, especially the fact that the scoreboard listed tied teams in seemingly random order, rather than breaking ties by last solve time as specified in the rules. This means, for example, that if you were tied with a team on points, there was no way to see which team was actually in the lead, and since the top two teams did tie on points, that seems problematic. As a bit of a stats nut, one of the things I like about Aussie hunts is looking at the statuses of other teams and figuring out what we have to solve and when to pass or stay ahead of Team X.

* Also missing from the website compared to other Aussie hunts: Information on which puzzles have been solved and how often. Not all puzzlehunts have this feature, but Aussie hunts do, and it’s often important because sometimes a puzzle is flawed and unsolvable without hints… When that happens, it’s nice to know it’s not just your team that’s stuck. It’s probably pretty difficult to build a website from scratch with these features, but there are at least three organizations that already have a functioning one… why not ask them to share their code? (I hope they’d be willing to do so, for the good of the community.)

* It was also a little weird that the links to puzzles themselves were marked “Question”… That seemed nonstandard, and that and other idiosyncrasies in the website text suggest English might not be the first language of some of the designers; not that there’s anything inherently wrong with that.

* Several corrections were made during the hunt (to puzzles or hints), and no notification of any of them was sent to solvers. So unless you happened to randomly reload puzzles and notice the change, the constructors were content to let you keep working on puzzles with errors in them.

* Finally, as is sometimes the case with Aussie-style hunts, the predetermined hints were sometimes helpful and sometimes staggeringly unhelpful. More frequently the latter, and I suspect that was due to the constructor guessing where solvers would get stuck, rather than actually having people solve the puzzle in advance and give feedback.
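On the scoreboard point: the tie-breaking the rules described is cheap to implement, which makes the random ordering all the more puzzling. A minimal sketch (the team names and solve times here are made up for illustration):

```python
# Minimal sketch of scoreboard tie-breaking as the rules described it:
# rank by points (descending), break ties by earliest last-solve time.
def rank_teams(teams):
    """teams: list of (name, points, last_solve_seconds) tuples."""
    return sorted(teams, key=lambda t: (-t[1], t[2]))

standings = rank_teams([
    ("Alpha",   120, 3600),
    ("Bravo",   120, 3000),  # tied on points, but earlier last solve
    ("Charlie",  90, 1200),
])
# Bravo ranks above Alpha despite the tie on points.
```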

Puzzles I especially liked:

* A Martian Makes a Phone Call Home: This used an interesting data set in which some bits were easier to process than others, and I like puzzles where you have to gradually hack away with partial information. The answer was admittedly kind of random.

* Lips, Secure: Simple, well-executed. It’s a shame the right column had to have so many repeats, but I get that that’s a constraint of the puzzle mechanism.

* Colour: The first step of this was a bit awkward, but once you knew what mechanism to use at the beginning (which was one I was previously unfamiliar with), the rest of the puzzle worked in a really elegant way. I needed a hint to interpret the hint already given in the puzzle, but after that point, this was my favorite puzzle in the hunt.

* Lingo: Here I got stuck on the last step rather than the first step, and I think the use of the numbers 1 to 7 appearing in the grid in order is pretty misleading (since they’re ultimately used in a way where they could have been almost any numbers, and they’re not used in 1-7 order). But I thought the picture cluing was a lot of fun, so this gets a B+ from me.

Puzzles I especially disliked:

* Metathesis: I did lots of work on this puzzle, including the tedious part, which involved looking up a lot of dates. I then tried to do what the puzzle told me to do (in a few different ways) and got gibberish. I then decided that if I had a mistake in my ordering, it would just give me a different substitution cipher, and so I threw the encoded sentence into a cryptogram solver… which spit out the clue phrase without any need to look at the rest of the puzzle.

To quote the posted solution: “There was also a mistake here (which no solvers seemed to be bothered by) where the writer mixed up the dates, so the final phrase obtained is something else. However, the impact is minimal, and it’s easily deduced what the phrase should be.” I was actually extremely bothered by it, and the only reason it’s “easily deduced” is that you can bypass the entire puzzle by using a cryptogram solver. Here’s a tip for both puzzlehunt constructors and escape room operators: when your creation has errors and you’re defensive about it afterwards, it makes a bad solving experience much worse.

* Th Ilid: After solving the mostly de-voweled clues, I pretty quickly got the phrase COUNT VOWEL. Putting aside the fact that “COUNT VOWEL” doesn’t make any grammatical sense, as the solution acknowledges, there are many ways to interpret that phrase: counting the given vowels, the removed vowels, the vowels in the answers, the vowels in the answers that match the given vowel, the vowels in the answer that don’t, et cetera. With that big a solve space, this becomes a “guess what I’m thinking” puzzle; you only know you’ve done the right thing once it turns into something (deciding you want an ISBN based on the formatting helps, but that just tells you you want numbers less than ten). If anything, as a solver, you’re drawn to the extractions that would involve the answers, because that was the part of the puzzle you actually had to solve, and the words generated seem way too random for only their first letters to matter… But in fact, every letter in the answers except the first one is just noise.

According to the solution, the constructor thinks the clue “COUNT VOWEL” possibly shouldn’t have been there (in favor of BOOK NUMBER). I think this shows a fundamental misunderstanding of what made the puzzle hard; having a hint that you wanted an ISBN could help narrow the search space, but the search space is only narrowed in the first place by telling solvers to count the vowels. There’s also no reason the answers couldn’t point at more than one phrase, since they’re otherwise unconstrained.

* Dionysia: First of all, I’m not sure how many people loaded Round 4 right when it was released, but the PDF that went up appeared to be a solution rather than the puzzle itself (it had some anagrams with their solutions also given in a different color, and while the final answer wasn’t given, I figured out what it was supposed to be by applying one more step). This was then taken down due to “technical difficulties” and replaced shortly after by Dionysia. I’m not sure if the latter was a backup, or if it got written in a hurry. At least not having a metapuzzle (or any other constraints on answers) makes it a lot easier to throw in a replacement puzzle. A similar production error happened in a Mystery Hunt puzzle around the turn of the century (1998, maybe?), where instead of a grid of 49 clues that would resolve to 49 of the states, with the answer being the missing one, we were given a list of the 49 states. This was very confusing for a moment, but then very easy to solve.

Solving this puzzle required you to completely disregard most of the data the puzzle gave you (Oscar years, the number that was removed from one film in each group, which movie or pair of movies was missing from each list, which one won), ignore the fact that the “number” film jumped from the first position to the last position in the last set, and most egregiously, interpret the opposite of “sense” as “sensibility.” Reading the solution, there was a very meandering path you were intended to follow to justify this last step, but it’s inconsistent with everything else in the puzzle. Boo.

* Trojan Hippo, Archimedes’ Calculus: In a set of sixteen puzzles, there’s no need to have two different puzzles that both revolve around the Greek alphabet.

* Calligraphy: I came nowhere close to solving this, and I think few if any teams ended up getting points for it (if the website gave those stats, I’d tell you for sure). Looking at the solution, I would say the very last step makes an already-difficult puzzle much much harder for no good reason.

Despite my complaints, I would still love to see this event become an annual mainstay on the puzzlehunt calendar; we can always use more puzzle competitions! But for it to be successful, the people in charge have to make sure the puzzles are written well in advance, and then spend time editing and testing to make sure they’re fair and reasonable. Consistent puzzle formatting and a more robust website will also help make this a more user-friendly puzzlehunt, and there’s a year to work on that starting now, but the puzzles need to be clean or the rest won’t matter.

22 thoughts on “Recap: Cambridge Puzzle Hunt 2017”

  1. Like Dan, I am loath to pile on to a bunch of first-time constructors who gave me a puzzle hunt for free; unlike Dan, I haven’t even tried to build a puzzle hunt, which puts the CPH constructors one step ahead of me.

    One minor technical note about the website, in the vein of constructive criticism for future hunt designers: All of the links, including hints, were only available as PDFs (or sometimes other formats), and they were all configured to be presented to browsers as downloads. (I think this was due to using the HTTP response header “Content-Disposition: attachment”, which was also used to present the file with a better filename than the /question URLs would yield.) This was a big pain: I pretty frequently just wanted to look at the three hints for a puzzle, and not being able to quickly view them in three tabs via command-click was annoying. My downloads folder is pretty full of CPH PDFs right now…
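    (For future organizers: the fix is a one-header change. A minimal sketch, assuming a Python backend; the helper name is made up:)

```python
# Hypothetical helper showing the header that controls download-vs-view.
# "attachment" forces a download dialog; "inline" lets the browser show
# the PDF in a tab while still suggesting a filename for "Save As".
def pdf_headers(filename, inline=True):
    disposition = "inline" if inline else "attachment"
    return {
        "Content-Type": "application/pdf",
        "Content-Disposition": f'{disposition}; filename="{filename}"',
    }
```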

    Does anyone understand how the colors were supposed to be sorted in Calligraphy? The solution doesn’t explain it in a way I understand.

  2. I mostly agree with the issues raised here, but I really hope they do it again next year; as someone who has solved remotely in the MIT hunt for several years, it’s great to finally have something similar here in the UK.

    I just hope they tighten up and test solve a lot more next time. I spent far too long on what was probably one of the easiest puzzles in the hunt, Trojan Hippo, because one of the triples wasn’t in ascending order for no apparent reason. I thought this meant that there was another mechanism at work, or it wasn’t Pythagorean triples at all, just disguised to look like them. When I shrugged it off I finally got the clue phrase straight after.

    Which brings me to another issue I had with the hunt: a lot of the clue phrases were still really vague once extracted. In other hunts like the MIT one this would be merely annoying, but with the limited number of attempts we had to submit answers for each puzzle, it became incredibly frustrating.

    Also, I’m surprised at “A Martian…” being in the good category; while the grunt work was fine, we were completely stumped trying to get the final answer. When my teammate correctly guessed the answer a couple of days later, it was enough to make me give up on solving any more of the hunt. It almost felt akin to those lateral-thinking riddles along the lines of “A body is found in the middle of a field; how did they die?”, where it’s a matter of guessing what random thing the setter was thinking of.

    • [Spoilers on “A Martian…” follow:]

      Something about the wide-eyed narrative made me assume that the Martian would be misunderstanding his surroundings, and so it was likely that the “animal” wasn’t really an animal. The line about removing its belly helped me narrow it down, though I admit I guessed “anteater” before I translated that part (which according to the solution was the most common wrong guess). Mileage often varies, but while the last step was essentially solving a riddle, it felt sufficiently clued to me. I’d love to know if I was the only one, but whoops, no solving stats.

      • That said, the final answer did feel unsatisfying to me. I didn’t really see a need for the answer to be in the form it was in, unless it was to throw off guessing.

        While it is nice to see a new hunt, several aspects definitely needed polishing like you said. Within the team I was solving/following along with, Calligraphy seemed to be one of the worst offenders, and I personally feel Dionysia and the last step of Crack the Phone needed some tweaking as well. Some interesting ideas at play, and I hope the organizers take the feedback into consideration.

        Thanks for doing these recaps Dan. It’s nice to have a place to talk about/comment on the puzzlehunts.

  3. Thanks for posting your comments. Happened to be on the winning team this time; yes, we got paid (via our UK member). Don’t know how it’s working out for other folks.

    I agree with much of your commentary above, though I think perhaps I saw more promise in a lot of the puzzles than seems to be the consensus. I definitely want the hunt to continue, as I see some real promise here.

    The only complaint I’ll add to yours is that the guess limit turned out to be much too low for some of the clue phrases. 20 per question would be fine with (much) tighter clue phrases.

    I’ll add comments here on all the puzzles, since I’m pretty sure I saw most of the hunt:

    1.1 Symmetry — ok, I didn’t see this one. Our UK guy solved it before I woke up. Seems reasonable enough; decent entry puzzle.

    1.2 Archimedes’ Calculus — I liked this puzzle, though we got it somewhat randomly. Well, I mean I happened to write information I thought was useless (Greek letters associated with the Roman numerals) on our google doc. Was asleep when a brighter person turned that into something useful (like an answer).

    1.3 Metathesis — I really had no issues with this until you said something. We solved it reasonably enough; yes we had some letters swapped, but I’d presumed that that was because we had some of the publishing dates incorrect (I’d allowed for that since many of them were close). It was wrong and not corrected? :S OK, things like that absolutely need to be cleaned up. But I didn’t have a problem with this puzzle (in the playtested/working form).

    1.4 Calligraphy — yeah, this was a mess. We were finally able to piece together the idea of making English letters, and one of our guys was able to put together enough to get to “Water Margin” (on day 3 or 4), which with the hints was enough to point toward the answer (sort of; it still took a bunch of guesses as there were many ways to use ‘numbers’). But this in general looks like it requires knowledge of making Chinese characters that no person on a 4-person team should necessarily be expected to possess. The site they say they used looks like it’s in Chinese.

    2.1 Th Ilid — ok, I didn’t get too angry at this one. Saw the initial stuff, got nowhere trying to apply it, and did some google search of the subject that happened to point to a book that used only one vowel per chapter and had a chapter on the Iliad. So (not yet discerning the proper answer) I tried that as an answer. Woohoo!

    2.2 One Last Strike — a teammate figured it out after the first hint and us talking about it a bit. But I thought this was a good puzzle, though I didn’t bother to verify whether it was ambiguous.

    2.3 Trojan Hippo — aside from the Greek overlap, I thought this was a good puzzle, too. Though the answer seemed a little loose: I did put together horse = mare, and that the horse came at night, but it still seemed like a bit of a leap at the end. Making the answer the Greek name and then explaining that this was the Greek god of *nightmares* makes sense to me.

    2.4 Puzzle 4.9 — enjoyed this one; pretty much solo’ed it. Someone pointed out very early that the Rubik’s record was 4.9 seconds; then they all went to solve other stuff. I got imaginative with the cube and figured out how it went together. Then there was a struggle as I wanted it to be a Kakuro but there were these big number 10s and 20s that were making the math pretty damn hard. Eventually I reasoned my way out of that and constructed the puzzle to be solved. And got that done with some assistance from teammates.

    But aside from a good puzzle, the answer was a bit unsatisfying — first the randomgram (when I wanted them to either be in the order of the picture or something else logical) and then again that you needed both names of the guy. Which of course was also guessable at the start of the puzzle…

    3.1 — A Martian Makes a Phone Call Home — this was fun and a good puzzle to work on with teammates. We all contributed a bit, and while I’d have struggled a lot with the end riddle, someone else clued in on it quickly. I’ve got no real issues here.

    3.2 — Also enjoyed the puzzle, but again got frustrated by the answer. Our guess list, in order:

    1. little thistle
    2. little sister
    3. Theophilus Thistle
    4. Thistle Sifter
    5. SITHLETHIFTER
    6. tongue twister
    7. Theodore Oswaldtwistle
    8. The Kings Speech
    9. theophilus

    So we had the correct answer — as given by the solution page — submitted 3rd, but that was ‘wrong’ so we had to keep trying other things that seemed reasonable or related to it until we came back to the ‘right’ answer. But it’s not consistent with the answer to 4.9…

    …anyway, I liked the puzzle until the very end.

    3.3 — thought this was a good puzzle, too. Though I’ve been fortunate to have seen this code in several geocaching puzzles. But you could also back into it in other ways; I expect via google image search or from the rebus or from what I remembered the original title was. Liked the way the code was reordered for the answer.

    3.4 — this went from one of my least favorite puzzles to one of the ones I liked the best, just from the hints, which in this case I thought were excellent. The puzzle as presented puts out too much superfluous information, IMO, especially as some of the apps are Chinese sites or in Chinese. Is that important? And then the guitar strings looked to be the way to spell the answer rather than a checker. And what about the grouped apps — do we need those, too? Anyway, again the hints were great — I think the first *should* have been enough but wasn’t, since I wasn’t ready to rethink my wrong conceptions as yet. Plus the A looked like an arrow – why didn’t I just realize that was an A?

    We streamed toward it pretty readily after hint two. And the only real complaint I have about it after solving it (aside from those noted) is the typical being pissed off at detectives listed in puzzle texts. Really – the detective not only remembered all of those apps in order, but was able to glean what to do *and* knew of the easter eggs in the obscure app in order to nod and get an answer?

    4.1 Dionysia — yes, this was a wreck. Wonder if the other puzzle was better? There are too many antonyms of sense that aren’t sensibility. We finally got this one after trying many answers I thought were a lot better.

    4.2 Lingo — Liked this one. Clear how to start, and I don’t mind an intentional red herring (1-7) if it’s also given in a ‘fair’ presentation (where there’s an alternate interpretation) and the herring leads pretty quickly to dead ends.

    4.3 Short — Uh, yeah, we didn’t get this. I’m guessing no team did. We came I guess a little bit close with some of our ideas, but couldn’t put them all together. “Short” to me implies we should be shorting the circuit — especially given the flavor text, not finding the shortest path to anything. And the hints were next to worthless.

    4.4 A Bare Bones Permutation — the original presentation had different squares highlighted, so the wrong clue phrase was obtained. As you note, when you have errata (which you should strive to avoid) and you fix them, you have to notify all the teams in the event in a public forum. We were lucky to get this one all done in the first day; I think the clue phrase was also a bit ambiguous, so in addition to all of the answers we blew on the errored clue phrase, we blew some on the correct one. Numbering the squares to be pulled would have helped. Learning about the bones themselves was actually kind of interesting.

    Anyway, more than anything else, the contest screamed for playtesting. And it definitely seemed like there were several different creators, which made the puzzles look less than unified. The Greek overlaps, the varied fonts in the PDFs, and the hints that ranged from really good to completely useless all pointed at that.

    But those are all items that can/will be overcome with a little more time. I definitely look forward to the next one.

    And congratulations on your iron brain victory! I can only imagine how much this contest must have taken out of you (it was hard enough with a full team).

    • I am boggling at how “Crack the Phone” was solvable. We got to the point of being able to (ambiguously) spell out the answer, but we had never heard of Crossy Road so “DISNEY CROSSY ROAD” did not seem like much of a real thing. And… even if it was a real thing, I don’t think we would have ever considered “play the game for a while until you find a character with a guitar”. Did you actually get that from the “in the end, did you find him?” clue?

      “Lips, Secure” was nice but in my opinion it should have spelled SITHTLETHIFTER rather than SITHLETHIFTER.

      • Got to Disney Crossy Road (which I’d never heard of) and figured it must be right because it was an app. And then of course tried it as a solution and failed, and so we sat and tried to learn about the game. After about 1/2 hour, we decided the guitar on the front screen might not have been completely “used”. Eventually I googled DCR along with “guitar” and the answer popped up almost immediately. It was probably an extra step removed from where it should have been, but I thought it was solvable as is (once you got past the other steps). Still, that’s one well versed detective.

  4. Our team is quite small, and based on our doc it seems we solved 6 of the 13 puzzles we actually tried. Of the 7 we tried but didn’t solve, we were bottlenecked on 4 of them, and when we read the answers there was a collective groan.

    I appreciate the effort put into this, but I still do need to vent about my frustration with some puzzles.

    Specific puzzles, ordered in the degree of frustration: (Spoiler Warning!)

    * Calligraphy: We have two native Chinese speakers on our team (including myself), and I instantly recognized that it definitely had something to do with strokes. I isolated the strokes and wrote down the stroke order – for instance, for the first character, light green was 2 and brown was 8 – and got stuck. Not very frustrated yet, we tried to write characters out of the strokes we isolated, but none came out, since there’s no clear order to it. We thought we were just wrong here – until the last hint said “Did you know Chinese has stroke orders?” or something to that effect. We knew that there would be an order here, but nothing jumped out at us.

    When reading the solution, we realized that trying to write the characters using the strokes of the same color was on the right track – except it’s nearly impossible without the actual stroke orders, and there are too many false positives. Also frustrating was the unannounced switch from Traditional characters in the puzzle to Simplified in the actual coded message.

    The use of stroke order is a good idea, but this was very, very poorly executed. Also, reading the solution, I would assume the few teams that somehow got past the Herculean task would submit WATER MARGIN, and be very confused why that’s not the answer. Kudos to whoever actually solved this.

    * Dionysia: I would assume we did, like many teams, recognize the movie patterns very early on (probably from a combination of Red Kilometer and European Ugly), found that numbers were missing from certain nominees in certain years (and questioned why some years had 1 movie dropped while some had 2). I’m sure most teams tried to index the number from the years into the names of the missing movies, which yields a nonsense phrase. Someone on our team came up with the idea of indexing missing numbers into the Best Picture winner that year, and our (admittedly misguided) way of solving gave us a probably unintended red herring phrase of A TORCH – but that’s not why it’s frustrating, wrong answers come up all the time.

    The frustrating part is the uselessness of the hints:

    Hint 1: “Have fun picking those raspberries!” – which I assume is trying to say Oscars, as the opposite of Razzies
    Hint 2: “Seconds a Freeman, Him, European Honesty” – which is just another set hinting (12) Years a Slave, Her, American Hustle, which also confused everyone since 9 films were nominated that year
    Hint 3: “Every year, the Dionysia awards were won by a lady.” – I honestly don’t know what this is trying to hint

    The hints assume that we had trouble figuring out “ohhh, these are Oscar-nominated films!” – which was not the case. We got all of this data that no one knew what to do with – and it turns out it’s actually irrelevant. Sense → Sensibility is probably a fine logical leap, but using Apollo 13 is a really roundabout way of getting there.

    * Crack the Phone: We needed the 3rd hint to solve this, but I don’t like that we needed to download the game to find the answer just for extraction. Not a terribly bad puzzle, just terrible extraction.

    * Lips, Secure: Enjoyed the cluing a lot (“pair of songs” anyone?), but the final phrase could be a bit more elegant. A good puzzle nonetheless.

    I liked Trojan Hippo and A Martian Makes a Phone Call Home (even though the final cluing could be better it wasn’t a giant issue). Th Ilid has a bad final cluing with COUNT VOWEL but I liked it nonetheless.

    I liked Symmetry, and thought that while it’s simple, it’s elegant and worked great.

    Other little things: The weirdest part is how the puzzles and hints are all in PDF form that you are forced to download – which is especially strange for the hints that literally are one line of text. I understand hosting constraints, but it just struck me as a little bit strange.

    All in all, an alright attempt for a first hunt, but a lot of problems could have been avoided if the editors didn’t make the solvers take giant leaps in logic to guess what the puzzle writer was thinking of.

    • Just a quick note – we solved Crack the Phone without downloading the game. Did you actually find the character in question? Dedication.

      • We did not – I think we gave up out of frustration at that point, because it was the 4th bottleneck with useless hints.

        I wonder if Cambridge is reading these comments; they really could benefit from more feedback.

  5. I assume the top UK solo puzzler will get their prize. We came in fourth, but a team ahead of us didn’t have someone from the UK, so we got the third place prize.

    I guess I do have a rather amusing story that our team had for Day 2.

    On day 1, for Archimedes, we had two wrong ideas, one trying to fit ISBN’s to the X’s and one trying to use the Greek numeral system. Both of these ended up being the themes for Day 2 puzzles.

    Very early after Day 2’s release, someone in our team quickly realizes that 4.9 is a reference to Lucas Etter’s cubing time, and goes, “ok, what if the answer turns out to be the most obvious answer?” and guesses “Lucas Etter”. This is marked wrong. Ok, we work on some of the other puzzles, noting the coincidence above which helped clear out two puzzles quickly. Then we go back to the Rubik’s cube, and I start assembling the cube and trying to work out the Kakuro (I’m not certain it’s unique, but eh). We also have the correct idea of extraction midway through the extraction, and thus get “Lucas”. We submit this. This is *also* marked wrong.

    Then we send an email. We spend several guesses doing unmotivated anagrinds. Anyway, our UK guy doesn’t check their email until the next day, and it turns out that CPH, upon seeing that “Lucas Etter” was guessed, changed the answer to match that (it was originally “Lucas”). However, they did this before we guessed “Lucas”, so by the time we guess “Lucas” the correct answer is now “Lucas Etter”… (They told us to resubmit the newly changed answer.)

  6. That’s an interesting anecdote about Lucas Etter; it explains a bit.

    I’d rather they just accepted both (or in general any apt answer that is what the puzzle asks for).

    • I’ve found this is often a difference between west coast puzzling philosophy and east coast puzzling philosophy, but I seldom agree with the solution, “make your answer checker accept all plausible answers.” I favor the solution, “Make sure your puzzle has one canonically correct answer.” Often this involves tightening up clue phrases (and getting the author to accept that the solver might have a different idea than the author’s first one) but sometimes it’s a matter of adding some text that says “This puzzle has a ten-letter answer” or “This puzzle has a two-word answer.” Of course, in this case, that text might just make the answer even more guessable without solving the puzzle. But arguably, for that reason, this puzzle probably shouldn’t have had that answer.

      I think we can all agree that the combination of puzzles where the answer form isn’t well-disambiguated and a limit of five guesses (on average) per puzzle is a recipe for disaster.

      • It was 20 guesses per puzzle, right? I’ve got no real argument with the rest (am I easily ID’ed as a west coast puzzler?)

      • Oh, I misremembered… I thought it was 20 per round.

        I think you once mentioned in a blog post that your favorite Mystery Hunt was 2013… in my book, that ID’s you as a some-other-planet puzzler. 😉 But mileage may vary.

  7. I found particular issue with Short. Not only was the answer an unclued anagram of the first letters of the path, but the path was nonunique (TAKEN and ISLES are the same connection, as are CARE and FIST), so solvers could easily have gotten the wrong set of letters to unclued anagram even if they took the intended approach.

    And then the first sentence of the solution was “If you like doing word-search and know how a breadboard works, this puzzle is practically solved.”

    • Most of the puzzles in my “disliked” list are puzzles that I either solved or spent a lot of time with. I never made the breadboard connection on Short, so I didn’t spend much time on it apart from finding some random words in the grid… as a result, I didn’t really have time to get mad at it.

  8. I think I may use the title “other-planet puzzler” in the future. But yeah, I’ll always admire the audacity of 2013.

    I also sent the Cambridge people a note pointing out this blog (since it’s public I presumed you wanted it read) and that it might be helpful for them to read the comments here. But I also mentioned that some of it was critical; I do know how that can sting.

    • In the interest of disclosure, wolf and Simon were part of my team (ducksoup); there was definitely a general feeling of malaise for everyone as the hunt went on, though. I am grateful to the organizers, and I hope our comments are at least useful, if a bit critical.
