2023 Mystery Hunt, Part 2: More is Less

(This is a recap/review of the 2023 MIT Mystery Hunt, which happened this month. Puzzles, solutions, and solving stats can currently be found here. This recap will contain spoilers.)

Before I start talking about length issues: after posting Part 1, I remembered something else I loved about the pre-AI portion of the Hunt: the automated vote-on-a-response team interactions were brilliant. They were a great way to enforce a team bonding experience and immediately advance the story without requiring the constructing team to invest live person-hours, and I think my team laughed out loud much more than we would in a typical live interaction. The idea was great and the writing was great. I’m torn because this felt really specific to this year’s theme and artistic design, but at the same time I want everyone to steal it.

Now.

In the preamble to a recent Hunt writeup, a member of Cardinality amusingly said, “I am not a titan of the community and I will not share anecdotes about how this puzzle reminds me of this meta back in the 1926 Mystery Hunt where they gave us 3 rocks which we had to bang together in the right way to make the correct fires.” Of course not! That’s MY job. So let’s start with a brief history (at least within my time spent with Mystery Hunt, which is up to 25 years now) of Hunts that were too f***ing long.

If you analyze Mystery Hunts that went long from at least one team’s perspective, you’ll find that they generally fall into one of two categories:

  • Hunts where one or more metapuzzles end up being killers and block teams’ progress for disproportionate periods of time. Let’s call these mettlenecks (short for meta bottlenecks).
  • Hunts where the act of solving puzzles to get to the metapuzzles was so overwhelming that the construction team had to modify Hunt procedures so that even the teams in contention got credit for them without solving them entirely on their own. I don’t have as catchy a name for these, but I’m going with forcefeeds.

I’ve been on the constructing end of my share of mettlenecks, including 2009 (Zyzzlvaria, where different teams got stuck on different one-meta-left situations, which made for an exciting finish), 2019 (Holiday Forest, where the last two metas were very hard to solve without almost all of the feeder answers), and 2005 (Normalville, where one team had nothing left to solve but one metapuzzle for 24 hours).

The interesting thing about a mettleneck is that often it gets bad reviews from the lead teams that spent a lot of time staring at the same meta (which is booooooring), but disproportionately good reviews from casual/middle-tier teams, because they’re able to proceed through a lot of the Hunt before a winner is announced, as there’s not much bottlenecking before the bottleneck. I do think this is still a negative result despite positive side effects, because while it’s good for a majority of teams, you don’t want anybody to work hard solving your Hunt and then end up frustrated.

Forcefeeds, on the other hand, seem to rotate into Mystery Hunt periodically. There have been a lot of jokes recently about years that end in threes, because three of the most notable forcefeeds are 2003 (The Matrix, which was ahead of its time in that it would NOT seem too long today), 2013 (Coin Heist), and now 2023 (MATE? Puzzle Factory? Relentless AI Assault?). One of the other most notorious examples was 2004 (Time Bandits), which managed to follow up Matrix with something even more forcefeedy. The same team that won in 2003 also won in 2013 and went in determined NOT to repeat mistakes of the past, and I feel that was a success. I was on the writing team for 2014 and not 2004, but I can’t take credit for the changes… the leadership of that team made very good choices, and I mostly served in an advisory role. I did insist that we keep our endgame operating in full for as many teams as possible once the coin was found, even though it was a pain in the ass, and I’m very glad we did so. 2008 (Murder Mystery) had some forcefeed elements as well, though I don’t remember the details as vividly.

The hallmark of a forcefeed Hunt is that at some point, the construction team realizes things are not proceeding on pace, and that something has to be done. In 2003/2004, that involved hinting puzzles liberally when teams reached even the smallest bit of resistance; in 2003 we were actually assigned a dedicated in-HQ hinter for an extended period of time. In 2013 and 2023, the constructing team took things one step further and gave out a large quantity of what I call “nukes,” the ability to get free answers for puzzles without any idea of how to solve them. I want to be clear that given the pace of both Hunts, this practice was probably necessary, since the Hunt would have gone well past Monday if teams were going to solve what they were expected to solve. But I want to highlight why you don’t want to find yourself in a situation where this is necessary.

When Setec first earned a couple of nukes, we met and talked about our strategies for using them, and on a related note, what we wanted to get out of Hunt. I said that my priority was solving metapuzzles and opening rounds, because that’s what I find exciting. Several other team members agreed. But cut to 24 hours later, and opening rounds wasn’t fun anymore, because we weren’t doing it by solving puzzles… we were doing it by giving up on puzzles and pressing buttons. When I saw the notifications that Ascent and Conjuri’s Quest had opened, I had a “meh” feeling I’ve never had when opening new rounds before, because I didn’t feel like we’d earned access.

This is why I didn’t have fun with the AI rounds. It seemed pointless to work on a puzzle when we could just as easily flip a switch and make it disappear. At one point we opened Flooded Caves, which is a set of seventeen Cave logic puzzle variants. I love abstract logic puzzles, and one of our captains, Tanis, asked me if I was going to solve this, or if we should nuke it. We counted the puzzles and realized I’d probably spend the rest of Hunt solving it, or worse, I’d spend hours on it and then we’d give up and get the answer for free anyway. We nuked it immediately. We basically spent Sunday looking at puzzles, deciding whether they seemed approachable enough to bother with, and often deciding no. This wasn’t just us… the Hunt stats indicate that the entire Ascent ROUND (meta not included) had 18 successful solves, and 147 nukes. Teams didn’t complete this round. They took an elevator that went past it.

Even when we’re stuck on a meta I usually enjoy Mystery Hunt, but this year’s Sunday afternoon was the second time I remember legitimately not having fun. The other time was 2013, and with both data points in hand, I now assume the free answers are to blame. Solving feels pointless when puzzles are spontaneously combusting around you.

So what do I think Teammate could have done to avoid this? It’s easy for me or for anyone to criticize from afar, because writing Hunt is time-consuming and sometimes thankless work, and tuning/pacing Hunt is extremely difficult (as I stated above, I’ve been on teams that have messed it up, though more frequently through problematic metapuzzles). But we grow as a community by sharing insight, and I’ve been around long enough to have a lot of perspective, so here are some of my observations.

Many individual puzzles were bigger than they should have been. I already mentioned the seventeen cave puzzles above, which caused me not to attempt any of them. I also mentioned the number 147, which coincidentally happens to be the number of morals you had to identify in Moral of the Story, after finding a message in 147 typos. It’s unlikely that any single solver wants to do anything 147 times. Hunt is, of course, a team activity, so you probably won’t have a single solver doing it. But that still means you’re devoting multiple people to staring at a single puzzle for an extended period of time. When you have lots of really big puzzles (the one with the quiz bowl questions also comes to mind, which looked like a fun idea iterated way too many times for me to want to solve it), you are spreading non-giant teams thin, and the vast majority of teams solving Hunt are non-giant.

Testing may have needed to take into account that most teams are not like Teammate. This is very similar to what I said in 2013, when I felt like Manic Sages wrote the perfect Hunt to be solved by Manic Sages… who were unfortunately the only team not solving that year. After the solutions were posted, I saw a lot of people reference the authors’ note for Terminal, which begins, “This puzzle being solvable at all was honestly a huge surprise to me.” That is a MASSIVE red flag, and the note goes on to explain that the puzzle was made harder because testsolvers got better and better at solving clues. I am curious how long this process took, and whether it was considered that in practice, this would only be one of many puzzles teams were contending with at once. For what it’s worth, we thought Terminal was a fun idea, and we had at least a dozen people put hours into trying to solve it. After expending those hours, we still had less than half the grid filled. We nuked it. Moral: Don’t make puzzles harder because your testers are surprisingly brilliant; your testers won last year’s Mystery Hunt. (There are similar “let’s make it harder” stories from 2004 and 2013, which is not a coincidence.)

Number of puzzles is not a good gauge of size/difficulty. The number of puzzles in Hunt has oscillated, but overall it’s grown close to linearly over the last few decades… that might be okay, because solving resources and ability are also growing. But the definition of what a puzzle is is also growing; the puzzles I wrote in 2000/2002/2005 would barely qualify in the modern era. And with really chunky online hunts like Galactic and Teammate and QoDE and Silph pushing boundaries throughout the year, people’s expectations for how long a puzzle can take and be reasonable are expanding. The problem with this is that if the number of puzzles is O(n), and the size of a puzzle is O(n), the total size of the Hunt is actually O(n^2), which is a terrifying rate of growth. The team that won has been reported publicly to have over 160 people, though someone on the team reported that they had about 170 unique solvers, and only 120 of them were active solvers. ALL THREE OF THOSE NUMBERS ARE TOO BIG. And it is vital that TFKA…TTBNL does not write a Hunt with a team of that size in mind, because few of those teams exist (and in my opinion, none should).
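The quadratic-growth argument can be made concrete with a tiny numeric sketch. The function and its parameters below are purely hypothetical illustrations (the real numbers vary by year); the point is only that two independently linear factors multiply:

```python
# Hypothetical illustration of the growth argument: if both puzzle count
# and average per-puzzle solving time scale linearly with a growth factor n,
# the total solving work scales quadratically with n.

def total_work(n, base_count=100, base_hours=1.0):
    """Total person-hours of solving, under the linear-growth assumption.

    base_count and base_hours are made-up baseline values for illustration.
    """
    puzzle_count = base_count * n   # O(n) puzzles
    hours_each = base_hours * n     # O(n) hours per puzzle
    return puzzle_count * hours_each  # O(n) * O(n) = O(n^2) total

# Doubling the growth factor quadruples the total work,
# and tripling it produces nine times the work:
assert total_work(2) == 4 * total_work(1)
assert total_work(3) == 9 * total_work(1)
```

In other words, even modest simultaneous growth in both dimensions compounds fast, which is why holding one of the two fixed (or shrinking it) matters so much.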

The first part of the Hunt did not feel friendly to casual solvers. I was genuinely surprised at wrap-up when Teammate said one of their goals was to support casual teams, because I found the first round and meta to be much less accessible than the intro phases of recent Hunts. The first puzzle I worked on, Museum Rules, immediately subverted the expectation that the copy-to-clipboard feature would consistently work, and the aha was very challenging (and even once you got it you still had a bunch of nontrivial superimposing to do). Apples Plus Bananas required two ahas (you need PLUs and you need to get prime totals) and turned into a significant logic puzzle if you assumed those things, which you could not necessarily confirm before solving. I think these are tough puzzles as an experienced solver, and I can’t imagine how an MIT frosh who wandered in and wants to see what all the puzzling was about would navigate them. As for the meta, I recognized what to do with it quickly because I’ve solved many Anglers/Numberlinks. Most newbies haven’t. I encountered very few easy puzzles in this Hunt in general, and I was surprised that the ones I did find most approachable weren’t generally at the beginning.

The recap over at Fort & Forge predicts that my Part 2 thesis will be that Teammate should have cut Part 3 of the Hunt entirely. Actually that was my Part 1 thesis, so maybe I was too subtle about it? As it stands, if Part 1 and Part 2 were going to be what they ended up being, then yes, I think Part 3 should have been cut (or more specifically, replaced with a single round or more involved endgame that introduced and neatly resolved the multiple-AI story). I think if Teammate really wanted to have Part 3, all the parts needed to be smaller, in terms of the number of rounds, the number of puzzles, and the size/complexity of the puzzles themselves. I get the desire to mess with structure… Zyzzlvaria had a second phase that was all about messing with structure, and while Holiday Forest only had one structural innovation, carrying it out to the extent we wanted caused us to include more puzzles than we should have. But there were lots of fun structural things in the Museum and Factory, and frankly, I found the Innovations and Factory Floor metas far more interesting than anything I saw in the AI answer format gimmicks or metas. But even if you really like Part 3, I don’t think it’s defensible to say that Parts 1, 2, and 3 all fit into this Hunt as is, because the stats show that teams didn’t solve Part 3.

At the beginning of this rant, I pointed out that forcefeeding is a pattern that occurred ten and twenty years ago. One team responded to it by making things even bigger (and arguably more poorly edited). Another team–okay, the same team ten years later–looked at the issues and intentionally tried to fix them. I’m a little worried that, because this year’s winners are a giant team, they will all want to contribute and will write something that can only be solved by giant teams. Giant Hunts are what cause first-time constructors to be scared of winning; they assume they also have to write something giant. You don’t. Setec has had a writing philosophy for years to “write for the middle.” During Wrap-Up, someone said Mystery Hunt is growing and growing, and so it needs money or it will die. Donating to Hunt is a good thing, but please understand that if it does not keep growing, it will not die. In fact, it could use some shrinking and breathing room.

Despite my criticisms, thank you to Teammate for making something creative and cool. I thought the last Teammate Hunt excelled, especially for an online puzzlehunt, in terms of cohesive puzzles, art, and story, and I found the same to be true for the portion of this Hunt I was able to enjoy before time intervened and puzzles started disintegrating all around us. I know from experience that I can have a lot of fun with your puzzles and structures, and I hope you recover enough to write another Teammate Hunt soon. But since you effectively just wrote 1.5 Mystery Hunts, I won’t blame you if you need a break.

22 thoughts on “2023 Mystery Hunt, Part 2: More is Less”

  1. I was wondering if this was the year that teammate would shrink the length of hunt; the decreased recommended team size gave me hope; our opening experience on Friday did not. So it goes.

    I agree that Atrium was downright ridiculous. The more I sit with that set of puzzles, the more I fail to comprehend how it was considered a good introduction for small teams, especially coming from incredibly talented hunt writers who had been so successful at writing short, cute fish puzzles in their previous hunts. We are a team which usually skates well below the waterline of hints; when we weren’t unlocking hints all day on Friday and early Saturday and felt like we were going much slower than average, I openly speculated that the algorithm for hints had changed somehow. Nope! We were just doing significantly better than we were expecting, and the lack of hints was intended behaviour for a team with our placement. So it goes.

    However, I want to provide a slightly different perspective, and that is from a team that is currently postsolving the AI rounds. When we finished Reactivation, we were aware that we were receiving huge free answer nukes, and that we could (if we wanted) speed past all of the feeders to get directly to the metas. We are also a team which has a strong “whatever we don’t finish, we solve after Hunt weekend” mentality. So we decided during hunt to deliberately hold onto the nukes, and experience the AI rounds afterwards more or less as we think they were intended: no free solves, no hints at all so far, one puzzle at a time.

    This process is not yet finished; we are up to the third layer of Wyrm, and we have (I would guess) about half of Ascent unlocked so far. What I will say is that, in this format, we’re finding the rounds *really freaking cool* – well, mostly, anyway. Wyrm’s puzzles so far are pretty hard, but not in a bad way, and the round gimmick is interesting so far (although it has been done in some other formats). I’m waiting for a specific thing to happen, and excited to see how, specifically, it happens. Bootes we solved during the weekend (with the meta falling just after wrap-up). However, the feeders were *tough*. We free-answered most of them before we decided to stop with the MAD, and I think the only one we actually forward-solved was 5D Chess With Multiverse Time Travel. (We were only one of two teams to do so. However, the people working it told me that it was actually a really cool puzzle and worthy of inclusion. Hooray!)

    Less worthy is a lot of Ascent, which feels… very, very grindy. Some of this is down to the round gimmick, which necessarily makes handling the puzzles at all hard for people without a specific skill set. That’s a hard thing to do for an entire round, but possible. What isn’t necessary is the length of these puzzles: Moral of the Story has been called out, and for good reason, but I’d also like to give a shoutout to the final step of Mosaic, which takes an otherwise cute and snacky puzzle and makes it a slog juggernaut rather unnecessarily. But we haven’t seen the meta yet, or half of the round, so by a grain of salt go I. Maybe it gets better.

    At the end of the day, I do have to speak very very highly of Conjuri’s Quest – it has a gimmick that instantly captivated most of the team, and feeders which are, if not always easy, certainly a nice change of pace from the preceding rounds and unerringly very cute. With the benefit of the meta nerfs that it received during Hunt I think it would be safe to call it one of our team’s favourite rounds alongside Hall of Innovation. Had Museum been easier or shorter or both, and the previous AI rounds more reasonable, I think it would have been a wonderful experience for teams which could make it that far.

    Ah, well. So it goes.


  2. This is missing the forest for the trees, but as someone who (with a few others) solved that caves puzzle:
    The caves part was really fun and we demolished it reasonably quickly with a few of our power logic puzzlers splitting up the work. (I’m sure Dan can guess most of these folks.) Dan, you and Jackie should definitely go back and solve the caves part. A+ puzzle there.

    But then…

    After the caves part, there was more involving a video game that I had never heard of (never mind that I have never heard of most video games, somewhat by choice — Jason was also not familiar with it, which I consider a big sign of obscurity). Luckily for us, it turns out we had one person on the team who *loves* this particular game and he was brought in to help save us from a bunch of time on the relevant Fandom wiki (although that was still somewhat required). As a video game curmudgeon, did I love this? No, but something like this isn’t uncommon in long logic puzzles, because you have to extract an answer somehow… although combining it with niche data certainly isn’t necessary.

    Combining the video game + caves got us to a clue phrase — which in an average hunt you’d think would be the clue phrase to the answer which would have made this a long-ish, but reasonable puzzle — so we sent our super fan back to whichever part of discord we had pulled him from and prematurely celebrated.

    But wait, there’s more!

    That cluephrase was actually an instruction to *yet another* layer of the puzzle. The next step involved more asking-our-expert-back/scrolling-Fandom-wiki-and-or-reddit for data in a way that didn’t seem to add much to the puzzle other than… more time and grunt work. As someone who was there for the logic puzzles and not the research-about-the-author’s-favorite-thing-that-isn’t-my-thing part, this was a bit disappointing.

    This “Wait, we should be done, why is there an additional step?!” was something that was a bit of a running theme/frustration. See also: Quilted Squares, where we solved a perfectly fine puzzle, designed a freaking crossword dress (which I just happened to have on hand, so that was nice serendipity), and then instead of getting the answer, got even more puzzle!

    Both Caves and Quilted Squares *could* have been my favorite puzzles of the hunt, but in both cases, that third step added little to the satisfaction of solving (and in Quilted Squares, we ended up just buying it) and ended the puzzle with a sour taste in the mouth. The good news is, this problem is easily fixable by an editor who is a bit more ruthless with the metaphorical pen: In caves, make that first clue phrase give the answer; in Quilted Squares, just give teams the answer after they create the dress.

    And as someone who was a writer in 2004 and the director of 2014: “If a puzzle has more than 2 a-has, pare it back” is a really good rule of thumb for any future puzzle editors out there. You don’t want to be maximizing difficulty — you want to be maximizing fun.


    • I’ll also call out Think Fast, which had a cool in-person interaction as a capstone, and completing it gives an abstract hint for the final answer that baffled our team for hours.


      • Ditto. Think Fast (and to a lesser extent, Parsley Garden) were puzzles where I found myself thinking, “Shouldn’t we be done by now?” The Think Fast coda could have been slightly improved by moving the X to below the blank with a “length” bracket… an X in a blank doesn’t clearly communicate (this blank has X things in it). Also, using numbers up to and not exceeding 7 made it hard to tell whether we should be applying a number to each round, or applying Rounds 3, 5, and 7 when appropriate.


    • The main problem with overly long puzzles is that they break the feedback loop of “work really hard on a puzzle, get rewarded!” Getting stuck after you thought you’d already solved the puzzle is the greatest enemy of fun. Either split it into two puzzles, or find a way to make it clearer from early on that there’s a missing aha. This frustration goes double for ambiguous clue phrases; there are few things worse than spending hours solving a puzzle, getting a clue phrase, and then not being able to interpret it. There, the main fix is to just give the answer directly whenever possible.


    • There’s something poetic about the puzzles having a frustrating and unnecessary third act in a hunt with a frustrating and unnecessary third act.


  3. I informed our team leadership in no uncertain terms that doing the Caves was the most motivated I’d been in hours, and others were also solving it effectively, so they’d better not nuke it. I was prepared to make my terms even less uncertain if need be.

    I didn’t know that there would be another aspect to the puzzle that was entirely beyond me, but fortunately my colleagues handled it very efficiently, and my declaration of confidence was ultimately validated.


  4. Dan, as a member of the constructing team in 2008 with a good memory, I can explain the forcefeedy aspect of 2008’s hunt, which ended up not playing out. There was actually a planned bit where, when a team got down to one metapuzzle remaining, a team member was going to be arrested in an interaction as a suspect in the murder. While this team member was “jailed”, a character representing another suspect was going to give him the answer to that meta, while his teammates did some sort of “posting bond” interaction to get him out of jail.

    But as you can see here: http://puzzles.mit.edu/2008/team_data/teams/evil_midnight_bombers/index.html
    (scroll all the way to the end of the History section at the bottom of the page) you guys solved the last two metas 1 minute apart, so this was skipped and you went directly into the runaround.


  5. My not-so-hot take (informed by others’) is that this was a pretty good hunt minus some badly needed editing — both at the puzzle level and the higher structure level.

    You point out a common thread: these draggy hunts tend to come from teams without past hunt-writing experience who came off of winning long/difficult hunts. This can lead to a mindset that puzzles should be constructed for maximum difficulty and that clean testsolving is a sign of weakness. There’s a vast difference between solving mystery hunt vs. individual puzzles or even other online hunts, due to the size and the endurance factor of weekend-long continuous solving. Maybe someone should write a guide explaining how to do this recalibration for teams that haven’t seen hunt from the back end.


    • What’s weird is that Teammate does have writing experience, from their online puzzle hunt! My guess is that the error was thinking that MH puzzles needed to be harder than their online hunts. The Carnival Conundrum was very difficult! MH puzzles don’t need to be any harder than The Carnival Conundrum (and honestly the first third of MH should be easier than most of the Carnival Conundrum).


    • I’d take my hot take even farther: it was a very good hunt minus editing (length/difficulty control) at the puzzle level, and if that had been applied, the higher structure would have been just about perfect.

      I’m imagining a world where our team solved Mate’s Meta at Saturday (Friday night) 1am instead of Sunday 1am (with some significant progress within the Factory in parallel), and Reactivation at Saturday 4pm instead of Sunday 5am. That feels like a pretty good hunt to me, without changing anything about the order or puzzle count of rounds! (I mean, maybe a few fewer puzzles in Office and Basement.)


    • I would add that the puzzles were constructed not only to be difficult, but difficult specifically in un-fun, antagonistic ways. This was no accidental miscalibration. There are puzzles that are difficult because they require complex skillsets – advanced math, logic puzzle skills, the ability to untangle interactions – puzzles that are difficult because they HAD to be by their nature. One can imagine accidentally writing too many of these difficult puzzles: still a mistake, but reasonable. That is not what happened this year.

      This year’s difficult puzzles were deliberately made difficult in un-fun ways. Some, because they were longer than needed (like Dan’s 147 example). Others had deliberately antagonistic and unnecessary changes. For example, the Singaporean cryptic in Ascent which was also /diagramless/ for no puzzle-related reason. The grid shape is not used in extraction. Or: the quiz bowl puzzle, where the author’s notes even say that they falsely changed everything to locations “to avoid grouping sentences solely by category of answer word, and to increase puzzliness”: in other words, just to make it harder, which somehow equates to puzzliness. This is a TERRIBLE mindset to have, let alone proudly admit to. It changes my mindset from sympathetic to disappointed and even upset. I am happy to spend my time working on difficult puzzles. I am not happy to have my time intentionally wasted.


      • This feels really harsh. I agree that a lot of puzzle choices led to solvers finding them less fun, but I find it really hard to believe that teammate anticipated these choices would make things less fun.


      • And for what it’s worth, “advanced math, logic puzzle skills, the ability to untangle interactions” are very much among the specialized skills associated with MIT students and graduates. So while I agree that a lot of the 2023 puzzles were too long and/or arcane, these are not the categories of arcaneness I would want to see absent in a Mystery Hunt.

        (This brings to mind something I wanted to say in my post but forgot to… is Mystery Hunt (a) an event created for MIT students/community that is also open to the public, or (b) an event presented for the public, facilitated by MIT? Historically it’s (a), but it has certainly been drifting toward (b); ultimately I think this is solely the MIT Puzzle Club’s decision, but I encourage them to make a conscious decision either way, as there is some tension as long as different people think different things.)


      • I absolutely agree with you, Dan. In the spirit of fairness I will highlight two difficult puzzles that look great, that I wish I had gotten to during hunt:

        The World’s Largest Logic Puzzle looks like it deserves its title, but it also looks like it has an ingenious construction that deserves to be solved.

        My husband has played the game in 5D Barred Diagramless with Multiverse Time Travel, but did not join my team for Mystery Hunt. I plan to work on this puzzle with him.


      • I think that while it is true that teammate miscalibrated their puzzles, suggesting that miscalibration was deliberate or teammate was not trying to make people have fun is just very directly wrong. If you did not have fun, then I’m sorry about that, and I can entirely see why. But suggesting people were deliberately trying to make an un-fun puzzle is similar to feedback I got on my very first puzzles – ones that put me off puzzle writing for several years afterward. *No-one* sets out to make a puzzle that isn’t fun or engaging.

        For the quiz bowl puzzle, the puzzle started as “48 clues, pairs to contiguous United States, get a final America-themed answer”. There was a draft that did not obfuscate the clues, but the disentangling step was judged not very interesting or fun, like solving a 100-piece jigsaw where you paired based on the contour of the sentences rather than their content and then just Googled each question 48 times, none of which required real thinking. The obfuscation fixed the “no thinking” part and got better fun reviews in testing, but also radically increased the length and difficulty. It seemed long, but we decided to keep it given that it was near the end of the round and we were behind schedule.

        Odds are the right decision would have been to redesign and rewrite the whole thing – remove the pair to contiguous US step, so that it could be shorter than 48 clues, then redo extraction + construction. But, well, time pressure can make it hard to tell yourself you should start over, even if it would have been correct.


  6. For me, The One That Got Away was The Book of Fixed Stars. I know a bit of Arabic, and think historical astronomy is cool, so theme-wise this was an ideal puzzle for me. And having read the solution I think it’s really elegant — the theming and the logic parts of the puzzle came together so beautifully.

    Unfortunately we never got to those parts, because without a canonical list of stars-and-their-arabic-names (did we just fail to find the right reference?) it was taking us hours just to look up the star names and we bought the answer (which was absolutely the right call). It’s frustrating that a puzzle that could have been one of my all-time favourites got stuck behind such a tedious grind. Congrats to the setters and to the one team who solved it though.


    • The Orange “Super Meta.”

      2005 had six colored pairs of answer sets: the first set in each pair comprised a “normal” metapuzzle, and the second set, from puzzles unlocked after developing a super power, could be used in conjunction with the first set to feed into a “super” metapuzzle. Solving this meta required some mouseover text that appeared on the Hunt’s main map, and we unwisely failed to make it clear that this data was associated with this puzzle. Some teams did not notice the text, and at least one team did but assumed it would be used later (which is a weird assumption to continue to make when you’ve been stuck on something for a full day, but that said, this was still our fault). The moral of the story is to be aware of things that are being presented differently than when you tested them.


        • This is basically right, but I feel like “assumed it would be used later” is slightly overstating it. Certainly that was by far our leading theory, but at least some people did try looking at the text again at some point; you still have to solve the puzzle (and of course sleep deprivation plays a big role here; no one gets good sleep when they think they’re on the verge of winning). That raises the obvious question of why a hint saying to use that part resulted in the puzzle falling so quickly, but there’s a big difference between trying out a theory you don’t really believe in and knowing it’s right. This is a bit of a cliche with counterexamples in math: once you really believe a statement is false, you’ll find a counterexample very quickly even if you’d been trying to prove it for weeks. Also, iirc, there was an unfortunate wrinkle where someone tried to see if there was a secret round attached to those scroll-overs by guessing URLs for the puzzles, and we got yelled at by the writing team (reasonably so!), which a lot of people took as confirmation that they were in fact attached to puzzles we hadn’t opened yet.

