(This is a recap/review of the 2016 SUMS Puzzle Hunt, which happened in late December. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)
Once upon a time, there was a yearly triumvirate of Australian puzzlehunts: MUMS (based at the University of Melbourne), SUMS (based at the University of Sydney), and CiSRA (sponsored by a research company in Australia). CiSRA stopped running their event in 2014, which brought the yearly number down to two, but then some CiSRA Puzzle Competition alums created the mezzacotta Puzzle Competition, which ran for the first time this year. However, in mid-December, it was still looking like this would be a two-Aussie-Hunt year, because SUMS had not occurred. Then just before Christmas, there was an announcement that there would be a 2016 SUMS just under the wire between Christmas and New Year’s, on less than a week’s notice.
Daily puzzle releases for Aussie hunts have traditionally been at noon in Australia, but the most recent mezzacotta and SUMS both released in the evening. This is pretty awful for Americans, especially if you’re on the east coast; for me, it’s gone from releases at 9pm or 10pm, meaning I can get a few hours of solving in before I need to sleep, to releases around 3am. That almost certainly means waiting to solve until the next day (after any west coast teammates have picked off the low-hanging fruit), though mezzacotta happened in the summer when my schedule is flexible and I was crazy enough to go to sleep early and wake up for the release. In any case, just as I think the MIT Mystery Hunt should be designed for students, and anybody from outside the MIT community should be an afterthought, I feel the same way here… if the new release time is better for Australians, Americans (including myself) should suck it up. But I won’t be sad if MUMS sticks with noon releases this year.
I solved SUMS 2016 with Killer Chicken Bones, a team that usually consists of some subset of Brent Holman, Rich Bragg, John Owens, Kenny Young, Todd Etter, Ian Tullis, and myself (if space allows, since I’m the most recent addition). This time around Todd and Ian sat out, and the five of us remaining came in seventh, solving 15 out of 20 puzzles. That’s pretty low for us, as we usually solve most if not all of the puzzles, and often we solve them all with no hints; this year, even with three hints, five of the puzzles eluded us. In fact, only one team, one of the usual plugh subteams [obligatory fist shake at plugh], solved all twenty puzzles, so I don’t think it’s a stretch to say this year was hard.
I am always curious how much testing is done with the Australian hunts… My guess is not a lot. The posted solutions often have commentary about what the constructors expected versus what actually happened, but when that commentary appears, there’s never any mention of what happened in testing. If any constructors from SUMS/MUMS/mezzacotta are reading, I’d love to hear about your puzzle testing process, and if there isn’t any internal testing, I’d strongly encourage you to introduce it… I can tell you that virtually every American puzzlehunt gets solved by someone unspoiled (either as a unit or in bits and pieces) before it’s released to the public.
Aussie hunts have a static hint system (everybody gets the same hint after 24 hours, then another, and then another), and the helpfulness of these hints varies from hunt to hunt (and even year to year within the same hunt). In my personal opinion, the best possible first hint is one that both helps teams get started and helps teams that have gotten the aha but are stuck on extraction (since both those teams are sad for different reasons), and the third hint should pretty much tell teams how to solve the puzzle. There were several puzzles that were solved by very few teams even with three hints… in my opinion, that’s very unfortunate, and if that wasn’t intentional, testing should have shown that it was likely to be the case.
We also didn’t solve the metapuzzle, though I suspect we could have (at least with hints) if we’d tried… but by the time the metapuzzle was out, particularly since it was delayed due to tech difficulties, we had been worn down by the difficulty of the puzzles and pretty much threw in the towel. SUMS, like mezzacotta, has a prize for the first meta solve but doesn’t incorporate the meta in team rankings, which really minimizes motivation to solve it.
Puzzles I especially liked:
* 1.3 (Big Break), 4.4 (Knit-Picking): On a recent podcast interview (coming soon to a device near you) I mentioned I don’t tend to like puzzles where there’s nothing to do immediately. Aussie puzzles often have this issue, in that you’re given some kind of abstracted form of data, and there’s not much to do but look through it until you have an idea. But the best examples of these puzzles have clear enough patterns/repetitions that you’re immediately drawn to something you can look at, which then draws your attention to other patterns, and so you make gradual but satisfying progress. I’d put both of these puzzles in that category.
I won’t spoil either of them further because I found both very satisfying. I will say that if you solve Knit-Picking alone, you will have some fairly tedious work to do once you know what you’re doing, and the final answer might not be easy to identify.
* 5.3 (A Way Out!): This puzzle is based on a pop culture property I first encountered in the Mystery Hunt and then ran into on various websites since. That said, the puzzle only relies on that property to a small degree; the meat of the puzzle is a set of subpuzzles riffing on a very specific theme, and the puzzle uses that theme on multiple levels in many creative ways. I think this was the most satisfying puzzle I encountered in this Hunt.
* 1.2 (Daze): This got solved while I was asleep, but I think it’s nifty.
* 3.4 (xxXWord): I also wasn’t involved in solving this, but the constraints satisfied are dazzling (although the final step is generally considered a no-no in puzzle construction).
Puzzles I especially disliked (sorry):
* 2.2 (Schoenflies When You’re Having Fun), 3.2 (Die Wohltemperierte Sequenz): These were solved by the fewest teams (three and two respectively despite three hints) and both were intensely frustrating because they were what I usually refer to as “Guess what I’m thinking” puzzles (which I’ll abbreviate as GWIT here, since it’ll come up in the future). These are puzzles where the puzzle itself gives you a lot of information, and the answer extraction is achieved by doing one of many possible things with that data, with no indication of what you should do. Rookie constructors often create puzzles like this innocently, because the extraction is the first thing they thought of, and it doesn’t occur to them there are lots of other reasonable ways to proceed. An elegant puzzle, in my opinion, will give you motivation to do the right thing.
For 3.2 in particular, I did a lot of legwork determining which WTC segments were transposed, and by how many steps; that data took a long time to collect, and once you had it, I feel that what you were expected to do with it required some pretty big leaps. (Similarly, we knew we needed crystal structure symmetry groups in 2.2, but it wasn’t at all clear how to use the “tags”.) It also didn’t help that the hints for these two puzzles were overstuffed; they contained a whole bunch of nouns that were clearly intended to be useful, but if you already knew to manipulate these things, it wasn’t clear how. Again, good playtesting will help bring these things to light in advance.
* 4.3 (Hexiamonds): I wanted to like this puzzle much more than I did, but again, once you have the twelve grids filled (and I’m impressed that they could all be filled uniquely), there were many things you could do with the completed boards. The intersection letters you were supposed to get read as garbage until you do a further action with them, and that’s to me the essence of a GWIT puzzle. If there are ten things you can do, and one of them yields an answer in one step, that might be okay. If there’s only one thing you can do (or one thing most clearly clued) that yields an answer in multiple steps, that’s okay too. But in this case, the solver is expected to take one of many possible paths, and then look at one of those paths much more closely, and that’s not a reasonable expectation.
The hints were admittedly better on this one, and fourteen teams (including KCB) eventually solved it. But nobody solved it without at least two hints, which probably means the information in those two hints needed to be in the puzzle itself.
* 3.3 (Transmutation): This might not belong here, because I actually liked most of this puzzle… But I think it was severely underclued. If there was a reference to chemistry (the title is one, but it’s very subtle) or especially to making three changes instead of just one (which was given in Hint 1), I think the aha would have been more reasonable, and once we had the aha, the part after that was super-fun. It makes me sad that the fun part was hidden behind a wall, and then the author didn’t really give you the right tools to break through the wall. Props to the four teams that still did so without the hint.
I’ll add that SUMS has a very user-friendly website, well-presented puzzles, and a good scoreboard that tells you which puzzles are causing teams trouble, and when teams are solving things (allowing all teams to see the “competitive story” of the hunt). These are nice features I’ve come to take for granted in Aussie hunts, and having just completed a similar event that lacked these bells and whistles, I appreciate them much more. Polish really makes a difference… more on that in an upcoming post.
Some of the puzzles were frustrating, and the scheduling was not great for my American team, but I certainly enjoyed aspects of SUMS 2016. It was not my favorite Aussie hunt ever, and I think it was certainly pitched on the hard side (possibly unintentionally due to the rush to get it up in 2016), but I thank the constructors for their hard work in putting it together.