Recap: SUMS 2016

(This is a recap/review of the 2016 SUMS Puzzle Hunt, which happened in late December. Puzzles and solutions can be found here. This recap may contain spoilers, but I’ll try to avoid them where possible, especially for puzzles I recommend.)

Once upon a time, there was a yearly triumvirate of Australian puzzlehunts: MUMS (based at the University of Melbourne), SUMS (based at the University of Sydney), and CiSRA (sponsored by a research company in Australia). CiSRA stopped running their event in 2014, which brought the yearly number down to two, but then some CiSRA Puzzle Competition alums created the mezzacotta Puzzle Competition, which ran for the first time this year. However, in mid-December, it was still looking like this would be a two-Aussie-Hunt year, because SUMS had not occurred. Then just before Christmas, there was an announcement that there would be a 2016 SUMS just under the wire between Christmas and New Year’s, on less than a week’s notice.

Daily puzzle releases for Aussie hunts have traditionally been at noon in Australia, but the most recent mezzacotta and SUMS both released in the evening. This is pretty awful for Americans, especially if you’re on the east coast; for me, it’s gone from releases at 9pm or 10pm, meaning I can get a few hours of solving in before I need to sleep, to releases around 3am. That almost certainly means waiting to solve until the next day (after any west coast teammates have picked off the low-hanging fruit), though mezzacotta happened in the summer when my schedule is flexible and I was crazy enough to go to sleep early and wake up for the release. In any case, just as I think the MIT Mystery Hunt should be designed for students, and anybody from outside the MIT community should be an afterthought, I feel the same way here… if the new release time is better for Australians, Americans (including myself) should suck it up. But I won’t be sad if MUMS sticks with noon releases this year.

I solved SUMS 2016 with Killer Chicken Bones, a team that usually consists of some subset of Brent Holman, Rich Bragg, John Owens, Kenny Young, Todd Etter, Ian Tullis, and myself (if space allows, since I’m the most recent addition). This time around Todd and Ian sat out, and the five of us remaining came in seventh, solving 15 out of 20 puzzles. That’s pretty low for us, as we usually solve most if not all of the puzzles, and often we solve them all with no hints; this year, even with three hints, five of the puzzles eluded us. In fact, only one team, one of the usual plugh subteams [obligatory fist shake at plugh], solved all twenty puzzles, so I don’t think it’s a stretch to say this year was hard.

I am always curious how much testing is done with the Australian hunts… My guess is not a lot. The posted solutions often have commentary about what the constructors expected versus what actually happened, but when that commentary appears, it never mentions what happened in testing. If any constructors from SUMS/MUMS/mezzacotta are reading, I’d love to hear about your puzzle testing process, and if there isn’t any internal testing, I’d strongly encourage you to introduce it… I can tell you that virtually every American puzzlehunt gets solved by someone unspoiled (either as a unit or in bits and pieces) before it’s released to the public.

Aussie hunts have a static hint system (everybody gets the same hint after 24 hours, then another, and then another), and the helpfulness of these hints varies from hunt to hunt (and even year to year within the same hunt). In my personal opinion, the best possible first hint is one that both helps teams get started and helps them if they’ve gotten the aha but are stuck on extraction (since both those teams are sad for different reasons), and the third hint should pretty much tell teams how to solve the puzzle. There were several puzzles that were solved by very few teams even with three hints… in my opinion, that’s very unfortunate, and if that wasn’t intentional, testing should have shown that it was likely to be the case.

We also didn’t solve the metapuzzle, though I suspect we could have (at least with hints) if we’d tried… but by the time the metapuzzle was out, particularly since it was delayed due to tech difficulties, we had been worn down by the difficulty of the puzzles and pretty much threw in the towel. SUMS, like mezzacotta, has a prize for the first meta solve but doesn’t incorporate the meta in team rankings, which really minimizes motivation to solve it.

Puzzles I especially liked:

* 1.3 (Big Break), 4.4 (Knit-Picking): On a recent podcast interview (coming soon to a device near you) I mentioned I don’t tend to like puzzles where there’s nothing to do immediately. Aussie puzzles often have this issue, in that you’re given some kind of abstracted form of data, and there’s not much to do but look through it until you have an idea. But the best examples of these puzzles have clear enough patterns/repetitions that you’re immediately drawn to something you can look at, which then draws your attention to other patterns, and so you make gradual but satisfying progress. I’d put both of these puzzles in that category.

I won’t spoil either of them further because I found both very satisfying. I will say that if you solve Knit-Picking alone, you will have some fairly tedious work to do once you know what you’re doing, and the final answer might not be easy to identify.

* 5.3 (A Way Out!): This puzzle is based on a pop culture property I first encountered in the Mystery Hunt and then ran into on various websites since. That said, the puzzle only relies on that property to a small degree; the meat of the puzzle is a set of subpuzzles riffing on a very specific theme, and the puzzle uses that theme on multiple levels in many creative ways. I think this was the most satisfying puzzle I encountered in this Hunt.

* 1.2 (Daze): This got solved while I was asleep, but I think it’s nifty.

* 3.4 (xxXWord): I also wasn’t involved in solving this, but the constraints satisfied are dazzling (although the final step is generally considered a no-no in puzzle construction).

Puzzles I especially disliked (sorry):

* 2.2 (Schoenflies When You’re Having Fun), 3.2 (Die Wohltemperierte Sequenz): These were solved by the fewest teams (three and two respectively despite three hints) and both were intensely frustrating because they were what I usually refer to as “Guess what I’m thinking” puzzles (which I’ll abbreviate as GWIT here, since it’ll come up in the future). These are puzzles where the puzzle itself gives you a lot of information, and the answer extraction is achieved by doing one of many possible things with that data, with no indication of what you should do. Rookie constructors often create puzzles like this innocently, because the extraction is the first thing they thought of, and it doesn’t occur to them there are lots of other reasonable ways to proceed. An elegant puzzle, in my opinion, will give you motivation to do the right thing.

For 3.2 in particular, I did a lot of legwork determining which WTC segments were transposed, and by how many steps; that data took a long time to collect, and once you had it, I feel that what you were expected to do with it required some pretty big leaps. (Similarly, we knew we needed crystal structure symmetry groups in 2.2, but it wasn’t at all clear how to use the “tags”.) It also didn’t help that the hints for these two puzzles were overstuffed; they contained a whole bunch of nouns that were clearly intended to be useful, but if you already knew to manipulate these things, it wasn’t clear how. Again, good playtesting will help bring these things to light in advance.

* 4.3 (Hexiamonds): I wanted to like this puzzle much better than I did, but again, once you had the twelve grids filled (and I’m impressed that they could all be filled uniquely), there were many things you could do with the completed boards. The intersection letters you were supposed to get read as garbage until you do a further action with them, and that’s to me the essence of a GWIT puzzle. If there are ten things you can do, and one of them yields an answer in one step, that might be okay. If there’s only one thing you can do (or one thing most clearly clued) that yields an answer in multiple steps, that’s okay too. But in this case, the solver is expected to take one of many possible paths, and then look at one of those paths much more closely, and that’s not a reasonable expectation.

The hints were admittedly better on this one, and fourteen teams (including KCB) eventually solved it. But nobody solved it without at least two hints, which probably means the information in those two hints needed to be in the puzzle itself.

* 3.3 (Transmutation): This might not belong here, because I actually liked most of this puzzle… But I think it was severely underclued. If there was a reference to chemistry (the title is one, but it’s very subtle) or especially to making three changes instead of just one (which was given in Hint 1), I think the aha would have been more reasonable, and once we had the aha, the part after that was super-fun. It makes me sad that the fun part was hidden behind a wall, and then the author didn’t really give you the right tools to break through the wall. Props to the four teams that still did so without the hint.

I’ll add that SUMS has a very user-friendly website, well-presented puzzles, and a good scoreboard that tells you which puzzles are causing teams trouble, and when teams are solving things (allowing all teams to see the “competitive story” of the hunt). These are nice features I’ve come to take for granted in Aussie hunts, and having just completed a similar event that lacked these bells and whistles, I appreciate them much more. Polish really makes a difference… more on that in an upcoming post.

Some of the puzzles were frustrating, and the scheduling was not great for my American team, but I certainly enjoyed aspects of SUMS 2016. It was not my favorite Aussie hunt ever, and I think it was certainly pitched on the hard side (possibly unintentionally due to the rush to get it up in 2016), but I thank the constructors for their hard work in putting it together.


6 thoughts on “Recap: SUMS 2016”

  1. I thought what to do for Schoenflies was reasonably clear (although the ordering mechanism was a little iffy), but the dealbreaker for our team was that actually getting all of the correct products proved to be impossible. 3.2 was definitely too leapy.

    Also, certainly the timing (both the post-Christmas release and the late night release for Americans) was slightly unfortunate.


    • We had sources for the symmetry groups that didn’t match the ones in the puzzle solution, so ours weren’t coming in pairs… thus, we never considered pairing them. If we had, I’m not sure the presentation of the tags suggested that those are the things that should be combined (though if that were the only piece of data unused at that point, that might have been enough motivation to try it).

      Either way, I’m not sure this puzzle fit the description given on the front page: “The puzzles are designed to not require any specialist knowledge and so should be quite accessible for everyone.” Certainly the low solve rate among so many teams suggests a lot of people got stuck at some stage.


  2. Thank you so much for your blog; I’ve been wanting to discuss things like this but have never found a forum.


    Just so happens that – heading into this hunt – among my two least favorite puzzle types have been music sheet puzzles and organic chemistry puzzles; I think both subjects tend to offer way too much potentially useful information in their natural states while offering very little clear info on solution methods. This hunt did nothing to sway me away from previous notions. (We happened to solve the chemistry puzzle, using the ‘orderbyeverythingremotelypossible’ method; even then it took some imagination since there were multiple internet answers to the reactions. We never solved the music thing, which still just looks absurd.)

    Mouthy (4.2) looks likely to add a third puzzle type to the ones above I can’t stand. What we thought you should be doing — shushing people in a library — turned out to have nothing to do with the puzzle. Or maybe it did, I don’t really remember.

    We also failed to solve Psychosomatic, as nobody had Soma cubes around or an ability to fake it. Rubik’s cube stuff you can sometimes get by without a cube present (see: 4.9 from the Cambridge thing), Soma, I don’t think so.

    I liked several of the puzzles, though – generally the ones you mentioned. Was also happy to see Quadtrees in a puzzle (1.4), though the eventual method/message to that was less interesting than the presentation.

    5.4 was a fantastic puzzle all the way up to the end, which seemed ridiculous — especially the ‘white dot on’ for half the XOR and ‘white dot off’ for the other half. We ended up solving it just by finishing the last puzzle, using some semi-educated guesses on 20% of the XOR, then flipping things on/off until something that looked like letters formed on the rest. Frustrating when you do a puzzle properly and follow (what you think are) the proper directions and you still can’t get it done. But again, loved the start and the concept.

    Several of the day 1-3 puzzles were solid; I won’t comment on them individually but thought that they’re the sort of thing that represent half of these hunts and that keep everyone coming back – puzzles with decent ideas packed in new-ish containers that are eventually solvable. Good stuff, perhaps a bit more difficult than ‘school of fish’ types on average since the methods aren’t always as clear.

    Really liked Transmutation (perhaps until the ending, which was hard to find), as I hadn’t seen that before. But it was eventually clear (for us, anyway) what to do and remarkably difficult (and satisfying) to make some of it work. xxXword (3.4) was also neat, though it was one of those that made you wonder about the difficulty (4 stars, even though it was one of the first solved in the round, while we still had little clue on the 3 star music puzzle).

    What isn’t fully appreciated until you do a bunch of these contests is just how much of a grind they can be (I think the #1 quality needed to beat Aussie hunts is perseverance). Four puzzles a day for five days really doesn’t sound like that much, especially if you have many people working on it. But days one-three tend to soften you up, and often you have a straggler or two left while your sleep has been a bit disrupted and a teammate or two fails to show up. Then things like day four hit…

    Knit-picking was the first we solved (thankfully someone saw what to do and also had the ability to interpret the rest; it’s the kind of puzzle I have a hard time following even with the solution). And eventually we made it through (K)not chemistry, though my mind is still trying to make atoms out of those pictures (a thought that was incredibly hard to shake). But Mouthy – ugh. And Hexiamonds presented some logical method of getting started (that was assisted when we found all possible hexiamond grids solved online) but soooo much information to grind through along with no clear solution method following. Just a ton of work, also work that wasn’t easily shared among teammates. Seems a much better concept than a functioning puzzle.

    So had about half of that day done when we were struck by an equally imposing Round 5. Also liked a lot of 5.3 until being unimpressed by a randomgram finish. Really? Thankfully was prepared for that (even with one of our nine letter words wrong), but I really don’t like things like “you can see HOLOGRAM on the outside, so you know the method is right. Just anagram that with what’s in the middle.” Why not just make the answer spiral inward?

    Mazes, again, I thought was 95% fantastic, but still a lot to do, especially on some of them near the end (the final maze was trivial by comparison). It really makes me want to play that game. Soma cubes, again we didn’t really even attempt (hope the puzzle was fun). And I really liked 5.1 – like you I find it’s nice to be able to work on a puzzle for a while without being told what I’m doing, and then slip into the ‘aha!’. But the cluephrase at the end was difficult to parse. Is that a B or an 8?

    Also the Meta I thought was pretty dang cool. And I think we should have seen it earlier, but we were tired by that point.

    Anyway, I had a lot of fun doing this one, and I’ll never complain about free puzzles/prizes. Will happily do SUMS any time it comes out. I do think that this particular hunt felt a bit rushed and as if it went through a little less playtesting than normal. But I’ll tend to remember the ideas and puzzles I liked, and hope that they can continue the fresh concepts while just doing a little more reining in of the ideas to make sure that they’re solvable to teams spending the effort.

    Thanks again for your commentary; look forward to hearing your thoughts on Cambridge.

    – JJ


    • RE: Psychosomatic (perhaps my favorite puzzle of the hunt due to construction elegance… I suppose solving it first helps too)

      I used software that seems to be specifically suited for tackling Soma-type puzzles, Somatic, combined with spreadsheets to help visualize the letters.


  3. I’m surprised that your team didn’t try to do the meta. It started off as a lot of work but was interesting as a change of pace.

    I hit a point in the end stages of the hunt where I wasn’t being productive on solving, so I entertained myself writing code that would try to crack our last 5 puzzles. Zach and I ended up backsolving Mouthy and Psychosomatic, but weren’t able to backsolve Schoenflies (given the silly form of the final answer), Mazes (the “meta piece” they gave out was incorrect!), or Sequenz (since the meta piece was hard enough to obtain that only 1 or 2 teams got it).

    I guess the moral of the story is that it paid off because Cinnabar Herrings beat KCB by exactly one point. 🙂


  4. Can confirm that last year’s SUMS hunt was rushed, and many puzzles did not go through much testing. Managing to sneak in a 2016 hunt MAY have been worth it though (matter of opinion). In addition to that, many of us were new to puzzle writing. Thank you all for your feedback, I just discovered this blog today but I’ll definitely be back!

    To my knowledge, the following puzzles were written by first-time puzzle authors:

    1.1 The Connection
    2.2 Schoenflies When You’re Having Fun
    2.4 Child-like
    3.2 Die Wohltemperierte Sequenz
    3.4 xxXword
    5.4 Mazes

