Joined In Progress – REDDOThunt

A Singapore-based online hunt called the REDDOThunt began this morning at 10am ET and will continue for another 35 hours or so. I heard about this event on puzzlehuntcalendar.com a while ago, but I didn’t mention it here for three key reasons:

1) I wasn’t sure what to expect, not having participated in any events created by the authors.

2) The warm-up puzzles they posted didn’t set my world on fire (no offense to the authors, since they were intended for a less seasoned audience).

3) Honestly, I forgot.

But having said all this, I’ve been working through it as a solo solver, and I’m pleasantly surprised; I’ve solved nine puzzles and the first metapuzzle so far, and they range from fine to quite good. I recommend checking them out; if you’re going for points, note that the first-round puzzle values drop when hints come out at 10am ET Saturday morning.

Congrats to Aviations Laws for already cutting through the thing like a knife through butter, and good luck to anyone already solving or who decides to join the fray. I’ve got a semi-local work conference to attend tomorrow, so I’ll be solving for another hour or so before sleeping and then disappearing for much of tomorrow.


3 thoughts on “Joined In Progress – REDDOThunt”

  1. I am a local who took part. While it is heartening, as part of the community, to see more interest and more hunts, I am somewhat concerned about a possible trend of new hunts following a certain style – with no perceived need for editing, limited test-solving, and a reliance on a hint system to bridge leaps that might be less fair. The majority of the puzzles in this hunt were fortunately clean, but as most solvers who spend a decent amount of time on a hunt can attest, even one problematic puzzle can be frustrating. Barring, say, the benefit of first-hand involvement in setting a Mystery Hunt alongside experienced organizers and editors, what do you think could be ways to help the hunt community and aspiring writers develop in this area?


    • I may write a more detailed post about the REDDOThunt once things slow down, but as first-iteration online hunts go, I thought it was extraordinarily clean. I solved solo, and of the 19 non-meta puzzles, I solved 14 without a hint and backsolved 4 (figuring out the answer primarily from the metapuzzle, sometimes with help from the puzzle theme). The metapuzzles were extremely forgiving, so if a puzzle proved impossible to solve, it was definitely possible for an experienced solver or team to get past it without that solve. (I actually solved the second meta with 3/7 answers, which I admit required a leap of faith.)

      Of the five puzzles I didn’t solve, looking at the solutions, I didn’t really see anything that felt unfair. My biggest complaint was the solution-writing style I often see from rookie constructors, with lines like, “It should be obvious to the solver that…” If my biggest objection is how they phrased their solutions, that’s probably a good thing.

      So I’m not sure this hunt is an example, but I agree that puzzlehunts that go through inadequate testing and/or rely on hints are a bad thing. Canned, pre-written hints can also be extraordinarily frustrating in the Aussie hunt format; if you’ve been stuck on a puzzle for 24 hours, and the hint you get tells you something you already know (or is esoteric enough that you just can’t understand it), that can be even more demoralizing than getting no hint at all.

      The solution to hunts that aren’t sufficiently tested is probably pretty obvious: Test your hunts sufficiently. Good test-solving practice is something that deserves its own post, and I’ll write one some time in the future, but some things off the top of my head that help are getting testers to fully document their solving process, testing multiple times to look for trends, and taking the results seriously; it’s easy for constructors to look at a complaint and assume, “Well, most solvers won’t have that problem.” Maybe so, but unless you get somebody else to test who doesn’t have that problem, that’s a pretty big (and perhaps egotistical) assumption.

      Gauging the difficulty of a hunt once it’s clean is a whole other thing; I think the 2017 Mystery Hunt had extremely clean puzzles, but as a result, the top teams cut through it like butter and it ended up super-short for those teams (but not for most teams, and the merits/disadvantages of that are a separate debate). But I agree that even if a hunt full of broken/kludgy puzzles takes the intended amount of time, it won’t be fun, and it takes a lot of constructing/testing discipline to avoid that. Thankfully, I thought this hunt did avoid that.

      I’m in the midst of a busy season at work, but later this year I hope to write some “roundtable” posts to get people to discuss their constructing/solving experiences… maybe this comment will inspire the first one.


      • Definitely looking forward to your future posts on test-solving practice and hunt construction experiences!

        My team also solved 15 of the non-metas without hints, and backsolved the remaining 4 from the second meta. Backsolving those 4 seems to have been nearly universal amongst teams who solved the second meta, though I felt we probably could have solved 2-3 of those with more time and effort. So I agree the puzzles are generally clean, notwithstanding room for improvement in the execution of some puzzles. Backsolvability to me is not necessarily a good or important thing, but I have no complaints either, given that it arises naturally from the construction of their metas.

        My earlier comment was not specific to the experience of this hunt; it was more broadly a concern about mindsets and approaches towards hunt writing that are more likely to result in problematic puzzles. It’s kind of like driving a bus without checking the brakes thoroughly (perhaps with the presence of passenger seatbelts playing a factor). Should there be better awareness of the “hunt contract”? Having clean, unbroken puzzles should be the minimum threshold to aim for. Testing and editing should also help refine a puzzle so that its steps are fair, intuitive, logical, and ideally elegant.

        Beyond the importance of prioritizing the hunt experience of solvers, I also feel that writers learn the most during the iterative puzzle construction process itself. So it is perhaps ultimately not good for the future of the hunt community if some of these hunt best practices are not propagated, and the new generation of puzzle enthusiasts does not develop their skills to their full potential.

        I had the rare benefit of being somewhat involved in the construction of a Mystery Hunt, observing and learning from the best. But even in the context of such an established hunt, the best writing practices are hardly passed down in a conscious or comprehensive manner. The presence of “freelance” experienced puzzlers is probably often helpful in guiding first-time Hunt-constructing teams, but my sense is that relying on oral historians may not suffice to sustain such a deep knowledge base in the long run.

