For the recent evening Ministry of Testing Meetup here in Cambridge I brought some Testing Puzzles for people to solve. Or rather some Puzzles for people to test, naturally, and on the hunt for appropriate puzzles I tried to find types that would give us an opportunity to exercise and reflect on basic testing skills.
The evening was a blast, as was the preparation; of course I had to try out all the puzzles, riddles and challenges myself first to find the good ones! I ended up with about five puzzles that I loosely fitted into three categories (questions & assumptions, strategy, models), plus some backups. In the end we did only three puzzles from two of the categories, but that still gave us plenty of ground to cover and a variety of angles to explore.
Puzzle category: Questions & Assumptions
My first group of puzzles were of a type that I feel is closely related to testing: reflecting on the question (instead of only focusing on the answer) and being transparent about your assumptions. Some question-type puzzles, however, follow a simple pattern: “Tricky Question” with “One Correct Answer”. That’s not what testing feels like to me; tricky questions we may well have, but in my experience there is rarely one straightforward, correct answer. And often it’s not about the answer itself, but about exploring the question and what we learn from pursuing the space it opens up, from being able to step back and reflect on the bigger picture rather than jumping to a potentially shortsighted solution. Which is why I chose two quite open-ended questions for us to explore.
Q: How can you find a needle in a haystack?
This is a personal favourite of mine which I picked up from a former colleague, who uses this as a developer interview question. I asked people to pair for about 10 to 15 minutes to come up with ideas, after which we had a group debrief, where we gathered some solutions people had come up with. Solutions included using X-Ray, a metal detector, magnets, having cows eat up all the hay, burning the haystack down (needle will be left over), throwing it into water (needle will sink down), applying sorting and weighing strategies, and a few more.
Then we moved on to discuss how people came to their conclusions. Since they were pairing, they had to share their thought process and discuss the question, which gave me a great opportunity to listen in as an added benefit – and I chuckled at hearing people reflect on the assumptions they were making and the questions they threw at the requirement itself. It was easy to see how questioning the requirements and getting more background information could render some of the solutions void:
- What are the specifics of the things we’re looking at, e.g. what is the needle made of? It could be metal, but also thorn or wood – in which case burning down the haystack or using a metal detector might actually not be such a good idea …
- How much time do we have for this (could we just let the haystack rot away)? Is there any expectation on what state the haystack (or needle) would have to be in after?
- Questioning our mental image of the thing also proves interesting – what does the haystack actually look like, e.g. how big is it, also in proportion to the needle? Is it what we think (assume!) it is or could it be something else completely?
- How do we know there is a needle in the haystack? And is it just one needle, or is the haystack full of needles? How did it get in, who put it there, and how do we know?
- And of course: Who is the user? What do they want it for, what are they after: a needle? Or a needle-free haystack? What’s the motivation behind it – could deciding it’s not worth the effort also be a solution?
I was interested to know if any testing heuristics had been used to approach the problem – and took that as an opportunity to provide copies of the Test Heuristics Cheat Sheet (via Elisabeth Hendrickson), also as a reference for the upcoming puzzles. Not many heuristics had been used, but one interesting approach that was mentioned was going through the spec word by word, looking for hidden meanings and possibilities: “How” – “can” – “you” – “find” – “a” – “needle” – “in” – “a” – “haystack”. This in turn can trigger questions and ideas (about the number of needles and haystacks, the specifics of the things, the location, the motivation behind the task, …) but can also help reveal misunderstandings and assumptions made upfront.
Q: How do you test a light switch?
Next up was another interview question, one a testing colleague of mine had once encountered in the wild. We kept the same pairs and again regrouped after 10-15 minutes. Whether it was the previous debrief, where we had talked about strategies and testing heuristics, the fact that this resembled an actual testing task more closely, or the trigger word “test” in the question – this time round people reached for James Bach’s SFDIPOT heuristic pretty much straight away.
I loved seeing creativity spike here as well, and how people again picked up on assumptions, rolled the question over in their heads (“could this be a very light, that is a not heavy, switch?”), and used heuristics and visualisation, e.g. mind maps, to cluster ideas.
This puzzle allowed us to reflect on using a heuristic to generate test ideas. We also touched on how different heuristics can complement each other and be combined – you could, for example, use SFDIPOT and then look into different testing types to add on, e.g. reflecting on security or performance. And having come up with a lot of great and sometimes crazy test ideas, we reflected on how you face the next challenge: to step back, dial down, and figure out which ideas you should prioritise and which ones you can let go, perhaps after gathering some more information from stakeholders.
Puzzle category: Strategy
In our testing community events here in Cambridge I often benefit from having great people come to the Meetups who are willing to give just about anything a try, who engage, cooperate and are open to whatever I throw their way. So when I revealed the next puzzle, nobody even batted an eye – people happily dived in to find a good strategy for tackling the task presented. Awesome.
The wordsearch puzzle idea was suggested to me by the test manager, who had originally designed it for one of his daughters but naturally thought it could be fun for testers, too. And right he was. It’s quite a simple setup: each person gets a sheet to fill in, like this:
It was great fun watching the testers concentrate on filling in the puzzle, each coming up with their own unique approach to tackle the task, which we also discussed in the debrief: what kind of strategy are people choosing to do this?
- Some testers reflected on their approach for a while, whereas others jumped right in to explore the space, stepping back from time to time (“plunge in and quit”)
- The grid was filled in using different patterns: some worked their way from the middle outwards, others focused more on the borders in a mostly left-to-right pattern
- There was some reflection on doability, calculating the total number of boxes in the grid (14×16 = 224 boxes) and comparing it to the total number of characters that would need to fit in (26 words, i.e. 153 characters)
- Clustering was used: determining the individual lengths of the words (ranging from 9 to 3, from Alligator to Yak) and then finding useful groups by length to fit into the space, e.g. groups of 14 or 16 characters in total to fit into a row
- Some kept the end user in mind, making the puzzle intentionally harder to solve, e.g. by using different orientations, overlapping words, choosing some of the rarer patterns (“oo”, “ll”, “qu”, “pa”, …) that would be easier to detect as fillers to throw people off, by mixing upper and lower case, and by using additional special characters
- … while others focused more on speed to solve the task, ending up with a straightforward left-to-right pattern and the bottom third of the grid left over, filled in with a string of A’s.
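As a side note, the doability check and length-clustering strategies above can be sketched in a few lines of Python. The grid dimensions (14×16) and the character total (153 across 26 words) come from the puzzle itself; the sample word-length list below is hypothetical, since only Alligator (9 letters) and Yak (3) were named:

```python
# Sketch of the "doability" check and length-clustering strategy.
# Grid size and character totals are from the puzzle; the word-length
# list is made up for illustration (only Alligator = 9 and Yak = 3
# appeared in the original word list).
from itertools import combinations

ROWS, COLS = 16, 14          # assumed orientation: 14 boxes wide, 16 tall
capacity = ROWS * COLS       # 224 boxes in total
chars_needed = 153           # 26 words, 153 characters

# Doability: the words fit, with 71 boxes left for filler characters.
assert chars_needed <= capacity

# Hypothetical word lengths (Alligator = 9, Yak = 3; the rest invented)
lengths = [9, 3, 7, 5, 6, 4, 8, 5]

def row_fills(lengths, width=COLS):
    """Yield groups of word lengths that exactly fill one row."""
    for r in range(1, len(lengths) + 1):
        for combo in combinations(lengths, r):
            if sum(combo) == width:
                yield combo

# One group of words whose combined length fills a 14-box row
print(next(row_fills(lengths)))
```

Of course nobody wrote code on the evening – but this is essentially the mental arithmetic several testers were doing on paper.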
Famous last words
There is a beauty in watching people tackle a problem or a task creatively, observing a whole lot of different approaches and strategies being used and seeing these reflected in the results. It’s a thing I love about testing – it respects detours, shortcuts and crazy ideas, it values diversity – a different angle always holds the opportunity of learning and seeing something new. Meeting up with people to explore this together just makes it so worthwhile. Or, in the words of one of the attendees, as a review of the evening (which quite blew me away):
“I learnt, I laughed, and fell in love with testing all over again.”
Thanks to all attendees and contributors for the inspiration and the fun evening! And a special shout out to LockHouse Escape Games Cambridge (LockHouse.co.uk) who sponsored a free session for their Escape Games :)