While a new feature can be intimidating, with the right planning nothing should be too much to handle. Without really realizing it, I have grown into a process for decomposing a new feature. It pulls from a few different concepts into a formula that has worked for me so far.
Now is a good time for a disclaimer: I'm a big fan of mind maps. Growing up, I was introduced to mind maps in a variety of classes and scenarios, but I never really applied them to testing until meeting my (now) testing mentor. He's also a big fan of mind maps and the driving force behind my continued use of them.
I think you know where this is going. Yes, I use mind maps for my test planning. This is not a new concept, and there are at least a couple of good reasons why I like using them. They're lightweight. They're quick to whip up. They're easily modified. And most importantly, they're easy for anyone to understand. They serve as a sort of living document of the testing being performed. And once the testing is complete, they serve as documentation of what was considered during the initial testing phase of the feature.
In a previous blog post, I referred to Agile testers needing to wear many hats. I employ two of those hats when planning new feature testing: one of the blackbox tester, followed by one of the whitebox tester. And I do so in that specific order.
Step 1: Blackbox Analysis
First things first - I mind map the usage of the feature.
- Starting with any provided documentation (assuming there are any requirements or specifications), nodes are created in the map based on actions I'm told I can perform. These are the expected/documented actions of the feature.
- Then nodes are created based on my assumptions about what I think I should or should not be able to do. As far as I'm concerned, I'm playing the role of a user who has not read any documentation. So if I *think* I can or can't do something, that is a scenario worth testing. These are the assumed actions of the feature.
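To make the idea concrete, here's a minimal sketch of what such a mind map might look like as a data structure. The feature ("File upload"), its actions, and the `expected`/`assumed` labels are all hypothetical examples, not anything from a real spec:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One node in the test-planning mind map."""
    label: str
    kind: str  # "expected" (from the docs) or "assumed" (tester's guess)
    children: list["Node"] = field(default_factory=list)

    def add(self, label: str, kind: str) -> "Node":
        child = Node(label, kind)
        self.children.append(child)
        return child

# Hypothetical feature under test: a file-upload control
root = Node("File upload", "expected")

# Step 1a: expected/documented actions, straight from the spec
root.add("Upload a .csv file", "expected")
root.add("Upload a file up to 10 MB", "expected")

# Step 1b: assumed actions -- things I *think* I should (or shouldn't)
# be able to do, playing the user who never read the documentation
root.add("Upload a 0-byte file", "assumed")
root.add("Upload the same file twice", "assumed")

assumed = [n.label for n in root.children if n.kind == "assumed"]
print(assumed)
```

The point isn't the code itself (a drawing tool does this job fine); it's that each node records where the scenario came from, which matters once the whitebox pass starts re-weighting them.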
Step 2: Whitebox Analysis
Now that I have a mind map that shows some semblance of what I'm going to do during my testing, I remove my blackbox testing hat and replace it with a white one. There are many ways to get insight into the feature code itself. I use a combination of looking at the code review for the feature's code submission and speaking to the developer face-to-face.
- Understanding what code paths exist allows some nodes on the mind map to be de-prioritized. There may be a node for a negative path that we can clearly see (or have been told by the developer) the code prevents. For me, I'd still like to try that scenario in the product to ensure it is handled well, but it requires less focus because it's a scenario the code was designed to handle.
- This may also reveal execution paths we didn't think about in our initial assessment. Maybe a requirement wasn't stated, but the developer decided it was implied and coded for it. Or maybe there was just a potential path that we missed during our blackbox assessment.
- Look at which methods have been unit tested, and what the unit tests actually test for. If there's good unit test coverage, there's a good chance we don't need to test for things like basic inputs and outputs, because that has already been covered (our unit tests run before the code is merged into the code base, assuring us that the unit-tested scenarios pass before the feature makes it into any product build).
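Continuing the earlier sketch, the whitebox pass amounts to re-weighting the scenarios. The priorities and reasons below are invented for illustration; in practice they'd come from the code review, the developer conversation, and the unit test coverage:

```python
# Hypothetical outcome of a whitebox review of a file-upload feature:
# scenarios the code demonstrably guards against, or that unit tests
# already cover, get de-prioritized; uncovered paths stay high focus.
scenarios = {
    "Upload a .csv file": "high",          # core documented behaviour
    "Upload a 0-byte file": "low",         # code review showed an explicit guard
    "Upload a 150 MB file": "low",         # oversize rejection is unit tested
    "Upload the same file twice": "high",  # no coverage found, still risky
}

focus_first = sorted(s for s, p in scenarios.items() if p == "high")
print(focus_first)
```

Note that the "low" scenarios aren't deleted from the map; as described above, they're still worth a quick pass in the product, just with less focus.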
The intention here is that the blackbox analysis of a new feature is performed before the whitebox analysis. Not having the code in front of us while we think about how to test a feature prevents tunnel vision, which could keep us from thinking creatively about how that feature might behave. The whitebox analysis then allows us to hone that creative thinking down using facts. We can focus more on the areas that are risky, and have confidence that specific scenarios have been unit or integration tested.