Sunday, 7 June 2015

Please, stop telling testers how to test!



Let me start with a scenario.

A programmer works away on a new feature or a bug fix. They finally feel like they're finished and it's time to move the user story over to a tester. They update the user story with an explanation of the work they did, any assumptions they made and one final thing: 
"Testing notes: Try exercising feature X by using entry point A as well as entry point B and assure that the output is the same."
Now let me explain my problem with this. I have a gut-wrenching reaction when something like this occurs, because my first thought is, "Wait! If these scenarios have been identified as important, why didn't you try them?"

That's right. I'm proposing that if a programmer can identify a risk area, it only makes sense for them to ensure their code works under those conditions. Even better - shouldn't there be coded tests for these scenarios, if at all possible?
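To make this concrete: the testing note from the scenario above can usually be captured as a coded test. Here is a minimal sketch in Python - the feature, entry points, and function names are all hypothetical stand-ins for "feature X" with entry points A and B:

```python
def process_via_api(value):
    # Hypothetical entry point A: the feature called directly via its API.
    return value.strip().lower()

def process_via_cli(value):
    # Hypothetical entry point B: the same feature reached through a
    # different code path (imagine a CLI wrapper around the API).
    return process_via_api(value)

def test_entry_points_agree():
    # The testing note, captured as a coded test: exercise the feature
    # through both entry points and assert the output is the same.
    for raw in ["  Hello ", "WORLD", "MiXeD case  "]:
        assert process_via_api(raw) == process_via_cli(raw)
```

If the programmer checks this in alongside the feature, the tester's time is freed up for the creative, exploratory work that a coded test can't do.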

I don't mean to lay blame here. As far as I'm concerned, in development there is no "us" vs "them". "We" are a team. So if testing is playing out like this, it is up to everyone to correct it. But, I AM advocating that it gets corrected. Allowing testers to be told how to test is basically boiling down their job to executing steps that others don't feel like doing. It is removing what I believe is the core expertise of testers - to think critically and creatively about how features have been implemented, how they integrate, and how to stress the shit out of them!

So please, stop telling testers how to test - and start collaborating with testers on what to test! In an Agile environment, we have the luxury as programmers and testers to sit together and chat about a feature or a bug fix any time we need to. There are a few times when these talks can be mutually beneficial and help us build the best products as quickly as possible.

  1. Before programming begins: This is an excellent time to discuss what problem is being tackled (a new feature or a bug fix), and how both parties see it being solved. The programmer can explain how they are going to approach the implementation of the solution and the tester can talk about how they plan to test it. This gives the programmer the chance to keep those scenarios in mind during development. It also gives the tester the chance to develop a test plan based on the intended implementation.
  2. Demoing before testing: When the programmer believes they're done, a brief demo to the tester can be extremely helpful. (In fact, you wouldn't believe how many times we've caught a glaring bug during these demos...BEFORE any formal testing has begun). This is an opportunity for the tester to ask about integration points, and think about how to exercise those.
    The other thing that could be discussed at this time...UNIT TESTS! Talk about what unit tests have been written and if any more coverage would be beneficial. If certain things aren't unit testable, the programmer can explain why that is the case and the tester can plan to focus on that area (since we know there's less coverage there than other areas).
  3. During development: And of course, in true Agile fashion, any point in time is actually a great time to talk about roadblocks that arise, and possible solutions! Keep the communication up and leverage each other's skills early and often! 
Tell me what to test. Tell me where to test. But please, don't tell me how to test!

Wednesday, 14 January 2015

Looking beyond the titles (and an explanation of my QA CAN'T CODE movement)

In every facet of my life, I'm a self-proclaimed "jack of all trades, master of none". And being a jack of all trades often goes hand in hand with being a fast learner. While there are many people out there who identify their skills and abilities, and hone them to be a master of their craft, I am not one of them. However, I have always believed that this has actually been a key factor of my success in software testing. I have had experiences in software and hardware, script-directed manual testing and self-directed exploratory testing, planning testing and executing testing, reading and yes, even writing code. 

I have written at least some code in over 12 programming languages. Couple that with exposure to various frameworks at work, attending workshops and doing projects in my spare time and I'd say I have at least a basic level of understanding of some programming concepts. But I bet most of my coworkers don't know this about me. And why should they? - I am, by current title, a "Software Quality Assurance Specialist" and to most people that means I am a blackbox, manual software tester.

That is not their fault. It's a labelling issue. It's a perspective issue. Dare I say, it is a stereotype issue. I almost feel as though the industry has done a disservice by adding modifiers and variations to the title. Quality Assurance Developer, Software Developer in Test, Test Engineer - what do all these titles have in common? They perpetuate the belief that "typical QA can't code". And herein, I believe, lies the problem. 
If I am a Software Quality Assurance Specialist*, then I believe my specialty is anything that can help contribute to the quality of the product. 
This spans a far wider range than functional testing of features and products. If you look upstream in the development process, this means advocating quality in design meetings and helping build testability into new features. If you look in the "middle" of the development process, this could mean smart functional testing of the product, or tool development to ease the workload with scripts and other utilities. To me, smart functional testing means testing complex scenarios and applying automation where automation is best suited. Looking further downstream, the responsibilities could include being involved in the support process to analyze escaped defects, close the feedback loop, and iterate over the whole testing process for constant improvement.

I'm not advocating that every member of the test team should have all of these skills - that would be a little too Agile Utopian. But if you're a tester, building up your skills toolbox never hurt. So it would be wise to keep in mind the full range of areas where testers can contribute, and seek to improve skills across those areas. Then work your ass off to apply the skills within your current team, so that the rest of the team is aware you possess these awesome skills! I'm sure it will net you new and exciting challenges, more respect from your team members and leaders, and a bump in product quality (which surely no one is going to complain about).
I once volunteered to write an in-house tool that plugged into another system being used, in a language I had zero experience with. It was desperately wanted, but no developer time could be spared on it. I whipped up the tool, learned a new language in the process and in the eyes of management, "saved the day". Suddenly, that "Quality Assurance Developer" title I had been asking for was granted, and I was being given more time to work on tools, automation initiatives and more!
Anyways, the point is that titles mean nothing - so don't let the lack of a "developer" title stop you from learning to code and finding ways to apply that. In a previous post, I revealed that I've chosen to use the phrase "QA CAN'T CODE" (ironically, of course) - because QA can and should code, among many other things.



*Despite being a Software Quality Assurance Specialist, I typically don't associate with the term "QA". I usually try to refer to myself as a Software Tester.

Monday, 12 January 2015

A rant - QA CAN'T CODE

So...this is happening:


I've heard it too many times. So now I plan to wear it in irony. There's this notion that lingers and hasn't been dispelled yet - one that perpetuates the belief that "QA can't code". 

I'd like to send two messages with this simple statement.
  1. This is a push for a re-branding. I am not a fan of the job title QA (aka "Quality Assurance"). I've said it before - "I don't assure quality; I test software." Maybe quality inspectors can't, in fact, write code, which is fine. But then I don't want to be a quality inspector.
  2. This is also meant to be an ironic statement to spark a discussion. Why can't QA code? I'm a QA Specialist by title...and I can write code. So why then do I often hear around the industry that writing code is reserved for developers? In the modern "Agile" era, where every member is responsible for every stage of the development process in one capacity or another, QA can code. In fact, QA should code!

There's a larger and more well thought out blog post to follow on this topic. For now, I'm just going to mention that these shirts will be available for sale (at no profit to me) for anyone that agrees, and feels like making this bold statement with me.

*EDIT*: Design has changed thanks to a suggestion from Jim Hazen!

Thursday, 6 November 2014

Famous first words - the moments leading to defect discovery

Just a fun little thought of the day from today...

There's a well-known phrase that people use - "Famous last words". Often it's in this context:



I was thinking about my inner dialogue today during testing, and I realized that every moment just before finding a bug, I'd catch myself saying one of two phrases. I'm going to call them my Famous First Words, because they define the first moment when I know I'm on to a defect. I'm sure many testers can relate to these moments.

"Hey! I wonder what happens when..."
I'd say this is the more common one. It usually happens during exploratory test sessions. I'll be working through testing the feature as normal, and out of nowhere this thought occurs. It's like being able to tell ahead of time that an area is just asking for there to be a bug. "I wonder what happens when I enter this value..." BOOM! - Exception occurs.

The other phrase happens a little more out of my control:
"Hey! That was weird..."
This one happens after catching a glimpse of some action (or lack of action) that occurs. It's when a dark corner of my brain lights up and says "I've seen this before" or "Did that just...?". This one is neat to me because it's those little details that the untrained tester may miss - when just a flicker of uncertainty pops up for a brief second. This phrase has led me to find countless threading/asynchronous issues and things that are just subtle enough that they weren't caught in average, functional testing.


These are just the two I seem to notice most commonly in my day-to-day testing activities. Are there any other internal phrases that people find are precursors to finding defects?


Wednesday, 1 October 2014

Using Blackbox & Whitebox Analysis Methods to Build Strong & Efficient New Feature Test Plans

My most recent position has me doing a significant amount of new feature testing, with the freedom to do it in an exploratory manner. This is great - a dream come true! So I see a testing task come through for the newest developed feature and the excitement begins. (EDIT - I feel the need to clarify here. I work hard to ensure testers are included early in the design and development process, so I'm well aware of the feature before it comes into the test queue. My excitement here is that I get to begin physically testing the feature.) But then the reality of the situation sets in: I'm responsible for testing this entire new feature and confirming that it works to a certain standard.

While a new feature can be intimidating, with the right planning, nothing should be too much to handle. Without really realizing it, I have grown into a process for handling the decomposition of a new feature. It pulls a few different concepts into a formula that has worked for me so far.


Now is a good time for this disclaimer: I'm a big fan of mind maps. Growing up, I had introductions to mind maps in a variety of classes and scenarios. I never really applied them to testing until meeting my (now) testing mentor. He's also a big fan of mind maps and the driving force for my continued use of them.

I think you know where this is going. Yes, I use mind maps for my test planning. This is not a new concept, and there's at least a couple good reasons why I like using them. They're lightweight. They're quick to whip up. They're easily modified. And most importantly, they're easy for anyone to understand. They serve as sort of a living document of the testing being performed. And once the testing is complete, they serve as documentation of what was considered during the initial testing phase of the feature.

In a previous blog post, I refer to Agile testers needing to wear many hats. I employ two of those hats when planning for new feature testing: the blackbox tester's hat first, followed by the whitebox tester's. And I do so in that specific order.

Step 1: Blackbox Analysis

First things first - I mind map the usage of the feature. 

  • Starting with any provided documentation (assuming there are any requirements or specifications), nodes are created in the map based on actions I'm told I can perform. These are the expected/documented actions of the feature. 
  • Then nodes are created based on my assumptions about what I think I should or should not be able to do. As far as I'm concerned, I'm playing the role of a user who has not read any documentation. So if I *think* I can or can't do something, that is a scenario worth testing. These are the assumed actions of the feature.
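Those mind-map nodes can eventually become executable checks - one case per documented or assumed action. A hedged sketch of what that might look like (the feature, actions, and names here are all hypothetical):

```python
# Each tuple mirrors a mind-map node: (description, input, should_succeed).
# "documented" nodes come from the spec; "assumed" nodes are my own guesses
# about what a user might try without reading any documentation.
BLACKBOX_CASES = [
    ("documented: accepts a plain username", "alice", True),
    ("documented: accepts dots in names", "a.lice", True),
    ("assumed: rejects an empty name", "", False),
    ("assumed: rejects a whitespace-only name", "   ", False),
]

def create_account(username):
    # Stand-in for the feature under test.
    if not username.strip():
        raise ValueError("username required")
    return {"username": username}

def run_blackbox_cases():
    # Returns (description, passed) for each mind-map node.
    results = []
    for description, value, should_succeed in BLACKBOX_CASES:
        try:
            create_account(value)
            results.append((description, should_succeed is True))
        except ValueError:
            results.append((description, should_succeed is False))
    return results
```

The map stays the living document; the cases are just one possible output of it once priorities are settled.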
Step 2: Whitebox Analysis

Now that I have a mind map that shows some semblance of what I'm going to do during my testing, I remove my blackbox testing hat and replace it with a white one. There are many ways to get insights into the feature code itself. I use a combination of looking at the code reviews for the code submission for that feature and speaking to the developer face-to-face. 
  • Understanding what code paths exist allows for some nodes on the mind map to be de-prioritized. There may be a node for a negative path that we can clearly see the code prevents (or have been told by the developer that it does). For me, I'd still like to try that scenario in the product to ensure it is handled well, but it requires less focus because it's a scenario the code was designed to handle.
  • This may also reveal execution paths we didn't think about in our initial assessment. Maybe a requirement wasn't stated, but the developer decided it was implied and coded for it. Or maybe there was just a potential path that we missed during our blackbox assessment.
  • Look at which methods have been unit tested, and what the unit tests actually test for. If there's good unit test coverage, there's a good chance we don't need to test for things like basic inputs and outputs, because that has already been covered (our unit tests run before the code is merged into the code base...assuring us that the unit tested scenarios are good before the feature makes it into any product build).


TL;DR:

The intention here is that the blackbox analysis of a new feature is performed before the whitebox analysis. Thinking about how to test a feature without the code in front of us prevents the tunnel vision that could keep us from thinking creatively about how that feature might perform. The whitebox analysis then allows us to hone that creative thinking using facts. We can focus more on the areas that are risky, and have confidence that specific scenarios have been unit or integration tested.

Wednesday, 13 August 2014

Why "We've always done it that way" is not such a bad phrase after all


This isn't a new topic. I've heard many people before talk about how this phrase is so detrimental. I want to propose a different take on it, instead of the classic negative view.

There's a running joke at my office about this phrase. In a sprint retrospective, as a relatively new member of the team I mentioned that I was sick of people answering my questions with "We've always done it that way". As a result, my coworkers now like to toss the phrase out at me in playful office jest.

But all jesting aside, it did spark some very interesting discussion. I realized that I wasn't tired of hearing that phrase specifically, but I was tired of hearing it used as a way to explain away a questionable practice without any responsibility or ownership. As the newbie on the team and getting frustrated with the painfulness of process X, when I asked "why do we perform action X this way?", it was easy for a more experienced team member to reply with "we've always done it that way". This avoids having to explain the "why" portion of my question and halts all discussion about improvement. I don't think anyone's doing it intentionally, and that is what makes the phrase so detrimental.

A year ago, I was listening to a talk by Adam Goucher (@adamgoucher) entitled "It's our job to collect stories" (Title may not be exact). One of his points was regarding this phrase. Adam pointed out that if "we've always done it that way", then at some point a decision was made to do it that way, and that decision must have been based on certain factors at the time.  Make sense? Furthermore, if we trust our team, then we should also trust that it was the RIGHT decision at the time. So perhaps the phrase is actually an indicator of a previous decision that needs to be reassessed. We should be able to justify WHY a particular choice was made. I believe this blanket statement is so popular because it allows us to skip the justification altogether, thus not requiring us to think about the reasons behind the initial choice. I liken it to the classic scenario of a child asking an adult "why" repeatedly. Once we run out of the ability to provide a reasonable explanation, we revert to "because". But children often don't accept "because" as a response. They continue to prod. 

Consider the following scenario:
Child: Why do we wait until 8:00pm to eat dinner every day?
Father: Because that's the time our family has always eaten dinner.
It's easy to imagine that the father thinks this answer should suffice for the child. But what if the child is getting hungry at 6pm every day? This answer would probably frustrate him. 
Here's the magic part that transforms this phrase from detrimental to something that can be used to focus improvement.

Same scenario, but the child responds to his father's answer:
Child: Why do we wait until 8:00pm to eat dinner every day? 
Father: Because that's the time our family has always eaten dinner.  
Child: Well, why so late?  
Father: Because my dad didn't used to get home from work until after 7pm.  
Child: But you get home from work at 5pm, and I'm usually hungry by 6pm. 
Father: You're right - I never thought about that before. Let's make it 6pm then. 
It's a silly example, but it shows the point clearly enough. By pushing to get to the reason why the family eats so late, they were able to recognize that they could change dinner time with no negative effects, while improving the scenario for the kid (he's not starving for hours each evening).

By pushing back and refusing to accept a blanket justification, we can dig down to the underlying reasons a decision was made. Perhaps a decision was made 2 years ago based on a time crunch, but now we have time to address tech debt and this particular feature would be a prime candidate. In fact, any time the question "why do we do X?" gets asked, it should be a flag to investigate further. You may find there's a reason that is still valid, in which case, no harm to have asked. But I'm guessing that fairly often you will find a decision was reached based on factors that have now changed.

Sometimes it just takes that one stubborn person to point it out. So the next time you're asking "why" and you get that dreaded response: push back. Encourage everyone involved to dig deeper, and find out the real reasons. It has worked for us and begun to help foster discussions of iteration and improvement on things we've been doing for a long time.

Just because it has always been done that way doesn't prove that it's the best way anymore. 

Sunday, 20 July 2014

Being an Agile Tester - You're going to need a bigger hat rack


Software testers have traditionally worn multiple hats. We are frequently asked to switch context quickly, and adapt accordingly to get each and every job done.

With the rise of Agile development, testers need to be prepared to perform a variety of testing activities - both traditional activities and some new, potentially unfamiliar ones. The general expectation is that these activities will be performed in less time, with even less structured requirements. If you're already a context-driven tester, this won't come as too much of a shock to you. If you come from a more traditional testing background, this could be a pretty sizeable change for you to grasp.

In the Agile world, all roles and contributors within the team are considered equal. That is, all are equally required for the team to be successful - there is no "rockstar". And from time-to-time, each and every member of the team WILL be called upon to perform actions outside of their standard responsibilities. I believe for some people, this is a scary concept. A classical developer is as likely to respond with "I'm expected to test?" as a classical tester is with "I'll have to develop things?". My answer to any of you with these questions is "yes". And I don't want you to simply grasp this concept - I want you to embrace it.
If you want to be a successful tester on an Agile team: you're going to need a bigger hat rack. 
There, I said it. Now let me explain why.

Throughout my early years in testing, I understood that as I grew, I would learn about more hats to include in my theoretical "hat rack" of testing skills. I began honing my functional, blackbox testing skills. I would follow test plans and test steps, keeping a keen eye out for things that were unexpected. Then I developed skills in performing the test planning - learning how to functionally decompose a feature and figure out how to test it against its expected behaviour (as provided by the dev team via the project manager). As I gained more credibility within my test team, I was able to work closer to the dev team and hone some whitebox testing skills, where I could see what the code was doing and test accordingly. This all happened throughout the course of a few years working with the same testing team.

Eventually I switched companies and joined an Agile test team. Within the first year, on any given work day, I could be expected to do any of the following:

  • Work with a developer and/or designer to talk about how a feature could potentially be tested when its development was finished
  • Functionally test a new feature
  • Accessibility test a new feature
  • Usability test a new feature
  • Performance test a new feature
  • Security test a new feature
  • Perform full system regression testing for all of the above
  • Gather statistics from internal and external sources and analyze data (for the purposes of improving testing focus)
  • Write code to support the test automation framework
  • Write automated test scripts
  • Develop test tools & scripts to assist testers wherever possible
  • Mentor team members (testers and non-testers) in improved testing techniques
  • Contribute to overall test strategies for Agile teams
  • And so much more...
Of course, not everyone has to do all of these things. But the opportunities for a purely functional, blackbox tester are diminishing. In Agile, they're virtually non-existent. As I said:
You're going to need a bigger hat rack.

We see Agile testing often go hand-in-hand with context-driven testing. In context-driven testing, you are the test planner AND the test executor. You are given a handful of requirements and expected to determine if they are met. You are also expected to advocate for the customer. Question when things don't feel right. And you have to do this all with limited time, using the most effective method. Is one exploratory pass good enough? Should this be automated and added to the regression suite? HOW should this be automated? Where does this fall within our testing priorities?

Hopefully you are prepared for this. It's a new and exciting world, where testers are being handed more responsibility than ever before. Testing is definitely not becoming obsolete - but bad testing is. And with that comes the need to constantly learn new things, continuously improve and find new and creative ways to contribute to the Agile team and keep quality improving. Just as programmers must continue to learn and keep up with the latest technologies and languages, testers must continue to learn new practices, new ways of thinking, and keep collecting those hats.