Since arriving at my new workplace, I've been struggling with a classic QA problem: What do you do when your release is in X hours, and you have more than X hours of tests to run?
The ideal answers are to get more hours or speed up the tests, but neither may be possible on short notice. Delay the release? The business cost may be too high. Establish a longer release/testing schedule? Definitely, but that won't help today. Automate tests? Writing automation generally takes longer than running the tests manually, so it won't help today either. Get more testers? That depends on who's available and whether they can be brought up to speed in time.
The answer I usually fall into is "run fewer tests," which raises the question "which tests?"
For the last couple of releases, I've gone with a mix of priorities and intuition. What are the most critical components of our product? Which components is this release most likely to affect? What small set of actions will exercise the greatest number of components? I execute these in the limited time I have before release, and try to keep good notes so that when I talk to stakeholders, I can justify my confidence (or lack thereof) in this release.
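To make that triage a bit more concrete, here's a rough sketch of the mental exercise as if it were a scoring problem. Everything in it is invented for illustration: the component names, the 1-to-5 weights, and the time estimates would all come from knowledge of the product and the contents of the particular release, not from a script.

```python
# A minimal sketch of ranking test areas when time is short.
# All names, weights, and durations below are hypothetical.

from dataclasses import dataclass


@dataclass
class TestArea:
    name: str
    criticality: int   # 1-5: how important is this component to the business?
    change_risk: int   # 1-5: how likely is this release to have affected it?
    breadth: int       # 1-5: how many other components does testing it exercise?
    minutes: int       # rough time to cover it by hand

    @property
    def score(self) -> float:
        # Combine criticality, risk, and breadth, then favour value per minute.
        return (self.criticality * self.change_risk * self.breadth) / self.minutes


def plan(areas: list[TestArea], budget_minutes: int) -> list[TestArea]:
    """Greedy selection: highest-scoring areas first until the time budget runs out."""
    chosen = []
    remaining = budget_minutes
    for area in sorted(areas, key=lambda a: a.score, reverse=True):
        if area.minutes <= remaining:
            chosen.append(area)
            remaining -= area.minutes
    return chosen


if __name__ == "__main__":
    areas = [
        TestArea("checkout flow", criticality=5, change_risk=4, breadth=3, minutes=50),
        TestArea("admin reports", criticality=2, change_risk=1, breadth=2, minutes=45),
        TestArea("login / auth", criticality=5, change_risk=2, breadth=4, minutes=30),
        TestArea("search", criticality=3, change_risk=5, breadth=3, minutes=40),
    ]
    for area in plan(areas, budget_minutes=120):
        print(f"{area.name}: ~{area.minutes} min (score {area.score:.2f})")
```

In practice I don't run a script, of course. The point is only that "most critical," "most likely affected," and "broadest coverage per minute" can be combined into a rough ordering, and that writing the ordering down gives the stakeholder conversation something to stand on.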
That approach sounds an awful lot like exploratory testing, doesn't it? I've taken to calling this exploratory regression. Googling around for that term, I've only seen it mentioned once (in a blog post from 2012), so I'd like to give it some more thought and definition.
Some questions I'm thinking about right now:
- How can conventional exploratory testing techniques be modified to benefit a regression?
- Is exploratory testing relevant when I do have enough time to run a full regression suite?
- What are the weaknesses of exploratory testing during a limited-time regression?
- What other testing strategies can I use for a limited-time regression?