Software development is a complicated topic because we're dealing with a combination of technical challenges and human conflicts. There are many ways to build a product, none of them perfect or simple.
On social media, where nuance is an unaffordable luxury, we often see simplistic statements trying to condense this complicated reality into a straightforward set of rules.
And no responsibility has been more oversimplified than QA.
The combination of quality assurance and agile software development just isn't a straightforward one. Setting up a system that promotes fast feedback-driven iterative development while keeping quality high is tough.
While quality assurance is way more than testing, it’s the testing part that people struggle with. That’s what we’ll focus on in this article.
In waterfall-style processes, testing was a phase in the project life cycle. After the "implementation phase" was done, the project moved into the "QA phase" and brought on a group of dedicated testers who weren't previously involved.
It's a common misconception that the tester's role in such a project was to check if the software contained bugs. That was not their job. They verified whether the software they didn't build matched the specs they didn't write. At the end of the ride, teams often delivered software that was on spec yet broken. This paradox was one of the main drivers behind the agile software development movement of the nineties.
Testing software after development introduces a handover, delaying delivery and slowing the team down. A handover is when one worker gives another their work item, much like workstations on a conveyor belt. In the case of a quality handover, the developer considers their work “done” and gives it to the tester to verify. This creates a long feedback loop and introduces context switching: the developer has to wait for the tester to do their thing and will pick up other work in the meantime. While these days we all agree that handovers are a bad thing, a lot of us struggle to avoid them in day-to-day development.
One of the reasons it's so hard to get this right is that we no longer plan time for testing. Where the old-school project dedicated weeks or months to testing, we're now expected to wing it as we go along. Unplanned work has a habit of messing up our schedule.
Do we need a tester?
Facebook's famous motto told us to "move fast and break things". It's OK to release broken software into the wild, provided we can quickly ship bug fixes. It's the modus operandi of most early startups and a perfect match for agile software development. We ship and capture feedback. A bug report, in such a scenario, is just feedback. If a customer takes the time to complain that the Sales Report is broken, we learn they care. If nobody complains about that bug, it's not worth fixing.
If you can afford to move fast and break things, you definitely should. It's the quickest way to evolve your product, as it avoids handovers altogether. To do this, your developers need to be in a position to decide whether the feature can go live. That requires maturity and a deep understanding of the product vision.
A lot of teams can't go around breaking things. They might work in a regulated environment where breaking things too often means going out of business. Or maybe they just find that an unstable product is hard to sell.
In those scenarios, you'll need some kind of quality gate: a handover to an extra set of eyes that will validate whether the feature is good enough to ship.
Short-term handovers
As much as we try to avoid handovers, there are valid scenarios where this isn't possible. In those cases, we need to plan for them.
Most commonly, we have short-term testing. This is a quality gate that's triggered right after the implementation of a feature. Rather than just merging to main, the developer has to ask for someone's OK. This is where you get your domain specialists or testers to double-check that the feature behaves as the users expect.
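One lightweight way to wire such a quality gate into the workflow is through mandatory pull-request reviews. As a sketch, assuming a GitHub-hosted repository with branch protection enabled, a CODEOWNERS file can automatically request a review from the QA group before anything merges to main. The team handle and paths below are hypothetical, purely for illustration:

```
# Hypothetical CODEOWNERS file (lives at .github/CODEOWNERS).
# With branch protection's "Require review from Code Owners"
# enabled, matching changes cannot merge without QA sign-off.

# All user-facing feature code needs an OK from the QA team.
/src/features/  @acme/qa-team

# Everything else only needs a regular developer review.
*               @acme/dev-team
```

The point isn't the specific tooling; it's that the gate is explicit and enforced, rather than relying on someone remembering to ask.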
In such a scenario, it's vital that this feedback comes fast. Hours are better than days. We want to avoid context switching, where a developer focused on one problem suddenly needs to revisit a previous one.
"I know you're working on Y, but I still found a bug in X. Remember X?"
As a rule of thumb, quality feedback should arrive within the same timebox as the implementation. From a planning point of view, that means you need plenty of testers. You're in trouble if you have five developers whose work needs to be checked by a single tester. Make sure QA doesn't become the bottleneck. You can do this by not double-checking everything, hiring more testers, or reducing development capacity.
That last one makes “we don’t have the budget for testing” a bogus argument. Hire fewer developers if your tester can’t keep up.
Long-term handovers
The other scenario is "late testing". This is any kind of testing feedback that comes weeks or months after the implementation. While late feedback is expensive and dangerous, it's a reality for many companies. In regulated environments where releases are rare, there is often a need to do some extra bulk testing right before a launch. We could lament how that isn't "true Agile", or we could plan for it.
If you need to do late testing for whatever reason, dedicate a timebox to it. A few weeks before the release, block some time when the team will focus only on hardening the release. That results in a de facto code freeze, as no new features will be developed until the release is shipped.
One of the hard laws of software development is that testing will always uncover issues. In other words, you can't have someone test your feature and expect zero feedback. A big testing round before going live will always uncover something to fix.
So plan for it.
Planning for quality assurance is vital. Too often, QA is considered a side job. Ideally, your developers are in a position to decide whether their feature is good to go. If that isn't possible, aim for handovers with short feedback loops. If, for one reason or another, you really have to go the expensive route of late testing, plan time in the team's schedule.