How to plan validation right
Developing modern software products is all about validation. We no longer build an expensive solution and hope for the best. The days of "if you build it, they will come" are over. We validate a big feature before we build it. We research what users really want and only build what is guaranteed to have traction. Through validation, we're sure to build the right thing.
Well, that's the theory. Despite the buzz on Product Manager Twitter, most companies are nowhere near this dream. Our industry is still in its infancy when it comes to validating features and products.
We talk about data-driven decisions and A/B testing. Yet we limit our investigation to asking a few users whether they like a certain wireframe.
The reason we don't do full-fledged user research is that we don't think in experiments. We think in features. It's fine for Product people to send out surveys, but once a developer is assigned to the Jira ticket, we expect them to build that solution. In reality, we can't validate without including developers. Just like with solution design, developers need to help design the experiment.
So, how do you plan that? How do you make sure the team sets up a good experiment, captures its feedback, and only builds the solution once it's verified?
Step 1: Problem Definition and assumptions
Feature or product validation all starts with a problem. Our customers feel a certain pain, and we have some ideas on how to soothe this. The Problem Definition is a document where we describe the problem. Who struggles how hard with what. This type of document is usually written by Product, but others can contribute. While this problem definition does not contain detailed solutions, it should list some assumptions. It's these assumptions that we want to verify. That's what validation is: measuring how wrong our assumptions were.
Let's get into a practical example. Let's say we have a tool that visualizes datasets. One of the pain points for users is copy-pasting the data from external systems into our tool. We feel some kind of importer would be valuable. We write a Problem Definition and list a bunch of assumptions:
We assume our users prefer to import from Excel
We assume importing multiple date formats will be a hassle
We assume datasets will be between 100 KB and 2 MB in size
And most importantly: we assume users would love such an importer
Step 2: Designing the experiment
The team gets together and carves out a timebox with a single purpose: designing the experiment. The Problem Definition gives us the list of assumptions we want to validate, so the goal of this session is to find ways to measure those assumptions quantitatively. This is where it's vital to include developers. While we could just have Product people send out a survey, building a real experiment gives us richer feedback.
So, sticking with our scenario, the team comes up with an experiment. Certain users will see a popup teasing an alpha test of the new import functionality. They are asked to upload a file, get thanked for their contribution, and are promised updates on further progress. The team will build a simple popup, a file upload, and a Thank You page. That's trivial to build, yet it yields a list of champion users to interview and plenty of metrics to validate the assumptions (see the sketch after this list):
Are users clicking on the thing (is there interest in the feature?)
What types of files are they uploading (how many are 100 KB Excel files?)
Will those date formats be an actual hassle?
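To make that concrete, here is a minimal sketch of what the instrumentation could look like, written in TypeScript. Everything in it is an assumption for illustration: the event names, the /api/import-alpha endpoints, and the track helper are made up for this example, not a prescribed implementation.

```typescript
// Minimal fake-door instrumentation (all names are illustrative).
// It captures the three signals from the list above: interest,
// file type, and file size.

type ExperimentEvent =
  | { kind: "popup_shown" }
  | { kind: "popup_clicked" }
  | { kind: "file_selected"; fileName: string; sizeBytes: number; mimeType: string };

// Hypothetical analytics endpoint; swap in your own event pipeline.
async function track(event: ExperimentEvent): Promise<void> {
  await fetch("/api/import-alpha/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ...event, at: new Date().toISOString() }),
  });
}

// Fired when the teaser popup renders, and when its call to action is clicked.
const onPopupShown = () => track({ kind: "popup_shown" });
const onPopupClicked = () => track({ kind: "popup_clicked" });

// Wired to the <input type="file"> in the upload dialog.
async function onFileSelected(input: HTMLInputElement): Promise<void> {
  const file = input.files?.[0];
  if (!file) return;

  // The metadata alone answers "how many are 100 KB Excel files?"
  await track({
    kind: "file_selected",
    fileName: file.name,
    sizeBytes: file.size,
    mimeType: file.type,
  });

  // Keep the file itself so we can inspect date formats later.
  const body = new FormData();
  body.append("file", file);
  await fetch("/api/import-alpha/uploads", { method: "POST", body });
}
```

The code itself is a day of work at most; the value is that it turns each assumption into a number instead of a hunch.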
Step 3: The waiting game
Users need time to interact with the experiment. Let's give it a good few weeks before closing the validation round. In this period, Product people can investigate the incoming feedback and even ask additional questions to champion users.
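As a rough illustration of what investigating that feedback could look like, here's a small tally over the captured events, reusing the hypothetical event shape from the sketch above. It maps raw events back onto the original assumptions: interest (click-through), the Excel preference, and the 100 KB–2 MB size range.

```typescript
// Illustrative only: aggregates the captured events into an answer
// for each assumption from the Problem Definition.
interface CapturedEvent {
  kind: "popup_shown" | "popup_clicked" | "file_selected";
  fileName?: string;
  sizeBytes?: number;
}

function summarize(events: CapturedEvent[]) {
  const shown = events.filter((e) => e.kind === "popup_shown").length;
  const clicked = events.filter((e) => e.kind === "popup_clicked").length;
  const uploads = events.filter((e) => e.kind === "file_selected");

  // "We assume our users prefer to import from Excel"
  const excel = uploads.filter((e) => /\.xlsx?$/i.test(e.fileName ?? ""));

  // "We assume datasets will be between 100 KB and 2 MB in size"
  const inAssumedRange = uploads.filter(
    (e) => (e.sizeBytes ?? 0) >= 100 * 1024 && (e.sizeBytes ?? 0) <= 2 * 1024 * 1024,
  );

  return {
    clickThroughRate: shown ? clicked / shown : 0, // interest in the feature
    uploads: uploads.length,
    excelShare: uploads.length ? excel.length / uploads.length : 0,
    sizeAssumptionShare: uploads.length ? inAssumedRange.length / uploads.length : 0,
  };
}
```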
Step 4: Implementing the solution
Now it's time to look at the gathered feedback and turn this into a solution. If the feedback is overall negative or inconclusive, we skip this step. We don't want to invest effort in an unvalidated or disproven feature. But if we notice interest is there, we can design and build a solution armed with a much better understanding of the problem.
Our team might have discovered that while Excel is indeed popular, most users would prefer integration with Google Sheets instead. That's something we want to know before designing the solution!
So, what does the plan for such a validation cycle look like?
In this scenario, we assign a one-week Focus Block to designing the experiment and a two-week Focus Block to building the solution. In between the experiment and the solution, our engineers can tackle different problems while Product gathers feedback. And if the validation turns out negative, we've saved two weeks of wasted effort.
We let developers flex their problem-solving muscles twice and give Product the specific tools they need to get the right insights.
This kind of real validation beats surveys and wireframes any day.