Automated testing for business owners: Designing your first automated test plan

This series of articles is for business owners and product managers who work with a small team of contract or full-time developers to build software products of all kinds, and who want to increase product quality, build confidence in the product, and win back time from repetitive manual testing to focus on more important work. This article guides you through developing your first test automation plan. Expect to spend two to four hours completing a plan based on the guidelines presented below.

Whether your product is a SaaS offering, a mobile app, or a more traditional desktop application, you know how important the final product’s quality is. Getting reports from customers that the app is broken is costly -- there’s the risk of customer churn, the response emails you have to send, and the fire drill to fix the problem.

Building software is a creative act: we create it from nothing but the electricity in our laptop batteries, a bold idea or two, and countless hours of effort and refinement. But software’s power is in its repeatability -- this is what affords us the incredible productivity gains our society has seen in the information age.

If you are running a business, of course you are responsible for the final quality of your product, but you don’t have to check that quality yourself. Software’s repeatability gives you the power to scale yourself to ensure your product’s quality. The idea is simple:

  1. identify the most valuable aspects of your product’s quality,
  2. select those which are the most effective to test through automation,
  3. define an automatable test plan, then
  4. build some software to do the testing dirty work for you.

This article covers steps 1-3, and assumes you have no automation in place. By the end, you will have the outline of a project you can take to your development team and begin work on step 4. This is not intended to be prescriptive; if you see a better way to do any of these steps, adjust this plan to better fit your business. For example, if you can involve your developers sooner, they will likely better understand your core goals.

For simplicity’s sake, let’s assume your business produces a SaaS-based web app. If your product is more complex than this, everything here will apply, but you should expect a more complex setup.

1. Identify the most valuable aspects of your product’s quality

Tests for high-value aspects of your app are highly valuable.

The most important thing to remember when deciding which parts of your app to test is the 80/20 rule: roughly 80% of the effects come from 20% of the causes.

You likely don’t have the time to write automated tests for absolutely everything your app does, nor would it be desirable to do this. Instead, focus on the 20% of your app that generates 80% of the benefit for your customers.

Your app might have some formally or informally defined use cases (a use case is a sequence of steps a particular type of user takes to achieve a particular goal). Which of these matter most for your customers to get value out of your product? Which use cases do 80% of your customers use? There are likely only a handful.

When it’s time for a new release, what steps do you run through when you’re testing out the new build? These are probably based on the use cases that you know are highly important to your customers. Why do you test these things? Which of these tests give you the most confidence?

Which browser / OS combinations do most of your users use? Do most of your users stick to the desktop app, or do you see significant usage from mobile browsers? For the desktop users, approximately what screen size do 80% of them use? All of this is easy to find in web analytics tools (e.g. Google Analytics). Basing your implementation details on these kinds of metrics (and updating them over time) can save many hours of underutilized or unnecessary work.
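
If your developers use a browser automation tool such as Playwright (used here only as an illustration), this analytics data can feed directly into the test configuration. The sketch below is a minimal example; the browser names and viewport sizes are hypothetical placeholders, to be replaced with the combinations your analytics actually show.

  // playwright.config.ts -- a minimal sketch, assuming Playwright is the test tool.
  // The browsers and viewport sizes are placeholders; substitute the combinations
  // your web analytics show your customers actually using.
  import { defineConfig, devices } from '@playwright/test';

  export default defineConfig({
    projects: [
      {
        // e.g. if ~80% of sessions are desktop Chrome at around 1920x1080
        name: 'desktop-chrome',
        use: { ...devices['Desktop Chrome'], viewport: { width: 1920, height: 1080 } },
      },
      {
        // e.g. if a significant share of sessions come from iPhone browsers
        name: 'mobile-safari',
        use: { ...devices['iPhone 13'] },
      },
    ],
  });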

Sometimes it is helpful to consider the ways things can break instead of the ways things should work. Your app might frequently break in a particular place, during a particular operation, or on a particular browser. Don’t focus too much on these negative test cases, but a few judiciously chosen ones can shorten the time it takes to detect a recurrence.

At a surface level, knowing the app is not broken sounds the same as knowing the app is working correctly, but for any non-trivial app it is very hard to prove either. Think about both sides of what a small number of discrete tests can show: you can’t cover every possibility, but if the critical pathways work as expected, and the common failures you’ve seen before aren’t occurring, then you can have a good degree of confidence that the feature is working.

Create a prioritized list.

Write down your ideas for what areas to test, then prioritize them by their value. You should expect to spend one to two hours doing this.

For a hypothetical online banking app you might write the following:

  1. An unauthenticated customer can log in to their account from the bank’s main website.
  2. A customer can see their latest account balances on the home screen.
  3. A customer can view the last month’s transactions in the account details page.
  4. A customer can apply for a loan from the home screen.
  5. An unauthenticated user cannot access the banking app home screen.
  6. A customer cannot access a different customer’s account data.
  7. etc…

In this list, (1) is something that all customers must be able to do. (2) and (3) are what the great majority of users do (based on analysis of actual usage patterns). (4) is responsible for a significant amount of incremental revenue. (5) and (6) are negative tests, but since the privacy of customers’ financial data is so important, they are high up on the list.

Knowledge of your customers and your business will dictate your own prioritization. The exact ordering isn’t important, but a generally accurate prioritization will set you up to get the most valuable tests implemented and “earning their keep” soonest.

2. Estimate the cost of test automation

Or, how to think like an automated test

Humans are good at spotting patterns, but not great at following precise instructions in exactly the same way over and over again. Automated tests are exactly the opposite. People are able to handle ambiguity, and improvise. Automated tests are not.

Automated tests are easiest to write when they are sequences of very straightforward actions and checks. For example:

  1. Type “jc424” into the user id text field.
  2. Type “Bad_passw0rd” into the password text field.
  3. Click the log in button.
  4. Check that an error message is displayed.
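
To make this concrete, here is a minimal sketch of how these four steps might look as an automated test, using Playwright and TypeScript as one possible tool. The URL, field labels, and error message text are hypothetical; yours will differ.

  // login-error.spec.ts -- a minimal sketch, assuming Playwright.
  // The URL, field labels, and error message text are hypothetical.
  import { test, expect } from '@playwright/test';

  test('shows an error message for an invalid login', async ({ page }) => {
    await page.goto('https://example.com/login');                // open the login page
    await page.getByLabel('User ID').fill('jc424');              // 1. type the user id
    await page.getByLabel('Password').fill('Bad_passw0rd');      // 2. type the password
    await page.getByRole('button', { name: 'Log in' }).click();  // 3. click the log in button
    await expect(page.getByText('Invalid user ID or password')).toBeVisible(); // 4. check the error
  });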

More complex tests are possible, but they take more time and effort to create and tend to be more error-prone: they have a higher cost. For example, automated tests for drag-and-drop functionality are harder to write.

Some things are very hard to write automated tests for. Whether the app “looks right” is one. We don’t recommend you try to automate any tests that require this type of subjective judgement. The return on investment for these types of tests is very low.

Assign costs to each test case

The goal is to assign a cost to each test case at the top of your list. We recommend a “T-shirt sizing” approach, i.e. give each test case an estimated cost represented by one of S, M, L or XL.

The effort to implement an automated test is a good proxy for its cost. Estimates of development effort are never very accurate, so don’t be overly concerned about precision here. Instead, be consistent in how you make these estimates, size items relative to one another, and move quickly through this exercise. You should spend at most fifteen minutes doing this.

Depending on your technical experience, it might be valuable to ask a trusted technical advisor to assist.

If you don’t have any idea of how to estimate the cost of the tests, these guidelines should get you started:

  • For each test case, start with one of the following sizes:
    • Small if the test case is a linear sequence of 10 or fewer simple actions (e.g. clicks, text entry into a field, and simple checks).
    • Medium if the test case is a sequence of 30 or fewer simple actions.
    • Large if the test case requires a reset of the backend database to run properly.

Then, go up to the next size for each of the following if the test case:

  • ...introduces a new platform (e.g. a mobile test when all other tests are desktop);
  • ...requires dragging, swiping, or more complex UI interactions;
  • ...can only be expressed as a branching sequence of steps instead of a simple sequence;

If the test case matches two of these conditions, go up two sizes.
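
If it helps to see the heuristic written out precisely, the sketch below encodes it as a small function. The inputs (the number of simple steps, whether a database reset is needed, and how many of the conditions above apply) simply restate the guidelines; the function name and the example at the end are made up for illustration.

  // tshirt-size.ts -- a sketch of the sizing heuristic described above.
  type Size = 'S' | 'M' | 'L' | 'XL';

  const SIZES: Size[] = ['S', 'M', 'L', 'XL'];

  function estimateSize(
    simpleSteps: number,         // number of simple actions and checks in the test
    needsDatabaseReset: boolean, // does the test require resetting the backend database?
    complexityFactors: number,   // how many of the "go up a size" conditions apply (0-3)
  ): Size {
    // Start with the base size from the guidelines above.
    let index: number;
    if (needsDatabaseReset) {
      index = SIZES.indexOf('L');
    } else if (simpleSteps <= 10) {
      index = SIZES.indexOf('S');
    } else {
      index = SIZES.indexOf('M'); // up to roughly 30 simple actions
    }

    // Go up one size per complexity factor, capped at XL.
    index = Math.min(index + complexityFactors, SIZES.length - 1);
    return SIZES[index];
  }

  // Example: a 12-step test with no database reset that involves dragging.
  console.log(estimateSize(12, false, 1)); // "L"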

Reduce your list to the five test cases with the highest ROI

Your list of test cases is prioritized in terms of business value, and you have just estimated the cost of implementation. Re-order your list, placing the highest-value, lowest-cost test cases at the top. Exclude any tests with an XL implementation cost, and try to avoid tests with an L implementation cost (though this might be unavoidable). Then select the top five.

Testing and test automation are ongoing efforts. Once you have completed these first five (or even better, three!) test cases, you can revisit the next most important tests. They will be easier to implement the second time around.

3. Define an automatable test plan

If you are defining test cases to express business goals to contract developers, or to a development team unfamiliar with this type of testing, writing the details of each test as a simple list of actions and checks is a straightforward way to communicate the requirements.

Lists can easily express a single linear sequence of actions and checks. If your test case has multiple branches, it will be hard to write as a list. Avoid nesting sub-lists inside lists; instead, separate the branches into their own test cases. Let your developers implement the common parts efficiently.

By now you will have been thinking about many of your test cases as sequences of simple actions and checks. Give each test case a meaningful title (e.g. “Customer can apply for a loan from the home screen”), and write out the complete sequence of steps for all five of your selected test cases.

Depending on what you have done so far, this might either be very easy to do, or it might take up to a couple of hours.
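
When your developers pick up the plan for step 4, each written test case can map almost one-to-one onto a test in code. As a rough sketch (again assuming a tool like Playwright, with hypothetical page text and field labels), the loan application test case from the earlier list might start out like this:

  // loan-application.spec.ts -- a sketch of how a written test case might map to code.
  // The test title comes straight from the plan; each line mirrors one written action
  // or check. The URL, labels, and confirmation text are hypothetical.
  import { test, expect } from '@playwright/test';

  test('Customer can apply for a loan from the home screen', async ({ page }) => {
    // Assume a shared helper or fixture has already logged in a test customer.
    await page.goto('https://example.com/home');                            // start on the home screen
    await page.getByRole('button', { name: 'Apply for a loan' }).click();   // action
    await page.getByLabel('Loan amount').fill('5000');                      // action
    await page.getByLabel('Term (months)').fill('24');                      // action
    await page.getByRole('button', { name: 'Submit application' }).click(); // action
    await expect(page.getByText('Your application has been received')).toBeVisible(); // check
  });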

Next...

Your next step is to take your plan to your development team to implement. Review the test cases with them, and review the cost estimates. Their input on cost estimates might cause you to adjust your priorities a little.