Published by Brad Kuhn on 30 Jun 2009 at 07:31 pm
There are several items that every tester should have in their toolkit. Among them is a test scenario/script template that you are comfortable with, that is easy to use, and that can be quickly tweaked. I’ve been using this template in one form or another for several years. I hope you’ll find it useful, though I doubt it will meet 100% of your needs. I write test scripts in a word processing document because I find it easier to modify and much easier to read. A good test script should tell a story – why is this script being run, what requirements does it cover, who is the target user group, etc… A spreadsheet doesn’t do this, IMO – though I know quite a few testers who insist on writing test scripts in them.
Let’s start with definitions of test scenario and test script. Test script is easy: it’s the actual step-by-step actions the tester takes, along with the expected results. Test script, test case, and test procedure all refer to the same thing. A test scenario is a group of one or more test scripts that cover a particular functional area, business process, use case, etc… Other names include test set and test suite.
For example, you may have a use case for an existing customer placing an order on your web site. Chances are you’ll have a test scenario (possibly more – it depends on your test approach, particular requirements, etc…) to cover this, with potential test cases covering a single-item order, a multiple-item order, a quantity-not-on-hand exception, etc…
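To make the grouping concrete, here is a minimal sketch of that scenario/case relationship in Python. The IDs, names, and class structure are all illustrative assumptions, not part of the template itself:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    case_id: str
    name: str


@dataclass
class TestScenario:
    scenario_id: str
    name: str
    use_case: str                       # one use case per scenario
    test_cases: list[TestCase] = field(default_factory=list)


# Hypothetical scenario for the "existing customer places an order" use case
scenario = TestScenario(
    scenario_id="TS-012",
    name="Existing customer places web order",
    use_case="UC-012",
    test_cases=[
        TestCase("TS-012.1", "Single item order"),
        TestCase("TS-012.2", "Multiple item order"),
        TestCase("TS-012.3", "Quantity not on hand exception"),
    ],
)
print(len(scenario.test_cases))
```

The point is simply that one scenario owns several related test cases, each of which is a full step-by-step script.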
So looking at the template, we begin with the test scenario.
I strongly recommend coming up with a numbering sequence (perhaps meshing with the use case numbering scheme in place) and also naming the test scenario something specific that will help differentiate it from the other test scenarios.
This should be narrative and explain the purpose of the tests, a brief overview of the test cases, and/or references to other related test scenarios.
It is particularly important when you have a large number of test scenarios to track when something was baselined and when subsequent modifications were made. You may want to tweak this to track approvals as well.
A simple list of the test cases included in the test scenario.
Reference the related use case, if you are using them. I would suggest limiting a test scenario to cover only one use case and not more (though of course you may have more than one test scenario covering a specific use case).
In this section list the technical and/or functional components being covered along with any requirement groups that are being tested. Do not list specific requirements here – that will be done in each test case. Listing components is very helpful if you’re assigning test cases to testers based on technical/functional component or you’re looking at a phased delivery of functionality to test (where components 1-3 are available on a specific date, but components 4-6 are not ready until a future date).
List the target user group(s).
For each test case that is in the test scenario, document the following:
See comments above – I generally number test scripts based on the test scenario, so if I had scenario X, then the first test script would be X.1, the second X.2, etc…
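That numbering convention is mechanical enough to sketch in a couple of lines; the function name and IDs here are hypothetical:

```python
def script_id(scenario_id: str, ordinal: int) -> str:
    """Derive a test script ID from its parent scenario ID, e.g. X.1, X.2."""
    return f"{scenario_id}.{ordinal}"


# The first three scripts under scenario "X"
ids = [script_id("X", n) for n in range(1, 4)]
print(ids)
```

Deriving script IDs from the scenario ID means you can always trace a script back to its scenario (and, if the scenario IDs mesh with your use case numbers, back to the use case) without a lookup table.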
This is vital for a variety of reasons. First, from a planning perspective you need to validate that all requirements are covered by test scripts. Also, when a requirement changes – I know, this hardly ever happens – you need to know which test scripts may need to be updated.
List all steps that should be taken prior to executing the test case – these might include initializing data, checking for required test data, other test scripts that should be executed first, specific hardware/software that should be used, etc…
Post-Test Actions
Along with setup steps, there may be steps that should be taken after the test case is executed.
This is the heart of the test script – for each step, list the action the tester should take, then document the expected result. Do not make the mistake of including desired outcomes/expected results in the test action column. The more precise and clear you are in your documentation, the less room for questions or tester interpretation during execution. I also have a pass/fail column listed – a lot of people like to put defect #s here as well, but I don’t (I prefer to track the test script # against the defect in the defect tracking database, not the other way around).
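The step table described above can be sketched as a simple structure. The column names and sample order-entry steps are illustrative assumptions, but the key point from the text is preserved: action and expected result live in separate columns, with pass/fail recorded per step during execution:

```python
# Each step keeps the action and the expected result in separate fields,
# plus a pass/fail field that stays empty until execution.
steps = [
    {"step": 1,
     "action": "Log in as an existing customer",
     "expected": "Account home page is displayed",
     "pass_fail": None},
    {"step": 2,
     "action": "Add one item to the cart and check out",
     "expected": "Order confirmation shows the correct item and total",
     "pass_fail": None},
]

# During execution, the tester records an outcome for each step:
steps[0]["pass_fail"] = "Pass"

# Any step without an outcome has not been executed yet
incomplete = [s["step"] for s in steps if s["pass_fail"] is None]
print(incomplete)
```

Keeping the expected result out of the action column, as the text recommends, is what lets a pass/fail judgment be made per step rather than per script.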
This section should certainly be tweaked for your specific test needs, but this is a good starting point. Here you track who ran the test, which test phase it was executed in, which test id was utilized, when the test was executed, and what the status was (passed, passed with minor errors, failed, critical failure, etc… – whatever statuses you utilize).
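The tracking fields above amount to one record per execution. A minimal sketch, with hypothetical field names and values and the example statuses from the text, might look like:

```python
# One execution record per test run; values here are purely illustrative.
execution_record = {
    "tester": "jdoe",
    "test_phase": "System Test",
    "test_id": "TS-012.1",
    "executed_on": "2009-06-30",
    "status": "Passed",
}

# Constrain status to whatever set your team actually uses
ALLOWED_STATUSES = {
    "Passed", "Passed with minor errors", "Failed", "Critical failure",
}
assert execution_record["status"] in ALLOWED_STATUSES
print(execution_record["test_id"], execution_record["status"])
```

Fixing the allowed status values up front keeps execution reports consistent when several testers fill in the same template.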