| Advantage Advertised | But ... | So here is my advice: |
|---|---|---|
| Fast: "Scripts execute faster than human users." | People are still needed to program the scripts, set up the test runs, interpret the results, discuss fixes, and so on. All this makes test execution a small part of the whole testing effort. The first automated test project is more work than a purely manual effort. | Expect automation to be a tool for the efficiency of testers, not a total replacement of testers. Use test script programming to quickly bring the tester to a consistent starting point within the application under test. |
| Comprehensive: "You can build a suite of tests that covers every feature in your application." | The more functionality covered with automation, the more complicated the test programming becomes. Automation does not eliminate the trade-off between time spent on breadth vs. depth of testing. | Define what "total" functionality and run conditions mean. Use checklists for manual tests of what is prone to error. Automate coverage for breadth so testers can focus on depth. Be alert for where new techniques are being attempted and budget for them. |
| Reliable: "Tests perform the same operations each time they are run, thereby eliminating human error." | Today's technology can only recognize what it has been programmed to check. Humans (like hound dogs) can snoop around and notice things out of the ordinary. | Use automation for tedious tasks, such as scanning an application for expected menu titles, and for presenting indications of possible problems to testers. |
| Programmable: "You can program sophisticated tests that pull out hidden information from the application." | Will time spent on sophisticated test script programming be viewed as taking time away from the "real work" of manual testing? Are testers able and willing to become disciplined programmers maintaining complex program code? | Budget time specifically for automation research and development (and evangelism). Train programmers and verify their proficiency at crafting logic and extracting information from the API. |
| Repeatable: "You can test how the software reacts under repeated execution of the same operations." | When the software changes, scripts and image maps need to be reprogrammed or even redesigned. | Use a data-driven approach: use spreadsheets or test specification databases to programmatically drive testing. |
| Reusable: "You can reuse tests on different versions of an application, even if the user interface changes." | Test scripts may need to be re-recorded with each deployment if developers do not detail what has changed (from the tester's point of view) since the last deployment. To work well with each other, program code needs to be created against a common architecture. | Run code analysis tools. Use script generation tools. Enforce naming conventions and the use of a common library of functions. Design test scripts in modules that begin from common starting points (such as a home screen), and track who uses what test data. |