Functional Testing Process
- Define Requirements and Test Cases.
Within HP Quality Center (QC), define all business processes in the system.
Using HP BPT (Business Process Testing),
subject matter experts (SMEs) who understand the business processes
create test-case "component shells"
for the main and variant (alternative) process flows
in each business process (such as "Open an Order").
Multiple tests are defined for each business process to cover variations in data boundaries
and potential (negative) error and vulnerability conditions.
The linkage of test cases to requirements defines the RTM (Requirements Traceability Matrix)
used as the basis for determining requirements coverage metrics.
Categorize each requirement by importance and risk exposure so that
relative importance and risk can be used as the basis for prioritizing testing effort within
the time available.
- Catalog Technical Components Under Test.
Build lists of transactions, screens, fields, controls, etc.
using TAO scans, the QTP recorder, and Object Spy.
The result is a
GUI Map of logical handles to uniquely identify physical objects under test.
Testability issues may surface; these are resolved by
developers assigning unique "IDs" to the objects on screens presented to users.
Not only do unique IDs help automated object recognition,
they also make communication more precise by avoiding confusion over which object is meant.
SMEs identify frequently used functionality common to several scripts,
which Automation Engineers define as keywords and functions
in a reusable library referenced by several test scripts (see the sketch below).
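As a rough illustration, a reusable keyword might be defined once in a shared VBScript function library associated with each test. This is only a sketch: the LoginAsUser name, the "OrderApp" objects, and the html id values are hypothetical placeholders, not objects from any particular application.

    ' LoginAsUser: hypothetical reusable keyword kept in a shared .vbs
    ' function library associated with tests via Test Settings > Resources.
    Public Function LoginAsUser(sUser, sPassword)
        ' Descriptive programming against unique html IDs (see above)
        ' avoids brittle per-script GUI-map entries.
        Browser("OrderApp").Page("Login").WebEdit("html id:=txt_username").Set sUser
        Browser("OrderApp").Page("Login").WebEdit("html id:=txt_password").Set sPassword
        Browser("OrderApp").Page("Login").WebButton("html id:=btn_login").Click
        ' Report the outcome so every calling script logs it uniformly.
        If Browser("OrderApp").Page("Home").Exist(10) Then
            Reporter.ReportEvent micPass, "LoginAsUser", "Logged in as " & sUser
            LoginAsUser = True
        Else
            Reporter.ReportEvent micFail, "LoginAsUser", "Home page not reached"
            LoginAsUser = False
        End If
    End Function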
- Design Testing Work.
Test sets of manual or automated test steps are
organized for each test requirement.
These define a test flow as a sequence of functional Actions
(such as "Login"), each consisting of several steps
such as a manual user task or a tester verification.
- Identify sources of data.
Custom SQL queries may be needed to create spreadsheets containing the data needed for testing
(a data-loading sketch follows this item).
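For instance, a script can pull query results straight into the QTP Data Table for data-driven runs. A minimal sketch, assuming a hypothetical "OrdersDB" DSN and TestOrders table; the column names are placeholders:

    ' Hypothetical example: load order test data from SQL into the Data Table.
    Dim oConn, oRS, iRow
    Set oConn = CreateObject("ADODB.Connection")
    oConn.Open "DSN=OrdersDB"   ' placeholder connection string
    Set oRS = oConn.Execute("SELECT OrderID, CustomerName FROM TestOrders")
    ' Ensure the columns exist in the Global sheet before writing.
    DataTable.GetSheet(dtGlobalSheet).AddParameter "OrderID", ""
    DataTable.GetSheet(dtGlobalSheet).AddParameter "CustomerName", ""
    iRow = 1
    Do While Not oRS.EOF
        DataTable.GetSheet(dtGlobalSheet).SetCurrentRow iRow
        DataTable.Value("OrderID", dtGlobalSheet) = oRS("OrderID")
        DataTable.Value("CustomerName", dtGlobalSheet) = oRS("CustomerName")
        oRS.MoveNext
        iRow = iRow + 1
    Loop
    oRS.Close
    oConn.Close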
- Pre-plan variations in what is tested (using Testing Ideas Checklists).
- Identify how to verify whether what the system returns is correct.
- Anticipate in scripts how to handle conditions the application might encounter,
such as pop-up screens announcing errors and confirmation messages
(a guard sketch follows this list).
This reduces script debugging while testing.
- Define how to trigger possible error conditions during negative testing
of alternative logic branches.
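One lightweight way to handle such pop-ups during negative tests of error branches (QTP Recovery Scenarios are the heavier-weight alternative) is a guarded Exist check. The dialog, control, and browser names below are hypothetical:

    ' Hypothetical guard for an error pop-up that may or may not appear.
    Dim sMsg
    If Browser("OrderApp").Dialog("Error").Exist(2) Then
        ' Capture the message for the run log, then dismiss it so the flow continues.
        sMsg = Browser("OrderApp").Dialog("Error").Static("ErrorText").GetROProperty("text")
        Reporter.ReportEvent micWarning, "Unexpected pop-up", sMsg
        Browser("OrderApp").Dialog("Error").WinButton("OK").Click
    End If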
- Assemble Testing Assets.
Create and debug
Test Scripts by consolidating references to GUI maps and functions in an
Object Repository.
- Make sure that scripts can identify objects at run-time.
- Edit code to parameterize scripts to use variations in data.
- Add code to output log trace entries.
- Add correlations to capture data returned from the system
for result verification in checkpoints
that determine whether tests pass or fail (see the combined sketch below).
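A minimal sketch combining parameterization, log-trace output, and a correlation-style verification, assuming a hypothetical "Open an Order" flow with Data Table columns named OrderID and ExpectedTotal:

    ' Parameterize: read the current Data Table row instead of hard-coding values.
    Dim sOrderID, sActualTotal, sExpectedTotal
    sOrderID = DataTable.Value("OrderID", dtGlobalSheet)
    Browser("OrderApp").Page("Orders").WebEdit("html id:=txt_order").Set sOrderID

    ' Log a trace entry so the run log shows which data row executed.
    Reporter.ReportEvent micDone, "Open Order", "Submitted order " & sOrderID
    Browser("OrderApp").Page("Orders").WebButton("html id:=btn_open").Click

    ' Correlate: capture what the system returns, then verify it as a checkpoint.
    sActualTotal = Browser("OrderApp").Page("OrderDetail").WebElement("html id:=lbl_total").GetROProperty("innertext")
    sExpectedTotal = DataTable.Value("ExpectedTotal", dtGlobalSheet)
    If Trim(sActualTotal) = Trim(sExpectedTotal) Then
        Reporter.ReportEvent micPass, "Order total", "Matched " & sExpectedTotal
    Else
        Reporter.ReportEvent micFail, "Order total", "Expected " & sExpectedTotal & ", got " & sActualTotal
    End If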
- Conduct a Test Readiness Review (TRR) to ensure that systems,
data, disk space, and other resources are
indeed available when testing is scheduled to occur.
The review covers the requirements assigned for testing within a particular
Release defined in Quality Center.
Within a release, several Cycles may be specified
(for integration, smoke, regression, interface, system performance, etc.).
- Perform Testing. Emulate users and systems at work by
running scripts remotely from Quality Center or using Silent Test Runner
from LoadRunner or BAC.
If running manual tests, run QTP scripts to provide quick navigation and
run the Mercury Micro Player to record testing sessions.
If running fully automated tests, use batch execution
(such as QTP's Test Batch Runner; an Automation Object Model sketch follows).
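Unattended batch runs can also be driven through the QTP Automation Object Model from a plain .vbs file; the test and results paths below are placeholders:

    ' Run a saved QTP test unattended via the Automation Object Model.
    Dim qtApp, qtOpts
    Set qtApp = CreateObject("QuickTest.Application")
    qtApp.Launch
    qtApp.Visible = False
    qtApp.Open "C:\Tests\OpenOrder", True            ' open read-only (placeholder path)
    Set qtOpts = CreateObject("QuickTest.RunResultsOptions")
    qtOpts.ResultsLocation = "C:\Results\OpenOrder"  ' placeholder results folder
    qtApp.Test.Run qtOpts, True                      ' True = wait until the run ends
    WScript.Echo "Run status: " & qtApp.Test.LastRunResults.Status
    qtApp.Test.Close
    qtApp.Quit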
- Analyze Results and Report Conclusions.
Review error messages and captured status.
Perform Root-Cause Analysis when necessary while documenting defects.
Add tester notes into TestDirector for Quality Center.
- Archive Results.
Use the Test Results Deletion Tool to remove obsolete test result files.