Software Test Planning

Here are my notes about creating Test Plans in a way that reconciles several methodologies, including RUP and the UML 2.0 Test Profile (U2TP) language for designing, visualizing, specifying, analyzing, constructing, and documenting the artifacts of test systems.

Inputs to Testing:
Testing Artifact (Deliverable) Flow Sequence Diagram
This shows the flow of deliverables among major participants (the stick figures in the use case diagram above). The darker vertical lines illustrate the principal exchanges of artifacts of information — the teamwork necessary among participants. Each artifact's color identifies one of the 4 metalayers (packages) of abstraction defined by the UML 2.0 Test Profile (U2TP) standard: test architecture, test behavior, test data, and time.
The word "Profile" in U2TP means that it is a standard UML extension mechanism: U2TP conforms to and builds on UML standards rather than defining a separate notation.
"A Standard for Testing Application Software" (1991) by William E. Perry "Quality Essentials" by Jack B. Revelle |
Testing Terminology
HP (Mercury Interactive)'s Quality Center (formerly TestDirector) product organizes each requirement for testing as a test subject for each AUT (Application Under Test) under a Test Plan Tree hierarchy. Both manual and automated scripts can be specified in TestDirector. Each test script selected from the tree becomes a Test Step in a Test Set actually executed by TestDirector.

In Rational's TestManager, a test plan contains test cases organized within test case folders. These are read by Rational's ClearQuest defect tracking system.
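To make these hierarchies concrete, here is a minimal Java sketch of the shape both tools imply: requirements become test subjects in a tree, and scripts selected from the tree become steps in an executable test set. The class and field names are my own illustration, not either vendor's API.

```java
import java.util.ArrayList;
import java.util.List;

public class TestPlanTree {
    // A node in the Test Plan Tree: one test subject per requirement.
    static class TestSubject {
        final String name;
        final List<TestSubject> children = new ArrayList<>();
        final List<String> scripts = new ArrayList<>(); // manual or automated
        TestSubject(String name) { this.name = name; }
    }

    // What the tool actually executes: an ordered list of test steps.
    static class TestSet {
        final List<String> steps = new ArrayList<>();
        void addStep(String script) { steps.add(script); }
    }

    public static void main(String[] args) {
        TestSubject root = new TestSubject("AUT: Order Entry");
        TestSubject login = new TestSubject("Requirement: Login");
        login.scripts.add("verify_valid_login");    // automated script
        login.scripts.add("manual_password_reset"); // manual script
        root.children.add(login);

        TestSet smoke = new TestSet(); // scripts picked from the tree
        smoke.addStep(login.scripts.get(0));
        System.out.println("Smoke test set steps: " + smoke.steps);
    }
}
```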
Why Bother with UML Test Profiles?

From the Rational Unified Process (RUP) tutorial:
Test Outputs
Test Logs
With UML: A Test Log is an interaction resulting from the execution of a test case. It represents (remembers) the different messages exchanged between test components and the SUT and/or the states of the test components involved. A log is associated with verdicts representing the adherence of the SUT to the test objective of the associated test case. (A minimal sketch follows this list.) The names of test log files usually differ by vendor.

Test Results

Key Measures in Test

Work Load Analysis Model
See Performance Testing using LoadRunner.

Test Evaluation Summary
Deliverables such as Performance Reports, Enhancement Requests, and Defect Reports.
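Here is the promised sketch of a test log in Java, assuming a simple model in which a log records the messages exchanged during one execution and carries the verdict of that execution; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public class TestLogDemo {
    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }

    // A test log "remembers" the messages exchanged during one test case
    // execution and is associated with the verdict of that execution.
    static class TestLog {
        final String testCase;
        final List<String> messages = new ArrayList<>();
        Verdict verdict = Verdict.INCONCLUSIVE;

        TestLog(String testCase) { this.testCase = testCase; }

        void record(String from, String to, String msg) {
            messages.add(from + " -> " + to + ": " + msg);
        }
    }

    public static void main(String[] args) {
        TestLog log = new TestLog("tc_withdraw_cash");
        log.record("TestComponent", "SUT", "withdraw(100)");  // stimulus
        log.record("SUT", "TestComponent", "dispensed(100)"); // observation
        log.verdict = Verdict.PASS;
        log.messages.forEach(System.out::println);
        System.out.println("Verdict: " + log.verdict);
    }
}
```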
Test Architecture
Test Elements
The architecture section defines two test elements: test suites and test components.
Scheduler
The Scheduler is a predefined interface defining operations used for controlling the tests and the test components.
Its operations include:

- createTestComponent(t : TestComponent)
- finishTestCase(t : TestComponent)

To AGEDIS[3], the Scheduler is a property of a test context used to control the execution of the different test components. The scheduler keeps information about which test components exist at any point in time, and collaborates with the arbiter to inform it when it is time to issue the final verdict. It keeps control over the creation and destruction of test components and it knows which test components take part in each test case.
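As a rough illustration of that lifecycle, here is a minimal Java sketch of a scheduler that tracks live components and signals the arbiter once the last one finishes. The issueFinalVerdict callback and all class names are my assumptions for illustration, not part of U2TP or AGEDIS.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class SimpleScheduler {
    interface TestComponent { String name(); }
    interface Arbiter { void issueFinalVerdict(); } // hypothetical callback

    private final Set<TestComponent> live = new LinkedHashSet<>();
    private final Arbiter arbiter;

    public SimpleScheduler(Arbiter arbiter) { this.arbiter = arbiter; }

    // The scheduler knows which test components exist at any point in time.
    public void createTestComponent(TestComponent t) {
        live.add(t);
    }

    // When the last component finishes, tell the arbiter it is time
    // to issue the final verdict.
    public void finishTestCase(TestComponent t) {
        live.remove(t);
        if (live.isEmpty()) {
            arbiter.issueFinalVerdict();
        }
    }

    public static void main(String[] args) {
        SimpleScheduler s =
            new SimpleScheduler(() -> System.out.println("issue final verdict"));
        TestComponent tc = () -> "tc1";
        s.createTestComponent(tc);
        s.finishTestCase(tc); // last component out triggers the arbiter
    }
}
```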
Test Validation Action
Validation actions are performed by a test component. A validation action sets the local verdict of that test component. Evaluation actions evaluate the status of the execution of a test case. The action assesses the SUT observations and/or additional characteristics/parameters of the SUT. Every validation action causes the setVerdict operation on the arbiter implementation to be invoked.
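A minimal Java sketch of that rule, assuming a simple string comparison as the assessment; the Arbiter interface here is reduced to the single setVerdict operation named above.

```java
public class ValidationActionDemo {
    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }

    interface Arbiter { void setVerdict(Verdict v); }

    static class TestComponent {
        private final Arbiter arbiter;
        private Verdict localVerdict = Verdict.INCONCLUSIVE;

        TestComponent(Arbiter arbiter) { this.arbiter = arbiter; }

        // A validation action: assess an SUT observation, set the local
        // verdict, and invoke setVerdict on the arbiter.
        void validate(String expected, String observed) {
            localVerdict = expected.equals(observed) ? Verdict.PASS : Verdict.FAIL;
            arbiter.setVerdict(localVerdict);
        }
    }

    public static void main(String[] args) {
        TestComponent tc =
            new TestComponent(v -> System.out.println("arbiter received: " + v));
        tc.validate("dispensed(100)", "dispensed(100)"); // arbiter received: PASS
    }
}
```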
Test Verdict
A Verdict is an assessment of the correctness of the SUT. AGEDIS notes that a verdict is a property of a test case or a test context used to evaluate test results and to assign the overall verdict of that test case or test context, respectively. Test cases yield verdicts. Verdicts can be used to report failures in the test system. The predefined verdict values are pass, inconclusive, fail, and error.

Verdicts can also be user-defined. The verdict of a test case is calculated by the arbiter.
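One common arbitration rule, and the one assumed in this sketch, is "worst verdict wins": a verdict can get worse as validation actions arrive but never better. A real arbiter may implement a different policy.

```java
public class DefaultArbiter {
    // Severity ordering assumed here: PASS < INCONCLUSIVE < FAIL < ERROR.
    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }

    private Verdict current = Verdict.PASS;

    // Keep the worst verdict seen so far; a FAIL cannot be improved.
    public void setVerdict(Verdict v) {
        if (v.ordinal() > current.ordinal()) {
            current = v;
        }
    }

    public Verdict getVerdict() { return current; }

    public static void main(String[] args) {
        DefaultArbiter a = new DefaultArbiter();
        a.setVerdict(Verdict.PASS);
        a.setVerdict(Verdict.FAIL);
        a.setVerdict(Verdict.PASS);         // cannot improve a FAIL
        System.out.println(a.getVerdict()); // FAIL
    }
}
```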
Test Context
With U2TP, each <<TestContext>> is a structured classifier acting as a grouping mechanism for a set of test cases. The composite structure of a test context is referred to as a test configuration. The classifier behavior of a test context is used for test control. Each test context must contain exactly one property realizing the Arbiter interface and the Scheduler interface.
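A minimal Java sketch of that grouping, assuming test cases can be modeled as suppliers of verdicts; the interfaces are cut down to what the example needs and are not the full U2TP signatures.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public class TestContextDemo {
    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }
    interface Arbiter { void setVerdict(Verdict v); Verdict getVerdict(); }
    interface Scheduler { void starting(String tc); void finished(String tc); }

    // The grouping mechanism: one arbiter, one scheduler, many test cases.
    static class TestContext {
        final Arbiter arbiter;     // exactly one property realizing Arbiter
        final Scheduler scheduler; // exactly one property realizing Scheduler
        final Map<String, Supplier<Verdict>> testCases = new LinkedHashMap<>();

        TestContext(Arbiter a, Scheduler s) { arbiter = a; scheduler = s; }

        // Test control: run each test case and feed its verdict to the arbiter.
        Verdict run() {
            for (Map.Entry<String, Supplier<Verdict>> e : testCases.entrySet()) {
                scheduler.starting(e.getKey());
                arbiter.setVerdict(e.getValue().get());
                scheduler.finished(e.getKey());
            }
            return arbiter.getVerdict();
        }
    }

    public static void main(String[] args) {
        Arbiter worstWins = new Arbiter() {
            private Verdict v = Verdict.PASS;
            public void setVerdict(Verdict x) { if (x.ordinal() > v.ordinal()) v = x; }
            public Verdict getVerdict() { return v; }
        };
        Scheduler console = new Scheduler() {
            public void starting(String tc) { System.out.println("start " + tc); }
            public void finished(String tc) { System.out.println("done  " + tc); }
        };
        TestContext ctx = new TestContext(worstWins, console);
        ctx.testCases.put("tc_login", () -> Verdict.PASS);
        ctx.testCases.put("tc_logout", () -> Verdict.FAIL);
        System.out.println("overall: " + ctx.run()); // overall: FAIL
    }
}
```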
Test Part
A part of the test system representing miscellaneous components that help test components to realize their test behavior. Examples of utility parts are miscellaneous features of the test system.

Test Utility
Structured Classifier
Test Components
With U2TP, a test component is a structured classifier participating in test behaviors. A test component is commonly an active class with a set of ports and interfaces. Test components are used to specify test cases as interactions between a number of test components. The classifier behavior of a test component can be used to specify low-level test behavior, such as test scripts, or it can be automatically generated by deriving the behavior from all test cases in which the component takes part.

Test component objects realize the behavior of a test case. A test component has a set of interfaces via which it may communicate via connections with other test components or with the SUT. A test component object executes a sequence of behaviors against the SUT in the form of test stimuli and test observations. It can also perform validation actions, and can log information into the test trace. Whenever a test component performs a validation action, it updates its local verdict.
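To tie the pieces together, a minimal Java sketch of a test component that stimulates the SUT through an interface, logs stimulus and observation into a trace, and updates its local verdict on each validation action; the SutPort interface is a stand-in for the component's connections.

```java
import java.util.ArrayList;
import java.util.List;

public class TestComponentDemo {
    enum Verdict { PASS, INCONCLUSIVE, FAIL, ERROR }

    // Stand-in for the component's connection to the SUT.
    interface SutPort { String call(String stimulus); }

    static class TestComponent {
        private final SutPort sut;
        private final List<String> trace = new ArrayList<>(); // test trace
        private Verdict localVerdict = Verdict.INCONCLUSIVE;

        TestComponent(SutPort sut) { this.sut = sut; }

        // One behavior: send a test stimulus, record the observation,
        // and perform a validation action that updates the local verdict.
        void step(String stimulus, String expected) {
            String observed = sut.call(stimulus);
            trace.add(stimulus + " => " + observed);
            localVerdict = expected.equals(observed) ? Verdict.PASS : Verdict.FAIL;
        }

        Verdict localVerdict() { return localVerdict; }
        List<String> trace() { return trace; }
    }

    public static void main(String[] args) {
        TestComponent tc = new TestComponent(s -> s.equals("ping") ? "pong" : "?");
        tc.step("ping", "pong");
        System.out.println(tc.trace() + " -> " + tc.localVerdict()); // PASS
    }
}
```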
Test Behaviors Package
Test Timings
Participants

Test Team
Development Team
Operations Team
Test Risks
Subjects of Testing
Test Guidelines & Strategies
Test Priorities
When a requirement is added using Mercury TestDirector, the product requires specification of a Priority:

1 - Low
2 - Medium
3 - High
4 - Very High
5 - Urgent

Note: WinRunner does not recognize the F4 key normally used to drop an activated list.

Define examples of what each priority level means in your organization, and when each value is and is not appropriate. One possible starting point is sketched below.
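For example, such definitions could be captured alongside test code as an enum; the meanings below are illustrative placeholders, not TestDirector's definitions.

```java
public enum Priority {
    // The numeric values mirror TestDirector's 1..5 scale; the meanings
    // are example policies to adapt to your organization.
    LOW(1, "cosmetic issue; test when convenient"),
    MEDIUM(2, "workaround exists; schedule normally"),
    HIGH(3, "core feature affected; cover this release"),
    VERY_HIGH(4, "blocks a test path; cover before next build"),
    URGENT(5, "blocks all testing; cover immediately");

    public final int testDirectorValue;
    public final String exampleMeaning;

    Priority(int value, String meaning) {
        testDirectorValue = value;
        exampleMeaning = meaning;
    }
}
```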
Test Ideas