Software Performance Project Planning
This page presents the phases, deliverables, roles, and tasks for a full performance test project that makes use of several industry best practices and tools for load testing and performance engineering, one of the activities for capacity management of IT Service Management (ITSM). This is a companion to my Sample Load Testing Report.
"If you can't describe what you are doing as a process, you don't know what you're doing." W. Edwards Deming
Aspects of a Performance Improvement Project
There are two environments for ensuring performance: the load-test environment, where performance is measured before deployment, and the production environment, where live performance is monitored.
Pre-requisites
Ideally, all this occurs AFTER considering the organization's quality maturity: ensuring smooth team development by clarifying participants' roles, setting risk-adjusted milestone schedules, and delivering a flow of artifacts by achieving project process objectives using the provisions needed for load testing.
Phases: Define > Measure > Analyze > Improve > Control
I prefer to use this 5-phase approach detailed on this page and on other pages of this website.
The approach above was drawn from several capacity management frameworks.
In the electronics industry: after prototyping, and after the product goes through the Design Refinement cycle (when engineers revise and improve the design to meet performance and design requirements and specifications), objective, comprehensive Design Verification Testing (DVT) is performed to verify all product specifications, interface standards, OEM requirements, and diagnostic commands. Process (or Pilot) Verification Testing (PVT) is a subset of Design Verification Tests (DVT) performed on pre-production or production units to verify that the design has been correctly implemented into production.
The Microsoft Operations Framework (MOF) defines this circular process flow of capacity management activities:
Oracle's Expert Services' Architecture Performance Capacity Scope & Assessment consulting uses these phases and deliverables:
Software Performance Engineering (SPE)
Smith's Software Performance Engineering (SPE) approach begins with these detailed steps:
"5S" Kaizen Lean ApproachSort > Stabilize (Set in order) > Shine > Standardize > Sustain
Sort: separate tools used most often from those used infrequently.
Deliverables Flow
The objects in this flowchart are labeled so that they can be referenced in plans and schedules.
Forms/Types of Performance Testing/Engineering, and what each accomplishes:
A. Speed Tests
During speed testing, the user response time (latency) of each user action is measured. The script for each action looks for some text on each resulting page to confirm that the intended result appears as designed. Since speed testing is usually the first performance test to be performed, issues from installation and configuration are identified during this step. Because this form of performance testing is performed for a single user (under no other load), it exposes issues with the adequacy of CPU, disk I/O access and data transfer speeds, and database access optimizations. The performance speed profile of an application obtained during speed testing includes the time to manually start up and stop the application on its servers.
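As a minimal sketch of this idea (the URLs, expected strings, and use of plain Python are assumptions for illustration; a real project would use a load-testing tool), a single-user speed test times each action and confirms that the intended text appears in the response:

```python
import time
import urllib.request

# Hypothetical actions: (name, URL, text that must appear in the response)
ACTIONS = [
    ("home", "http://localhost:8080/", "Welcome"),
    ("login", "http://localhost:8080/login", "Sign in"),
]

def run_speed_test(actions):
    """Fetch each page once (single user, no other load) and time it."""
    for name, url, expected in actions:
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        elapsed = time.perf_counter() - start
        ok = expected in body  # confirm the intended result appears
        print(f"{name:10s} {elapsed * 1000:8.1f} ms  {'PASS' if ok else 'FAIL'}")

if __name__ == "__main__":
    run_speed_test(ACTIONS)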
B. Contention Tests (for Robustness)
This form of performance test aims to find performance bottlenecks (such as lock-outs, memory leaks, and thrashing) caused by a small number of Vusers contending for the same resources. Each run identifies the minimum, average, median, and maximum times for each action. This is done to make sure that data and processing of multiple users are appropriately segregated. Such tests identify the largest burst (spike) of transactions and requests that the application can handle without failing. Such bursty loads are more like the arrival rate to web servers than constant loads.
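To make those per-action statistics concrete, here is a small sketch (the response-time samples are hypothetical hard-coded values; in practice they come from the test tool's results log):

```python
import statistics

# Hypothetical per-action samples (seconds) collected across Vusers
samples = {"checkout": [0.42, 0.51, 0.47, 1.93, 0.55],
           "search":   [0.12, 0.14, 0.11, 0.13, 0.90]}

for action, times in samples.items():
    print(f"{action:10s} min={min(times):.2f} avg={statistics.mean(times):.2f} "
          f"median={statistics.median(times):.2f} max={max(times):.2f}")
```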
C. Volume Tests (for Extensibility)
These test runs measure the pattern of response time as more data is added. These tests make sure there is enough disk space and provisions for handling that much data, such as backup and restore.
D. Stress / Overload
This is done by gradually ramping up the number of Vusers until the system "chokes" at a breakpoint (when the number of connections flattens out, response time degrades or times out, and errors appear). During tests, the resources used by each server are measured to make sure there is enough transient memory space and adequate memory management techniques. This effort makes sure that admission control techniques limiting incoming work perform as intended. This includes detection of and response to Denial of Service (DoS) attacks.
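A sketch of that ramp-up logic (the measure_throughput function is a hypothetical stand-in for running a load test at a given Vuser level and returning transactions per second):

```python
def find_breakpoint(measure_throughput, start=10, step=10, max_vusers=1000,
                    flat_tolerance=0.02):
    """Ramp up Vusers until throughput flattens out (the 'choke' point).

    measure_throughput(vusers) is assumed to run a load test at that
    level and return transactions per second.
    """
    previous = 0.0
    for vusers in range(start, max_vusers + 1, step):
        tps = measure_throughput(vusers)
        # Flat or declining throughput despite added load marks the breakpoint.
        if previous and tps <= previous * (1 + flat_tolerance):
            return vusers, tps
        previous = tps
    return None  # no breakpoint found within the tested range

# Toy stand-in: a system that saturates around 300 Vusers
if __name__ == "__main__":
    print(find_breakpoint(lambda v: min(v * 1.5, 450.0)))
```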
E. Fail-Over
For example, this form of performance testing ensures that when one computer of a cluster fails or is taken offline, other machines in the cluster are able to quickly and reliably take over the work being performed by the downed machine. This form of performance testing therefore requires multiple identical servers configured with virtual IP addresses accessed through a load balancer device.
F. Spike
Such runs can involve a "rendezvous point" where all users line up to make a specific request at a single moment in time. Such runs enable the analysis of "wave" effects through all aspects of the system. Most importantly, these runs expose the efficacy of load balancing.
G. Endurance
Because longer tests usually involve use of more disk space, these test runs also measure the pattern of build-up in "cruft" (obsolete logs, intermediate data structures, and statistical data that need to be periodically pruned). Longer runs allow for the detection and measurement of the impact of occasional events (such as Java Full GC and log truncations) and anomalies that occur infrequently. These tests verify provisions for managing space, such as log truncation "cron" jobs that normally sleep but awake at predetermined intervals (such as in the middle of the night).
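One way to quantify such "cruft" build-up is to fit a linear trend to periodic disk-usage samples taken during the run. A sketch, with hypothetical hourly samples:

```python
def growth_rate(samples_mb, interval_hours=1.0):
    """Least-squares slope of disk usage over time (MB per hour)."""
    n = len(samples_mb)
    xs = [i * interval_hours for i in range(n)]
    mean_x = sum(xs) / n
    mean_y = sum(samples_mb) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_mb))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical hourly samples (MB used) during a 6-hour endurance run
usage = [1200, 1260, 1315, 1380, 1437, 1502]
rate = growth_rate(usage)
print(f"Growth: {rate:.1f} MB/hour; days until 100 GB: "
      f"{(100_000 - usage[-1]) / rate / 24:.1f}")
```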
H. Scalability
The outcome of scalability efforts feeds a spreadsheet to calculate how many servers the application will need based on assumptions about demand.
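A sketch of the kind of calculation such a spreadsheet performs (every input number below is an assumption for illustration):

```python
import math

def servers_needed(peak_users, tx_per_user_per_hour, tx_per_sec_per_server,
                   target_utilization=0.70):
    """How many servers are needed at peak demand, with headroom."""
    peak_tps = peak_users * tx_per_user_per_hour / 3600.0
    usable_tps = tx_per_sec_per_server * target_utilization
    return math.ceil(peak_tps / usable_tps)

# Hypothetical demand assumptions: 20,000 users, 30 tx/user/hour,
# each server benchmarked at 60 tx/sec, kept below 70% utilization
print(servers_needed(20_000, 30, 60))  # -> 4
```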
I. Availability
These are run on applications in production mode. This provides alerts when thresholds are reached, and trends to gauge the average and variability of response times.
Capacity Management
A META Group study published in September 2003 reveals that capacity planning is the top critical issue for large enterprises (those with more than 1,000 people), with 33.8 percent of respondents identifying this as a critical issue. This high priority will continue through 2005/2006, escalating consolidation and efficiency demands. Load testing ensures that demand for computing power can be met by the supply of computing power. Proactive capacity management balances business requirements with IT resources, so you can consistently deliver quality service at minimum cost while minimizing the risks of higher utilization rates. Both ITIL and MOF (Microsoft Operations Framework) recognize that CM consists of three sub-processes:
The above makes use of concepts from Six Sigma vs. the "Planning to Implement Service Management" function referenced by ITIL-based SIPs (Service Improvement Plans).
Six Sigma
Traditional "Six Sigma" projects aim to improve existing products and processes using a methodology with the acronym DMAIC (commonly pronounced duh-may-ick, for Define, Measure, Analyze, Improve, and Control). The words Identify, Design, Optimize, and Verify (shown in brackets above) are the basis for the acronym of the IDOV methodology, one of many Design for Six Sigma (DFSS) methodologies for designing new products and services to meet six sigma standards. I also appreciate the "Define, Measure, Explore, Develop and Implement" steps from the PricewaterhouseCoopers methodology because they treat performance project artifacts with the same controls as "real" developers.
ITIL Service Delivery
Load Testing is a sub-process of the Capacity Management function within the Service Management standards ITIL (Information Technology Infrastructure Library) and its derivatives, BS 15000 and the MOF (Microsoft Operations Framework).
Capacity Plan
The capacity plan is the consolidated output (deliverable) from the capacity management process. The capacity plan recommends the resource levels and changes necessary to accomplish operating level requirements that support the service level agreement (SLA). The capacity plan includes the cost and benefit of those resources, reports of their compliance to the IT SLA, and the priority and impact of systems and resources on the overall business and the IT infrastructure. The Capacity Plan documents:
Other Approaches
Mark McWhinney's SEI Load Test Planning Process associates the 6 areas of a Load Test Plan in the sequence they are addressed.
Mark McWhinney's Critical Success Factors for Load Test Projects.
Mercury's Capacity Planning product webpage (OEM'ed from HyPerformix) by Sivan Metzger, Product Manager.
Organizational Concerns
The performance engineer's role is often misdefined and misunderstood. He or she can be blamed for not providing information, or scorned for providing unfavorable information. To operations personnel who are used to doing what they want, capacity management efforts can be perceived as an additional, unneeded intrusion. Naturally, the role of capacity management is to reconcile the needs of all parts of the organization mentioned above, at the same time, in the most cost-effective way. Organizational design issues can set up performance engineering projects and personnel for either success or failure. Network Performance Management systems/platforms (such as Avesta's Trinity, Loran Kinnetics, Manage.com's Frontline, and NextPoint's S3) have these capabilities:
As organizations move toward virtualization of server capacity, the job of capacity management would naturally become more about monitoring and managing costs, balancing the production network, and benchmarking.
Information Technology Evaluation Methods and Management (Hershey, PA: Idea Group Publishing, 2001) by Wim Van Grembergen
Achieving Software Quality Through Teamwork (Boston: Artech House, 2004) by Isabel Evans
Essentials of Capacity Management (New York: John Wiley & Sons, 2002) by Reginald Tomas Yu-Lee
Six Sigma Team Dynamics: The Elusive Key to Project Success (New York: John Wiley & Sons, 2003) by George Eckes
Operating Level Agreements (OLA)
An OLA is similar to, but normally not as formal as, SLAs (Service Level Agreements) with customers. The OLA should have its metrics stored in the CDB. The META Group anticipates that through 2005, more than half of IT organizations will invest in formalized IT business plans governed by service-level agreements (SLAs). Unfortunately, fewer than 10 percent of IT organizations have a well-defined service-level management process in place today that can accurately and consistently communicate relevant service levels to the business units.
Artifacts and Information Flows among Roles
This pseudo use-case diagram summarizes the information (artifacts) flowing among people assuming certain roles involved in managing the performance of large applications:
Before any Project:
Project Mission and Objectives for Customer Satisfaction
The column to the right of each requirement contains weight ratings that allow certain customer requirements to be weighted higher in priority than others in the list. The example shown here is the average of weights for different sub-groups. The "(1-5)" range in this example can optionally be replaced with ISO/IEC 14598-1 evaluation scales or advanced methods such as Thomas Saaty's "Analytic Hierarchy Process" used to establish precise scales:
Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World (3rd ed., May 1, 1999). The customer sub-groups shown in this example are for roles working with a computer application:
QFD graphic programs can add:
HOWs - Product Requirements - Technical Engineering Design Characteristics
At the heart of the Quality Function Deployment (QFD) approach is a matrix of how well each customer requirement is satisfied by measurable product requirements (also called, by various authors, product design engineering characteristics) that are:
The International TechneGroup, Inc. (ITI) approach for Concurrent Product/Manufacturing Process Development breaks the "WHATs" of the "voice of the customer" (VOC) down further into User Wants, Must Haves, Business Wants, and Provider Wants.
The CMM (Capability Maturity Model developed at Carnegie Mellon University) has 7 measures:
Software Quality Requirements
Associated with SEI/CMU's Taxonomy of Quality Measures, the 2000 revision to ISO/IEC FDIS 9126:1991 and SQuaRE defines 3 types of software quality requirements:
ISO/IEC 14598 gives methods for the measurement, assessment, and evaluation of software product quality. SPICE (Software Process Improvement and Capability dEtermination) is a major international standard for Software Process Assessment. There is a thriving SPICE user group known as SUGar. The SPICE initiative is supported by both the Software Engineering Institute and the European Software Institute. The SPICE standard is currently in its field-trial stage.
Project Numerical Business Goals
Capacity planning saves money by balancing two conflicting conditions: the expense of too much unused capacity on one hand, and on the other, the loss of profits from not having enough capacity to meet demand.
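A toy example of that balance (all figures are invented for illustration): compare the carrying cost of servers against the expected revenue lost when demand exceeds capacity, evaluated across demand scenarios:

```python
def net_cost(servers, server_cost_per_month, capacity_per_server,
             demand_scenarios):
    """Expected monthly cost: hardware plus revenue lost to unmet demand.

    demand_scenarios: list of (probability, tx_per_month, revenue_per_tx).
    """
    capacity = servers * capacity_per_server
    lost = sum(p * max(0, demand - capacity) * rev
               for p, demand, rev in demand_scenarios)
    return servers * server_cost_per_month + lost

# Hypothetical: $2,000/server/month, 300k tx/month capacity per server;
# demand is 750k tx/month (70% likely) or 1,350k (30%), $0.02 revenue/tx.
scenarios = [(0.7, 750_000, 0.02), (0.3, 1_350_000, 0.02)]
for n in range(2, 6):
    print(n, "servers -> expected monthly cost $",
          round(net_cost(n, 2000, 300_000, scenarios)))
```

In this made-up example the expected cost bottoms out at three servers: fewer forfeits revenue, more pays for idle capacity.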
Balanced Scorecard Diagnostics: Maintaining Maximum Performance (John Wiley & Sons: 2005, 224 pages) by Paul R. Niven presents a step-by-step methodology for analyzing the effectiveness of a company's balanced scorecard, with tools to reevaluate measures for driving maximum organizational performance.
"Balanced Scorecard" MetricsThe "Balanced Scorecard" (BSC) was introduced in 1996 by a popular book written by Robert Kaplan and David Norton (consultants and professors at the Harvard Business School). This lists the perspectives of a Balanced Scorecard, and some activity and Key Process Indicators (KPIs) which are most relevant to capacity and performance management:
These Balanced Scorecard metrics imply these business strategies:
Management Dashboards
For each metric, dice and slice:
Performance Project Plan
This information supplements (and in some cases contradicts) the body of knowledge for QAI Certified Software Project Managers (CSPMs).
Performance Within Development LifeCycles
Parasoft's Automated Error Prevention (AEP) process includes several test-friendly steps:
The Context of Performance and Scope of Performance Tuning
Performance improvement and management projects can be considered in the context of these architectural layers and components:
Provisions for Performance Testing
One of the key success factors of a performance measurement project is the availability of resources when needed. Waiting for resources (or working around the lack of resources) is one of the major reasons for project delays. There are two areas of provision (two sets of budgeted costs):
Environments
Specific machines on the technology "stack"
Machines Specific to the Load Test Environment
Component resources within each server
Risk Contingency Adjustments
Usage Patterns Trend Analysis
The estimates are based on Estimated Market (User) Usage Patterns, which feed Budgets & Forecasts of costs and revenues. Sudden peaks are common, as illustrated by this graph of search interest about a movie title:
Resource Load Patterns
Measures of "load" provide a gauge of the amount of work, such as "horsepower" in the physical world or MB of data transferred per second. These measures feed service capacity business impact analysis (BIA).
Capacities
Use the Poisson Distribution to estimate how often bursts of arrivals will exceed a component's capacity. Each piece of equipment has a limit on how much it can produce. An assembly can only handle as much as its smallest channel. For example, a web server has an input buffer, an internal queue, and an output buffer.
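A sketch of that Poisson arithmetic (the arrival rate and capacities are assumed for illustration): given an average of lam arrivals per interval, the probability that arrivals exceed a capacity k estimates how often that channel overflows:

```python
import math

def poisson_pmf(k, lam):
    """P(exactly k arrivals) for a Poisson process with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def prob_overflow(capacity, lam):
    """P(arrivals in one interval exceed the given capacity)."""
    return 1.0 - sum(poisson_pmf(k, lam) for k in range(capacity + 1))

# Hypothetical: 50 requests/sec on average; how often does the burst exceed capacity?
lam = 50
for cap in (60, 70, 80):
    print(f"capacity {cap}: overflow probability {prob_overflow(cap, lam):.5f}")
```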
Predicted Performance Profiles for Anticipated Loads
Estimates need to be based on peak rates of transaction "busyness" at various points in time: rather than "average" loads, use "maximum" values during various blocks of time.
Bottlenecks
Just as a chain's strength is limited by the strength of its weakest link, the capacity of an entire system processing transactions is limited by the capacity of the slowest component within the slowest server. The impact of bottlenecks is included in the metric percentage utilization of resources.
Issues
Analysis of results may identify issues anywhere in the stack, so the capacity manager must involve him/herself in a large scope: all categories of the entire IT architecture supporting the organization's Service Catalog, down to the components within each server.
Capacity Management Database (CDB)
One of the "best practices" of service management frameworks is that all this information be defined in a capacity plan stored within a capacity management database (CDB). (This is related to but separate from the configuration management database, or CMDB.) The CDB contains the detailed technical, business, and service level management data that supports the capacity management process. The resource and service performance data in the database can be used for trend analysis and for forecasting and planning. Mainframe-based data collection methods, tools, and techniques include MXG, SMF, SAS, and quantitative analysis techniques.
verifier.exe /flags 2 /driver drivername
The Linux Test Project's list of open-source tools for testing, debugging, and static analysis of code making use of Linux filesystems, clusters, databases, and event logging, by Jeff Martin (ffej at us.ibm.com).
$40/$2 Professional Web Site Optimization (Wrox Press: February 1, 1997) by Michael Tracy, Scott Ware, Robert Barker, and Louis Slothouber.
Microsoft's Open Wiki Forum for performance and scalability discussions.
Computer Systems Performance Evaluation and Prediction (Digital Press: October 2002) by Dartmouth professors Paul Fortier and Howard Michel. This textbook fills the void between engineering practice and the academic domain's treatment of computer systems performance evaluation and assessment, providing a single source on how to perform computer systems engineering tradeoff analysis so that managers can realize cost-effective yet optimal computer systems tuned to a specific application.
List of Web Site Test Tools and Site Management Tools maintained by Rick Hower
UML 2.0 Test Profile
Information about each Test Context: the test configuration on which test cases within test suites are executed.
component-level and system-level tests.
Time Concepts
The set of concepts to specify time constraints, time observations, and/or timers within test behavior specifications, in order to have time-quantified test execution and/or observation of the timed execution of test cases. The profile supports specification of tests for structural (static) and behavioral (dynamic) aspects of computational UML models. A test context is just a top-level test case. Annotate the model with testing information:
Off-Estimate Alerts
So if, during production operation, loads become higher than expected, those monitoring the data center would issue alerts for reactive action and additional analysis.
"Would you believe ... ?" |
|
Performance Criteria (Supplemental Requirements)
Variations on this include the response time degradation expected for different numbers of users exercising business and administrative tasks described in the application's Use Cases document.
$95/49 The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling (John Wiley: 1991) by R. K. Jain is a seminal classic, a must-read for its clarity.
$45/15 Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software (Addison-Wesley: September 17, 2001) by Dr. Connie Smith and Lloyd Williams of Performance Engineering focuses on object-oriented systems and alignment of Software Performance Engineering (SPE) with RUP. It notes performance patterns and anti-patterns.
$40 Measuring Computer Performance: A Practitioner's Guide (Cambridge University Press: September 2000) by David J. Lilja (Professor at U. of Minnesota) is a more gentle introduction than Jain's, which is more quantitative.
User Steps
Result Design
An important outcome of the design phase is how results will be organized and presented to various audiences. Results from the $800 SPECweb99 (v1.0, announced 1999) and SPECweb99_SSL (March 2002) pre-defined workload generators, which benchmark the number of WWW server connections per second, are summarized using this table format:
Design the tests
Examples of categories (and actions) include:
Each possible action during user sessions (the paths through the application) can be graphically depicted with lines when using the industry-common User Community Modeling Language (UCML).
Parentheses after the action indicate the likelihood of that action occurring.
Dotted lines under the action identify additional optional actions.
When Scott Barber first defined UCML in 1999, he also proposed a format for defining additional information, such as:
The parameters include how many virtual testers will be used and when. The factors used during testing define what is varied: variations in configurations and database-access scenarios for each iteration. Mock-ups of the statistics and graphs to be generated after each test (for each build) are created at this time. The production test physical environment of servers and networking devices is also assembled at this time, based on the same Installation Procedures as used during actual Deployment. To avoid delay later, it helps to identify early the techniques needed to handle complexities in the application or environment (such as the use of firewalls, fail-over, load balancing, session identifiers, cookies, XML/XSLT transforms, etc.). Web protocol agents do not include processing time for plug-ins like Macromedia Flash and Real players. For timings of how long Flash takes to paint the screen, you need an additional license from Mercury for the Flash protocol emulator. Tests of web servers typically include use of HTTP caching such as ...
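As an illustrative sketch (the script names, mix percentages, and counts are assumptions, not from any particular tool), a scenario definition can spell out how many virtual testers run each script and when they start:

```python
# Hypothetical scenario: script name -> (share of Vusers, think time in seconds)
SCENARIO = {"browse": (0.60, 8), "search": (0.30, 5), "checkout": (0.10, 12)}

def schedule(total_vusers, ramp_minutes, scenario):
    """Spread Vusers across scripts and across the ramp-up period."""
    per_minute = total_vusers / ramp_minutes
    plan = []
    for minute in range(1, ramp_minutes + 1):
        active = round(per_minute * minute)
        mix = {name: round(active * share)
               for name, (share, _) in scenario.items()}
        plan.append((minute, active, mix))
    return plan

for minute, active, mix in schedule(100, 5, SCENARIO):
    print(f"min {minute}: {active} Vusers {mix}")
```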
This document makes use of the terminology from the UML 2.0 Testing Profile specification v1.0 (July 7, 2005). This enables test definition and test generation based on structural (static) and behavioral (dynamic) aspects of UML models.
The UML 2 Testing Profile was developed from several predecessors: SDL-2000, MSC-2000, and TTCN-3 (Testing and Test Control Notation version 3), also published as ITU-T Recommendation Z.140. Developed during 1999-2002 at ETSI (the European Telecommunications Standards Institute), TTCN-3 is a widely accepted standard in the telecommunication and data communication industry as a protocol test system development specification and implementation language to define test procedures for black-box testing of distributed systems.
ETSI European Standard (ES) 201 873-1 version 2.2.1 (2003-02): The Testing and Test Control Notation version 3 (TTCN-3); Part 1: TTCN-3 Core Language.
J. Grabowski, D. Hogrefe, G. Réthy, I. Schieferdecker, A. Wiles, C. Willcock. An Introduction into the Testing and Test Control Notation (TTCN-3). Computer Networks, Volume 42, Issue 3, Elsevier, June 2003.
Neil J. Gunther
Xerox PARC & Pyramid (Fujitsu) alumnus, founder of Performance Dynamics, and developer of the PARCbench multiprocessor benchmark and the C-language open-source PDQ queueing model solver.
$90 The Practical Performance Analyst: Performance-By-Design Techniques for Distributed Systems (McGraw-Hill: February 1998). This has been obsoleted by $46 The Practical Performance Analyst, 2nd Edition (Authors Choice Press: October 2000). A complete rewrite is now underway as Guerrilla Capacity Planning: Hit and Run Tactics for Sizing UNIX, Windows and Web Applications (Springer-Verlag).
Provisioning Milestones
Time to ensure proper (coordinated) installation of hardware and software is chronically underestimated for performance projects. For whatever reason, it is often assumed that performance testers do not need the same amount of time as operations staff to install an application. Additionally, information about installation issues often does not get to performance testers. Yet even small mistakes in installation can invalidate test results. The installation milestones (below) are repeated for each hardware configuration (m) to be tested:
m.1.allocated > m.2.delivered > m.3.assembled > m.4.installed > m.5.configured > m.6.available > m.7.operational > m.8.benchmarked
Construction: Create and validate Speed Test scripts
This corresponds to SPE steps:
Repetitive load and stress tests automate the steps defined during preliminary performance testing. Actions automatically captured into a load testing script are modified in several ways. Changes to LoadRunner scripts include:
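One such change is parameterization: replacing values recorded as literals with data drawn from a pool, so each Vuser iteration submits different values. Here is a sketch in Python rather than LoadRunner's own C syntax (the file name and field names are assumptions):

```python
import csv
import itertools

# Hypothetical data pool: each iteration takes the next row of users.csv,
# which is assumed to have username, password, and region columns.
def data_rows(path="users.csv"):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    yield from itertools.cycle(rows)

TEMPLATE = "username={username}&password={password}&region={region}"

def parameterized_requests(count, rows):
    """Replace recorded literals with values drawn from the data pool."""
    for _, row in zip(range(count), rows):
        yield TEMPLATE.format(**row)

# Example (requires users.csv to exist):
# for body in parameterized_requests(3, data_rows()):
#     print(body)  # each iteration posts different credentials
```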
Levels of Scripting Capability
The IBM Rational Unified Process (RUP) for Performance tracks the progress of performance testing by the maturity of scripting assets. The purposes of test runs during a load testing project typically follow this sequence of increasing capability levels over time:
Possible impacts to performance
Here are the major variables to track the capability of load test scripts:
Here are the complexities of a load script:
Process Milestones
Models of Usage and Capacity
The objective of modeling is to create a mathematical model of the system's capacity. For example, create a spreadsheet such as Exch_Calc.xls from the Microsoft Capacity Planning and Topology Calculator to predict the scalability of an Exchange 2000 email messaging infrastructure deployment. The spreadsheet calculates the expected number of Windows 2000 Active Directory Global Catalogs, domains, and sites.
The model would take into account the software clients used to access mail, server transactions with each client, the hardware (number/speed of processors), and the physical deployment itself. Back-end "user-per-server" numbers are not very useful taken out of the context of the whole deployment.
As part of Microsoft's Dynamic Systems Initiative (DSI), which supports SOA (Service Oriented Architecture) through the Windows Communication Foundation (WCF, code-named Indigo) on Vista 2007 servers, Microsoft System Center Capacity Planner 2006 simulates deployment sizing through "what-if" analysis. This product uses a common, central SDM (Systems Definition Model) used by all System Center software packages, starting with Microsoft Operations Manager (MOM) 2005, built for use with the MS .NET Framework version 2.0.
To diagnose the root causes of performance problems in a Microsoft Windows Server 2003 deployment, Microsoft provides Windows Server 2003 Performance Advisor (6/17/2005), a .NET 1.1 Framework replacement for the 5/24/2004 Server Performance Advisor V1.0. These run on Windows 2003 SP1 (not Windows 2000 or Windows XP). It provides several specialized reports, including a System Overview (focusing on CPU usage, memory usage, busy files, busy TCP clients, and top CPU consumers) and reports for server roles such as Active Directory, Internet Information Services (IIS), DNS, Terminal Services, SQL, print spooler, and others.
TeamQuest analytic modeling software claims to find the optimal configuration based on business forecasts and to handle spikes in demand by experimenting with what-if analysis in a virtual environment. But I would not recommend them because they don't seem willing to talk to me.
Mercury Capacity Planning (MCP)
CDB (Capacity Data Base)
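Returning to the modeling spreadsheet idea above, here is a sketch of the arithmetic behind such calculators (the per-client megacycle costs below are invented placeholders, not Microsoft's actual coefficients):

```python
# Hypothetical model inputs (NOT Microsoft's actual coefficients):
# CPU megacycles consumed per user, by client access type
CLIENT_COST = {"outlook_online": 2.2, "owa": 3.1, "pop3": 1.4}

def users_per_server(client_mix, cpu_mhz, cpus, target_utilization=0.8):
    """Estimate supported users from the CPU budget and per-client cost.

    client_mix: {client type: fraction of users}, fractions summing to 1.
    """
    avg_cost = sum(CLIENT_COST[c] * share for c, share in client_mix.items())
    budget = cpu_mhz * cpus * target_utilization  # usable megacycles
    return int(budget / avg_cost)

mix = {"outlook_online": 0.7, "owa": 0.2, "pop3": 0.1}
print(users_per_server(mix, cpu_mhz=2800, cpus=4))  # -> roughly 3,895 users
```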
$55 Performance by Design: Computer Capacity Planning by Example (Prentice Hall: January 5, 2004) by Virgilio Almeida, Lawrence Dowdy, and Daniel Menasce
$52 Capacity Planning for Web Services: Metrics, Models, and Methods, 2nd edition (Prentice Hall: September 11, 2001) by Daniel A. Menasce and Virgilio A.F. Almeida
$17 Capacity Planning for Web Performance: Metrics, Models, and Methods (Prentice Hall: June 1998) by Virgilio A.F. Almeida and Daniel A. Menasce
IT Performance Management (Oxford/Burlington, MA: Butterworth-Heinemann, 2004; eBook ISBN 1417507810) by Peter Wiggers, Henk Kok, and Maritha de Boer-de Wit
Format data for Presentation
Refine Stress Test Scenario parameters to Identify Bottlenecks
Early detection of bottlenecks improves the efficiency of developers, so performance testing in parallel with application construction (rather than after deployment) can be very cost-efficient. Performance testers can make testing scenarios and scripts more realistic by refining scripts to be invoked on a random, sequential, or synchronized basis, emulating more and more complex (and negative/conflicting) scenarios:
These tests may be repeated for each set of installation options (such as different brands/capacities of hardware and software) and different configuration settings (support of different locales, or database tuning settings). Additional functionality can be tested as new builds arrive, or stubs and drivers can be created to simulate actual application functionality. These tests quantify the two basic parameters used to predict performance capacity:
Tuning Java
javaperformancetuning.com, by Jack Shirazi, has a complete list of books, resources, and everything else. Shirazi is the author of $45/31 Java Performance Tuning, 2nd Edition (O'Reilly: January 2003).
$50/25 Performance Analysis for Java Web Sites (Addison-Wesley: 2002, 464 pages) by Stacy Joines, Ruth Willenborg, and Ken Hygh, from consultants and developers at IBM Software Group at Research Triangle Park, North Carolina. [Review]
$32 J2EE Best Practices: Java Design Patterns, Automation, and Performance (Wiley Application Development Series: November 8, 2002) by Darren Broemmer
$31/6 Java Platform Performance: Strategies and Tactics (Addison-Wesley: May 2000) by Steve Wilson and Jeff Kesselman
Among white papers by Mercury Interactive: Diagnosing J2EE Performance Problems Throughout the Application Lifecycle presents techniques for delivering high-performance applications to production, managing and measuring application performance, and diagnosing the toughest J2EE problems throughout the entire application lifecycle. The paper examines the various types of performance issues at each stage of the lifecycle and which diagnostic tools and techniques can best resolve them.
Diagnosing results for Longevity Load Tests
The green line in the middle can represent response time. Some performance test tools can break the total average response time down into how much time was spent in each aspect of the environment (network, application server, database, etc.). Such analysis identifies performance bottlenecks, such as the capacity of a CPU, a network device, an application component, or a database tuning parameter. The result of this analysis is summarized and formatted for presentation to developers and management. Examples of recommendations include the tuning of run-time parameters on servers and network devices, or upgrading of hardware to meet expected loads.
Testing Various Configurations for Scalability
The top-most curved line represents the highest estimate of resource usage. The lowest estimates of usage are represented by the bottom trend line. For example, under the heaviest usage, an additional server should be added before actual usage reaches 100% at month 8, and another should be added before 200% is reached around month 30. However, if the lowest level of usage is actually encountered, no additional server is needed until month 24. These curves combine parameters determined during scalability testing multiplied by expected product sales growth estimates.
This capacity-performance 3D surface [from Neil J. Gunther] predicts user response time based on the number of "m" processors running at various levels of load (load factors).
Capacity planning saves money by avoiding the expense of too much unused capacity (in specific components or system-wide) and, on the other extreme, avoiding loss of profits from not having enough capacity to meet demand.
See also the Performance Engineering Laboratory operated by Dr. Liam Murphy at University College Dublin and Dublin City University, and the COMPAS Performance Prediction project at http://www.ejbperformance.org
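A sketch of the projection behind such curves (the current utilization and growth rates below are assumptions for illustration): compound a monthly growth rate until utilization crosses the thresholds where servers must be added:

```python
def months_until(threshold_pct, current_pct, monthly_growth):
    """Months until utilization, growing by a fixed rate per month, hits a threshold."""
    month, pct = 0, current_pct
    while pct < threshold_pct:
        pct *= 1 + monthly_growth
        month += 1
    return month

# Hypothetical: 40% utilization today, under two growth estimates
for growth, label in [(0.12, "highest estimate"), (0.04, "lowest estimate")]:
    add1 = months_until(100, 40, growth)   # add a server before 100%
    add2 = months_until(200, 40, growth)   # and another before 200%
    print(f"{label}: first server by month {add1}, second by month {add2}")
```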
Related Topics:
Load Testing Products
Mercury LoadRunner
Mercury LoadRunner Scripting
NT Perfmon / UNIX rstatd Counters
WinRunner
Rational Robot