Here are concise notes on using LoadRunner for performance testing. This is a companion to my pages on Vu Scripting, performance monitoring, performance tuning, and reporting.
Note: Links to documents that used to be here were removed after lawyers for Mercury Interactive, Inc. demanded their removal.

Although version 9.10 is now installed under "HP", a "Mercury" folder remains under Program Files\Common Files; its \TDAPI\Client folder contains the files TDCIntui.dll and tdclient.dll. These, the hidden folder C:\Config.Msi, the MacroVision folder (within Documents and Settings\All Users.WINDOWS\Application Data), and many other files remain after uninstall. Over three thousand entries also remain within the Windows Registry after uninstall.
| Application Product | Process Image Name | Process KB (V9.51) | Process KB (V9.0) | Process KB (V8.0) | File Size |
|---|---|---|---|---|---|
| Launcher | LRLauncherApp.exe | - | 15,840 | 16,288 | n/a |
| Virtual User Generator | VuGen.exe | 36,024 | 23,980 | 12,436 | 2,334,769 |
| Controller | wlrun.exe | - | 61,312 | 13,076 | 5,681,215 |
| Load Generator Agent | magentproc.exe | 2,684 | 3,336 | 3,236 | |
| | magentservice.exe | - | 3,496 | 65,536 | |
| | mdrv.exe | - | | | |
| Analysis | Analysisui.exe | 26,768 | 64,460 | 13,132 | 6,058,496 |
| Tuning Console | protune.exe | - | - | - | 3,403,833 |
| Utility | Description | File Size |
|---|---|---|
| perl5.8.0.exe | Perl interpreter | 20,535 |
| regtlb.exe | Registers the batch automation type library | 30,720 |
| sed.exe | GNU sed (gsed) version 2.05 | 55,296 |
| wdiff.exe | Compares text files | 197,632 |
Alex Arbitman's LR 7.8 Footprints.xls reports that to run Web requires __ per process and __ per thread.
| Protocol | Server | Client Program | Parameter | Notes |
|---|---|---|---|---|
| Web | WebTours\StartServer.bat | http://localhost:1080/mercuryWebTours | | |
| COM/DCOM | (Operating System) | samples\bin\frsui.exe | | |
| Winsock | sockfrs.exe | samples\bin\flights.exe | Winsock WinSockWeb | |
| ODBC | (MS Access) | samples\bin\flights.exe | ODBC_Access | |
| CORBA | samples\CorbaSamples\server.cmd & samples\CorbaSamples\server.bat | samples\CorbaSamples\client.cmd & samples\CorbaSamples\clientrecord.cmd | | Stuart Moncrieff's article on CORBA |
| RMI | samples\RMISamples\server.cmd & samples\RMISamples\server.bat | samples\RMISamples\client.cmd & samples\RMISamples\clientrecord.cmd | | |
According to CPT11877.doc, JDK 1.5 users need to contact Mercury Support for a patch to each specific LoadRunner version (7.6, 7.8 FP1, or 8.0); otherwise, error messages appear.
The Java sample apps use the "flight32lr" User Data Source with the Microsoft Access driver (*.mdb) in the USER DSN table in Data Sources (ODBC) on VuGen's local machine.
Additionally, the sample Java servers must be running before the client is started. This is done with "samples\RMISamples\server.cmd". Regarding that file:
Note the location of the LoadRunner class files I added to the default sample; they are prepended to the existing classpath.
Note that there are no spaces in the file path.
The Zip file is equivalent to a JAR file on UNIX systems.
Do not close the black command window, because the Java server runs within it.
CORBA and RMI Java clients are invoked with a command for Windows to start the java.exe program, as in the "samples\RMISamples\client.cmd" file. Note that the RmiFlights main class name is passed to java for it to load.
When recording Java with VuGen, a different command (such as the sample clientRecord.cmd) needs to be invoked, because VuGen needs to run within the JVM sandbox.
Instead of the web protocol's "Start recording" approach, Java VuGen scripts invoke Java functions within the Actions section. The "vuser_init" and "vuser_end" actions are not relevant within Java Vu scripts.
Internally, the cjhook.ini file's [EXC_SYSTEM_CL] section specifies which Java classes can be hooked; Java classes specified in the [SYSTEM_CL] section are not hooked.
The user.hooks file in the LoadRunner \bin folder is a general template and cannot be used as-is; it needs to be copied.
Unlike Microsoft Office applications, LoadRunner has not been programmed to allow individual components to be selectively uninstalled.
By default, VuGen creates a new user named after the current user name. For example, a user named "Tester" will have run results under the C:\Documents and Settings\ folder for a user named Tester.LOADTEST. VuGen automatically sets the Windows environment variable TEMP to %USERPROFILE%\Local Settings\Temp so that results are written to that user's subfolder \Local Settings\Temp. The full path for user Tester would be C:\Documents and Settings\Tester.LOADTEST\Local Settings\Temp.

KB article 11367 on "Manual collation of result set data" explains this in detail.
This table summarizes the typical settings for each type of run scenario:
| Menu | Setting | A. Speed | B. Contention | C. Overload | D. Longevity |
|---|---|---|---|---|---|
| Controller | # of Iterations | 1 only | Several | Infinite | |
| Run Options | Frequency of output: Sample once every | 1 second | 10 seconds | 1 minute | 5 minutes |
| Run-time Settings | Vusers | 1 only | max. licensed | # below "knee" | |
| | Logging | For debugging | No | | |
| | Think Time | None | Randomized Actual | | |
| | Continue on error? | No | Yes | | |
| | Network Speed Simulation | Maximum | No | | |
| | Browser (cache) emulation | No | Yes | | |
| | Content Checks | Yes | No | | |
| Schedule | Ramp-up: Load all Vusers simultaneously | Yes | | | |
| | Initialize Before Run? | No | Yes | No | |
| | Interval (seconds) | 4 | >4 | 30 or more | |
| Tools > Options > Monitors | Server Resource Monitors: Data Sampling Rate | 3 seconds (Default) | | | 5 minutes |
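Several of the run-time behaviors in this table have script-level equivalents. A minimal C Vu script sketch, assuming the Web protocol against the sample Web Tours server (the transaction name is hypothetical), using standard LoadRunner functions:

```c
Action()
{
    /* Script-level equivalent of the "Continue on error" run-time
       setting: 1 = continue the run when a function fails, 0 = stop. */
    lr_continue_on_error(1);

    lr_start_transaction("query_flights");   /* hypothetical transaction name */

    web_url("WebTours",                      /* sample Web Tours home page */
        "URL=http://localhost:1080/mercuryWebTours/",
        LAST);

    lr_end_transaction("query_flights", LR_AUTO);

    /* Replay of think time (as recorded, randomized, or ignored) is
       governed by the Think Time run-time setting: */
    lr_think_time(10);   /* seconds */

    return 0;
}
```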
When scheduled to Run until completion, the Quantity for a Scenario is the number of vusers running one at a time. When scheduled to Run for a period of time, the Quantity for a Scenario is the number of vusers running simultaneously. The specified time begins after the Ramp Up period, when all vusers have entered the Run state. This specified time, plus the time it took to get all vusers into the Run state, becomes the total Elapsed Time of a time-limited run.
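For example (hypothetical numbers): if ramping all vusers into the Run state takes 10 minutes and the scenario is set to Run for 01:00:00, the total Elapsed Time reported for the run would be roughly 70 minutes.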
To run a specific number of vusers simultaneously, set the Parameter file in the script (in VuGen) to Abort after reaching the end of the file. Scenario Run-Time Settings build on the Run logic defined within each script.
If the Run-Time Settings "Advanced Trace" checkbox is selected before the script is run, additional trace lines will appear in the output log.
One of the most common headaches with load testing is running out of hard disk space during a long run.
| Mode | Scenario Duration | Load Behavior |
|---|---|---|
| Scenario Scheduling: __ Schedule by Scenario | Within Duration tab: __ Run Until Completion; __ Run for ________ HH:MM:SS | Within Ramp Up tab: __ Load all Vusers simultaneously (default); __ Start __ Vusers every ___ HH:MM:SS. Within Ramp Down tab (if running for a limited duration): __ Stop all Vusers simultaneously; __ Stop __ Vusers every ___ HH:MM:SS |
| Group Scheduling: __ Schedule by Group (for each script) | Unknown duration | Defined per scenario group |

__ Initialize all Vusers before Run?
One way to determine an appropriate ramp-up time is to start all vusers simultaneously, then look at the rate at which Running Vusers drop off after processing (such as 10 users within a 15-second span).
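Using the drop-off rate in the example above: if about 10 vusers finish within each 15-second span, a matching schedule would be to Start 10 Vusers every 00:00:15.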
| Transactions | Secondary | System Resources (UNIX) | System Resources (Windows) |
|---|---|---|---|
| Runtime: Running Vusers + # Connections | Error Statistics | UNIX Load Avg | Win Threads |
| Transaction: Response Time (sec) | - | UNIX CPU Util | Win CPU Util |
| Total Trans/Sec | Per Second: Hits + Pages Downloaded + Connections + SSL | UNIX Paging | - |
| Throughput (bytes) | Network Delay | UNIX Disk Traffic | - |
These are usually the most important relationships under study. LR does not remember most scenario graph settings (4 graphs is the hard-coded default), so instead of building graphs from scratch, I start by opening and then changing my custom but standard scenario file. Delete graph definitions you never need to see; LR collects data for graphs even if they are not displayed.
By default, the Controller online monitor shows a maximum of 20 measurements for each graph. To increase this, go to the LoadRunner\dat\online_graphs directory and modify the value of MaxDispMeasurments= in the file controlling each type of graph (a sample edit follows the table):
Description | File Name |
---|---|
All | generalsettings.ini |
System Resource Graphs | online_resource_graphs.rmd |
Runtime Graphs | online_runtime_graphs.def |
Transaction Graphs | online_transaction_graphs.def |
Web Resource Graphs | online_web_graphs.def |
Streaming Media | online_web_graphs_mms.def |
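For example, to double the limit for Web Resource graphs, edit the key named above. This fragment is a sketch; the placement of the key within the file's sections may vary by version:

```ini
; LoadRunner\dat\online_graphs\online_web_graphs.def
MaxDispMeasurments=40
```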
Default counters for the System Resource, Microsoft IIS, Microsoft ASP, or SQL Server monitors are defined in the res_mon.dft file within the LoadRunner/dat folder. Its values can be pasted from the [MonItemPlus] section within scenario .lrs files.
UNIX Resources and some other graphs continue to be updated even after the test is done. So immediately after the scenario runs, right-click on the graph and freeze the values displayed, to lock in values associated with other graphs.
Installation tip: If the Web Resource Graph is blank, try re-registering .dll files by running MS-DOS Batch Files register_controller.bat and set_mon.bat in the LoadRunner\bin folder.
Actions are executed sequentially in the order shown in Run-time Settings.
However, transactions that start and stop between two observations will appear
to be running simultaneously even if they were actually executed sequentially.
An observation interval of 4 seconds is the shortest you can set for Controller on-line graphs (to prevent CPU-intensive graph refreshes from consuming the Controller machine). But Analysis reports will show more granularity than on-line graphs, down to 1 second.
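For example, this sketch (transaction names and the second URL are hypothetical) executes two transactions strictly one after the other; if both start and stop within a single 4-second observation interval, the on-line graph will nevertheless show them as overlapping:

```c
Action()
{
    lr_start_transaction("login");
    web_url("home",
        "URL=http://localhost:1080/mercuryWebTours/",
        LAST);
    lr_end_transaction("login", LR_AUTO);

    /* This transaction begins only after "login" has fully completed: */
    lr_start_transaction("find_flight");
    web_url("search",
        "URL=http://localhost:1080/mercuryWebTours/welcome.pl?page=search",
        LAST);
    lr_end_transaction("find_flight", LR_AUTO);

    return 0;
}
```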
| Called Service [Graph] | Protocol/Product | Template folder | Header | Notes |
|---|---|---|---|---|
| Custom / General | | lrc | lrun.h, global.h | |
| | | General-Vba | - | |
| | | General-Vb | - | |
| | | General-Js | global.js | |
| | | General-Java | - | |
| E-Business [Web Resource (eBusiness)] | | http, web, General-Js | as_web.h | |
| | | FTP | mic_ftp.h | |
| (Operating) [System Resource] | | - | - | |
| [Network] | | winsock WinSockWeb | lrs.h | |
| [Firewalls] | | - | - | |
| [Web Server Resource] | | - | - | |
| [Web Application Server] | | - | - | |
| Client/Server [Database Server Resource] | | Siebel_web | lrd.h, lrdtypes.h | |
| | | Oracle_NCA | orafuncs.h | |
| [Streaming Media] | | - | mic_media.h | |
| | | real | lreal.h | |
| [ERP/CRM Server Resource] | | Sapgui, SAP_Web | as_sapgui.h | |
| | | - | lrdsiebel.h | |
| | | baan | - | |
| Distributed Components [Java Performance] | | - | - | |
| [Application Component] | | com | - | |
| | | dotNet | - | |
| Enterprise Java Beans | | - | - | |
| [Application Deployment] | | Citrix | ctrxfuncs.h | |
| [Middleware] | | - | - | |
| Mailing | | POP3 | mic_pop3.h | |
| | | - | mic_smtp.h | |
| | | IMAP | mic_imap.h | |
| | | MAPI | mic_mapi.h | |
| Networking | | DNS | - | |
| | | LDAP | mic_mldap.h | |
| Application Services | | - | lrt.h | |
| Wireless | | I_Mode | - | |
| | | voiceXML | - | |
| | | wap | - | |
| | 327x (IBM mainframe) UNIX RTE (Remote Terminal Emulation) | rte | lrrte.h | |
| | F5 BIG-IP | - | - | |
Any of these can be set as hidden in the New Script dialog if the protocol is marked with "Hidden=1" (rather than "Hidden=0", not hidden) in the file .\dat\protocols\webjs.lrp.
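A sketch of the relevant fragment (the section name is an assumption; only the Hidden= key is confirmed above):

```ini
; .\dat\protocols\webjs.lrp
[Protocol]
Hidden=1   ; 1 = hidden in the New Script dialog; 0 = not hidden
```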
| Service | Summary Counter | Components |
|---|---|---|
| ASP Requests | Volume (Accepted & Completed) | |
| | Pipeline Queue Length | |
| | Residence Time: seconds transactions spend at the server | Wait time + Service time |
| | Utilization: percentage of time the server provides services | Waiting + Busy |
| | Throughput: rate of completions per second (valid/sec.) | |
Component counters in the right column should total to the value of the summary counter on the left; a total counter is broken down into detail counters. Metrics in parentheses, such as (Accepted), are not collected by the system because they can be derived by subtracting all other related counters from the total. ASP metrics do not include Residence time (the sum of Wait time and Service time) or Utilization.
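A worked example with made-up numbers: if the Requests Total counter reads 1,000 and the collected components are Rejected = 40 and Not Found = 10, the derived (Accepted) value is 1,000 - 40 - 10 = 950.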
The $800 SPECweb99 (v1.0, announced 1999) and SPECweb99_SSL (March 2002) pre-defined workload generators benchmark the number of WWW server connections per second that specific hardware configurations can sustain. They require a sustained throughput of 400 and 320 Kbps, respectively, in order for measurements to be considered conforming.
"An Analysis of Web Server Performance" (http://www.research.ibm.com/people/i/iyengar/ton04.pdf) by IBM Master Inventor Arun Iyengar.
The free Mindcraft WebStone 2.5 benchmark improves on the 1995 version originally from Silicon Graphics by also simulating the activity of hundreds of web clients on a computer making GET calls to CGI and server API programs as well as static HTML pages. Its run rules currently do not support POST, SSL, Authentication, HTTP 1.1, HTTP 1.0 keep-alives, Cookies, or dynamic workloads with database access.
Sample test results are available from runs using the Web Server Stress Tool from network monitoring company Paessler.
The TPC (Transaction Processing Performance Council) TPC-W Web eCommerce benchmark (first announced July 2000, with v1.8 published Feb 2002) measures the number of Web Interactions processed Per Second (WIPS) from a "Web Interaction Mix" of shopping (WIPS), browsing (WIPSb), and ordering (WIPSo) transactions, simulating a retail bookstore with 14 web pages, including shopping cart functionality.
The top audited price/performance result as of 01/28/02 is a TCO-based range of US$24.50 to $277.08/WIPS using IIS 5 and SQL Server 2000 within Windows 2000 Advanced Server on a Dell server. Audited performance characteristics are detailed by web server, web cache, database server, and image server.
The benchmark measures scalability by providing a Remote Browser Emulator (RBE) executable that (without client caching) simulates 2,880 different users accessing databases at various scales (10,000 or 100,000, etc. unique product items within a schema of 8 tables) on database servers with 2 to 4 gigabytes of memory (with a 30-second non-SSL cache time-out). So TPC-W requires a network topology that supports several hundred Mbytes/sec of data. User think time is based upon a distribution with an average of 7 seconds and a maximum of 70 seconds.
Article by Wayne D. Smith, Intel Corporation
| Protocol | Metric | Measurement Description | Infrastructure Technologies |
|---|---|---|---|
| HTTP/S | [Client Time] | The average amount of time that passes while a request is delayed on the client machine due to browser think time or other client-related delays. This does not include time for Flash to paint graphics (which can take many seconds). | - |
| HTTP/S | [Connection Time] | The time needed to establish an initial connection with the Web server hosting the specified URL. This is a good indicator of problems along the network. It also indicates whether the server is responsive to requests. | - |
| HTTP/S | [DNS Resolution Time] | The time needed to resolve the DNS name to an IP address. If the hosts file contains the IP/host-name pair under test, this should be very quick. Otherwise, the DNS server specified in the TCP/IP Properties is used. The DNS Lookup measurement is a good indicator of problems in DNS resolution, or problems with the DNS server. | - |
| HTTP/S | [Error Time] | The average amount of time that passes from the moment an HTTP request is sent until the moment an error message (HTTP errors only) is returned. | - |
| HTTP/S | [First Buffer Time] | The time that passes from when the initial HTTP GET/PUT request is sent until the first buffer (8 KB) is successfully received back from the Web server. This measurement is a good indicator of Web server delay as well as network latency. (Time to First Buffer) | - |
| FTP | [FTP Authentication Time] | The time taken by the FTP server to authenticate the client before it starts processing the client's commands. This measurement applies only to communications using the FTP (not HTTP/S) protocol, so HTTP transactions always show zero (0) for this metric. | - |
| HTTP/S | [Receive Time] | The time that passes between when the first byte and the last byte arrive from the server, at which point downloading is considered complete. The Receive measurement is a good indicator of network quality (look at the time/size ratio to calculate the receive rate). This is the metric reported by the LoadRunner call longLastByteMSecs = web_get_int_property(HTTP_INFO_DOWNLOAD_TIME); | - |
| HTTPS | [SSL Handshaking Time] | The time taken to establish a Secure Socket Layer connection (includes the client hello, server hello, client public key transfer, server certificate transfer, and other stages). | - |
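The download time and size behind the Receive Time metric can also be read from within a script to compute the receive rate; a minimal C sketch using standard LoadRunner functions (the URL is the sample Web Tours server):

```c
Action()
{
    int downloadTimeMSecs;
    int downloadSizeBytes;

    web_url("home",
        "URL=http://localhost:1080/mercuryWebTours/",
        LAST);

    /* Both properties reflect the most recent HTTP operation: */
    downloadTimeMSecs = web_get_int_property(HTTP_INFO_DOWNLOAD_TIME);
    downloadSizeBytes = web_get_int_property(HTTP_INFO_DOWNLOAD_SIZE);

    if (downloadTimeMSecs > 0)
        lr_output_message("Received %d bytes in %d ms = %.1f KB/sec",
            downloadSizeBytes, downloadTimeMSecs,
            (downloadSizeBytes / 1024.0) / (downloadTimeMSecs / 1000.0));

    return 0;
}
```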
Each column represents a different hit (resource). The Analysis module reads and saves its settings in the LRAnalysis70.ini file within the Windows directory (C:\Winnt, or C:\Windows on Windows XP).
The formula above can be illustrated with a surface chart created using Microsoft Office Excel 2003.
| Major Metric | Color | Position |
|---|---|---|
| Running Vusers | beige | upper left corner |
| Average Transaction Time | green | |
| Data Throughput | purple | |
| Transactions Per Second | blue | |
Excel files generated with the HTML report are scaled, so either change all scales to 1 before creating the HTML report, or multiply the numbers in the spreadsheet accordingly.
LoadRunner competes with several other load testing products and services.