Hyperformix Mercury Capacity Planning (MCP)

Here are my notes taken while investigating the internals of Performance Engineering.
Overview: What is the big deal?
Hyperformix software's IPS (Integrated Performance Suite) gives capacity planners the foresight to reduce "firefighting" through disciplined capacity management. The product enables more accurate quantification of the "trade-offs" between performance and cost among alternatives. In this sample XY scatter plot, alternative "A" was found to actually cost more than the "current" configuration yet perform worse. The product is worth $135,000 because it is used to identify alternatives such as "B" (which both costs less and does more) through "predictive simulation", examining various "what-if" conceptual configurations more quickly and less expensively than installing actual hardware.
"Systems are now too complex for analysts to predict without simulation modeling."
The Methodology: Dynamic Modeling

Hyperformix models are not merely accounting formulas in a static spreadsheet. Hyperformix modeling is "dynamic" in that 1) predictions are obtained by "running" 2) a modeling program that predicts what happens, moment by moment, over simulated time in response to a workload (the arrival rate of requests). The impact on different hardware within a "virtual data center" can be simulated because Hyperformix has amassed 3) a proprietary (separately licensed) Hardware Model Library containing the results of SPECint tests and other characteristics of over 2,000 hardware components. With the Hyperformix Performance Engineering methodology, simulation run results are generated in the same format as data parsed from actual runs, so that the 4) "Visualizer" utility can "validate" simulated results against real measurements in a bar chart report. Measurements from real runs are obtained by parsing 5) "raw" logs captured from running actual systems, summarized into 6) an MS-Excel spreadsheet called a "profile" by 7) proprietary Excel add-in code called the Hyperformix "Application Model Generator" (AMG). The 8) "Profiler" utility parses a wide variety of logs, including 9) result files from LoadRunner runs, thanks to 10) summarization and filtering performed before creating 11) the data used as the basis of simulation runs. Fine-grained "behaviors" of an application are coded in Java-like 12) Application Definition Notation (ADN) included in Modeler runtime code.
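To make the "dynamic" point concrete, here is a minimal sketch (in Python, not Hyperformix code) of a single-server discrete-event simulation, assuming exponential inter-arrival times and a fixed service time. The response-time prediction emerges from stepping through simulated time against a workload arrival rate, not from a static formula:

```python
import random

def simulate(arrival_rate, service_time, sim_seconds=3600, seed=42):
    """Toy single-server queue: predicts mean response time by stepping
    through simulated time rather than by a closed-form spreadsheet formula."""
    random.seed(seed)
    clock, server_free_at = 0.0, 0.0
    response_times = []
    while clock < sim_seconds:
        clock += random.expovariate(arrival_rate)      # next request arrives
        start = max(clock, server_free_at)             # wait if server is busy
        server_free_at = start + service_time          # hold the server
        response_times.append(server_free_at - clock)  # queueing + service
    return sum(response_times) / len(response_times)

# What-if: double the workload and compare predicted mean response times.
for rate in (5.0, 10.0):   # requests per second
    print(rate, "req/s ->", round(simulate(rate, service_time=0.08), 3), "sec")
```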
Annoyances
Installation

At the time you read this, the version may have moved on from "3.3", created May 2005. Hyperformix offers a class on "Transition to 3.4". The LrrTransSetup program by default does not install to the "Mercury Interactive" folder within Windows' "Program Files", but to "Mercury". By default, the setup program creates a "Mercury" folder in "Program Files", which is what LoadRunner 8.1 also does. The table below lists some of its sub-folders used to store the 150,428 bytes used during installation.
As of April 17, 2006, the Knowledge Base at Mercury Support had no articles associated with this product.
Hyperformix support line:
The Navigator
ID | Folder | Start > Programs > Mercury MCP3.3 | Executable(s) | Note
---|---|---|---|---
N | Navigator2.3 | Navigator2.3 | navigator.exe, Update10To11.exe, \bin\CheckUserAccount.exe, regtlb.exe | MCP Name: "Performance Engineering Project Manager"
A | AMG4.0.1 | Applications, AMG 4.0.1 | amg.exe | Opens a command window and invokes MS-Excel with the custom "Application Model Generator" menu.
V | Visualizer3.0 | Applications, Visualizer 3.0 | visualizer.exe | MCP Name: "Graphical Reporting Tool". Views, saves, and prints statistical reports from Modeler simulation results in multiple formats.
- | Modeler4.6\Program\ | Applications, Modeler 4.6, Modeler | Modeler.exe | -
W | | Modeler Workflow Analyzer | workflow.exe | -
- | ? | ? | csestrat.exe, multipadex.exe, querystats.exe, runexcel.exe, xdocgen.exe, xexport.exe, ximport.exe | -
S | Modeler4.6\ScenarioManager\ | Applications, Modeler 4.6, Scenario Manager | SM.exe | MCP Name: "Model Execution Manager". Creates a series of simulation runs, each with different values to parameterize various projected system modifications.
- | | Register Service | SMReg.exe | -
- | | Remove Service | SMReg.exe remove | -
- | | Start Service | SMReg.exe start | -
- | | Stop Service | SMReg.exe stop | -
- | | Change Service Password | SMReg.exe chpwd | -
- | Modeler4.6\ScenarioManager\server | ? | CheckUserRights.exe, SMServer.exe, ServerLaunch.exe | Manages multiple runs containing different experimental settings for variables and constants, parameterizing the effect of a variety of projected system modifications.
- | Profiler3.6 | Applications, Profiler 3.6, Profiler | RunProfiler.exe | MCP Name: "Data Analysis Tool"
r | | Profiler Workflow Analyzer | wfa.exe | -
- | ? | ? | PList.exe, PerfMonConv.exe, Profiler.exe, ProfilerLM.exe, Recycle.exe, Unicoder.exe, automergensf.exe, datafilter.exe, dice.exe, loadrunner2dice.exe, recap.exe, retier.exe, scrubips.exe, windowsort.exe, csh.dll, cmax20.dll, lm.dll, Perl56.dll | Profiler 3.6
- | ? | ? | beautifycsv.exe | Reformats and reorganizes selected data files in CSV format, then opens them in Microsoft Excel.
- | ? | ? | mergebfsummary.exe, mergecolcsv.exe, mergecsv.exe, mergeresourcedata.exe, mergetranssummary.exe | -
- | ? | ? | profilercharts.exe | Creates Excel charts from selected data files in CSV format.
- | ? | ? | lrrtranslate.exe, lrrtranslatecl.exe | -
- | Switchboard1.0 | - | switchboard.exe | Handles communication between Navigator or Modeler to and from Visualizer.
- | UserPubs3.3 | - | - | A central documentation folder for all modules.
- | LoadRunner | - | LrrTranslate.doc, license.rtf, lrrtranslate.exe, lrrtranslatecl.exe | lrrtranslatecl.exe is the console version of lrrtranslate.exe.
- | Tools | - | - | "Installation Integrity Checker" is not installed in Mercury/Tools as described in the pdf. It creates and automatically opens Outlook to generate an email containing its output log file.
AMG
Modeler ADN (Application Definition Notation) code defines classes and interfaces. ADN is the object-oriented programming language built into Modeler. ADN code describes software processes and controls simulation runs in Modeler simulation models. AMG profiles are used to generate an ADN include file that contains transaction classes defining workloads and business functions.

Avoid putting your own files within the installation folder. Inevitably, when versions change, you would need to figure out which files to save, or end up spending frustrating hours debugging why the new version doesn't work.

Under "Third Party Applications", you can install "J2SE" and "Adobe Reader". The setup program installs these files in C:\WINDOWS\system32\. The setup program also installs an empty "flexlm" folder in the C:\ root.

File Extensions

The setup program installs these file name extensions:
No file extensions are created for:
• Visualizer project (.vop) files
• Navigator project (.npj) files
• Scenario (.sm) files
• Old scenario (.sem) files
Licensing and Invocation
Copy the Host ID and Machine Name into an email to support.Mercury.com to obtain a license key. If you switch NIC cards on your laptop or swap hard drives, mention it, because the Host ID is generated from the machine's hard disk serial number and MAC address. Separately licensed are the Hyperformix Application Modules for Oracle 7, 8, or 9; PeopleSoft 8; SAP; and WebSphere Application Server. The *.lic file should be copied into folder C:\Program Files\Mercury\License.
Once you are "legal", you can invoke executable modules without getting a license-error pop-up.

As I understand it, Mercury's MCP product does not include the Hyperformix Data Manager product, a central repository for storing historical heterogeneous data for reuse. Mercury's MCP product also does not include the Hyperformix Capacity Manager product, which predicts the impact of growth in workload, modeled at a higher level than the MCP product. Unlike its competitor TeamQuest, Capacity Manager slices and dices workload by business unit. Metron's Athene is another competitor.
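Since the Host ID is derived from the MAC address and hard disk serial number, it is worth capturing both raw values in the email to support. A hedged Python sketch for a Windows machine (the actual Host ID algorithm is proprietary and not reproduced here):

```python
import subprocess, uuid

# MAC address of a NIC on this machine, as a 12-digit hex string.
mac = format(uuid.getnode(), "012X")

# Volume serial number of drive C:, as reported by the Windows "vol" command.
vol = subprocess.run("vol C:", shell=True, capture_output=True, text=True).stdout

print("MAC address:", mac)
print(vol.strip())   # paste both values into the license-request email
```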
Hyperformix has $14.5 million in annual sales. 800-759-6333. 4301 Westbank Drive, Bldg. A, Austin, Texas 78746 (west of downtown, north off Hwy 1 on S. Capital of Texas Hwy 360), 14.3 miles from Austin-Bergstrom International Airport (AUS).
Leading its 85 people are:
Sample Project Data
The sample projects are eComm3, IntelligentNetworkingModels, LoadBalance, SimpleDemo, and WorkloadModel.

eComm3 demonstrates a multi-tier web eCommerce system created by combining the hardware infrastructure from the Modeler "eComm3_Topology.csm" sample file with the application and workload specification from the Application Model Generator "eComm3 Profile.xls" sample file. There are two input workloads: 50 local users and 450 remote users. The remote user transactions arrive from the Internet via a T1 connection; the local user transactions arrive directly via a local Ethernet.
C:\Program Files\Mercury\Visualizer3.0\Samples\eComm3\eComm3.vop
C:\Program Files\Mercury\Visualizer3.0\Samples\eComm3 with Performance Objectives\eComm3.vop

These show mean utilization at 100% for the Network component.

IntelligentNetworkingModels contains four models:

NetworkTrafficBalancingI - Routing Path Distribution illustrates the impact of using network traffic balancing when routing. The impact is demonstrated by running the same model under several different scenarios:
NetworkTrafficBalancingII Port Distribution illustrates the impact of using network traffic balancing when routing. The impact is demonstrated by running these scenarios:
NetworkTrafficBalancingII - Priority-based Routing illustrates the impact of using priority-based routing. The impact is demonstrated by running the same model under two different scenarios:
UserDefinedRoutingTables-LeastHops illustrates how to define and use custom routing tables. The default routing table uses a bandwidth-based criterion (much like the OSPF protocol) when calculating routing paths. In this example, a least-hops criterion is defined, as in the Routing Information Protocol (RIP). The impact of using the custom routing tables instead of the default routing table is demonstrated by running the same model under two different scenarios:
LoadBalance illustrates the use of load-balancing techniques in an environment of several identical client workstations connected by an Ethernet to a pair of servers performing a workload that reads and writes a message. Semaphores are used to synchronize threads in the server process.
WorkloadModel illustrates modeling applications at a high level. It assumes that detailed modeling of the databases is not required. The environment for this model consists of a token ring with 225 workstations, a mid-range data center which provides database services, and a mainframe which handles remote procedure calls. The mid-range data center and the mainframe are on separate Ethernets. The Token Ring and Ethernets are all connected by routers to a wide-area network.

C:\Program Files\Mercury\Visualizer3.0\Samples\Application Profile Report\Application Profile Report.vop
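The routing criteria contrasted in UserDefinedRoutingTables-LeastHops can be sketched outside Modeler. The Python sketch below uses a small hypothetical topology and picks one path by hop count (RIP-like) and another by the sum of 1/bandwidth link costs (OSPF-like):

```python
# Hypothetical topology: node -> {neighbor: link bandwidth in Mbps}
links = {
    "Client": {"R1": 100, "R2": 10},
    "R1":     {"Client": 100, "R2": 1000, "Server": 10},
    "R2":     {"Client": 10, "R1": 1000, "Server": 100},
    "Server": {"R1": 10, "R2": 100},
}

def all_paths(src, dst, path=None):
    """Enumerate loop-free paths from src to dst by depth-first search."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in links[src]:
        if nxt not in path:
            yield from all_paths(nxt, dst, path)

def cost(path, metric):
    if metric == "least_hops":                            # RIP-like criterion
        return len(path) - 1
    return sum(1.0 / links[a][b] for a, b in zip(path, path[1:]))  # OSPF-like

for metric in ("least_hops", "bandwidth"):
    best = min(all_paths("Client", "Server"), key=lambda p: cost(p, metric))
    print(metric, "->", " > ".join(best))
```

With these made-up bandwidths the two criteria choose different paths, which is exactly the effect the sample model demonstrates.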
Navigator Method Steps
From within the Navigator, MCP users can start LoadRunner. Scaling analysis identifies bottlenecks by scaling up the workload on a "benchmark" system as-is. This measurement provides an initial model for tradeoff analysis. Configuration tradeoff analysis uses an initial model built from measurements to analyze the ability of different hardware and software configurations to handle various workloads. Method templates are within C:\Program Files\Mercury\Navigator2.3\MethodTemplates

There are the MCP "Basic Integration" and MCP "Automated Integration" methods. The MCP Automated Integration.mtp method template has this sequence for the sample "FMStocks":

2. Data Collection and Analysis
3. Model Construction, Verification, and Validation
4. Scalability Scenario Analysis
Basic
LoadRunner
Modeler
Profiler
Scenario Manager
Data Flows
In this chart, "raw" data is processed at the bottom, combined in the middle, and finally displayed graphically by the Visualizer at the top. First, various utilities reformat logs from their native format into a common format. Hyperformix has defined separate common formats for resource and network data. Reformatting utilities are called Profiler utilities because the Profiler needs them to create its reports and diagrams. Some Profiler reports are imported into the profile spreadsheet using the Application Model Generator Excel add-in. At the top, the "Visualizer" creates reports and graphs as HTML and graphic files from statistics generated by simulation runs conducted by the "Modeler". Hyperformix uses the Performance Manager/Monitor/Agent for zooming within graphs within HP OpenView.
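As a rough illustration of the first reformatting step, here is a hedged Python sketch that maps hypothetical sar-style raw lines onto a generic one-metric-per-row CSV layout (the real RSF column set is defined by Hyperformix and is not reproduced here):

```python
import csv

# Hypothetical raw sar-style lines: "HH:MM:SS  %usr  %sys  %idle"
raw_lines = [
    "10:00:01  42.1  10.3  47.6",
    "10:00:31  55.7  12.0  32.3",
]

# Assumed common layout (not the licensed RSF spec): one metric per row.
with open("cpu_common.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "host", "resource", "metric", "value"])
    for line in raw_lines:
        ts, usr, sys_, idle = line.split()
        writer.writerow([ts, "unixhost01", "CPU", "pct_busy",
                         round(float(usr) + float(sys_), 1)])
```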
Hardware Components Typology
Hardware Model Library *.txx files are stored in a proprietary format because they belong to the separately licensed Hyperformix Hardware Model Library of over 2,000 hardware components.
Model Components
Workload components with an arrival pattern of requests made by the application's users, as measured by:
X Axis | by | Response Time Seconds | Throughput rate per sec. | Utilization | Count | ||||
---|---|---|---|---|---|---|---|---|---|
Mean | 90th | Max. | % | Bytes | Time | ||||
Subsystem tier (e.g., "client", "Web", "App", "DB") | - | - | - | - | - | - | |||
Business Functions | [stacked by Subsystem] | - | - | - | - | ||||
[LoadRunner transactions] | - | - | - | - | - | - | - | ||
[Measured vs. Modeled] | - | - | - | - | - | ||||
I/O [Read vs. Write Size] | - | - | - | - | - | - | - | ||
Network [Request vs. Reply Size Wire] | - | - | - | - | - | - | - | ||
Run Number | - | - | - | ||||||
stacked by Transaction Details | - | - | - | - | - | - | - | ||
Number of Vusers | - | - | - | - | |||||
Response Time vs. Throughput (functions per sec) | - |
The Visualizer
C:\Program Files\Mercury\Visualizer3.0\lib\DefaultChartProperties.vop
C:\Program Files\Mercury\Visualizer3.0\Samples\Scenario with 18 Runs\Modeler Results 18 Runs.vop
C:\Program Files\Mercury\Visualizer3.0\Samples\BF Resource Summary\BF Resource Summary.vop
C:\Program Files\Mercury\Visualizer3.0\Samples\T2 Validation Report BF_ViewSummary\BF_ViewSummary T2 Validation Report.vop
C:\Program Files\Mercury\Visualizer3.0\PluginTemplates\visualizer.vop
Questions Answered

The Navigator's System Scalability wizard asks the same questions I ask in my performance reports.
C:\Program Files\Mercury\Visualizer3.0\Samples\Scenario with 18 Runs\Modeler Results 18 Runs.vop
Performance objectives (PO) are defined in the Visualizer project. Visualizer automatically attaches "Model Bottleneck Detection" reports to a component wherever it detects a bottleneck in the model. These reports use the pre-defined "MBDException.rox" and "MBDException Multiple Run.rox" templates from the Templates/Hidden directory where Visualizer is installed. Scenario Manager scenario definitions point to Modeler simulation data. Switchboard.exe handles communication between Navigator or Modeler to and from the Visualizer. Although numbers are reported to four significant digits in the raw simulation output, Excel and Visualizer round to three digits.
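Bottleneck flagging of this kind amounts to comparing simulated statistics against the stated performance objectives. A minimal Python sketch, with made-up thresholds and results:

```python
# Hypothetical performance objectives (PO) and simulated mean utilizations.
objectives = {"WebCPU": 0.70, "AppCPU": 0.75, "DBDisk": 0.60, "T1Link": 0.50}
simulated  = {"WebCPU": 0.42, "AppCPU": 0.81, "DBDisk": 0.58, "T1Link": 0.97}

for component, limit in objectives.items():
    util = simulated[component]
    status = "BOTTLENECK" if util > limit else "ok"
    print(f"{component:8s} utilization {util:.0%} (objective {limit:.0%}) {status}")
```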
The Model name specified in the Application worksheet is used to create a file named
The Application name specified in the Application worksheet is used to create a file named
The .adn include file contains functions that create run-time query and user-defined statistic objects at model initialization time.
Workload Components
Note that dropped file names do not include drive and file paths.
T1

C:\Program Files\Mercury\Visualizer3.0\Templates\Profiler templates\T1 Validation Report.rox creates a verification report for a T1 model constructed from only a network trace, and compares a T1 transaction with the output of the model created from that summary.
The T1 report compares several T1 runs, looking for repeatability in the T1 tests and for runs with outlying data that should be excluded. Only one of the runs is chosen for use in a model.
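A validation report like this boils down to a measured-versus-modeled comparison per transaction. A minimal Python sketch of the percent-error check, with invented numbers:

```python
# Hypothetical measured vs. modeled mean response times (seconds).
measured = {"Login": 1.42, "ViewSummary": 0.87, "Logout": 0.31}
modeled  = {"Login": 1.51, "ViewSummary": 0.80, "Logout": 0.33}

for txn, real in measured.items():
    sim = modeled[txn]
    error = 100.0 * (sim - real) / real
    flag = "" if abs(error) <= 10 else "  <-- re-examine model inputs"
    print(f"{txn:12s} measured {real:5.2f}s  modeled {sim:5.2f}s  {error:+5.1f}%{flag}")
```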
T2

"T2" exercises a single business function in runs at varying (but steady) levels of load (from 20% to 80% CPU utilization, with no contention for CPU and disks).

C:\Program Files\Mercury\Visualizer3.0\Samples\Business Function Profile\Business Function Profile.vop
However, this test requires the automatic and repetitive submission of a single business function. The results of the T2 test provide estimates of the resource usage of a single business function; the computed resources from one of the runs need to be chosen for use in a model. The test takes as input several resource summary files, one for each load level. Navigator creates these files by dicing a single Resource Summary File into several files, one per load level. Results from one of these runs are combined with network trace data and hardware resource usage data (also called "service demand") to create the Business Function Summary.

C:\Program Files\Mercury\Visualizer3.0\Samples\T2 Run Comparison BF_Login\BF_Login T2 Run Comparison .vop
This report was created by using file T2 Validation Report.rox
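The per-business-function resource estimates from T2 runs amount to service demands. Assuming the usual Service Demand Law (demand = utilization / throughput), a hedged sketch of the arithmetic with made-up load levels:

```python
# Hypothetical T2 measurements at several steady load levels.
# Each tuple: (business functions per second, CPU utilization 0..1)
t2_runs = [(5.0, 0.21), (10.0, 0.39), (15.0, 0.61), (20.0, 0.80)]

# Service Demand Law: D = U / X (seconds of CPU per business function).
demands = [util / rate for rate, util in t2_runs]
for (rate, util), d in zip(t2_runs, demands):
    print(f"{rate:4.1f} BF/s at {util:.0%} CPU -> {d * 1000:.0f} ms CPU per BF")

print("mean service demand:", round(sum(demands) / len(demands), 4), "sec")
```

If the per-run demands agree (as here), the model input is trustworthy; a run whose demand diverges is a candidate for exclusion.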
T3

"T3" tests stress a mix of business functions or business processes, driving the system's resource usage up to measurable levels (such as 25, 50, 75, or 100 vusers).

C:\Program Files\Mercury\Visualizer3.0\Samples\T3 Test Validation Report\T3 Test Validation Report.vop
C:\Program Files\Mercury\Visualizer3.0\Samples\T3 Test Run Comparison\T3 Test Run Comparison .vop
T3 Validation Report.rox creates a verification report for a T3 test.
Distribution Functions
"Raw" Log Files

Raw text log files created by network monitoring programs need to be transformed and filtered into NSF files by the Profiler:
SAR files from UNIX machines should be saved in ... format.

The MCP method templates in Navigator rely on Profiler to convert LoadRunner performance data results files into Resource Standard Format (RSF) data analysis files. These RSF files, in comma-separated value (CSV) format, can then be used as building blocks for an AMG system profile used to construct a Modeler simulation model.

LoadRunner results data is stored in one of the following formats:
• LRR, where performance data is spread across numerous text files
• LRA, a database constructed from one binary (.mdb) file and one text file

To integrate LoadRunner with MCP, Navigator and Profiler use the LoadRunner Results Translator program to create an .lrv file from LoadRunner performance data results. An .lrv file serves as a single-file bridge between LoadRunner performance data results and a Profiler RSF data analysis file.

To integrate MCP with LoadRunner, you must install the LoadRunner Results Translator software on the following computers:
• The local computer where you install MCP
• Any remote computer running LoadRunner to generate results files that you plan to use for a Navigator-driven performance analysis project

ISM's Perfman Capacity Planning and Performance Management (CPM) software

LoadRunner Result Files

MCP accepts both types of LoadRunner result files:
Resource Standard Format (RSF): the Profiler CSV file format for hardware resource logs. After you specify a hardware resource log as input in Profiler or Navigator, Profiler generates an RSF file to be used by its filtering and data-conversion modules.

Network Standard Format (NSF): the Profiler CSV file format for network trace data. After you specify a network trace file as input in Profiler or Navigator, Profiler generates an NSF file to be used by its filtering and data-conversion modules.
LoadRunner Translation
The folder for LoadRunner 8.1 is C:\Program Files\Mercury\LoadRunner\bin. So after you open a command window, cd to that folder, then run lrrtranslatecl.exe with the options listed below (see the example after the list).
lrrtranslatecl.exe looks for the *.eve files specified within the *.lrr file you provide, so if you have moved files from a Controller, make sure to recreate the exact path. Command Line Options:
-in <filepath> : Full path to .LRA or .LRR file
-out <filepath> : Full path to output file (default: .LRV)
-delim <delimiter> : Output file column delimiter (default: \t)
-style <style> : Conversion style -- plain (default), line#
-tz <mode> : Timezone of timestamp -- local (default), original
-na : Fill empty entries with 'n/a' (default)
-nona : Fill empty entries with 0
-exclude <types> : Exclude type of data: t - transaction, s - server metric, d - datapoints, w - web metrics, v - vuser status
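A typical invocation from the command window, using the options above (the result-file paths are placeholders only):

```
cd "C:\Program Files\Mercury\LoadRunner\bin"
lrrtranslatecl.exe -in "C:\Results\Run1\Run1.lrr" -out "C:\Results\Run1\Run1.lrv"
```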
Simulation Runs
A Scenario Manager scenario (.sm) file with a multiple-run scenario contains multiple .stx files. The Visualizer project maintains references to the .stx files rather than to the .sm file.
System Resource Usage Metrics
Although numbers are reported to four significant digits in the raw simulation output, Excel and Visualizer round to three digits.
Transaction Details
Transaction Summary Reports
By transaction:
The mean for time-persistent continuous statistics (such as memory usage) is time weighted -- each value is multiplied (weighted) by the proportion of simulation time where the statistic equals that value.
The mean for discrete statistics (such as the response time for a request) is the sum of the individual sample values divided by the count of values.
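Both averaging rules can be made concrete with a small Python sketch (the sample values are invented):

```python
# Discrete statistic: response times of individual requests (seconds).
response_times = [0.8, 1.1, 0.9, 2.4]
discrete_mean = sum(response_times) / len(response_times)

# Time-persistent statistic: (value, seconds the statistic held that value).
memory_usage = [(512, 30.0), (768, 60.0), (640, 10.0)]   # MB over 100 sim-seconds
total_time = sum(dt for _, dt in memory_usage)
time_weighted_mean = sum(v * dt for v, dt in memory_usage) / total_time

print(f"discrete mean response time: {discrete_mean:.2f} s")      # 1.30 s
print(f"time-weighted mean memory:   {time_weighted_mean:.1f} MB")  # 678.4 MB
```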
This ADN master input file also contains statements to unconditionally include several named packages:
The ADN language uses Java syntax for comments, operators, object modifiers, etc.
Statements specifically for simulation include: Startup, Stop, Warmup, Thread, Execute, Receive, and Send.
ADN "Thread" statements refer not to operating system threads, but to the simulation of system processes. Threads execute behaviors and functions. Parent threads are started by a "Startup" statement or a "Startup" clause within a "Process" or "User Behavior" "call" statement. Child threads are started by a "Thread" statement or a "Thread" clause in a "Receive" statement.
Threads specified within a Join statement run concurrently. Threads specified within a Fork statement wait until the longest thread created by the Fork statement concludes.
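The parent/child semantics read much like ordinary fork/join concurrency. The analogy below is plain Python threading, not ADN; it only illustrates a parent starting several child threads and resuming after the longest-running one concludes:

```python
import threading, time

def child(name, seconds):
    time.sleep(seconds)              # stands in for simulated work
    print(f"{name} finished after {seconds}s")

# Parent "forks" three children, then waits for the longest one to conclude.
children = [threading.Thread(target=child, args=(f"child{i}", s))
            for i, s in enumerate((0.2, 0.5, 0.1))]
for t in children:
    t.start()                        # children run concurrently
for t in children:
    t.join()                         # parent resumes only after the last one
print("parent continues")
```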
Delay and Think statements do not place resource demands on hardware, but:
The number of seconds during a Think statement does not contribute to the client statistics collected for executions in progress.
Statement Parameter | Sample Value | Parameter Name & Notes |
---|---|---|
Application | "Instructions" | cpu_units in the second column of the benchtable.txt file. |
CPU | 10000 | cpu_value |
Memory | (3.1, 8.7, 30.0, 0.0) | (kilobytes: shared_mem_value, private_input_mem_value, temp_mem_value, private_output_mem_value) |
Read | 1.2 | [ Physical ] [ Sequential ] read_K_value |
Number | 75 | num_reads_value |
FirstIo | 17 | record_number the starting record for a file stored on a RAID volume, which effectively specifies the device selected first. |
File | "my_file" | read_file_name should not contain spaces |
Write | 0.10 | [ Physical ] [ Sequential ] write_K_value |
Number | 1000 | num_writes_value |
Disk | "DiskSet0[3]" | write_disk_value |
Other statements include procedures:
threadSignal, threadWaitForEndOfRun, threadWaitForEndOfWarmup, threadWaitForSignal
Output from ADN code goes to the .tra simulation trace file.
Also traced are categories of simulation events, to help in understanding what happens at each level of abstraction used in the model.
The .tra file contains messages explicitly coded in the Java-like ADN code, which include the simulation clock time and the name of the behavior executing at the current simulation time.
Other messages come from:
Display statements, which create a Dynamic Display window containing run-time messages. Syntax and example:
Display "Title = 1M e-mails, Sizing = AutoSize, Color = Black",
Related Topics:
Load Testing Products
Mercury LoadRunner
Mercury LoadRunner Scripting
NT Perfmon / UNIX rstatd Counters
WinRunner
Rational Robot