This series lists the benchmarking applications available, providing an analysis of the installation and operation of key benchmark apps.
There are basically four sources of benchmarks:
June 2001 Oracle Pet Store: Oracle publishes a benchmark on the performance of Sun's Java Pet Store blueprint application running on the Oracle 9i Application Server.
November 2001 Microsoft C# Port of Petstore: Microsoft announced a benchmark comparing the performance of Sun's Java Pet Store blueprint application running on the Oracle 9i Application Server against the performance of a Microsoft "port" of the Java Pet Store to C# for the .NET platform.
March 2002 Oracle: Oracle published a benchmarking study claiming that its revised implementation of the Sun Java Pet Store 1.1.2 is "10 times faster than Microsoft .NET".
May 2002 Microsoft: In May 2002, Microsoft hired VeriTest (a supposedly "independent" lab that conducts tests to certify Windows) and invited Oracle to participate in a head-to-head re-test, which Oracle declined. So VeriTest repeated Oracle's tests using the Mercury LoadRunner test scripts Oracle had published on its Web site.
VeriTest raised questions about Oracle's published data, noting "serious flaws, including missing application functionality and questionable test script settings." Using its own settings, VeriTest then reported that Microsoft .NET is "10 times faster" than Oracle's app server.
June 2003 "Draw" Declaration: The performance of .NET 1.0 and its J2EE rivals was declared about the same in a June 2003 benchmark report by the Middleware Company comparing Microsoft's .NET against its J2EE competitors. The J2EE competitors refused to be named, probably because an earlier version of the benchmark test had declared .NET the winner.
Sep 2005 BEA MedRec-Spring: BEA's MedRec, rewritten to use the Spring framework, is my current favorite as the most realistic benchmark app for J2EE.
Where's the Microsoft .NET 2.0 version that can be used for comparison now? Perhaps one using Iron Speed?
May 12, 2006 Java EE 5 AJAX-based Petshop: Sun releases the Java Pet Store 2.0 (jPetStore) Reference Application as an early-access release to illustrate use of the Java EE 5 platform to design and develop an AJAX-enabled Web 2.0 application. The application comes with full source code available under a BSD-style license [download 537.3kb]. See also Adrian Lanning's 01-27-04 guide, Setting up your Windows computer to run JPetStore 3.x with MySQL and Tomcat.
Spring Pet Clinic Sample App (136.4kb).
Sun's version 1.4 of Pet Store, dated August 25, 2005, was written for J2EE SDK 1.4.
Sun's localized version 1.3.2 of Pet Store, dated Aug 04, 2003, was written for J2EE SDK 1.3.1.
Java Pet Store version 1.1.2 was for J2EE SDK 1.2.1.
Sun's Java Adventure Builder Reference application v1.0 (2564KB, dated Aug 25, 2005) consists of six web services that fulfill a supply chain. It's written to run with other J2EE 1.4 downloads.
Microsoft has developed several reference benchmark apps among its Architectural Sample Applications:
Its own reference application, Duwamish 7.0 for .NET, implemented in both Microsoft Visual C# and Microsoft Visual Basic .NET.
Microsoft's Petshop 3.0 .NET Sample Application (771KB, published 5/23/2003) provides the code behind the October 2002 Using .NET to Implement Sun Microsystems' Java Pet Store J2EE BluePrint Application and the May 2003 Design Patterns and Architecture of the .NET Pet Shop (created by Gregory Leake of Microsoft and James Duff of Vertigo Systems), for use in application benchmarks comparing the performance and scalability of .NET Web applications against an equivalent, revised, and fully optimized J2EE application.
Microsoft developed PetShop.NET to provide a direct comparison against Sun's Pet Store, then at version 1.1.2 (for J2EE SDK 1.2.1), available among "Guidelines, Patterns, and code for end-to-end Java applications".
However, Sami Jaber argues in his insightful November 2002 article PetShop.NET [2.0]: An Anti-Pattern Architecture (translated from the original French) that PetShop.NET is an anti-pattern (a mistaken solution) because its single namespace (PetShop.Components) has methods such as GetProductsBySearch() containing SQL statements in service-layer code rather than in a separate data-access (DAO) layer, as in the Java Pet Store. "If the application must evolve/move from the thin client (ASPX) to the fat client (WinForms), no change in the service layer should be necessary."
For this reason, Jaber, through his DotNetGuru.com community, aims to rewrite PetShop.NET to use a true N-tier architecture for greater agility, using an Abstract Factory pattern between layers and "Remoting/WebService/Local calls in the service layer and real O/R Mapping tool or DAO in the Data tier just by changing configuration file." This is so it can be a true .NET Best Practice sample.
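The layering Jaber advocates can be sketched in miniature (Python here for brevity; all class, method, and configuration names below are invented for illustration, not taken from either code base): SQL stays behind a DAO interface, and an abstract factory driven by configuration decides which concrete DAO the service layer receives, so service code is untouched when the data tier changes.

```python
# Hypothetical sketch of N-tier layering with an abstract factory between tiers.

class ProductDao:
    """Data tier interface: the only layer allowed to contain SQL."""
    def search(self, keyword):
        raise NotImplementedError

class SqlServerProductDao(ProductDao):
    def search(self, keyword):
        # SQL is confined to the data tier, not the service layer.
        return f"SELECT * FROM Product WHERE name LIKE '%{keyword}%'"

class MockProductDao(ProductDao):
    def search(self, keyword):
        return ["fish", "dog"]  # canned data for testing

# Abstract factory: a configuration entry selects the concrete DAO.
DAO_REGISTRY = {"sqlserver": SqlServerProductDao, "mock": MockProductDao}

def dao_factory(config):
    return DAO_REGISTRY[config["dao"]]()

class ProductService:
    """Service tier: business logic only, with no SQL and no hard-wired data tier."""
    def __init__(self, dao):
        self.dao = dao

    def get_products_by_search(self, keyword):
        return self.dao.search(keyword)

# Swapping tiers is a one-line configuration change; the service is untouched.
service = ProductService(dao_factory({"dao": "mock"}))
print(service.get_products_by_search("f"))  # prints ['fish', 'dog']
```

The same service object would serve a thin (ASPX-style) or fat (WinForms-style) client unchanged, which is exactly the flexibility Jaber says the single-namespace PetShop.NET design forfeits.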
IBM has its own Commercial Performance Workload (CPW).
SAP AG provides its Standard Application Benchmarks to compare the performance of various hardware and database choices.
SAP's benchmarking procedure is standardized and well defined. It is monitored by the SAP Benchmark Council made up of (since 1995) representatives of SAP and technology partners involved in benchmarking. Originally introduced to strengthen quality assurance, the SAP Standard Application Benchmarks can also be used to test and verify scalability, concurrency, and multi-user behavior of system software components, RDBMS, and business applications.
The Council created a hardware-independent throughput metric called SAPS (SAP Application Performance Standard), where 100 SAPS is defined as 2,000 "fully business processed" standard Sales and Distribution (SD) order line items per hour. Since each sales order contains 5 line items, 2,000 / 5 = 400 repetitions are performed per benchmark hour, making 2,000 + 400 = 2,400 SAP transactions in each baseline hour. This means 2,400 / 60 = 40 SAP transactions per minute, or 40 / 60 = 0.67 transactions per second.
Each repetition in SAP-SD requires 15 posting dialog steps (screen changes) between login and logoff.
So 400 x 15 = 6,000 posting dialog steps are processed per baseline hour (excluding entry and exit steps, which are not counted in the metric).
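The baseline arithmetic above is easy to verify with a back-of-envelope check (Python, using only the figures quoted above):

```python
# SAPS baseline arithmetic, per the SD benchmark definition quoted above.
line_items_per_hour = 2000        # 100 SAPS = 2,000 fully processed SD line items/hour
items_per_order = 5
orders_per_hour = line_items_per_hour // items_per_order           # 400 repetitions
sap_txns_per_hour = line_items_per_hour + orders_per_hour          # 2,400 transactions
txns_per_minute = sap_txns_per_hour / 60                           # 40
txns_per_second = round(txns_per_minute / 60, 2)                   # 0.67
dialog_steps_per_order = 15
dialog_steps_per_hour = orders_per_hour * dialog_steps_per_order   # 6,000

print(orders_per_hour, sap_txns_per_hour, txns_per_minute,
      txns_per_second, dialog_steps_per_hour)  # prints 400 2400 40.0 0.67 6000
```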
With 10 seconds of user "think time" between user actions and an average 2-second response time, a baseline benchmark run represents 12 seconds x 6,000 steps = 72,000 seconds of user activity, or 72,000 / 3,600 sec./hr. = 20 user-hours, which means 20 concurrent active users per 100 SAPS in a given hour.
My assumption is that all users are logged in and actively working (although realistically some logged-in users would be idle).
Will faster servers or more RAM enable the environment to support more simultaneous users and process more transactions per hour from a correspondingly larger number of emulated users?
On May 24, 2007, Oracle announced that its 10g RAC database running SAP ERP 2005 (2-tier) on IBM AIX machines with two dual-core 4.7 GHz POWER6 processors achieved 20,120 SAPS by generating 402,330 fully processed order line items/hour using 1,207,000 posting dialog steps/hour. 4,010 users were emulated, so each user generated 301 posting dialog steps per hour, or 301 / 60 = 5 steps per minute, which is 60 / 5 = one every 12 seconds. This makes sense, since the average dialog response time was 1.96 seconds and 10 seconds is added to emulate user "think time" between each step.
These results were obtained with the CPU at or near 100% utilization, so these numbers should be considered maximum possible values.
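The published figures in that announcement are internally consistent, as a quick check shows (Python; the divisor of 20 line items per SAPS follows from the definition 100 SAPS = 2,000 line items/hour):

```python
# Cross-checking the May 2007 Oracle/SAP result from its own published figures.
line_items_per_hour = 402_330
posting_steps_per_hour = 1_207_000
users = 4_010

saps = line_items_per_hour / 20                 # 100 SAPS = 2,000 line items/hour
steps_per_user_hour = posting_steps_per_hour / users
seconds_per_step = 3600 / steps_per_user_hour

print(round(saps))                 # prints 20116, close to the reported 20,120
print(round(steps_per_user_hour))  # prints 301 posting dialog steps per user-hour
print(round(seconds_per_step))     # prints 12: ~1.96 s response + ~10 s think time
```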
Results for Sales and Distribution (SD), run by IBM in Beaverton, OR, using these configuration defaults, are:
3-tier, where a single central server accesses the database on another server.
3-tier parallel, where several central servers accesses the database on another server, with users equally distributed across all database nodes (using a round-robin-method).
The test transactions access the main tables of the SAP Sales & Distribution (SD) application.
Combining the SAPS metric with the SPECInt benchmark rating for the server hardware used for the test enables capacity comparisons and estimation, although not all SAP modules (applications) are modeled.
BEA distributes with its WebLogic installer its Avitek MedRec (Medical Records) web application to demonstrate WebLogic Server features and BEA-recommended best practices.
MedRec Version 1.1.1 for WebLogic v8.1 is an end-to-end sample J2EE application that simulates an independent, centralized medical-record management system, providing a framework for patients, doctors (physicians), and administrators to manage patient data using Java Swing and C# clients. Download the medrec_tutorial.zip.
To deploy the MedRec application, recreate the \build directory within C:\Bea\WebLogic81\samples\server\medrec by running the MedRec (Apache) Ant task in \src.
On Linux and other platforms, start MedRec from the WL_HOME\samples\domains\medrec directory, where WL_HOME is the top-level installation directory for the WebLogic Platform.
My breakdown of login and other pages offered by the BEA MedRec application.
MedRec includes a service tier of Enterprise JavaBeans (EJBs) that work together to process requests from client applications in the presentation tier and from Web applications, Web services, and workflow applications. The application includes message-driven, stateless session, stateful session, and entity EJBs.
Open http://127.0.0.1:7001/console and log in with "weblogic" as both username and password.
The MedRec 1.0 Architecture Guide explains the Model-View-Controller design pattern:
At the presentation layer, MedRec uses JavaServer Pages (JSP) tags and Jakarta Struts 1.0, populating beans and dispatching Actions that call into the service tier.
Expert One-on-One J2EE Development without EJB (Wrox, June 21, 2004) and
Professional Java Development with the Spring Framework (John Wiley & Sons, 2005)
Other resources on the Spring framework:
Instead of remoting with stateless session EJBs, "MedRec-Spring" exposes service beans via Spring's HTTP Invoker architecture of POJOs (Plain Old Java Objects). Spring's Inversion of Control (IoC) container injects dependencies into configured components.
This dependency-injection approach is called "agile" because objects can now be coded without time-consuming and error-prone attention to resource configuration and dependency references. Using Spring's interfaces enables references to XML configuration files provided at run-time. MedRec DataSources, JMS services, MBean connections, and peer services are all provided to MedRec-Spring's objects at runtime.
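The injection-from-configuration idea can be sketched outside any Spring specifics (Python here; the bean names, classes, and JDBC URL are invented for illustration): components declare what they need, and a tiny container reads declarative configuration at runtime and wires the dependencies in.

```python
# Minimal IoC sketch: not Spring, just the idea of injecting wired-up
# dependencies from declarative configuration at runtime.

class DataSource:
    def __init__(self, url):
        self.url = url

class RecordService:
    # The service never constructs its own DataSource; it is injected.
    def __init__(self, data_source):
        self.data_source = data_source

    def describe(self):
        return f"RecordService wired to {self.data_source.url}"

# Declarative configuration, standing in for Spring's XML bean definitions.
# Entries are listed in dependency order so each "ref:" resolves to an
# already-built bean.
CONFIG = {
    "dataSource": (DataSource, {"url": "jdbc:example://localhost/medrec"}),
    "recordService": (RecordService, {"data_source": "ref:dataSource"}),
}

def build_container(config):
    beans = {}
    for name, (cls, kwargs) in config.items():
        resolved = {k: beans[v[4:]] if isinstance(v, str) and v.startswith("ref:") else v
                    for k, v in kwargs.items()}
        beans[name] = cls(**resolved)
    return beans

container = build_container(CONFIG)
print(container["recordService"].describe())
# prints RecordService wired to jdbc:example://localhost/medrec
```

Changing which DataSource the service receives is then purely a configuration edit, which is the agility claim made for MedRec-Spring above.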
On start-up deployment of resources, Spring's "lazy" initialization and lookup services are activated via JMX to provide connections to WebLogic Server's MBean servers.
Spring's JAX-RPC factory produces a proxy for a Web service. Since the Spring factory bean is configured outside compiled code, the application is more flexible.
JMX support by WebLogic Server's MBeanServer is obtained through Spring's MBeanServerConnectionFactoryBean, whose byproduct is an MBeanServerConnection established during application deployment and cached for referencing beans.
The MBeanServerConnectionFactoryBean exposes monitoring, runtime controls, and the active configuration of a specific WebLogic Server instance and the WebLogic Server Diagnostics Framework by returning the WebLogic Server Runtime MBean Server and the Domain Runtime MBean Server.
Oracle ASB 11i Single and RAC systems
Trifork, a Danish/San Jose developer of the T4 Enterprise Application Server, which competes with Oracle and other J2EE-compatible application servers, at one time reported that it created a J2EE reimplementation of the .NET PetStore 1.1 in 3,670 lines, mimicking the 3,758 lines of .NET code by reusing the database layer and employing Java Struts as the view-layer framework.
Steve Peterson noticed that Pet Shop provides no graphical information for each type of pet. So Macromedia's PetMarket benchmark was built for usability, using Rich Internet Application features of Macromedia Flash MX and Macromedia Flash Remoting to consume less bandwidth. The MX benchmark showed that ColdFusion MX scales under a load of 700 simultaneous users using Windows 2000 AS SP2 on 800 MHz machines!
MacWorld magazine's SpeedMark uses the 1.25 GHz Mac Mini as a baseline to compare how fast casual and power users can perform 15 "everyday" tasks using 9 real-world applications (including Apple's OS X 10.4.5 "Tiger" operating system).
The $2,000 SPECjAppServer2004 multi-tier benchmark application (at v1.05) measures the performance of a single J2EE v1.3 application server running all major J2EE technologies.
The app simulates Dealer, Manufacturing, Supplier and Corporate domain logical entities.
SPECjAppServer2004 result reports use the performance metric of the number of JOPS (jAppServer Operations Per Second) completed during the Measurement Interval. JOPS is composed of the total number of business transactions completed in the Dealer Domain, added to the total number of workorders completed in the Manufacturing Domain, normalized per second.
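With illustrative numbers (made up here, not taken from any published SPEC result), the JOPS metric reduces to simple arithmetic:

```python
# JOPS = (Dealer business transactions + Manufacturing workorders)
#        / measurement interval in seconds.
# The figures below are invented for illustration only.
dealer_transactions = 180_000
manufacturing_workorders = 90_000
measurement_interval_seconds = 3_600   # a one-hour measurement interval

jops = (dealer_transactions + manufacturing_workorders) / measurement_interval_seconds
print(jops)  # prints 75.0
```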
The app includes a Supplier Emulator Java Servlet that can run inside any Java enabled web server to emulate the sending and receiving of orders to/from suppliers.
The app includes a client driver (run on a separate machine) that exercises all parts of the underlying infrastructure that make up the application environment:
However, "SPECjAppServer2004 strives to stress the middle-tier rather than the client tier or the database server tier."
The Standard Performance Evaluation Corporation (SPEC) is a non-profit corporation formed to establish, maintain and endorse a standardized set of relevant benchmarks that can be applied to the newest generation of high-performance computers. SPEC develops suites of benchmarks and also reviews and publishes submitted results from their member organizations and other benchmark licensees.
The TPC-App benchmark web services app simulates the activities of a distributor operating business-to-business transactional application servers in a 24x7 environment. TPC-App showcases the performance capabilities of application server systems.
The workload was published in August 2005 to exercise commercially available application server products, messaging products, and databases associated with such environments.
TPC-App result reports use the performance metrics of the number of SIPS (Service Interactions Per Second) completed by each application server during the Measurement Interval. "Total" SIPS refers to the entire cluster of servers in the entire configuration (SUT).
The lone report on 6/21/05 measured 174.9 SIPS per server.
The distinctiveness of TPC benchmarks is that the SIPS metric is associated with dollar costs, such as the $327.41/SIPS published for an IBM eServer xSeries x366 using an Intel Xeon DP 3.60GHz CPU running MS .NET 1.1 and Microsoft SQL Server 2000 on Windows 2003 Standard Edition.
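Price/performance here is simply the total priced configuration cost divided by SIPS. Assuming (my inference, not stated in the report) that the $327.41/SIPS figure belongs to the same 174.9-SIPS result, the implied total system cost can be backed out:

```python
# TPC-App price/performance: dollars per SIPS = total system cost / SIPS.
# Assumes the two published figures describe the same result -- an inference.
sips = 174.9
price_per_sips = 327.41

implied_system_cost = sips * price_per_sips
print(round(implied_system_cost))  # prints 57264, i.e. roughly $57,264
```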
"The workload was designed specifically to stress the Application Server. As such, the work to be performed by the database was purposely minimized."
"TPC-App does not benchmark the logic needed to process or display the presentation layer (for example, HTML) to the clients."
The non-profit Transaction Processing Performance Council is based in San Francisco, California, USA.
The PerformaSure package includes a benchmark J2EE app specifically designed to test speed and scalability. It's now a component of Quest Software's Application Performance Management (APM) Suite for the J2EE platform.
xfire Benchmark Factory from xaffire.com
Quest Software generates load and measures performance results along with error rate tracking.