JSR 39: Java™ Servlet and JSP Performance Benchmark

Stage          Start          Finish
Withdrawn      02 Oct, 2001
CAFE           09 Oct, 1999   22 Nov, 1999
JSR Approval   02 Oct, 1999   08 Oct, 1999
Status: Withdrawn
Reason: Withdrawn at the request of the submitter.
JCP version in use: 1.0
Java Specification Participation Agreement version in use: 1.0


Description:
The specification will provide a comprehensive benchmark suite for Java™ Servlets and JSP™ pages that exercises the key areas affecting their performance in real-life applications.

Please direct comments on this JSR to the Spec Lead(s)
Team

Specification Leads
  Ruslan Belkin, America Online (AOL)
Expert Group
  America Online (AOL)
  Art Technology Group Inc. (ATG)
  BEA Systems
  IBM
  Sun Microsystems, Inc.
  Unify


Original Java Specification Request (JSR)

Identification | Request | Contributions

Section 1: Identification

Submitted by:

Ruslan Belkin and Viswanath Ramachandran
Netscape Communications (a subsidiary of America Online Inc.)

E-Mail: ruslan@netscape.com, vishy@netscape.com

This JSR is endorsed by the following Java Community Process Participants:

Section 2: Request

The target Java platform

This work will be of fundamental interest to both implementors and users of the J2EE™ platform.

The need of the Java community this work addresses.

The Servlet and JSP APIs are fast winning widespread industry adoption and acceptance as standards. Whenever a protocol or API wins widespread adoption, efforts to produce benchmarking or performance suites are not far behind. Such suites serve as tools for vendors to measure and differentiate their products. Benchmark suites can be developed in different ways. One is the proprietary approach, in which certain software or measurement companies produce suites that are then adopted as de facto industry standards. The drawbacks of this approach are that vendors may not agree that the benchmark is suitable or comprehensive, and that such benchmarks tend not to track updates to the specification itself. The second is the open, multi-vendor approach, in which companies in a given space cooperate to produce a widely accepted standard suite of benchmarks; examples include the TPC benchmarks such as TPC-B and TPC-D.

We feel that, in keeping with the open development of the Servlet and JSP standards, the open multi-vendor approach is best for the Servlet and JSP benchmarks. Hence it makes sense for the Servlet and JSP benchmarks to be developed under the Java Community Process.

Initially, the vendors of Servlet and JSP containers will be the most interested in the development of this specification (typical servlet containers being web servers, application servers, and third-party add-ons). They are the most likely participants in the expert group. The benchmark will help them measure, tune, and differentiate their products. As the benchmark matures, however, we fully expect it to be sought after by hardware vendors as a means of measuring, testing, and differentiating their hardware, as has been the case with HTTP and database benchmarks.

An explanation of why the need isn't met by existing specifications.

There is no benchmark for Servlets or JSP. While some work has been done on eCommerce benchmarks and generic Web server benchmarks, these do not approach the problem from a J2EE perspective.

The Specification to be developed and how it addresses the need.

The specification will provide a comprehensive benchmark suite for Servlets and JSPs that exercises the key areas affecting their performance in real-life applications. In the case of Servlets, the areas to be covered include:

  • Interaction with the underlying protocol layer (ServletRequest, ServletResponse, ServletContext, and the mapping of requests to servlets). Vendors have adopted a variety of implementation techniques here (out of process, in process, application server integration), and it will be very interesting to specify how to measure their performance.
  • Sessions. This is a key area, because there are a variety of implementation techniques and levels of service for the session layer: fault-tolerant, distributed, database-centric, and transactional sessions all exist. Vendors will be interested in using the benchmarks to measure and differentiate themselves here (see the sketch below).
  • Security. Recent directions in the Servlet API regarding programmatic security deal with HTTP authentication techniques, some of which are not widely deployed. Benchmarking will help distinguish reliable, high-performance implementations of these and accelerate their adoption.
  • Integration. Servlet engines usually exist within the particular environment of a web or application server and exercise different functionality of that underlying environment. It will be useful to see how well a servlet engine performs when forwarding or including requests and content from a servlet to non-servlet objects such as CGIs or static files.

Not all of these may be addressed in the first version; we can prioritize them.
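
As an illustration of the session workload above, here is a minimal sketch of what one benchmark servlet might look like. It is hypothetical (the class and attribute names are invented for illustration) and assumes only the standard Servlet 2.x API; each request reads and writes a session attribute, so aggregate throughput reflects the cost of the container's session implementation.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical session workload: every request touches the session layer,
    // so measured throughput reflects the container's session implementation
    // (in-memory, distributed, database-backed, etc.).
    public class SessionBenchmarkServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);     // create on first hit
            Integer hits = (Integer) session.getAttribute("hits");
            int n = (hits == null) ? 1 : hits.intValue() + 1;
            session.setAttribute("hits", new Integer(n));   // exercise the write path
            res.setContentType("text/plain");
            PrintWriter out = res.getWriter();
            out.println("hits=" + n);
        }
    }

Running the same servlet against an in-memory session manager and a fault-tolerant or database-backed one would expose exactly the differentiation described above.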

In the case of JSPs, the areas to be covered include:

  • Compilation speed (JSP-to-servlet translation; see the timing sketch after this list)
  • Benchmarks for applications consisting mostly of template data (ideally, a JSP page containing only template data should perform similarly to static HTML)
  • Benchmarks measuring the performance of built-in objects such as pageContext and of scopes such as page scope
  • Include and forward directives
  • Various other elements such as useBean
  • Benchmarks that are sensitive to buffering
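
As a rough illustration of the first item, the sketch below times a cold request to a JSP page (which typically includes JSP-to-servlet translation and compilation) against a warm one. The class name and target URL are assumptions for illustration; any plain HTTP client would do.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical measurement: the first request to a JSP usually pays for
    // translation and compilation; subsequent requests hit the compiled servlet.
    public class JspCompileTimer {
        public static void main(String[] args) throws Exception {
            URL page = new URL("http://localhost:8080/bench/template.jsp"); // assumed URL
            long cold = fetch(page); // includes translation + compilation
            long warm = fetch(page); // steady-state request
            System.out.println("cold=" + cold + "ms, warm=" + warm + "ms");
        }

        private static long fetch(URL url) throws Exception {
            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            byte[] buf = new byte[4096];
            while (in.read(buf) != -1) { /* drain the response body */ }
            in.close();
            conn.disconnect();
            return System.currentTimeMillis() - start;
        }
    }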

Detailed description of the underlying technology or technologies.

Already covered in other sections.

Proposed package name for API Specification

javax.benchmark.servlet
javax.benchmark.servlet.http
javax.benchmark.jsp

Whether these should live in the javax hierarchy will need more thought within the Java Software Division; otherwise, the standard benchmark suite can be in the package com.sun.benchmark.* or a similar naming. The actual naming is not very important and can be decided on legal and clarity grounds once the work is started.

Possible platform dependencies (such as an anticipated need for native code).

The entire test suite itself should consist of pure Java Servlets and JSP pages. We may need to look at native-code drivers for the test suite, but it is very likely that these can either be implemented in Java or that existing HTTP, HTTP 1.1, and HTTP/SSL drivers, such as those in WebBench, can be used. We will model the benchmark very closely on the J2EE compatibility test suite, so as to reuse as much as possible of the infrastructure already developed.
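
As a sketch of how such a driver could stay in pure Java, the following hypothetical load generator runs a fixed number of client threads issuing sequential GETs for a fixed duration. The target URL, client count, and run length are invented for illustration; a real driver (e.g. WebBench's) would add think times, HTTP 1.1 keep-alive, and SSL.

    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical pure-Java driver: N threads issue sequential GETs until the
    // deadline; the summed request count approximates aggregate throughput.
    public class SimpleHttpDriver implements Runnable {
        private final URL target;
        private final long deadline;
        int completed;

        SimpleHttpDriver(URL target, long deadline) {
            this.target = target;
            this.deadline = deadline;
        }

        public void run() {
            byte[] buf = new byte[4096];
            try {
                while (System.currentTimeMillis() < deadline) {
                    HttpURLConnection conn = (HttpURLConnection) target.openConnection();
                    InputStream in = conn.getInputStream();
                    while (in.read(buf) != -1) { /* drain the response */ }
                    in.close();
                    conn.disconnect();
                    completed++;
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) throws Exception {
            URL target = new URL("http://localhost:8080/bench/SessionBenchmarkServlet"); // assumed
            int clients = 10;                                    // assumed load level
            long deadline = System.currentTimeMillis() + 30000L; // 30-second run
            SimpleHttpDriver[] drivers = new SimpleHttpDriver[clients];
            Thread[] threads = new Thread[clients];
            for (int i = 0; i < clients; i++) {
                drivers[i] = new SimpleHttpDriver(target, deadline);
                threads[i] = new Thread(drivers[i]);
                threads[i].start();
            }
            int total = 0;
            for (int i = 0; i < clients; i++) {
                threads[i].join();
                total += drivers[i].completed;
            }
            System.out.println(total + " requests in 30s with " + clients + " clients");
        }
    }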

Security implications.

With Servlet 2.2, various aspects of the security model of the Web (i.e. basic, form, and certificate-based authentication) are exposed in the servlet API and container. The benchmark will need to cover these features in its performance measurements. Certificate-based authentication (client auth) and single sign-on in particular have not been widely deployed, and it will be interesting for the benchmark to provide a means of measuring the scalability of these mechanisms.
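
For illustration, a secured benchmark workload might look like the following hypothetical servlet: the container performs whatever authentication the deployment descriptor declares, and the servlet exercises the Servlet 2.2 programmatic security calls on every request. The class and role names are invented examples.

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical secured workload: the container authenticates the request
    // (basic, form, or certificate based, per web.xml) and the servlet
    // exercises the programmatic security API on every request.
    public class SecureBenchmarkServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            res.setContentType("text/plain");
            PrintWriter out = res.getWriter();
            out.println("authType=" + req.getAuthType()); // e.g. BASIC, FORM, CLIENT_CERT
            out.println("user=" + req.getRemoteUser());
            out.println("manager=" + req.isUserInRole("manager")); // invented role name
        }
    }

Comparing throughput for this servlet under basic, form, and certificate-based authentication would give the per-mechanism scalability picture described above.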

Internationalization and Localization implications.

Support for internationalization and localized content generation often becomes a performance issue, so it might make sense for the benchmark suite to include a set of tests that exercise the I18N and L10N areas of the Servlet and JSP specifications.
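
A minimal sketch of such a test, assuming only the Servlet 2.2 locale API (the class name is invented), could negotiate the response locale per request so the benchmark captures the cost of locale handling and response encoding:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.Locale;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical I18N workload: per-request locale negotiation and response
    // encoding are the costs this test would isolate.
    public class LocaleBenchmarkServlet extends HttpServlet {
        public void doGet(HttpServletRequest req, HttpServletResponse res)
                throws ServletException, IOException {
            Locale locale = req.getLocale();  // preferred locale from Accept-Language
            res.setContentType("text/html");
            res.setLocale(locale);            // sets Content-Language and charset
            PrintWriter out = res.getWriter();
            out.println("<html><body>locale=" + locale + "</body></html>");
        }
    }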

Risk assessment (impact of work on target platform, impact if work not carried out, difficulties in carrying out RI and/or CTS).

There is no significant risk in this work, except that there may not be sufficient interest in the community; that can only be judged from the response to the JSR on the Java Community Process web site. The RI (i.e. the set of Java Servlets and JSP pages that constitute the benchmark) should be closely tied to the specification itself. No CTS is anticipated (it does not make sense for a benchmark).

Existing specifications that might be rendered obsolete or deprecated by this work.

None known. The work would be complementary to the ongoing work on the Servlet and JSP specifications. Ad hoc efforts in the industry to develop Servlet or JSP benchmarks would be rendered obsolete.

Existing specifications that might need revisions as a result of this work.

There is a slight chance that the working group for this specification may hit upon improvements to the Servlet API and JSP specifications. For example, if in the process of specifying the benchmark performance experts realize that certain features of the APIs are inherently hard to scale, this could be reported back to the Servlet and JSP API working groups. We also anticipate that when this benchmark work reaches maturity, it can be folded into the Compatibility Test Suite (CTS) for J2EE or into the ECPerf benchmark for EJB performance.

Section 3: Contributions

Detailed list of existing documents, specifications, or implementations that describe the technology.

The Servlet API Specification and the JavaServer Pages Specification will be used in formulating the suite, which needs to cover the APIs. The J2EE compatibility test suite (CTS) can also be used as a starting point for API coverage. The working group can model its processes and deliverables after the ECPerf / EJB performance work.

SPEC (www.spec.org) is a consortium of companies that specifies a variety of benchmarks, including SpecWeb99, a benchmark for HTTP protocol performance. Since Servlets are closely tied to HTTP, we will refer to this suite while developing the servlet benchmarks. Another web server benchmark that may be a starting point for our efforts is WebBench from Ziff-Davis; in particular, its drivers for HTTP, HTTP 1.1, and HTTP/SSL will be of use.

In order to have a starting point for the JSP benchmarks, we need to identify benchmark suites that involve significant dynamic content generation, for example Microsoft's ASP benchmarking using the Web Capacity Analysis tool.

The Transaction Processing Performance Council (www.tpc.org) is a consortium of companies that specifies a variety of benchmarks related primarily to database and transaction performance. More recently, it has also been working on TPC-W, a web eCommerce benchmark.