

The application is a performance testing utility that executes performance tests using any supported test tool’s CLI.  It stores and analyses the results and reports them back to the CI/CD pipeline automation tool.  Currently, Jenkins and Bamboo CI/CD automation tools are supported.
Performance tests can also be executed independently, so that any bespoke performance testing outside the currently supported CI/CD pipeline tools can be recorded and analysed.
After each performance test has been executed, the results are stored in a database so that the entire execution history for each scenario can be traced and reported upon.  The web application enables you to report and analyse any combination of test runs post execution.
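As an illustration of how that run history might be stored, here is a minimal sketch using SQLite; the table and column names (`scenario`, `test_run`, `transaction_result`) are assumptions for the example, not the application's actual schema:

```python
import sqlite3

# Minimal illustrative schema: scenarios own test runs, and each run
# records one row per transaction/metric pair.  Names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE scenario (
    scenario_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL UNIQUE
);
CREATE TABLE test_run (
    run_id      INTEGER PRIMARY KEY,
    scenario_id INTEGER NOT NULL REFERENCES scenario(scenario_id),
    executed_at TEXT NOT NULL
);
CREATE TABLE transaction_result (
    run_id           INTEGER NOT NULL REFERENCES test_run(run_id),
    transaction_name TEXT NOT NULL,
    metric           TEXT NOT NULL,   -- e.g. 'avg_response_time'
    value            REAL NOT NULL
);
""")

conn.execute("INSERT INTO scenario (name) VALUES ('checkout')")
conn.execute("INSERT INTO test_run (scenario_id, executed_at) VALUES (1, '2024-01-01')")
conn.execute(
    "INSERT INTO transaction_result VALUES (1, 'login', 'avg_response_time', 120.5)"
)

# The full execution history for a scenario can then be traced with a join.
rows = conn.execute("""
    SELECT r.run_id, t.transaction_name, t.metric, t.value
    FROM test_run r
    JOIN transaction_result t ON t.run_id = r.run_id
    WHERE r.scenario_id = 1
""").fetchall()
print(rows)
```

With a layout along these lines, every report in the sections below reduces to a query over `transaction_result` filtered by scenario and run.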
Available reports include Excel spreadsheets containing run statistics and highlighting metrics outside of pre-defined SLAs, as well as interactive HTML graphs.


Transaction SLAs:

For each transaction, you can set an SLA that will highlight either a breach of a (+/-) percentage threshold, or a breach of a (+/-) maximum set limit.  Each reported metric for each transaction can have its own defined SLA.

A (+/-) threshold value can be set to highlight transactions that have breached upon a percentage increase or decrease.  The analysis compares the secondary test execution against the primary test execution to determine whether the threshold has been breached.  If a secondary transaction metric is in breach, an alert is written to the Excel report.

A (+/-) maximum value can be set to highlight the transaction metrics in any tests that have breached their maximum SLA value.

Both the threshold and maximum values are explained in more detail in the reports section below.  This functionality is configured by using the web application.
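The two SLA types can be pictured with a short sketch.  The function names and the sample figures below are illustrative only, not the application's implementation; the maximum is treated here as a simple upper cap:

```python
def threshold_breach(primary, secondary, pct):
    """(+/-) percentage threshold: flag when the secondary run's metric
    deviates from the primary run's by more than pct percent, in either
    direction.  Illustrative sketch, not the application's code."""
    if primary == 0:
        return secondary != 0
    return abs(secondary - primary) / primary * 100 > pct

def maximum_breach(value, maximum):
    """Maximum SLA: flag when a metric exceeds its absolute limit."""
    return value > maximum

# Hypothetical numbers: a 10% threshold and a 300-unit maximum.
print(threshold_breach(200.0, 230.0, 10))  # 15% increase -> True
print(threshold_breach(200.0, 170.0, 10))  # 15% decrease -> True
print(maximum_breach(230.0, 300))          # under the cap -> False
```

Because the threshold check uses the absolute deviation, both regressions and unexpected improvements are flagged, matching the (+/-) wording above.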

Excel Reports: 

The Excel reports are generated using the web application.  They report upon the entire run history for any defined scenario.  Currently, there are four Excel reports available.

  • Compare Two Runs – This report highlights the deltas between two test runs.
  • Averages – This report generates the averages across multiple test runs.
  • Compare Multiple Runs – This report takes a primary test result and compares it against the average of several other specified test runs.
  • Transaction History – This report generates a transaction breakdown across several test runs.
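
To illustrate the Averages and Compare Multiple Runs reports, here is a minimal sketch with made-up response-time data; the run names, transaction names, and data structure are assumptions for the example, not the application's internals:

```python
from statistics import mean

# Illustrative data: one metric (response time) per transaction per run.
runs = {
    "build-101": {"login": 210.0, "search": 480.0},
    "build-102": {"login": 190.0, "search": 520.0},
    "build-103": {"login": 205.0, "search": 500.0},
}

def averages(run_names):
    """'Averages'-style report: mean of each transaction across runs."""
    txns = runs[run_names[0]].keys()
    return {t: mean(runs[r][t] for r in run_names) for t in txns}

def compare_multiple(primary, baseline_runs):
    """'Compare Multiple Runs'-style report: delta of a primary run
    against the average of several other runs, per transaction."""
    base = averages(baseline_runs)
    return {t: runs[primary][t] - base[t] for t in base}

print(averages(["build-101", "build-102"]))
print(compare_multiple("build-103", ["build-101", "build-102"]))
```

The Compare Two Runs and Transaction History reports are the simpler cases of the same idea: a pairwise delta, and the raw per-transaction values listed run by run.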

You can download examples of the test reports here.

In these example reports, every metric's SLA has been configured to raise an alert when either a (+/-) 10% threshold (primary to secondary) or a maximum of 300 units has been breached.

  • For (+/-) percentage threshold (primary to secondary) SLA breaches, cells are highlighted in amber.
  • For maximum threshold SLA breaches, cells are highlighted in red.
  • Hovering over any metric with a defined SLA will display a tooltip containing the SLA content and its pass/fail criteria.
  • Where no SLAs are defined for a particular metric, comparison-based reports will simply highlight those metrics that have a higher unit value primary to secondary.  These cells are highlighted in amber but with no tooltip available.
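
One way to picture the highlighting rules, using the example configuration of a (+/-) 10% threshold and a 300-unit maximum; the function and the precedence of red over amber are assumptions for this sketch, not taken from the application:

```python
def cell_colour(primary, secondary, pct_threshold=10.0, maximum=300.0):
    """Map a metric comparison to a highlight colour, mirroring the
    example rules: red for a maximum breach, amber for a (+/-)
    percentage breach, otherwise no highlight.  The ordering of the
    checks (red wins over amber) is an assumption."""
    if secondary > maximum:
        return "red"
    if primary and abs(secondary - primary) / primary * 100 > pct_threshold:
        return "amber"
    return None

print(cell_colour(250.0, 320.0))  # over the 300-unit maximum -> 'red'
print(cell_colour(200.0, 230.0))  # 15% increase -> 'amber'
print(cell_colour(200.0, 205.0))  # within SLA -> None
```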


Database support: 

Currently, the application supports the following databases:  MS SQL Server, MySQL, Oracle, Postgres and Sybase.

Import Scenario:

This feature allows you to browse to your test scenario and automatically import the transaction names into the database.  Once a test scenario has been imported or newly created manually, it can then be used to start capturing the test run data.  There is also functionality to manually add/delete/modify your test scenario and transaction details if your scenario changes post creation.  This functionality is executed using the web application.
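
Purely as an illustration, importing transaction names from a JMeter-style .jmx scenario might look like the sketch below.  JMeter is only one possible test tool, and the element names (`TransactionController`, `HTTPSamplerProxy`) are JMeter's, not the application's:

```python
import xml.etree.ElementTree as ET

def import_transaction_names(jmx_source):
    """Extract transaction/sampler names from a JMeter-style .jmx
    document.  Minimal sketch under the assumption that the scenario
    is a JMeter test plan."""
    root = ET.fromstring(jmx_source)
    return [el.get("testname") for el in root.iter()
            if el.tag in ("TransactionController", "HTTPSamplerProxy")]

# A tiny made-up scenario fragment.
sample = """<jmeterTestPlan>
  <hashTree>
    <TransactionController testname="Login"/>
    <HTTPSamplerProxy testname="Search"/>
  </hashTree>
</jmeterTestPlan>"""

print(import_transaction_names(sample))  # -> ['Login', 'Search']
```

Once the names are extracted, they can be written into the scenario's transaction records, after which manual add/delete/modify operations cover any later changes.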

Interactive Graphs:

For each performance test execution, whether through a CI/CD pipeline or a manual execution using the web application, interactive graphs are created by default.

A separate graph is created for each performance metric for every test run.  These are saved either into your CI/CD automation tool's working directory, or into the results folder set for your manual test executions, as per the web application's configuration.

For an example of the graphs that get generated for each test run, you can download some examples here.

Regarding the examples, please note:

  • The application settings from the above link are configured to report the entire run history. These results are based upon 348 successful test executions via Jenkins.
  • The application can be configured to report upon any number of previous test runs. Please see some examples here for when the application has been configured to report upon the previous 10 runs only.
  • Within each graph, transactions can be shown or hidden by clicking on the transaction name (right hand legend).
  • You can zoom in on the plot to highlight specific release periods.
  • There is an ‘Autoscale’ feature in the toolbar that resets the zoom for the graph if it has been modified.
  • The tooltips for each plot highlight the x and y values and transaction name for each data point (for example, build name and response time).

CI/CD Automation Tools: 

Currently, this application has been tested using Jenkins and Bamboo.

Manual Test Execution: 

When using the web application, you can manually execute performance tests for bespoke testing outside of a CI/CD pipeline.  Multiple instances of the same test scenario can be created to distinguish between CI/CD pipeline tests and those that are executed manually.