Running tests

To check how the application or the system under test responds, you can run the recorded tests. Test scripts and their dependent assets are stored in a Git repository. HCL OneTest™ Server connects to the Git repository to display the test scripts on the Execute page.

Before you begin

You must have completed the following tasks:

  • Been assigned the Tester or Project Owner role to run the tests.
  • Confirmed with your Administrator that the appropriate license parameters were set to point to the license server during software installation.
  • As the Project Owner, connected to the appropriate folder and branch of the Git repository so that the test assets are displayed on the Execute page.

About this task

The following list shows, for each product, the tests that are supported and not supported when run from HCL OneTest Server:

HCL OneTest Performance
  • Supported: VU Schedule, Rate Schedule, and Compound Test.
  • Not supported: A single test script of any test extension. Tests that belong to 32-bit test extensions or to SOA Quality are not run as part of a VU Schedule, a Rate Schedule, or a Compound Test.

HCL OneTest UI
  • Supported: Accelerated Functional Testing (AFT) Suite, or Compound Test that contains only Web UI tests.
  • Not supported: Tests that belong to the Functional Test perspective, whether run independently or as part of an AFT Suite or a Compound Test. A Web UI test or a mobile test run independently outside of a Compound Test or an AFT Suite.

HCL OneTest API
  • Supported: Test suite.
  • Not supported: Test suites that have scenarios with references satisfied by local stubs. Test suites that have tests with subscribe actions that operate in watch mode. A stand-alone API test case run independently; an API test case must be part of a test suite.

Note: All the supported test types can be run only on the Mozilla Firefox browser.
HCL OneTest API suites might reference local stubs and depend on user libraries of the target transport. If you have such suites, you must have completed the following tasks before you can run them:
  • Removed local stubs or published them to HCL Quality Server for API test suites. To change the stub reference from local to remote, see Publishing stubs and Scenario reference settings.
  • Copied the library files (JAR files) of the transports to a directory under UserLibs on HCL OneTest Server for API test suites. For example, /myFiles/UserLibs/<userlibraryname>, where <userlibraryname> is either blank or one of the following names:


    The following table shows the directory under UserLibs to which the third-party library files of each transport must be copied:

    Transport                                                      Directory under UserLibs
    File                                                           Not applicable
    FIX                                                            Not applicable
    HTTP                                                           Not applicable
    MQTT                                                           Not applicable
    RabbitMQ                                                       Not applicable
    Software AG webMethods Integration Server                      webMethods
    TIBCO Rendezvous                                               TIBCO
    TIBCO SmartSockets                                             TIBCO
    WebSphere Application Server Service Integration Bus (SiBus)   WAS
    WebSphere MQ                                                   WMQ
    Database                                                       JDBC
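As an illustration, the staging layout on the computer from which you copy the files might look like the following sketch. The staging path and JAR file names here are assumptions for illustration only; substitute the actual library files for your transports.

```shell
# Sketch: staging transport JARs under UserLibs before creating the Docker volume.
# The staging path and JAR file names below are hypothetical examples.
STAGING=/tmp/myFiles/UserLibs

# WebSphere MQ libraries go under WMQ; JDBC drivers go under JDBC (see the table above).
mkdir -p "$STAGING/WMQ" "$STAGING/JDBC"

# Stand-ins for the real vendor JAR files.
touch "$STAGING/WMQ/mq-client.jar"
touch "$STAGING/JDBC/jdbc-driver.jar"

# Show the resulting layout.
ls -R "$STAGING"
```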
After copying the user libraries, you must run the following Docker command to create or update the volume that contains the library files to be used by HCL OneTest Server.
Attention: You must run the Docker command when no tests are running to prevent concurrent access problems:
docker run --rm -v /<anyFolder>/UserLibs:/ulsrc -v userlibs:/uldest alpine:latest cp -r /ulsrc/. /uldest/UserLibs

See the Docker command reference for more information about Docker run commands.

For example, the following command creates or updates the userlibs volume from the files and directories under /opt/myFiles/UserLibs:
docker run --rm -v /opt/myFiles/UserLibs:/ulsrc -v userlibs:/uldest alpine:latest cp -r /ulsrc/. /uldest/UserLibs
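To confirm that the copy succeeded, one option (assuming you have access to the Docker host, and while no tests are running) is to list the contents of the userlibs volume from a throwaway container:

```shell
# Sketch: inspect the userlibs volume from a temporary container.
# Assumes access to the Docker host; run only while no tests are executing.
docker volume inspect userlibs
docker run --rm -v userlibs:/uldest alpine:latest ls -R /uldest/UserLibs
```

The second command mounts the same volume read path that HCL OneTest Server uses, so the listing should mirror the directory layout you staged.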
Remember: When you use a transport whose host name is set to localhost for the HTTP/TCP proxy, you must replace the host name with the fully qualified domain name or IP address of the proxy host.
To run JMeter tests as part of a VU Schedule or Rate Schedule from the server, you must install the JMeter application on the computers that have HCL OneTest Performance Agent, and set the JMETER_HOME environment variable to point to the JMeter installation directory.
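For example, on a Linux agent computer the variable can be set as follows. The installation path is an assumption; point the variable at your actual JMeter installation directory.

```shell
# Sketch: point JMETER_HOME at the JMeter installation on an agent machine.
# The path below is a hypothetical example.
export JMETER_HOME=/opt/apache-jmeter-5.6
# The agent resolves the JMeter launcher relative to JMETER_HOME:
echo "$JMETER_HOME/bin/jmeter"
```

To make the variable available to the agent process, set it in the environment from which the agent starts, for example in the agent's service configuration or a system-wide profile.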

If the list of test assets is long and you want to view the tests by type or by last results, you can filter the list by creating queries. See Creating filter queries.


To run a test:

  1. Initiate the test run on the Execute page. If the tests are already associated with an environment, a dataset, or variables in the desktop client, you can configure these options when you run the test from the server.
  2. Create or pick a label for the test run to identify it. After the run completes, the label is displayed for the run on the Results page. After a label is created, any member of the project can use it.
  3. Select the secrets to be used for the test run. The Environment tab is applicable only for API test suites. The environment asset (.ghe) is created from the desktop client and added to the Git repository. For more information about secrets, see Protecting API test assets by using secrets.
    Note: Secrets are created by the Project Owner for you to select in the Environment tab.
  4. Select a dataset in the Data Source tab to override the dataset that was associated in the desktop client. The overriding dataset must be compatible with the existing dataset. For example, if the existing dataset uses the columns 'Name', 'Age', and 'Gender', you can override it with a dataset whose columns are a superset, such as 'ID', 'Name', 'Age', 'Gender', and 'Address'.
    Note: If the test contains an encrypted dataset, the Project Owner must classify it in the Data Security tab on the Project page before you run the test. See Managing an encrypted dataset.
  5. If the tests use variables, you can override them by choosing another set of variables from the Variables tab. To add new variables manually, click the Add icon. To add new variables from your local computer or from the Git repository that is associated with your server project, click the Upload icon and select Upload from local system or Browse from server.
  6. Run the test.
    Note: The data configurations that you make during a test run are not saved. You must configure them again for subsequent runs.
  7. To view the progress of the run, go to the Progress page. This page displays the high-level progress of the test runs that you triggered. From this page, you can closely monitor the runs of Schedules, Compound Tests, and AFT Suites by clicking Monitor Test. See Monitoring a test run.
  8. After the run completes, you can view the test report from the Results page. See Viewing test results and reports. You can also check the Execution Log and Test Log to understand how the test ran. See Checking logs.

What to do next

You can view the verdict of the run. The verdicts are Pass, Fail, and Inconclusive, and each is indicated by a color: green indicates Pass, red indicates Fail, and blue indicates Inconclusive.

The following tasks explain how to monitor a test run, check the logs, and run tests on remote computers.

Monitoring a test run

Long-running tests require continuous monitoring to ensure that they run as expected and provide correct results. Sometimes tests start failing but continue to run until they complete. In such cases, you might want to adopt techniques such as changing the number of virtual users, changing the rate of the run, or changing the log level to achieve the desired results. You can also stop the test run.

About this task

You can monitor the runs of VU Schedules, Rate Schedules, Compound Tests, and AFT Suites. For Compound Tests, the only available action is to stop the run.


  1. Start a test run from the Execute page.
  2. Go to the Progress page. When the status of the run changes to Running, from the Action column, click Monitor Test.

    The Statistics report opens.

  3. From the Running menu, select an appropriate action. To view the list of actions available, see Controlling the test runs.

Checking logs

To verify how the test ran or to debug test run failure, you can check the Test Log and Execution Log.

About this task

The Test Log displays the interaction between the desktop client and the application or system under test. After the test run completes, the verdict of the run can be Pass, Fail, or Inconclusive. If the verdict of the run is Pass, the Test Log is available on the Results page.

The Execution Log displays the console messages of the runtime process that runs the test. This log is useful for determining the cause of a failure when the verdict of the run is Fail or Inconclusive. You can view the Execution Log from the Progress page.