
Info
  • For test suite executions on desktop projects, the following charts are not displayed:
    • Test Execution Results Trend
    • Execution Trend on Web
    • Execution Trend on Mobile
    • Test Execution Trend for Both (Mobile & Web Executions)
    • Failures by Browser
    • Failures by Operating System
  • Web Dashboards display the suite execution results as per the time zone of the QAS server.

Automation Test Progress Report

This section of the dashboard provides the cumulative number of test cases executed and their split per execution status. You can view the following information for the project and dates selected in the filter criteria.

  • Tests Executed: Displays the total number of test cases that have been executed.
  • Total Pass: Displays the total number of passed test cases.
  • Total Fail: Displays the total number of failed test cases.
  • Total Defects: Displays the total number of test cases that have defects.
  • Tests Not Executed: Displays the total number of test cases that were marked for execution but were not executed.

You can click any number shown in the box to view the complete list of test cases with their respective statuses.
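As a rough illustration of how these KPI counts relate to per-execution statuses, here is a minimal sketch (assuming a hypothetical list of status records; the actual dashboard derives these figures from the datamart):

```python
from collections import Counter

# Hypothetical status records for the selected project and date range.
records = ["Pass", "Fail", "Defect", "Pass", "Not Executed", "Pass"]

counts = Counter(records)
tests_executed = sum(n for status, n in counts.items() if status != "Not Executed")

print("Tests Executed:", tests_executed)              # 5
print("Total Pass:", counts["Pass"])                  # 3
print("Total Fail:", counts["Fail"])                  # 1
print("Total Defects:", counts["Defect"])             # 1
print("Tests Not Executed:", counts["Not Executed"])  # 1
```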

30-Day Trend

The line in each box indicates the trend for these test cases over the 30-day period prior to the selected end date. Hovering over the markers on the line shows the value at that point.





Test Execution Results Trend

This chart displays the trend of test case execution results over time. In this stacked column chart, each bar represents a single date (shown on the X-axis) and the test execution results are stacked on each other. You can filter this chart for a specific date, project, or test suite(s) by selecting the corresponding options from the filters. The default view of the chart is at the day level; higher levels such as Week and Month can be selected from the Group by filter (see the grouping sketch below). The colors of these stacks represent Qualitia test case execution results as follows:

  • Green indicates passed test cases.
  • Red indicates test cases with defects.
  • Amber indicates failed test cases.
  • Grey indicates test cases that were Not Executed.
Info

If test cases are marked for execution but are not executed (for example, due to on-error flags), they are reported as Not Executed.
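As a rough sketch of the day/week/month grouping described above (assuming a hypothetical pandas DataFrame of execution records with a date and a status column; the dashboard itself reads from the datamart):

```python
import pandas as pd

# Hypothetical execution records for the selected filters.
df = pd.DataFrame({
    "date": pd.to_datetime(["2024-03-04", "2024-03-04", "2024-03-05", "2024-03-12"]),
    "status": ["Pass", "Fail", "Defect", "Pass"],
})

# Default view: one stacked column per day, one stack per status.
by_day = df.groupby([df["date"].dt.date, "status"]).size().unstack(fill_value=0)

# "Group by" filter at a higher level, e.g. week:
by_week = df.groupby([df["date"].dt.to_period("W"), "status"]).size().unstack(fill_value=0)

print(by_day)
print(by_week)
```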


Execution Trend on Web

This chart shows the total count of executions run on the Web, with a split by status for each platform.

You can filter the results based on the execution environment, that is, the browsers and operating systems used while executing these test cases. The X-axis shows the execution environment details (browser or operating system), whereas the Y-axis shows the number of test case executions.

Execution Trend on Mobile

This chart shows the total count of executions run on mobile devices, with a split by status for each platform.

You can filter the test cases based on the execution environment, that is, the mobile browsers and mobile operating systems used while executing these test cases. The X-axis shows the execution environment details (browser or operating system), whereas the Y-axis shows the number of test cases.

Test Execution Trend for Both (Mobile & Web Executions)

This stacked column chart displays execution results for tests that have both mobile and web components. The X-axis shows the desktop/mobile browsers used during execution or the desktop/mobile operating systems where the test cases were executed. The test case status is represented through stacks.

You can select whether to view the chart for Web, Mobile, or Both, and whether to see the chart by browsers or by operating systems; the selected dimension forms the X-axis of the chart. You can further filter to show only the specific browser(s) or operating system(s) that you want to see. The Y-axis displays the number of test cases.

You can filter this chart based on the projects, suites and dates.

Hovering over the column displays the total number of test cases and the number of test cases for each status. You can click the column to view the complete list of test cases and other related information.

Failures by Browser

The vertical stacked columns in this chart display the number of failed tests against the browsers on which the test cases were executed. For the purpose of this chart, test cases with both Defect and Failure occurrences are included when determining the number of failures. You can click any stack in the columns on the chart to view the detailed list of test cases and other related information. This chart enables you to quickly identify whether there are any browsers on which test cases have a higher incidence of defects or failures.

This chart considers test case executions for the web environment only. Also, if the execution browser cannot be identified, such executions fall under the Unidentified section.

Failures by Operating System

The vertical stacked columns in this chart display the number of failed tests against the operating systems on which the test cases were executed. This chart considers all test cases that failed or had defects. You can click any stack in the columns of the chart to view the detailed list of test cases and other related information. This chart enables you to quickly identify whether there are any operating systems on which test cases have a higher incidence of defects or failures.

This chart considers test case executions for the web environment only. Also, if the operating system cannot be identified, such executions fall under the Unidentified section.

Test Case Classification by Failure Rate

This chart categorizes test cases based on their failure rate across the selected period. The purpose is to identify the test cases that have a higher frequency of failures. The Y-axis of the chart shows the number of test cases and the X-axis shows the failure-rate buckets into which the test cases are categorized.

The failure rate of the test cases is calculated as follows:

Failure rate = [(Number of occurrences with Fail status) / (Number of occurrences with Pass, Fail, or Defect status)] * 100

Note: While calculating this failure rate, executions with the Not Executed status (status = -1) are not considered. Also, if a test case is executed multiple times, all instances are included in the calculation.
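A minimal sketch of this calculation, assuming a hypothetical list of per-execution statuses for one test case (with -1 standing in for Not Executed, per the note above):

```python
from collections import Counter

NOT_EXECUTED = -1  # excluded from the failure-rate calculation, per the note above

def failure_rate(statuses):
    """Failure rate (%) across all executions of a single test case."""
    counts = Counter(s for s in statuses if s != NOT_EXECUTED)
    considered = counts["Pass"] + counts["Fail"] + counts["Defect"]
    return counts["Fail"] / considered * 100 if considered else 0.0

# A test case executed five times: three passes, one fail, one not executed.
print(failure_rate(["Pass", "Pass", "Fail", "Pass", -1]))  # 25.0
```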

Hover over any bar in the chart to view the number of test cases with that failure rate.

Click any bar to view the complete list of test cases falling under the selected failure-rate bucket.

This page shows the latest status of a test case and its execution trend per iteration across all suites. You can also filter the test case list based on Suite Name. The iteration trend shows the status of each execution along with its iteration number.

The statuses of different test case executions are shown through the following colors:

  • Green indicates passed test cases.
  • Red indicates test cases with defects.
  • Amber indicates failed test cases.
  • Grey indicates test cases that were Not Executed.

Upon clicking an iteration, the drill-down displays the complete information of the corresponding test case iteration in the grid view.

Test Cases Created/Modified Report

This column chart displays the trend in the number of test cases created and modified in the selected project and date range. You can group the chart at the date, week, month, or year level; by default, data is grouped by date.

For the purpose of this chart, if a test case was created and updated on the same day, it appears only in the No. of test cases created column.

The X-axis shows the dates on which the test cases were created and modified whereas the Y-axis shows the number of test cases. 

You can click any column to view the complete list of test cases, along with their metadata, that were created or modified at that time.

Suite Stability Comparison

This column chart helps identify test suites that have a high failure rate across executions. You can filter this chart based on a project, dates, and test suite(s). The X-axis shows the names of the test suites, whereas the Y-axis shows the failure rate in percentage.

The failure percentage is calculated as: [(Number of occurrences of TCs with Fail Status) / (Number of occurrences of TCs with Pass, Fail or Defect status)] * 100
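For example, a suite whose test cases recorded 5 Fail occurrences out of 40 total Pass, Fail, or Defect occurrences would show a failure percentage of (5 / 40) * 100 = 12.5%.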

You can hover over a column to view the name of the test suite and the failure percentage in a tooltip, and you can click the column to view detailed information about the suite in the drill view.

To view more charts specifically about the suite executions, click the icon; you are navigated to a separate page.


Suite Stability Trend

This line graph plots the failure percentage across suite executions and shows the trend in this metric over time. The trend line lets you observe whether the number of failures has increased or decreased over time and, thereby, whether the suite has stabilized. The Y-axis plots the failure percentage and the X-axis displays suite execution dates.

The failure percentage is calculated as: [(Number of occurrences of TCs with Fail status) / (Number of occurrences of TCs with Pass, Fail, or Defect status)] * 100

You can filter this chart based on the date range and test suite. The chart is sorted in ascending order of dates. You can hover over a point to view the date and the exact failure percentage.

Clicking the line navigates you to the test case details of the execution for that suite.


Suite Execution Switches Between Two Iterations

This horizontal bar chart helps to analyze the behavior of test cases that were part of a test suite across two successive iterations. For comparison, you can select two iterations of the Test Suite using the drop-down options. By default, the latest 2 suite iterations are selected for the comparison. This chart classifies the behavior of the test cases into the following four categories:

  • Pass to Fail: This category represents test cases that passed in the first selected iteration and failed (or had a defect) in the second selected iteration.
  • Fail to Fail: This category represents test cases that failed in both iterations.
  • Fail to Pass: This category represents test cases that failed in the first iteration and passed in the second iteration.
  • Pass to Pass: This category represents test cases that passed consistently in both iterations.

Analyzing this chart, you can identify test cases that need to be looked at on priority.

In the context of this chart, a test case that has either failed or had a defect will be considered as Failed.
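A minimal sketch of this classification, assuming hypothetical dictionaries that map test case names to their status in each selected iteration (any status other than Pass is treated as Fail, per the note above):

```python
from collections import defaultdict

# Hypothetical per-iteration results; Fail and Defect both count as failures.
iteration_1 = {"TC_Login": "Pass", "TC_Search": "Fail", "TC_Checkout": "Defect"}
iteration_2 = {"TC_Login": "Fail", "TC_Search": "Pass", "TC_Checkout": "Defect"}

def outcome(status):
    return "Pass" if status == "Pass" else "Fail"

categories = defaultdict(list)
for tc in iteration_1.keys() & iteration_2.keys():  # tests present in both iterations
    key = f"{outcome(iteration_1[tc])} to {outcome(iteration_2[tc])}"
    categories[key].append(tc)

for category, tests in sorted(categories.items()):
    print(category, tests)
# Fail to Fail ['TC_Checkout']
# Fail to Pass ['TC_Search']
# Pass to Fail ['TC_Login']
```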

Hovering over a bar shows the category and the number of test cases in that category.


Suite Execution Time Statistics Across Last 5 Iterations

This is a dual Y-axis combo column chart. The left-side Y-axis shows the execution time in seconds and the right-side Y-axis shows the number of test cases that were part of the suite execution. This chart displays data for a test suite across the last five iterations. You can select the desired suite from the top right corner of the page. If a suite has fewer than five iterations for the selected date range, the chart displays as many iterations as are available for that suite during that period. The columns show the execution time metric and the line represents the number of test cases executed as part of the suite execution. The X-axis displays iteration numbers.

Hovering over a column shows the name of the metric, its value, and the iteration number of the suite in a tooltip.


Test Cases with Defects and Failures

You can filter this chart based on the defects and failures.

Defects

This chart enables you to focus on the test cases in which defects occur during executions. You can focus either on the test cases with high occurrences of defects or on those with low occurrences of defects.

You can select the project and date range for which you want to view the list of test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option from the top of the chart. For example, if you select the Top 20 option, the chart is filtered to show the 20 test cases with the highest number of defects.

Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases falling under the selected filter criteria and the Y-axis displays the number of defects found in those test cases.

You can click a bar to view detailed information about that test case.
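A minimal sketch of the top/bottom-N selection behind this filter, assuming a hypothetical mapping of test case names to defect counts:

```python
# Hypothetical defect counts per test case.
defect_counts = {"TC_Login": 7, "TC_Search": 2, "TC_Checkout": 5, "TC_Profile": 1}

def top_n(counts, n, bottom=False):
    """Return the n test cases with the highest (or lowest) counts."""
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=not bottom)[:n]

print(top_n(defect_counts, 2))               # [('TC_Login', 7), ('TC_Checkout', 5)]
print(top_n(defect_counts, 2, bottom=True))  # [('TC_Profile', 1), ('TC_Search', 2)]
```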

Failures

This chart enables you to focus on the test cases that fail during executions. You can focus either on the test cases with high occurrences of failures or on those with low occurrences of failures.

You can select the project and date range for which you want to view the test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option from the top of the chart. For example, if you select the Top 20 option, the chart shows the 20 test cases with the highest number of failures.

Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases falling under the selected filter criteria, whereas the Y-axis displays the number of occurrences of the test case with Failed status.

You can click a bar to view detailed information about the test case.



Defects Logged Report

This report shows a grid view of the defects that Qualitia has logged in the defect management tool after test case execution. This includes defects logged automatically after test case executions as well as defects logged from the Reports section of Qualitia Automation Studio.

This report displays details of the actual defects logged in the defect management system; hence, the number of defects shown in this report will not be the same as the number of test case occurrences with Defect status shown in the KPIs and other charts of the dashboard. The report is displayed based on the Date, Projects, and Test Suites selected on the main dashboard page.


Trends in Defects Logged by Automation Tool

This simple line chart plots the total number of defects that Qualitia Automation Studio has logged in the client’s defect management tool at the end of executions, over the selected period.

The X-axis shows the dates on which test case executions occurred, whereas the Y-axis shows the number of defects logged during those executions.

You can filter this chart based on the Date, Project, and Suites. The default view of the chart is at the day level; higher levels such as Week and Month can be selected from the Group by filter.

Hover over a data point to view the exact number of defects, and click the data point to view the complete list of defects and other relevant details about them.



Drill View - Grid showing test case level details

The Test Case Execution Drill View covers the important aspects associated with the execution of test cases in QAS. This view displays the metadata of these test cases: the execution date, the start and end time, the host where the test case was executed, the platform it was run against, and so forth. All this information is presented in the form of a grid.

This view displays the attributes in the following order: Date, Project Name, Test Suite Name, Test Suite Type, Test Scenario Name, Test Script Name, User (who executed the test script), Status, Execution Platform, Browser Name, Browser Version, OS Name, OS Version, Host Name, Host Type, Mobile Browser Name, Mobile Browser Version, Mobile OS Name, Mobile OS Version, Mobile Host Name, Test Case Name, Test Case Author, Test Case Priority, Execution Start Time, Execution End Time, Run Duration.

These attributes are classified into meaningful groups as mentioned below:

  • All environment-related columns are grouped together, with headers in green.
  • All other attribute headers are displayed in blue.

You can invoke drills from other views of the dashboard to view contextual information. In such cases, only the data matching the drill-down conditions is displayed.

For example, if you click the number in the ‘Tests Executed’ box of the Automation Test Progress Report, the drill view displays only the list of test cases executed for the selected project and date range.



Datamart Refresh Status

The data shown in the dashboards is fetched from a datamart, which consolidates test execution data from the associated Qualitia Project and Results databases. This datamart database is currently updated once a day using an Extract, Transform, and Load (ETL) process.

This chart displays the history of datamart loading events. It helps you identify whether any failures occurred while loading the datamart and confirm that the data on the dashboard is correct and up to date.

The chart displays this information in a simple grid with the start and end time of the datamart process and its status. In case of failure, the chart shows the failure point and the possible reason, which helps identify the area that needs to be fixed for the datamart to load successfully.

This chart shows 5 records per page, and you can use the pagination options to view further records.





Platform Utilization Heatmap

This chart illustrates the overall utilization of the automation platform during the current year in a calendar view with a heatmap. You can easily view the number of test cases that have been run across the year on a daily basis. This number combines test cases executed using Automation Studio and test cases executed using CI tools (through offline packages). Offline executions run in connected mode with the QAS Server are included in this chart because their results are synced with QAS. Completely offline executions are included only if their results have been manually imported into QAS using the offline report utility.

This chart illustrates the usage of the automation platform with different colors. The heatmap uses a proportional color scheme: red indicates the fewest test case executions, whereas green indicates the most. If no test cases were executed on a given day, that field is left blank (white) for easy identification.
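As a rough illustration of such a proportional scale (not the dashboard's actual implementation), the color for a day can be interpolated between red and green based on its execution count:

```python
# Hypothetical daily execution counts; 0 is rendered blank (white).
daily_counts = {"2024-01-01": 3, "2024-01-02": 40, "2024-01-03": 0}

def heat_color(count, max_count):
    """Interpolate from red (low) to green (high) proportionally to count."""
    if count == 0:
        return "#FFFFFF"  # blank/white for days with no executions
    t = count / max_count
    red, green = round(255 * (1 - t)), round(255 * t)
    return f"#{red:02X}{green:02X}00"

peak = max(daily_counts.values())
for day, count in daily_counts.items():
    print(day, heat_color(count, peak))
```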

You can filter this chart to view either the total number of test cases executed or the total number of hours for which the automation platform was utilized for test case execution. When the view by test efforts is selected, you can view the details by hovering over the desired field.

You can also view the complete list of test cases in the grid view by clicking the field.