
Automation Test Progress Report

This section of the dashboard provides the cumulative number of test cases executed and their breakdown by execution status. You can view the following information for the project and dates selected in the filter criteria:

  • Tests Executed: Displays the total number of test cases that have been executed.
  • Total Pass: Displays the total number of passed test cases.
  • Total Fail: Displays the total number of failed test cases.
  • Total Defects: Displays the total number of test cases that have defects.
  • Tests Not Executed: Displays the total number of test cases that were marked for execution but were not executed for any reason.

You can click any number shown in a box to view the complete list of test cases with the respective status.

30-Day Trend

The line in each box indicates the trend of these test cases over the 30-day period prior to the selected end date. Hovering over a marker on the line displays the value at that point.


Test Execution Results Trend

This chart displays the trend of test case execution results over time. In this stacked column chart, each bar represents a single date (shown on the X-axis) and the test execution results are stacked on top of each other. You can filter this chart for a specific date, project, or test suite(s) by selecting the corresponding options from the filters. The default view of the chart is at the day level; higher levels such as Week and Month can be selected from the Group by filter. The colors of the stacks represent Qualitia test case execution results as follows:

  • Green: passed test cases
  • Red: test cases with defects
  • Amber: failed test cases
  • Grey: test cases that were Not Executed

If test cases are marked for execution but are not executed, for example due to on-error flags, they are reported as Not Executed.


Test Cases with Defects and Failures

You can switch this chart between defects and failures.

Defects

This chart helps you focus on the test cases in which defects occur during executions. You can focus either on the test cases with the highest occurrence of defects or on those with the lowest.

You can select the project and date range for which you want to view the list of test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option at the top of the chart. For example, if you select the Top 20 option, the chart is filtered to show the 20 test cases with the highest number of defects.

Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases that match the selected filter criteria, and the Y-axis displays the number of defects found in each test case.

You can click a bar to view detailed information about that test case.

Failures

This chart helps you focus on the test cases that fail during executions. You can focus either on the test cases with the highest occurrence of failures or on those with the lowest.

You can select the project and date range for which you want to view the test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option at the top of the chart. For example, if you select the Top 20 option, the chart shows the 20 test cases with the highest number of failures.

Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases that match the selected filter criteria, and the Y-axis displays the number of occurrences with Failed status for each test case.

You can click a bar to view detailed information about the test case.


Failures by Browser

The horizontal bars in this chart display the number of failed tests per browser on which the test cases were executed. You can click any bar in the chart to view the detailed list of test cases and other related information. For the purpose of this chart, occurrences of both Defects and Failures are included when determining the number of failures. This chart helps you quickly identify whether any browsers show a higher incidence of defects or failures.

This chart considers test case executions in the web environment only. If the execution browser cannot be identified, those executions are grouped under the Unidentified section.


Failures by Operating System

The horizontal bars in this chart display the number of failed tests per operating system on which the test cases were executed. You can click any bar in the chart to view the detailed list of test cases and other related information. This chart includes all test cases that failed or have defects. It helps you quickly identify whether any operating systems show a higher incidence of defects or failures.

This chart considers test case executions in the web environment only. If the operating system cannot be identified, those executions are grouped under the Unidentified section.


Execution Trend on Web

This chart shows the total count of executions run on the Web, with a split by status for each platform.

You can filter the results based on the execution environment, that is, the browsers and operating systems used to execute these test cases. The X-axis shows the execution environment details, such as the browser or operating system, and the Y-axis shows the number of test case executions.


Execution Trend on Mobile

This chart shows the total count of executions run on mobile devices, with a split by status for each platform.

You can filter the results based on the execution environment, that is, the mobile browsers and mobile operating systems used to execute these test cases. The X-axis shows the execution environment details, such as the browser or operating system, and the Y-axis shows the number of test case executions.


Suite Stability Trend

This line graph plots the failure percentage across suite executions and shows the trend in this metric over time. The trend line lets you observe whether the number of failures has increased or decreased over time and thereby determine whether the suite has stabilized. The Y-axis plots the failure percentage and the X-axis displays the suite execution dates.

The failure percentage is calculated as: [(total number of occurrences of test cases with Fail status) / (total number of occurrences of test cases with Pass, Fail, or Defect status)] × 100
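The calculation above can be sketched in a few lines of code. This is an illustrative sketch only, not a QAS API; the status names mirror the dashboard's terminology.

```python
def failure_percentage(statuses):
    """Failure percentage as used by the Suite Stability Trend.

    Only occurrences with Pass, Fail, or Defect status count toward the
    denominator; Not Executed occurrences are excluded entirely.
    """
    executed = [s for s in statuses if s in ("Pass", "Fail", "Defect")]
    if not executed:
        return 0.0
    fails = sum(1 for s in executed if s == "Fail")
    return fails / len(executed) * 100

# Example: a suite execution with 8 passes, 2 fails, 1 defect, 1 not executed.
statuses = ["Pass"] * 8 + ["Fail"] * 2 + ["Defect"] + ["Not Executed"]
print(round(failure_percentage(statuses), 2))  # → 18.18
```

Note that a Defect occurrence increases the denominator but not the numerator, so it lowers the failure percentage relative to a Fail.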

You can filter this chart based on the date range and test suite. The chart is sorted in ascending order of date. You can hover over a point to view the date and the exact failure percentage.

Clicking the line navigates you to the test case details of the execution for that suite.


Suite Stability Comparison

This column chart helps identify test suites that have a high failure rate across executions. You can filter this chart by project, dates, and test suite(s). The X-axis shows the names of the test suites and the Y-axis shows the failure rate as a percentage.

The failure percentage is calculated as: [(number of occurrences of test cases with Fail status) / (number of occurrences of test cases with Pass, Fail, or Defect status)] × 100
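To compare suites, the same formula is applied per suite. The sketch below is a hedged illustration of that grouping, assuming a flat list of (suite name, status) execution records; the record shape is an assumption, as QAS holds this data internally.

```python
from collections import defaultdict

def suite_failure_rates(records):
    """Per-suite failure percentage from (suite_name, status) records.

    Only Pass, Fail, and Defect occurrences count toward a suite's total;
    only Fail occurrences count toward its failures.
    """
    fails = defaultdict(int)
    totals = defaultdict(int)
    for suite, status in records:
        if status in ("Pass", "Fail", "Defect"):
            totals[suite] += 1
            if status == "Fail":
                fails[suite] += 1
    return {suite: fails[suite] / totals[suite] * 100 for suite in totals}

# Hypothetical suite names, for illustration only.
records = [
    ("Smoke", "Pass"), ("Smoke", "Fail"),
    ("Regression", "Pass"), ("Regression", "Pass"),
    ("Regression", "Defect"), ("Regression", "Fail"),
]
print(suite_failure_rates(records))  # → {'Smoke': 50.0, 'Regression': 25.0}
```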

You can hover over a column to view the name of the test suite and the failure percentage in the tooltip, and you can click a column to view detailed information about the suite in the drill view.

To view more charts specifically about the suite executions, click the icon; you will be navigated to another page.


Suite Execution Switches Between Two Iterations

This horizontal bar chart helps you analyze the behavior of test cases that were part of a test suite across two successive iterations. For comparison, you can select two iterations of the test suite using the drop-down options. By default, the two latest suite iterations are selected. This chart classifies the behavior of the test cases into the following four categories:

  • Pass to Fail: Test cases that passed in the first selected iteration and failed (or had a defect) in the second.
  • Fail to Fail: Test cases that failed in both iterations.
  • Fail to Pass: Test cases that failed in the first iteration and passed in the second.
  • Pass to Pass: Test cases that passed consistently in both iterations.

By analyzing this chart, you can get a list of the test cases that need to be looked at on priority.

In the context of this chart, a test case that either failed or had a defect is considered Failed.

Hovering over a bar shows the category and the number of test cases in that category.
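The four-way classification above can be sketched as follows. This is an illustrative sketch, not QAS code; a Defect outcome is treated as Failed, matching the chart's rule, and the input dicts (test case name → status per iteration) and test case names are hypothetical.

```python
def classify_switches(iter1, iter2):
    """Bucket test cases by their Pass/Fail behavior across two iterations.

    A status of Fail or Defect counts as Failed; only test cases present
    in both iterations are classified.
    """
    def failed(status):
        return status in ("Fail", "Defect")

    buckets = {"Pass to Fail": [], "Fail to Fail": [],
               "Fail to Pass": [], "Pass to Pass": []}
    for tc in iter1.keys() & iter2.keys():  # test cases run in both iterations
        first = "Fail" if failed(iter1[tc]) else "Pass"
        second = "Fail" if failed(iter2[tc]) else "Pass"
        buckets[f"{first} to {second}"].append(tc)
    return buckets

iter1 = {"TC_Login": "Pass", "TC_Search": "Fail", "TC_Checkout": "Pass"}
iter2 = {"TC_Login": "Defect", "TC_Search": "Pass", "TC_Checkout": "Pass"}
print(classify_switches(iter1, iter2))
```

The Pass to Fail bucket is typically the one to triage first, since those test cases regressed between the two iterations.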

Note: In the beta release, the drill-down option is not available.


Suite Execution Time Statistics Across Last 5 Iterations

This is a dual Y-axis combo column chart. The left-side Y-axis shows the execution time in seconds and the right-side Y-axis shows the number of test cases that were part of the suite execution. This chart displays data for a test suite across the last five iterations. You can select the desired suite from the top-right corner of the page. If a suite has fewer than five iterations in the selected date range, the chart displays as many iterations as are available for that suite during that period. The columns show the execution time metric and the line represents the number of test cases executed as part of the suite execution. The X-axis displays the iteration numbers.

Hovering over a column shows the name of the metric, its value, and the iteration number of the suite in a tooltip.


Drill View - Grid showing test case level details

The Test Case Execution Drill View shows important aspects associated with the execution of test cases in QAS. This view displays metadata about these test case executions: the execution date, the start and end times, the host where the test case was executed, the platform it was run against, and so forth. All this information is presented in the form of a grid.

This view displays the attributes in the following order: Date, Project Name, Test Suite Name, Test Suite Type, Test Scenario Name, Test Script Name, User (who executed the test script), Status, Execution Platform, Browser Name, Browser Version, OS Name, OS Version, Host Name, Host Type, Mobile Browser Name, Mobile Browser Version, Mobile OS Name, Mobile OS Version, Mobile Host Name, Test Case Name, Test Case Author, Test Case Priority, Execution Start Time, Execution End Time, Run Duration.
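Among the attributes above, Run Duration is simply the difference between Execution End Time and Execution Start Time. A minimal sketch, assuming a plain date-time string format (the format and timestamps are hypothetical):

```python
from datetime import datetime

# Assumed timestamp format, for illustration only.
fmt = "%Y-%m-%d %H:%M:%S"
start = datetime.strptime("2021-03-01 10:15:00", fmt)
end = datetime.strptime("2021-03-01 10:18:30", fmt)

# Run Duration = Execution End Time - Execution Start Time
print(end - start)  # → 0:03:30
```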

These attributes are classified into meaningful groups as mentioned below:

  • All environment-related columns are grouped together, with headers in green.
  • All other attribute headers are displayed in blue.

You can invoke drills from other views of the dashboard to view contextual information. In that case, only the filtered data matching the drilled-down conditions is displayed.

For example, if you click the number in the ‘Tests Executed’ box of the Automation Test Progress Report, the drill view displays only the list of test cases executed for the selected project and date range.
