Test Execution Results Trend

This chart shows how test case results have changed over time. It is a stacked column chart: each bar represents a date (shown on the x-axis), and the test results for that date are stacked on top of each other. You can change the filters to see data for a specific date, project, or test suite(s). The default view groups results by day, but you can switch to week, month, etc. using the Group by filter. The colors of the stacks represent the test case results:

  1. Green stands for passed test cases.

  2. Red stands for test cases with defects.

  3. Amber stands for failed test cases.

  4. Grey stands for test cases that haven't been run.

If any test cases were planned to run but were not executed, because of error flags or other reasons, they are shown as Not Executed.
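
Conceptually, the chart is a count of test results per date and status. The sketch below illustrates that grouping with hypothetical result records (the field names and data source are assumptions, not the product's actual data model):

```python
from collections import Counter

# Hypothetical result records; in practice these come from the reporting backend.
results = [
    {"date": "2023-05-01", "status": "Passed"},
    {"date": "2023-05-01", "status": "Failed"},
    {"date": "2023-05-01", "status": "Defected"},
    {"date": "2023-05-02", "status": "Passed"},
    {"date": "2023-05-02", "status": "Not Executed"},
]

# Each date becomes one column; the per-status counts within that date
# become the coloured segments stacked in the column.
stacks = Counter((r["date"], r["status"]) for r in results)

for (date, status), count in sorted(stacks.items()):
    print(date, status, count)
```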

Execution Trend on Web

This chart shows the total number of test runs on the web, split by status for each platform.

You can filter the results by the browser and operating system used during these test runs. The X-axis shows the browser or operating system, while the Y-axis shows the number of test case runs.


Execution Trend on Mobile

This chart shows the total number of test runs on mobile devices, split by status for each platform.

You can filter the test runs by the browser and operating system used on the mobile devices. The X-axis shows the browser or operating system, while the Y-axis shows the number of test case runs.


Test Execution Trend for Both (Mobile & Web Executions)

This stacked column chart shows results for tests that have both Mobile & Web parts. The X-axis shows the desktop/mobile browsers or desktop/mobile operating systems used during the tests. The test case status is shown in the stacks.

You can choose whether to view the chart for Web/Mobile/Both and whether to see the chart for Browser or Operating System. Whatever you select forms the x-axis of the chart. You can also narrow it down to specific browsers or operating systems. The Y-axis shows the number of test cases.

Hover your mouse over a column to see the total number of test cases and how many there are for each status. Click the column to see a full list of test cases and more information.


Failures by Browser

This chart uses vertical stacked columns to show how many tests failed on different browsers. Test cases with defects are counted as failures along with failed test cases. Click any stack in a column to see a detailed list of test cases and more information. This chart helps you quickly spot browsers where test cases fail more often.

This chart only shows data for web environment test cases. If the browser used during a test run isn't identified, it's listed under 'unidentified'.
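
As a rough sketch of the counting rule described above, assuming hypothetical run records with a browser and a status field (the names are illustrative, not the product's actual schema):

```python
from collections import Counter

# Hypothetical web test runs; field names are illustrative only.
runs = [
    {"browser": "Chrome", "status": "Failed"},
    {"browser": "Chrome", "status": "Passed"},
    {"browser": "Firefox", "status": "Defected"},
    {"browser": None, "status": "Failed"},  # browser not captured for this run
]

# Both 'Failed' and 'Defected' runs count towards the failure total;
# runs with no recorded browser are grouped under 'unidentified'.
failures_by_browser = Counter(
    run["browser"] or "unidentified"
    for run in runs
    if run["status"] in ("Failed", "Defected")
)

print(failures_by_browser)  # Counter({'Chrome': 1, 'Firefox': 1, 'unidentified': 1})
```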


Failures by Operating System


This chart uses vertical stacked columns to show how many tests failed on different operating systems. All test cases that failed or have defects are included. Click any stack in a column to see a detailed list of test cases and more information. This chart helps you quickly spot operating systems where test cases fail more often.

This chart only shows data for web environment test cases. If the operating system used during a test run isn't identified, it's listed under 'unidentified'.


Sorting Test Cases by Failure Rate


This chart categorizes test cases based on how often they fail over a selected period. Its purpose is to identify and list the test cases that fail most frequently. The Y-axis shows the number of test cases, while the X-axis shows the failure rate ranges used to categorize them.


The failure rate of test cases is calculated as follows:

Failure Rate = (Total occurrences of 'Failed' status / Total occurrences of 'Passed', 'Failed', and 'Defected' statuses) * 100


Note: In this calculation, test cases with a status of 'Not Executed' are not considered. Additionally, if a test case has been run multiple times, each instance is included in the failure rate calculation.
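
A minimal sketch of this calculation, assuming the status names used by the chart legends ('Passed', 'Failed', 'Defected', 'Not Executed'); the helper below is illustrative only, not part of the product:

```python
def failure_rate(statuses):
    """Failure rate (%) across a test case's runs.

    'Not Executed' runs are ignored; every other run is counted,
    including repeated runs of the same test case.
    """
    executed = [s for s in statuses if s in ("Passed", "Failed", "Defected")]
    if not executed:
        return 0.0
    failed = sum(1 for s in executed if s == "Failed")
    return failed / len(executed) * 100

# A test case run four times: two failures out of three executed runs.
print(failure_rate(["Passed", "Failed", "Not Executed", "Failed"]))  # 66.66...
```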


Hover over any bar in the chart to see the number of test cases with that failure rate.


By clicking any bar, you can view a complete list of the test cases that fall within the selected failure rate range.


This page shows the most recent status of each test case and the trend of its execution per iteration across all suites. You can filter the list of test cases by Suite Name. The 'Iteration Trend' shows the status of each of the test case's runs along with the iteration number.


The statuses of different test case executions are represented with the following colors:

  1. Green stands for passed test cases.

  2. Red stands for test cases with defects.

  3. Amber stands for failed test cases.

  4. Grey stands for test cases that have not been run.


Clicking an iteration opens a drill-down that displays complete information about the corresponding test case iteration in a grid view.
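
As a rough illustration of how the latest status and the iteration trend could be assembled from raw run records (field names are assumptions, not the actual data model):

```python
from collections import defaultdict

# Hypothetical run records: one entry per execution of a test case.
runs = [
    {"test_case": "TC-1", "iteration": 1, "status": "Failed"},
    {"test_case": "TC-1", "iteration": 2, "status": "Passed"},
    {"test_case": "TC-2", "iteration": 1, "status": "Defected"},
]

# Group runs per test case, ordered by iteration number.
trend = defaultdict(list)
for run in sorted(runs, key=lambda r: r["iteration"]):
    trend[run["test_case"]].append((run["iteration"], run["status"]))

for test_case, history in trend.items():
    latest_status = history[-1][1]  # status of the most recent iteration
    print(test_case, latest_status, history)
```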
