Test Execution Results Trend
This chart displays the trend of test case execution results over time. In this stacked column chart, each bar represents a single date (shown on the x-axis) and the test execution results are stacked on top of each other. You can filter this chart for a specific Date, Project, or Test Suite(s) by selecting the corresponding options from the filters. The default view of the chart is at the day level; higher levels such as Week, Month, and so forth can be selected from the Group by filter. The colors of the stacks represent Qualitia test case execution results as follows:
- Green refers to passed test cases,
- Red refers to test cases with defects,
- Amber refers to failed test cases,
- Grey refers to test cases that were Not Executed.
Info: If test cases are marked for execution but are not executed due to on-error flags or similar reasons, they are reported as Not Executed.
Test Cases with Defects and Failures
You can filter this chart based on the defects and failures.
Defects
This chart lets you focus on the test cases in which defects occur during execution. You can focus either on the test cases with a high number of defect occurrences or on those with a low number.
You can select the project and date range for which you want to view the list of test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option at the top of the chart. For example, if you select the Top 20 option, the chart is filtered to show the 20 test cases with the highest number of defects.
Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases that fall under the selected filter criteria, and the Y-axis displays the number of defects found in each test case.
You can click a bar to view detailed information about that test case.
Failures
This chart lets you focus on the test cases that are failing during execution. You can focus either on the test cases with a high number of failure occurrences or on those with a low number.
You can select the project and date range for which you want to view the test cases. You can choose between the top 10/20 or bottom 10/20 test cases by selecting the appropriate option at the top of the chart. For example, if you select the Top 20 option, the chart shows the 20 test cases with the highest number of failures.
Each bar in the chart corresponds to a single test case. The X-axis displays the names of the test cases that fall under the selected filter criteria, whereas the Y-axis displays the number of occurrences with Failed status for each test case.
You can click a bar to view detailed information about the test case.
Failures by Browser
The horizontal bars in this chart display the number of failed tests against the browsers on which the test cases were executed. You can click any bar to view the detailed list of test cases and other related information. For the purpose of this chart, test cases with occurrences of both Defects and Failures are included when determining the number of failures. This chart lets you quickly identify whether there are any browsers on which test cases have a higher incidence of defects or failures.
This chart considers test case executions for the web environment only. Also, if the execution browser cannot be identified, those executions fall under the Unidentified section.
Failures by Operating System
The horizontal bars in this chart display the number of failed tests against the operating systems on which the test cases were executed. You can click any bar to view the detailed list of test cases and other related information. This chart considers all test cases that failed or had defects. It lets you quickly identify whether there are any operating systems on which test cases have a higher incidence of defects or failures.
This chart considers test case executions for the web environment only. Also, if the operating system cannot be identified, those executions fall under the Unidentified section.
Execution Trend on Web
This chart shows the total count of executions run on the Web, with a split by status for each platform.
You can filter the results based on the execution environment, which includes the browsers and operating systems used while executing these test cases. The X-axis shows the execution environment details, such as browser or operating system, whereas the Y-axis shows the number of test case executions.
Execution Trend on Mobile
This chart shows the total count of executions run on mobile devices, with a split by status for each platform.
You can filter the results based on the execution environment, which includes the mobile browsers and mobile operating systems used while executing these test cases. The X-axis shows the execution environment details, such as browser or operating system, whereas the Y-axis shows the number of test cases.
Suite Stability Trend
This line graph plots the percentage of failures across suite executions and shows the trend in this metric over time. The trend line lets you observe whether the number of failures has increased or decreased over time and thereby identify whether the suite has stabilized. The Y-axis plots the percentage of failures and the X-axis displays the suite execution dates.
The failure percentage is calculated as: [(Total number of occurrences of TCs with Fail status) / (Total number of occurrences of TCs with Pass, Fail, or Defect status)] * 100
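As a minimal sketch, the failure percentage above could be computed as follows. The status names ("Pass", "Fail", "Defect", "Not Executed") are assumptions based on the statuses described in this document, not an actual Qualitia API.

```python
def failure_percentage(statuses):
    """Compute the suite failure percentage from a list of per-occurrence
    test case statuses. Per the formula, only Pass, Fail, and Defect
    occurrences form the denominator; statuses such as "Not Executed"
    are excluded."""
    executed = [s for s in statuses if s in ("Pass", "Fail", "Defect")]
    if not executed:
        return 0.0  # no executed occurrences -> treat as 0% failure
    failed = sum(1 for s in executed if s == "Fail")
    return failed / len(executed) * 100

# Example: 2 Fail occurrences out of 8 executed occurrences -> 25.0
print(failure_percentage(["Pass", "Fail", "Pass", "Defect",
                          "Pass", "Fail", "Pass", "Pass"]))
```

Note that, per the formula, occurrences with Defect status count toward the denominator but not the numerator.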
You can filter this chart based on the date range and test suite. The chart is sorted in ascending order of date. You can hover over a point to view the date and the exact failure percentage.
Clicking on the line navigates you to the test case details of that suite execution.
Suite Stability Comparison
This column chart helps identify test suites that have a high failure rate across executions. You can filter this chart based on Project, Date, and Test Suite(s). The X-axis shows the names of the test suites, whereas the Y-axis shows the failure rate as a percentage.
The failure percentage is calculated as: [(Number of occurrences of TCs with Fail Status) / (Number of occurrences of TCs with Pass, Fail or Defect status)] * 100
You can hover over a column to view the name of the test suite and the failure percentage in the tooltip, and you can click the column to view detailed information about the suite in the drill view.
To view more charts specific to suite executions, click the icon to navigate to another page.
Suite Execution Switches Between Two Iterations
This horizontal bar chart helps you analyze the behavior of test cases that were part of a test suite across two successive iterations. For comparison, you can select two iterations of the Test Suite using the drop-down options. By default, the two most recent suite iterations are selected for comparison. This chart classifies the behavior of the test cases into the following four categories:
- Pass to Fail: This category represents test cases that passed in the 1st selected iteration and Failed (or had a Defect) in the 2nd selected iteration.
- Fail to Fail: This category represents test cases that failed in both the iterations.
- Fail to Pass: This category represents test cases that failed in the 1st iteration and passed in the 2nd iteration.
- Pass to Pass: This category represents test cases that passed consistently in both the iterations.

By analyzing this chart, you can get a list of test cases that need to be looked at on priority.
In the context of this chart, a test case that has either failed or had a defect will be considered as Failed.
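The classification rule above can be sketched as follows. The dictionary shape (test case name mapped to status) and the status names are assumptions for illustration; per the chart's rule, both Fail and Defect count as Failed.

```python
def classify_switches(iter1, iter2):
    """Classify test cases into the four switch categories across two
    successive iterations. iter1 and iter2 map test case name -> status
    (hypothetical shape). "Fail" and "Defect" both count as Failed."""
    def is_failed(status):
        return status in ("Fail", "Defect")

    counts = {"Pass to Fail": 0, "Fail to Fail": 0,
              "Fail to Pass": 0, "Pass to Pass": 0}
    # Only test cases present in both iterations are comparable (assumption).
    for tc in iter1.keys() & iter2.keys():
        first, second = is_failed(iter1[tc]), is_failed(iter2[tc])
        if not first and second:
            counts["Pass to Fail"] += 1
        elif first and second:
            counts["Fail to Fail"] += 1
        elif first and not second:
            counts["Fail to Pass"] += 1
        else:
            counts["Pass to Pass"] += 1
    return counts
```

For example, a test case with Defect status in the first iteration and Pass in the second lands in the Fail to Pass category.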
Hovering over a bar shows the category and the number of test cases in that category.
Note: In the beta release, the drill-down option is not available.
Suite Execution Time Statistics Across Last 5 Iterations
This is a dual Y-axis combo column chart. The left Y-axis shows the execution time in seconds and the right Y-axis shows the number of test cases that were part of the suite execution. This chart displays data for a test suite across the last 5 iterations. You can select the desired suite from the top right corner of the page. If a suite has fewer than 5 iterations in the selected date range, the chart displays as many iterations as are available for that suite during that period. The columns show the execution time metric and the line represents the number of test cases executed as part of the suite execution. The X-axis displays iteration numbers.
Hovering over a column shows the name of the metric, its value, and the iteration number of the suite in a tooltip.
Drill View - Grid showing test case level details
The Test Case Execution Drill View provides important details associated with the execution of test cases in QAS. This view displays metadata about these test case executions: the execution date, the start and end times, the host where the test case was executed, the platform it ran against, and so forth. All this information is presented in the form of a grid.
This view displays the attributes in the following order: Date, Project Name, Test Suite Name, Test Suite Type, Test Scenario Name, Test Script Name, User (who executed the test script), Status, Execution Platform, Browser Name, Browser Version, OS Name, OS Version, Host Name, Host Type, Mobile Browser Name, Mobile Browser Version, Mobile OS Name, Mobile OS Version, Mobile Host Name, Test Case Name, Test Case Author, Test Case Priority, Execution Start Time, Execution End Time, Run Duration.
These attributes are classified into meaningful groups as mentioned below:
- All environment-related columns are grouped together, with headers in green.
- All other attribute headers are displayed in blue.
You can invoke drills from other views of the dashboard to view contextual information. In such cases, only the filtered data matching the drill-down conditions is displayed.
For example, if you click the number in the ‘Tests Executed’ box of the automation test progress report, the drill view displays only the list of test cases that were executed for the selected project and date range.
Test Cases Created/Modified Report
This column chart displays the trend in the number of test cases created and modified in the selected project and date range. You can group the chart by date, week, month, or year. By default, data is grouped by date.
For the purpose of this chart, if a test case was created and updated on the same day, it appears only in the ‘No. of test cases created’ column.
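The same-day counting rule can be sketched as follows. The record shape (a dict with "created" and "modified" date strings) is a hypothetical illustration, not the actual Qualitia data model.

```python
from collections import Counter

def created_modified_counts(test_cases):
    """Count test cases created and modified per day. Per the chart's
    rule, a test case created and updated on the same day counts only
    toward "created". Each record is a dict with "created" and
    "modified" date strings (hypothetical shape)."""
    created, modified = Counter(), Counter()
    for tc in test_cases:
        created[tc["created"]] += 1
        # Same-day create+update counts only as a creation.
        if tc["modified"] != tc["created"]:
            modified[tc["modified"]] += 1
    return created, modified
```

For example, a test case created and updated on 2024-01-01 increments only the created count for that day, while one modified on a later date increments the modified count for that later date.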
The X-axis shows the dates on which the test cases were created and modified whereas the Y-axis shows the number of test cases.
You can click any column to view the complete list of test cases, along with their metadata, that were created or modified at that time.
Platform Utilization Heatmap
This chart helps you analyze the utilization of the automation platform in a calendar view. You can see the number of test cases that have been executed across an entire year. This number includes executions from the Suite screen of Qualitia Automation Studio and executions triggered through CI/CD pipelines (offline package). You can choose to view either the number of tests executed or the test effort (in hours) on the chart.
The heatmap uses a proportional color scheme to depict usage: red for the lowest number of executions and green for the highest. If no tests were run on a particular day, the cell is left white. Test cases that were marked for execution but did not get executed for some reason are excluded from the count of tests executed in this chart.
You can use the Date, Project, and Suite filters on this chart. You can also click any cell in the heatmap to view the list of test cases executed for that month and day combination. For example, clicking the value in the Monday row under the March column shows the details of the tests executed on all Mondays in March.
Defects Logged Report
This report shows a grid view of defects that have been logged by Qualitia in the defect management tool after test case execution. It includes defects logged automatically after test case executions as well as defects logged from the Reports section of Qualitia Automation Studio.
This report displays details of the actual defects logged in the defect management system; hence, the number of defects shown in this report will not necessarily match the number of test case occurrences with Defect status shown in the KPIs and other charts of the dashboard. The report is filtered based on the Date, Projects, and Test Suites selected on the main dashboard page.
Test Execution Trend for Both (Mobile & Web Executions)
This stacked column chart displays execution results for tests that have both Mobile and Web components. The X-axis shows the desktop/mobile browsers used during execution or the desktop/mobile operating systems where the test cases were executed. The test case status is represented through the stacks.
You can select whether to view the chart for Web, Mobile, or Both, and whether to view it by browsers or operating systems. The selected dimension forms the X-axis of the chart. You can further filter to show only the specific Browser(s) or Operating System(s) you want to see. The Y-axis displays the number of test cases.
You can filter this chart based on the projects, suites and dates.
Hovering over a column displays the total number of test cases and the number of test cases for each status. You can click a column to view the complete list of test cases and other related information.