Automated requirement coverage: calculation and display of the results

Let’s discuss automated requirement coverage. This is a challenging subject, because anything we want to cover has to be countable and discrete. There is no issue with calculating code coverage: you simply count the lines of code that were executed. Requirements, on the other hand, come in a variety of formats and are typically tied to a specific system, such as TestRail. So when you try to count automated requirement coverage, you can run into an apples-and-oranges problem if you are not committed to a single system, or if your teams use several different ones.

In this article, I’d like to share a solution to this problem that was built at my previous employer. Testing there was almost entirely automated, with only a few manual tests, and the company got real value from the requirement coverage metric.

After a brief discussion of the overall strategy, we’ll go over how PyTest and Allure Report were used to put it into practice. See the GitHub page or the documentation for more information on Allure Report.

The strategy

We call a requirement an artifact that can be mapped onto an automated test. Having an automated test for each requirement brings the following benefits:

We always know whether a requirement is covered by a test.

Calculating coverage is simple since we always know what a particular test is doing.

Our method is quite universal: it can be applied to requirements, manual test cases, user stories written by analysts, and anything else a test can be mapped onto.

Engineers, managers, and analysts can use the resulting coverage metric to track automation and set goals based on it. The metric is also useful when planning automated tests.

Defining requirements

There are several ways requirements can be defined and stored:

Jira. Test cases and their relationship to requirements can be managed with tools like Xray or comparable extensions.

Confluence. In this case, requirement coverage is typically computed using third-party tools.

Text documents. Storing requirements or use cases directly in the project repository as text files (.rst, .md, etc.) is becoming more and more common. Since a reporter like Allure has no direct access to the requirements in this case, we can’t rely on standard mechanisms (a small sketch of collecting such files follows this list).
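For the text-file option, here is a minimal sketch of how such requirements could be collected, assuming they live as .md files under a requirements/ directory (the directory name and the use of relative paths as requirement IDs are assumptions made for illustration):

```python
# Hypothetical sketch: collect requirements stored as .md files in the repo,
# using each file's path relative to the requirements/ directory as its ID.
from pathlib import Path

def collect_requirements(root: str = "requirements") -> set[str]:
    """Return the set of requirement IDs found under `root`."""
    return {str(path.relative_to(root)) for path in Path(root).rglob("*.md")}
```

The same set of IDs can then feed the coverage calculation described later.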

At the organization where this strategy was used, development was done by a number of distinct teams, and each team chose its own method for storing requirements. As a result, there was no single way of calculating coverage across all teams.

In each of the situations above, the automated test needs to be linked to the relevant requirement. A Jira issue ID, a link to a Confluence page, or the path to the file containing the requirement can all be used for this.
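As a sketch of what such a link could look like with allure-pytest (the Jira key, the Confluence URL, and the link pattern below are invented for illustration, not taken from the original setup):

```python
# A minimal sketch of linking tests to their requirements with allure-pytest.
import allure

# Jira: if pytest is run with
#   --allure-link-pattern=issue:https://jira.example.com/browse/{}
# the issue key below is expanded into a full link in the report.
@allure.issue("PROJ-123", "PROJ-123: user can reset a forgotten password")
def test_password_reset_sends_email():
    ...

# Confluence page (or a requirement file path): a plain link works as well.
@allure.link("https://confluence.example.com/display/REQ/Password+reset", name="requirement page")
def test_password_reset_rejects_expired_token():
    ...
```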

Determining coverage

Calculating coverage is a rather simple operation. Now that we have information on both requirements and automated tests, we need to connect the two. In a perfect world, requirements and tests would map 1:1. In practice, several situations are possible (a sketch of the calculation follows this list):

Requirements are deemed not covered if they have no connection to automated tests.

Automated tests that are not linked to any requirement are worth analyzing. Maybe a requirement was overlooked? Or perhaps the automated test is performing some odd or unneeded task? Or did someone simply forget to remove the test after its requirement was removed?
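A rough sketch of that calculation, assuming we already have the set of requirement IDs and a test-to-requirement mapping extracted from the test metadata (both structures here are placeholders):

```python
# Hypothetical sketch of the coverage calculation.
def coverage_report(requirement_ids, test_to_requirement):
    """requirement_ids: set of requirement IDs.
    test_to_requirement: dict mapping test name -> requirement ID (or None)."""
    linked = {req for req in test_to_requirement.values() if req is not None}
    uncovered = requirement_ids - linked                  # requirements with no test
    orphaned = {test for test, req in test_to_requirement.items()
                if req is not None and req not in requirement_ids}  # test outlived its requirement
    unlinked = {test for test, req in test_to_requirement.items() if req is None}
    coverage = len(requirement_ids & linked) / len(requirement_ids) if requirement_ids else 1.0
    return coverage, uncovered, orphaned, unlinked
```

The uncovered, orphaned, and unlinked sets correspond to the situations listed above and are worth reviewing separately.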

Ensuring atomicity of requirements

To maintain a 1:1 link between tests and requirements, the requirements must be sufficiently atomic. A requirement that is too broad (such as verifying that the user can log into the system) may take several tests to cover completely, yet the mere existence of even one test would already show it as fully covered, which is misleading. As a result, our requirements are more fine-grained than in a typical company. By our definition, a requirement is something an automated test can be mapped onto.

Keeping requirements atomic is a joint effort: automation engineers write some of them, while developers and analysts provide the rest. Requirements that can’t be automated as they stand are split into several smaller ones.
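For example, a broad requirement such as “the user can log into the system” could be split into atomic requirements, each mapped onto exactly one test (the file paths below are invented for this sketch):

```python
# Illustration only: each atomic login requirement has exactly one test.
import allure

@allure.link("requirements/auth/login-valid-credentials.md", name="login with valid credentials")
def test_login_with_valid_credentials():
    ...

@allure.link("requirements/auth/login-wrong-password.md", name="login rejected with a wrong password")
def test_login_rejected_with_wrong_password():
    ...

@allure.link("requirements/auth/login-locked-account.md", name="login blocked for a locked account")
def test_login_blocked_for_a_locked_account():
    ...
```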

Using the Allure Framework to display the results

Visualizing the results is a little more challenging than producing them. The issue is that if we shipped the data off to Grafana or Confluence for visualization, a whole new layer of entities would have to be maintained, and we would constantly be tinkering with the programs that gather test data for those systems. Moreover, with this strategy it would be difficult to show requirement coverage for a particular branch of the repository.

We decided to try integrating the coverage analysis into the test execution report. Since the Allure Framework is used for both, we don’t need to add any new layers to the tech stack. We represent requirements as fake empty tests that link to the requirement’s source (Jira, Confluence, or a text file). When an automated test exists, we show the real test; when it doesn’t, we show the fake one.
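One way to sketch the fake-test idea with PyTest (the shape of the UNCOVERED data and the choice to mark placeholders as skipped are assumptions, not the exact implementation):

```python
# Hypothetical sketch: generate one placeholder test per uncovered requirement.
import allure
import pytest

# In practice this list would come from the coverage calculation above.
UNCOVERED = [
    {"id": "PROJ-201", "url": "https://jira.example.com/browse/PROJ-201"},
]

@pytest.mark.parametrize("req", UNCOVERED, ids=lambda r: r["id"])
def test_requirement_placeholder(req):
    # Attach a link to the requirement's source so it shows up in the report.
    allure.dynamic.link(req["url"], name=req["id"])
    pytest.skip("No automated test is mapped to this requirement yet")
```

This is how it appears in the report: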