How transparent is user story test status to your team? How easily can your team tell whether testing is complete for a user story? AssertThat's test automation integration brings the manual and automated test status for each user story together in one place: Jira.
Making automated test results easily accessible does more than save time and effort. Test automation frameworks commonly run on separate build infrastructure, which complicates access to the results. Easy access to, and clear presentation of, test results demonstrates transparency, a core value of effective Agile teams.
Compounding this issue is how test automation results are normally presented. Typically, automation results are reported separately from the scenarios themselves, which makes it harder to establish traceability to user stories. As a result, establishing the status of all the tests for a user story is a painful process.
The AssertThat BDD plugin addresses both access and traceability by linking your Gherkin scenarios to your automated test execution results, all in Jira, making the results transparent to your team, your managers, and the business.
If you don’t feel like reading the whole thing, skip to the video at the end to see the plugin in action.
The plugin is simple to set up (we’ll cover integration using Maven in this post; other types of integration will follow shortly). Just add the AssertThat BDD plugin to your pom, and that’s pretty much it:
<plugin>
  <groupId>com.assertthat.plugins</groupId>
  <artifactId>assertthat-bdd-maven-plugin</artifactId>
  <version>1.1</version>
  <configuration>
    <projectId>
      <!--Jira project id e.g. 10001-->
    </projectId>
    <!--Optional - can be supplied as environment variable ASSERTTHAT_ACCESS_KEY-->
    <accessKey>
      <!-- ASSERTTHAT_ACCESS_KEY -->
    </accessKey>
    <!--Optional - can be supplied as environment variable ASSERTTHAT_SECRET_KEY-->
    <secretKey>
      <!-- ASSERTTHAT_SECRET_KEY -->
    </secretKey>
  </configuration>
  <executions>
    <execution>
      <id>features</id>
      <goals>
        <goal>features</goal>
      </goals>
      <phase>pre-integration-test</phase>
      <configuration>
        <!--Optional - default ./features-->
        <outputFolder>src/test/resources/com/assertthat/features</outputFolder>
        <!--Optional - all features downloaded by default - should be a valid JQL-->
        <jql>project = XX AND key in ('XXX-1')</jql>
        <!--Optional - default automated (can be one of: manual/automated/both)-->
        <mode>automated</mode>
      </configuration>
    </execution>
    <execution>
      <id>report</id>
      <goals>
        <goal>report</goal>
      </goals>
      <phase>post-integration-test</phase>
      <configuration>
        <!--Optional - default ./report-->
        <jsonReportFolder>target/report/surefire-reports/cucumber/</jsonReportFolder>
        <!--Optional - default - **/*.json -->
        <jsonReportIncludePattern>**/cucumber.json</jsonReportIncludePattern>
      </configuration>
    </execution>
  </executions>
</plugin>
Let me explain what’s going on here.
projectId - the Jira project id where the features are stored.
accessKey, secretKey - credentials used for downloading features and uploading reports, which are available from the AssertThat Integration page in Jira.
outputFolder - the folder where feature files are downloaded to; this is also the folder you point Cucumber at to find the features.
jql - this one is REALLY powerful: it’s a JQL query that lets you search Jira like a boss. You can, say, run only scenarios that are linked to open bugs (project = XXX AND resolution = Unresolved AND issuetype = 'Bug'), or run scenarios that are linked to issues in the current sprint (project = XXX AND sprint = 1). It provides a super-flexible way of dynamically filtering the scenarios you want to run.
mode - manual, automated or both. By default only automated scenarios are downloaded; this provides a way of overriding what is run, so that you can run manual or all scenarios if you need to for some reason.
jsonReportFolder - where cucumber.json is stored; it has to match the path that you provide to the Cucumber plugin itself.
jsonReportIncludePattern - an Ant-style pattern that lets you filter which JSON files are uploaded to Jira.
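To tie these settings together, the paths above have to line up with your Cucumber configuration. As a sketch (not from the original post, and assuming Cucumber-JVM 5+, which reads the cucumber.features and cucumber.plugin system properties), you could wire both paths through maven-failsafe-plugin:

```xml
<!-- Sketch only: assumes Cucumber-JVM 5+, which honours the
     cucumber.features and cucumber.plugin system properties. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- Must match <outputFolder> of the features goal above -->
      <cucumber.features>src/test/resources/com/assertthat/features</cucumber.features>
      <!-- Must land inside <jsonReportFolder> and match <jsonReportIncludePattern> -->
      <cucumber.plugin>json:target/report/surefire-reports/cucumber/cucumber.json</cucumber.plugin>
    </systemPropertyVariables>
  </configuration>
</plugin>
```

If you configure Cucumber through a JUnit runner or junit-platform.properties instead, the same two values just need to appear there.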
If you run your tests in the integration-test phase (imho, that’s how it should be done), the features will be downloaded in the pre-integration-test phase and the report uploaded in post-integration-test. If that’s not the case, you’ll have to adjust the phases in the plugin configuration accordingly.
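With maven-failsafe-plugin this ordering comes for free: its integration-test goal is bound to the integration-test phase, so your tests run between the plugin’s features (pre-integration-test) and report (post-integration-test) goals. A minimal setup might look like this (version number is just an example):

```xml
<!-- Sketch: by default failsafe picks up test classes named *IT.java and runs
     them in the integration-test phase - after the features have been
     downloaded, and before the report is uploaded. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.22.2</version>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```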
This is what the report looks like once the results have been imported into Jira:
You can easily drill down into individual feature results:
and failed scenarios, which can be assigned for investigation, linked to defects and commented on.