
Wednesday, March 11, 2020

Tracking functional coverage from your API / functional UI (e2e) tests

Tracking coverage of your product code via automated tests, and keeping that coverage high, is an important part of building a quality product.

It is easy to measure code coverage when running unit tests. You will find a plethora of tools (free & commercial), with a variety of features, for any programming language you may be using. You can integrate these tools into your pre-commit hooks (i.e. you will not be able to push your code to version control if code coverage falls below the minimum threshold), and also into your CI builds (i.e. the build fails if coverage drops below the expected limit).
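As an illustration, for a JVM project using Gradle, such a CI gate could look like the sketch below, using the JaCoCo plugin (the 80% threshold is illustrative, not a recommendation):

    // build.gradle.kts - fail the build when overall coverage drops below 80%
    // (JaCoCo's default instruction counter).
    plugins {
        java
        jacoco
    }

    tasks.jacocoTestCoverageVerification {
        violationRules {
            rule {
                limit {
                    minimum = "0.80".toBigDecimal()
                }
            }
        }
    }

    // CI typically runs `check`, so wire the verification into it:
    tasks.check {
        dependsOn(tasks.jacocoTestCoverageVerification)
    }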

The reason capturing code coverage works easily for unit tests (and perhaps integration tests) is that these tests run in isolation, directly against the code. You do not need the product deployed to any environment in order to run the tests and measure coverage. Here are some great resources to understand more about how code coverage works:
http://www.semdesigns.com/Company/Publications/TestCoverage.pdf
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html

However, one question comes up very frequently: how can we measure code coverage of the API tests, or of the functional UI (e2e / end-to-end) tests? I have been asked this for at least the past 8-10 years. Every time, I have given the same answer, because I have not come across any better way of answering it. It is time I wrote it down for easier access for others as well.

Solution #1

Preconditions:

A big criterion for this strategy to work is to ensure the environment is isolated - i.e. NO ONE is using it (for navigating through the product, or for testing of any kind other than the tests being triggered to measure coverage).

Steps:
  • Deploy the product-under-test to the isolated environment, and start measuring code coverage.
  • Then trigger the API / UI tests.
  • The coverage report will tell you how much code coverage is achieved by these tests.
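For example, if the product-under-test is a JVM service, one common way to "start measuring" is to attach a coverage agent such as JaCoCo to the running process. A minimal sketch, where all file names and paths are illustrative:

    # Start the service with the JaCoCo agent attached, writing coverage data to a file:
    java -javaagent:jacocoagent.jar=destfile=/tmp/e2e-coverage.exec,output=file \
         -jar product-under-test.jar

    # ... run the API / UI test suite against this environment, then stop the service ...

    # Generate an HTML report from the collected execution data:
    java -jar jacococli.jar report /tmp/e2e-coverage.exec \
         --classfiles build/classes \
         --sourcefiles src/main/java \
         --html e2e-coverage-report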
The above answer has some big gaps though:
  • What are you trying to understand from the code coverage of your API / UI tests? What value will it bring to the team? How will it make the product better?
  • Do you expect the code coverage of your API / UI tests to be similar / identical / better than that of the unit tests? If yes, we need to have a different conversation about the Test Pyramid.


[Image: the Test Automation Pyramid]

Solution #2

However, I believe there is a better approach to this. 

Based on the Test Automation Pyramid shown above, the API / Web Service tests and UI tests are business facing tests. In this case, it will add more value to measure what functional / component coverage the tests have. 

With this approach, the code coverage from the Technology Facing Tests (Unit / Integration / ...) will focus on technical aspects of coverage, and the Business Facing Tests (API / UI) will focus on functional aspects of coverage. 

Looked at together, these give a better understanding of the overall quality of the product.

Tracking Functional Coverage

So, how can we track functional coverage? 

Unfortunately, there does not seem to be an out-of-the-box way to do this. But below is how I have implemented this before:
  • I used cucumber-jvm in my test framework
  • For each end-to-end test Scenario, in addition to the tags required by the test framework design, I added the following types of tags (a hypothetical example follows after this list):
    • functional areas touched by the test
    • components / modules touched by the test
  • I used cucumber-reporting plugin to generate rich, html reports
    • The beauty of the reports generated by cucumber-reporting is that I can now see, for my test execution, the different tag statistics from when the tests ran.
    • Below is an example from the cucumber-reporting GitHub page:

 [Image: sample tag statistics report from cucumber-reporting]
    • With this report, you can now get to know:
      • the overall tag execution state for the complete test run
      • the number of test scenarios each tag was run in
      • drill down into tag-specific reports as well
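To make this concrete, below is a hypothetical sketch of such a tagged scenario (the feature, tag names, and steps are invented for illustration; @e2e / @smoke stand in for the framework-required tags):

    # Framework tags (@e2e, @smoke), plus functional-area tags (@search, @checkout)
    # and a component tag (@payment-service) for the areas this scenario touches.
    @e2e @smoke @search @checkout @payment-service
    Scenario: Buyer searches for a product and completes checkout
      Given the buyer has searched for "wireless headphones"
      When the buyer adds the first result to the cart and pays with a saved card
      Then the order confirmation page is displayed

And a minimal sketch of generating the HTML report via cucumber-reporting's ReportBuilder API (the project name and file paths are illustrative):

    import net.masterthought.cucumber.Configuration;
    import net.masterthought.cucumber.ReportBuilder;

    import java.io.File;
    import java.util.Collections;
    import java.util.List;

    public class GenerateReport {
        public static void main(String[] args) {
            // HTML output directory, and the cucumber JSON result file(s) as input.
            File outputDirectory = new File("target/cucumber-html-report");
            List<String> jsonFiles = Collections.singletonList("target/cucumber-report.json");

            Configuration configuration = new Configuration(outputDirectory, "my-product-e2e");
            new ReportBuilder(jsonFiles, configuration).generateReports();
        }
    }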

Why is the above report important?

As mentioned above, I am adding custom tags to each scenario - based on module / functionality / etc. If there is any critical functionality in my product, I would want a higher concentration of tests covering that feature / module, compared to others.

Another way to visualize the tag statistics is in the form of tag heat maps. You want your critical functionality to have a good-sized bubble in the heat map. Conversely, a small or missing bubble means you have little or no coverage for that feature / module.
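If you want to build such a visualization yourself, tag counts are easy to extract from the standard cucumber JSON output. Below is a minimal sketch, assuming Jackson is on the classpath; the report file name is illustrative:

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    import java.io.File;
    import java.util.HashMap;
    import java.util.Map;

    public class TagTally {
        public static void main(String[] args) throws Exception {
            // A cucumber JSON report is an array of features, each containing
            // "elements" (scenarios), each of which carries its "tags".
            JsonNode features = new ObjectMapper().readTree(new File("target/cucumber-report.json"));

            Map<String, Integer> counts = new HashMap<>();
            for (JsonNode feature : features) {
                for (JsonNode scenario : feature.path("elements")) {
                    for (JsonNode tag : scenario.path("tags")) {
                        counts.merge(tag.path("name").asText(), 1, Integer::sum);
                    }
                }
            }

            // Feed these counts into any charting library to draw the heat map / bubbles.
            counts.forEach((tag, count) -> System.out.println(tag + " -> " + count));
        }
    }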

The above example is one of the easiest ways to implement feature coverage for API / UI tests. It is very likely you are not using cucumber-jvm and the cucumber-reporting plugin. But if this approach makes sense to you, you could quite easily implement it in your own test framework, using the constructs and features of your programming language and tools.

Saturday, March 17, 2018

Measuring Consumer Quality - The Missing Feedback Loop

I spoke at vodQA, ThoughtWorks Pune, on "Measuring Consumer Quality - the Missing Feedback Loop".

This talk addresses the why and the how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video of this talk.


Abstract:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices, collaboration techniques can yield amazing results for the team, the organisation, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect is still spoken about only "loosely" - the feedback loop from your end-users back into making better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:

  • The importance of knowing your Consumers 
  • How do you know your product is working well? 
  • How do you know your Consumers are engaged with your product? 
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions about Consumer behaviour, before making any code change? 

Video:


Slides can be found here.

Pictures:



Thursday, August 6, 2015

Agile Testing - Metrics can be fun too!

Metrics are meaningless unless in the right context. In this case, my "right" context is purely a "feel-good-factor". 

In April 2011, I published the "Agile QA Process" paper on SlideShare. I am very happy to see it has received over 30000 views and has been downloaded over 1400 times!

On a similar note, I created a mindmap for Test Insane - titled "Agile QA - Capabilities & Skills". That also seems to be hitting a good note - with almost 1000 views in under 25 days!

So in this case - Metrics are fun! I don't mind this ego boost to continue writing more, and sharing more!