Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI-CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, I often feel we are blinded by the theoretical value test automation provides, or by the fact that everyone says it adds value, or by the shiny tools / tech-stacks we get to use, or ...

To try and understand this better, can you answer the questions below?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Is there control over the level of parallel execution, with the ability to switch to sequential execution based on context?
  12. How clean & DRY is the code?

In my experience, unfortunately, most of the functional automation that gets built:
  • is not optimal
  • is not fit-for-purpose
  • does not run fast enough
  • gives inconsistent feedback, and is hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Wednesday, March 11, 2020

Tracking functional coverage from your API / functional UI (e2e) tests

Tracking coverage of your product code via automated tests, and keeping it high, is an important part of building a quality product.

It is easy to measure code coverage when running unit tests. You will find a plethora of tools (free & commercial), with a variety of features, for any programming language you may be using. You can integrate these tools as part of your pre-commit hooks (i.e. you will not be able to push your code to version control if code coverage falls below the minimum threshold set), and also as part of your CI builds (i.e. fail the build if code coverage falls below the expected threshold).

The reason capturing code coverage works easily for unit tests, and maybe integration tests, is that these tests run in isolation, directly on the code. You do not need to have the product deployed to any environment to run the tests to measure the coverage. I found these great resources you can check out to understand more about how code coverage works:
http://www.semdesigns.com/Company/Publications/TestCoverage.pdf
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html

However, this question comes up very frequently - how can we measure code coverage of the API tests, or of the functional UI (e2e / end-2-end) tests? I remember this question coming up for at least the past 8-10 years. Every time, I have given the same answer, because I have not come across any better way of answering it. It is time I wrote it down for easier access to others as well.

Solution #1

  • Deploy the product-under-test to an isolated environment, and start measuring code coverage (one way to wire this up is sketched below).
  • Trigger the API / UI tests.
  • Stop the measurement and generate the report - that will tell you how much code coverage is achieved by these tests.

Precondition: a big criterion for this strategy to work is that the environment is truly isolated - i.e. NO ONE else is using it (no navigating through the product, no testing of any kind other than the tests being triggered to measure coverage).
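
One concrete way to do this measurement for a Java-based service is with the JaCoCo agent. Below is a minimal sketch (not a definitive recipe), assuming the deployed JVM was started with the JaCoCo agent in tcpserver mode (e.g. -javaagent:jacocoagent.jar=output=tcpserver,port=6300); the hostname and port here are placeholders:

import org.jacoco.core.tools.ExecDumpClient;
import org.jacoco.core.tools.ExecFileLoader;

import java.io.File;
import java.io.IOException;

public class E2eCoverageDump {
    public static void main(String[] args) throws IOException {
        // The product-under-test JVM is assumed to be running with the JaCoCo agent
        // in tcpserver mode, so its execution data can be pulled over the network
        ExecDumpClient dumpClient = new ExecDumpClient();

        // Run the API / UI test suite against the isolated environment first,
        // then pull the execution data collected while those tests were running
        ExecFileLoader loader = dumpClient.dump("test-env.example.com", 6300);
        loader.save(new File("target/e2e-coverage.exec"), false);

        // target/e2e-coverage.exec can then be turned into an HTML report using the
        // JaCoCo CLI / Maven / Gradle report tasks, pointed at the product's class files
    }
}
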
The above answer has some big gaps though:
  • What are you trying to understand from the code coverage of your API / UI tests? What value will it bring to the team? How will it make the product better?
  • Do you expect the code coverage of your API / UI tests to be similar to, identical to, or better than that of the unit tests? If yes, we need to have a different conversation about the Test Pyramid


[Image: Test Automation Pyramid]

Solution #2

However, I believe there is a better approach to this. 

Based on the Test Automation Pyramid shown above, the API / Web Service tests and UI tests are business-facing tests. In this case, it will add more value to measure the functional / component coverage these tests provide.

With this approach, the code coverage from the Technology Facing Tests (Unit / Integration / ...) will focus on technical aspects of coverage, and the Business Facing Tests (API / UI) will focus on functional aspects of coverage. 

Looked at together, these will give a better understanding of the overall quality of the product.

Tracking Functional Coverage

So, how can we track functional coverage? 

Unfortunately, there does not seem to be an out-of-the-box way to do this. But below is how I have implemented this before:
  • I used cucumber-jvm in my test framework
  • For each (end-2-end Test) Scenario, in addition to the tags required for the test (as per the test framework design), I added the following types of tags to the test:
    • functional areas touched by the test
    • components / modules touched by the test
  • I used the cucumber-reporting plugin to generate rich HTML reports (a small sketch of wiring this up follows after this list)
    • The beauty of the reports generated by cucumber-reporting is that I can now see tag-wise statistics for my test execution.
    • Below is an example from the cucumber-reporting GitHub page:
      [Image: sample tag-statistics view from a cucumber-reporting report]
    • With this report, you can now get to know:
      • the overall tag execution state for the complete test run
      • the number of times each tag was run, and across how many test scenarios
      • the ability to drill down into tag-specific reports as well
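
If you are using cucumber-jvm with the cucumber-reporting plugin, wiring up this report takes only a few lines of glue code. Below is a minimal sketch, assuming the test run writes its results to target/cucumber.json via cucumber-jvm's "json" plugin; the project name and paths are placeholders:

import net.masterthought.cucumber.Configuration;
import net.masterthought.cucumber.ReportBuilder;

import java.io.File;
import java.util.Collections;
import java.util.List;

public class FunctionalCoverageReport {
    public static void main(String[] args) {
        // cucumber-jvm is assumed to run with the plugin "json:target/cucumber.json", so every
        // Scenario - along with its custom functional-area / component tags - lands in this file
        List<String> jsonReports = Collections.singletonList("target/cucumber.json");

        // Output directory and project name for the generated HTML report
        Configuration configuration = new Configuration(new File("target"), "my-product-e2e-tests");

        // The generated report includes a "Tags" view; its per-tag statistics are
        // what we read as the functional / component coverage of the e2e suite
        new ReportBuilder(jsonReports, configuration).generateReports();
    }
}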

Why is the above report important?

As mentioned above, I am adding custom tags to each scenario - based on module / functionality / etc. For any critical functionality of my product, I would want a higher concentration of tests covering that feature / module, compared to others.

Another way to visualize the tag statistics is in the form of tag heat maps. You want your critical functionality to have a good-sized bubble in the heat map. Any small bubbles, or non-existent bubbles, would mean you do not have coverage for that feature / module.

The above example is one of the easiest ways to implement feature coverage for API / UI tests. It is quite possible you are not using cucumber-jvm and the cucumber-reporting plugin; but if this approach makes sense to you, you could easily implement the same idea in your own test framework, using the constructs and features of your programming language and tools.

Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently when doing Native Android / iOS app automation - this was related to Chrome browser versions getting updated automatically and my tests failing because of errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn


And I tweeted asking how to manage ChromeDriver version when running WebDriver / Appium tests.

The answer was common and obvious – use WebDriverManager. This is a beautiful, simple, and indeed the right solution to the problem.

However, that was only a partial answer for me.

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD) 
  • ATD is open-source; it takes away my pain and effort of managing appium and the devices, and also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different Android devices connected - which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store push out new versions of apps to devices on different Android / iOS versions, it is easy to end up with different versions of the Chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.
Once the community very kindly reminded me about WebDriverManager (which I had forgotten about), I knew what needed to be done.

I looked at the ATD code, and realised that it was using the default chromedriver version that was set up when I had installed appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager and get the appropriate chromedriver downloaded
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? Well, I was also confused initially, but the answer is simple. On some devices, the Chrome browser is installed by default as a system app, which cannot be removed. So as new versions of the browser get installed, the default Chrome system app is still there, and when you query for the versions of Chrome on the device, you will see 2 such versions. My code logic was to get all these versions, and pick the highest version from them.
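
To illustrate that step, here is a minimal sketch (not the actual ATD code) of querying the Chrome versions on a connected device over adb and picking the highest one; the package name com.android.chrome and the dumpsys parsing are assumptions based on how Chrome reports its versionName:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DeviceChromeVersion {

    // Returns the highest Chrome "versionName" reported for the given device, e.g. "74.0.3729.157" -
    // the system-app entry and an updated entry may both show up in the dumpsys output
    public static String highestChromeVersion(String deviceSerial) throws IOException {
        Process process = new ProcessBuilder(
                "adb", "-s", deviceSerial, "shell",
                "dumpsys", "package", "com.android.chrome")
                .redirectErrorStream(true)
                .start();

        List<String> versions = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                // dumpsys output contains lines like "versionName=74.0.3729.157"
                if (line.startsWith("versionName=")) {
                    versions.add(line.substring("versionName=".length()));
                }
            }
        }

        // Pick the highest version by comparing the major version number
        return versions.stream()
                .max(Comparator.comparingInt((String v) -> Integer.parseInt(v.split("\\.")[0])))
                .orElseThrow(() -> new IllegalStateException("Chrome not found on device " + deviceSerial));
    }
}
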

Here is the code snippet of how I solved the problem:
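The actual change went in via the ATD pull request; as a hedged illustration of the core idea - assuming WebDriverManager 5.x's browserVersion() resolver and Appium's chromedriverExecutable capability - it could look something like this:

import io.appium.java_client.android.AndroidDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class DriverWithMatchingChromeDriver {

    // majorChromeVersion: the highest Chrome version found on the device,
    // e.g. the value returned by DeviceChromeVersion.highestChromeVersion(...)
    public static AndroidDriver createDriver(URL appiumServerUrl, String majorChromeVersion) {
        // WebDriverManager downloads the chromedriver matching the given browser version;
        // setup() also exposes the downloaded binary via the "webdriver.chrome.driver" system property
        WebDriverManager.chromedriver().browserVersion(majorChromeVersion).setup();
        String chromeDriverPath = System.getProperty("webdriver.chrome.driver");

        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("automationName", "UiAutomator2");
        // Appium uses this chromedriver for Chrome / webview sessions on the device,
        // instead of the one bundled at appium installation time
        capabilities.setCapability("chromedriverExecutable", chromeDriverPath);

        return new AndroidDriver(appiumServerUrl, capabilities);
    }
}
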
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used the community's suggestion of WebDriverManager to solve the problem.


Wednesday, September 25, 2019

Analytics - The Brain of the Software



An Analogy 



I am not a doctor, nor did I enjoy biology too much in my curriculum as a student. However, I do know that the body has many organs, and each organ plays a vital role in the well-being of the individual.

Each organ has to:

  • function correctly (movement, senses, core functions, etc.)
  • perform as per expectations in the different conditions the individual may be going through (walking, running, swimming, etc.)
  • be secure from external parameters (heat, cold, rain, what we eat / drink, etc.)
  • provide a proper user experience (ex: if human hands had webs like ducks, would we be able to hold a pen correctly to write?)



I would like to think of the brain as the supercomputer which keeps track of what is going on in the body - whether each piece is playing its part correctly or not. And if there is something unexpected going on, there are mechanisms to give that feedback internally and externally, so that course correction is possible.

How does this relate to software?

Software is similar in some ways. For any software product to work, the following needs to be done:

Functionality works as expected


  • The architecture and testability of the system should allow for various types of testing activities to be performed on the software, to ensure everything works as expected
  • Test Automation practices will give you quick feedback



There is a plethora of open-source and commercial tools in this space to help in this regard - the most popular open-source tools being Selenium and Appium.


Software is performant


  • We can do performance testing at various levels to ensure that, under different loads and conditions, users will be able to use the product in a seamless fashion
  • There are many tools to assist in Performance Testing - some popular ones being JMeter and Gatling.

Software is secure


  • Building and testing for security is critical, as you do not want user information to be leaked or manipulated, and neither do you want external forces to control / manipulate your product's behaviour
  • The Test Automation Pyramid hence also includes NFRs





User experience is validated, and consistent


  • In the age of CD (Continuous Delivery & Continuous Deployment), you need to ensure your user experience is consistent across all your software delivery means (browsers, mobile-browsers, native apps for mobiles and tablets, etc.), and that users do not face the brunt of UI / look-and-feel issues introduced at the cost of new features
  • This is a relatively new domain - but there are already many tools to help in this space as well - the most popular one (in terms of integration, usage and features) being the AI-powered Applitools
Visual Validation is the new tip of the Test Automation Pyramid!





What is the brain of the software?

The above is all good, and known in various ways. But what is the "brain" of the software? How does one know if everything is working fine or not? Who will receive the feedback and how do we take corrective action on this?

Analytics is that piece in the Software product that functions as the brain. It keeps collecting data about each important piece of software, and provides feedback on the same.

I have come across some extreme examples of businesses / organizations that have all their eggs in one basket - in terms of how they:

  • understand their Consumers (engagement / usage / patterns / etc.),
  • understand usage of product features, and,
  • do all revenue-related book-keeping

This is all done purely on Analytics! Hence, the saying “Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work” is not a myth!

What this means is Analytics is more important now, than before.

Unfortunately, Analytics is not well known to the Software Dev + Test community. We know it very superficially - we do what is required to implement it and quickly test it out. But what is Analytics? Why is it important? What is the impact of it not working well? Not many think about this.

I have been testing Analytics since 2010 ... and the kind of insights I have been able to get about the product has been huge! I have been able to contribute back to the team and help build better quality software as a result.

But I have to be honest - it is painful to test Analytics. That is why I created an open-source framework - WAAT - to help automate some of these testing activities.
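
To give a flavour of one automation strategy that can work - capturing the analytics calls on the network via a proxy, rather than asserting only on the final reports - below is a minimal, hedged sketch. This is an illustration, not WAAT's actual API; it assumes BrowserMob Proxy with Selenium, the analytics endpoint and parameter names are hypothetical, and capturing HTTPS traffic additionally requires the proxy's certificate to be trusted by the browser:

import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.HarEntry;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.CapabilityType;

import java.util.List;
import java.util.stream.Collectors;

public class AnalyticsValidationSketch {
    public static void main(String[] args) {
        // Start a local proxy that will record the HTTP(S) traffic from the browser
        BrowserMobProxyServer proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

        ChromeOptions options = new ChromeOptions();
        options.setCapability(CapabilityType.PROXY, seleniumProxy);
        ChromeDriver driver = new ChromeDriver(options);

        proxy.newHar("analytics-check");

        // Perform the user action that is expected to fire an analytics call
        // (the product URL below is a placeholder)
        driver.get("https://your-product.example.com");

        // Filter the captured traffic for calls to the analytics endpoint
        // ("analytics.example.com" and "event=page_view" are hypothetical values)
        List<HarEntry> analyticsCalls = proxy.getHar().getLog().getEntries().stream()
                .filter(entry -> entry.getRequest().getUrl().contains("analytics.example.com"))
                .collect(Collectors.toList());

        boolean pageViewFired = analyticsCalls.stream()
                .anyMatch(entry -> entry.getRequest().getUrl().contains("event=page_view"));
        System.out.println("page_view analytics call fired: " + pageViewFired);

        driver.quit();
        proxy.stop();
    }
}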

I also do workshops to help people learn more about Analytics, its importance, and how they can automate this as well.

In the workshop, I do not assume anything; the approach is to discuss and learn, by example and practice, the following:

  • How does Analytics work (for Web and Mobile)?
  • Test Analytics manually in different ways
  • Test Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • We will see a demo of the Automation running for the same.
  • Time permitting, we will set up and run some Automation scripts on your machine to validate the same

Takeaways from the workshop

We will learn by practicing the following:
  • What is Analytics?
  • Techniques to test analytics manually.
  • How to automate the validation of analytics, via a demo, and if time permits, run the automation from your machine as well.
Hope this post helps you understand the importance of Analytics and why you need to know more about it. Do reach out to me if you want to learn more about it.

The next Analytics workshop is at TestBash Australia 2019. Let me know if you would be interested in attending the same.


Friday, June 14, 2019

Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019


What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting up a "Quality & Release Strategy for Native Android & iOS Apps"

Abstract:
Experimentation and quick feedback are the keys to the success of any product - while of course ensuring a good-quality product, with new and better features, is shipped out to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and the Release process of an Android & iOS Native app that help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps, and how that is different from Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
  • Functional Automation approach - identify and automate user scenarios, across supported regions
  • Testing approach - what to test, when to test, how to test!
  • Manual Sanity before release - and why it was important!
  • Staged roll-outs via Google’s Play Store and Apple’s App Store
  • Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
  • Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)

Slides:



Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar

Monday, June 3, 2019

Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant

I spoke about Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT event organised by Globant India Pvt. Ltd.




The event was very well organised and I had the opportunity to interact with a full house, and also later meet and talk with a lot of interesting people - curious about the current state of testing, test automation, and how AI can impact it in the future.

Agenda:



Below is the abstract of my talk:

The Test Automation Pyramid is not a new concept. While Automation helps validate functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual.

With everyone wanting to be Agile and doing quick releases, this look & feel / UX validation becomes the bottleneck; it is also a very error-prone activity, which hurts brand and revenue, and leads to diluting your user-base.

In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy and also look at how an AI-powered tool - Applitools Eyes, can solve this problem.


Recording from the talk:




Some pictures:







Tuesday, March 19, 2019

Collaboration - A Taboo!

In AgileIndia 2019 in Bangalore, as part of the Agile Mindset theme, I played a tweak of the Taboo game - to make it a Collaboration game.

Abstract: 

When one has fun at work, work becomes fun. However, daily pressures, metrics, KPIs, and what not, have dissolved the fun, and made work drudgery in various ways. 

This creates stress for individuals and within teams; across teams, there is mistrust, unnecessary competition, blame, finger-pointing ....

What better way to learn, and re-learn the basics of life, work, team-work - than to play a game, have fun, and correlate it with how life and work indeed should be treated as a game, and we should have fun in this journey. Only then can people truly succeed, and so can organisations.

Here, we will play a game – “Collaboration - A Taboo!” – where you will 

  • Re-learn collaboration techniques via a game! 
  • Learning applicable for individuals and teams, in small or big organisations
  • Re-live your childhood when playing this game

Be prepared for a twist which will leave you thinking!

Slides:



Saturday, March 16, 2019

Visual validation - The Missing Tip of the Automation Pyramid


At yet-another-vodQA at ThoughtWorks, this time in the Pune edition on 16th March 2019, I spoke about Visual validation - The Missing Tip of the Automation Pyramid


Abstract:

The Test Automation Pyramid is not a new concept. The top of the pyramid is our UI / end-2-end functional tests - which should cover the breadth of the product.

What the functional tests cannot capture, though, are the aspects of UX validation that can only be seen, and in some cases captured, by the human eye. This is where the new buzzwords of AI & ML can truly help.


In this session, we will explore why Visual Validation is an important cog in the wheel of Test Automation and also different tools and techniques that can help achieve this. We will also see a demo of Applitools Eyes - and how it can be a good option to close this gap in automation!



Slides are available from here






Video is available here:








Thanks to Priyank Shah for this pic!






I also received some awesome feedback for the same.





Thanks vodQA Team! Till next time, adios!

Thursday, February 14, 2019

Talks and workshops in Agile India 2019


In the upcoming Agile India 2019 in Bangalore, I will be speaking about:






If you have not yet registered, you can use this code to get a discount on your registration - anand-10di$c-agile 

In addition, there are some great pre and post conference workshops as well. I will be participating in "Facilitating for Effective Collaboration...One Nudge at a Time" workshop - conducted by Deborah Hartmann Preuss and Ellen Grove


This is going to be one amazing conference to learn, network and share ideas and experiences. See you there!



Monday, February 11, 2019

Test Automation in the World of AI and ML

My article on "Test Automation in the World of AI & ML" recently got published on InfoQ.


Here are the key takeaways mentioned in the article -

  • There are many criteria to be considered before building framework / selecting tools for Functional Test Automation
  • It is very important to prioritise framework / tools capabilities needed for the software-under-test
  • A good, scalable Test Automation Framework that provides fast and reliable feedback to the team enables collaboration and CI/CD
  • Debugging / RCA (root cause analysis), and support for the libraries / tools used, are an afterthought in most cases. Do not fall into that trap.
  • There are some promising commercial tools that fit seamlessly in the Agile way of working. Depending on the complete context, these tools may be a good choice over building your own framework for Functional Automation.

You can read the full article from here

Looking forward to comments on the same!



Friday, November 30, 2018

Recording from webinar on The Missing Feedback Loop now available

On 21st Nov, TestCraft.io hosted me in a webinar where I spoke about - "The Missing Feedback Loop - The Tools, Techniques, and Automation to Solve It". 

You can get the recording from here (https://hubs.ly/H0fBDN50).






Wednesday, November 28, 2018

A blog about my blog

At the risk of this ending up becoming a recursion, I want to share some interesting statistics about my blog - essenceoftesting.blogspot.com - and the content shared on my SlideShare account.




Here are some charts from Analytics of my SlideShare and blog:

Top content from my SlideShare:




Blog Overview:



Audience:


Popular posts:





Referrers: