Monday, July 23, 2018

A few thoughts on Test Automation


Deepanshu Agarwal and Brijesh Deb asked some very interesting questions in a LinkedIn post. Since I have some fairly verbose thoughts on these, I thought it better to respond via a blog post instead.

  • Why is Test Automation still considered suitable only for regression testing? What about writing automation tests sooner as in case of Test Driven Development?
    • [Anand] - It depends on what you call test automation. If ONLY FUNCTIONAL, then it's better to explore the product first, investigate / have conversations with developers on what lower-level tests are already automated, and then, based on a cost / risk-value analysis, decide what else needs to be automated at the Functional layer.

      A tangential rant ....
      There is a big reason why we think about classifications such as SMOKE, SANITY and REGRESSION ONLY in Functional Automation. These tests are inherently very slow and brittle, and it takes a lot of effort to get them to give precise feedback on the exact point / reason of failure. 

      I have never seen any other form of tests - say unit tests, which would (hopefully) be orders of magnitude larger in number than the functional tests - ever have any such classification. We all just say the unit tests ran, not that the smoke unit tests ran. 

      We need to grow up and understand the reason behind this. We need to keep our top-of-the-pyramid tests as few in number as possible. We need to ensure we use good programming / development practices and get quick and reliable feedback from these tests. Else we will keep focussing on the symptoms, and never get to the root cause.

      --- Rant ends

      Once we understand this, then it is a matter of understanding, in context, what can and needs to happen first, and what next. In most cases TDD will work. But TDD as a Functional Spec may or may not be overkill ... the team has to decide that.
  • Why do the automated tests always have to be derived from manual tests?
    • [Anand] - What is a manual test? Something that a machine is not performing? How do you do "manual testing"? Is Exploratory Testing a subset of Manual Testing, or the other way around, or are there any other thoughts on that?

      From the perspective of "automated tests" - I read it as "automated functional tests" here. In that case, the answer for the above question holds true here as well.

      Continuing from that thought - I think this approach (of deriving automated tests from so-called manual tests) is better than deciding upfront what tests I am going to automate and then proceeding with the implementation without any thought or regard for any other learning along the way.
  • The tests classified as manual tests are only focused on ensuring certain checks. What about actually running some tests to discover the unknown?
    • [Anand] I don't want to get into the 'checks' debate. It is futile!

      All I have understood is - you cannot just spend time looking at the requirements / specs and write down (in your mind / bullet points / story cards / some fancy ALM tool) your test cases / scenarios. 

      That list is just the starting point of your journey of exploration and experimentation with the product-under-test. If you think that what you have identified is your actual scope of testing, then ALL THE BEST to you, your team and your product - because there are going to be so many opportunities you have missed to make the product better and more usable for the end-user. Unfortunately, a lot of organisations still look for "regression" testing cycles - where (you think) you execute all the tests that were identified a long time ago. However, everyone knows that actually following each and every step of that regression cycle is, at best, a best-effort activity, IF it happens AT ALL. Such a waste of time and effort - when more meaningful testing could have been performed during that time.
  • Why is it that exploratory tests are still considered suitable only for manual testing? How about automating exploratory tests using AI?
    • [Anand] What is the meaning of "exploration"?
      As per a quick online search, to explore means to travel through or examine an unfamiliar area in order to learn about it.

      Now - how can you automate the unknown / unfamiliar? You can use tools to help figure out what is unknown / unfamiliar ... but once you know it, it no longer remains 'unknown'. I think buzzwords like AI and ML are tools to help bridge the gap between the known and the unknown. But we would still need to guide and use these tools and technologies to our advantage, to aid in our exploration.

Friday, July 20, 2018

Implementing Soft Assertions

Back in 2009 / 2010, I was working on implementing end-2-end tests for a web site using Java / Selenium / TestNG based automation. The challenge I was facing was that the tests used to fail for trivial (but valid) reasons - and it bothered me that a test would not even get to the core validations before it failed. How will the team ever know about the main issues in the product if the test fails for trivial issues? 

That was the trigger point for me to think about Soft Assertions - what if there was a way to flag a type of failure that I want to know about, while letting the test continue with the remaining set of validations - unless it no longer makes sense to proceed?

Ex: If the text message of a field is incorrect, I can continue. But if login fails, no point in proceeding with the rest of the test.

This idea seemed interesting - so I came up with the following requirements for such an implementation:

  • Clear distinction between what type of failure I can continue from, or not
    • Ex: assert.** is for hard asserts. verify.** is for soft asserts
  • All failures that I can continue from (i.e. soft asserts) need to be collated, and at the end of the test the complete list of those soft assert failures should be presented with the test result (and in the report), while the test itself fails just once
    • Ex: "There were 5 soft assertion failures"
  • Capture relevant screenshots whenever a Soft Assertion fails
  • If there is a hard assert failure along the way during test execution, the test failure should include the prior soft assert failures along with the hard assert failure, as appropriate

For the actual implementation, I did the following (in 2009/2010):
  • I looked into the TestNG code base, and I could not really find any out-of-the-box support for what I wanted to do. 
  • So, for lack of knowledge of better ways to implement it, 
    • I checked out the TestNG code, 
    • added the Soft Assertion implementation, 
    • built a custom TestNG.jar file, and
    • checked in the jar file as a library artefact in our automation framework. 

In hindsight, I should have sent that functionality as a PR to the project. 

But not all is lost: TestNG now has out-of-the-box support for Soft Assertions (and perhaps has had for some time now). And it is pretty straightforward to use as well.

Implementing Soft Assertions in your test framework

See this gist for an implementation that you can use with TestNG (I tested with v6.10).
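
As a rough idea of what such an implementation can look like, here is a minimal sketch of one common way to wire TestNG's built-in org.testng.asserts.SoftAssert into a framework via a base test class (the class and method names below are my own illustrative choices, not necessarily what the gist uses):

    import org.testng.annotations.AfterMethod;
    import org.testng.annotations.BeforeMethod;
    import org.testng.asserts.SoftAssert;

    // Illustrative base class: gives every test a fresh SoftAssert, and reports
    // all collected soft failures once the test method finishes.
    public abstract class BaseTest {

        // Re-created before each test method so failures do not leak across tests.
        protected SoftAssert softly;

        @BeforeMethod(alwaysRun = true)
        public void createSoftAssert() {
            softly = new SoftAssert();
        }

        @AfterMethod(alwaysRun = true)
        public void reportSoftAssertFailures() {
            // assertAll() throws an AssertionError listing every soft failure
            // collected during the test, failing the test exactly once.
            softly.assertAll();
        }
    }

Capturing a screenshot on each soft failure (one of the requirements listed above) would need a small wrapper around SoftAssert, since the stock class only collects the assertion messages.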

Using Soft Assertions in your tests

Here is how you can use the Soft Assertions in your tests.
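
For illustration, here is a small made-up test showing soft and hard assertions together (the login / page helpers are stand-ins for real page-object or driver calls, not from the original post):

    import org.testng.Assert;
    import org.testng.annotations.Test;
    import org.testng.asserts.SoftAssert;

    public class LoginMessagesTest {

        @Test
        public void validateLoginPageMessages() {
            SoftAssert softly = new SoftAssert();

            // Soft assertions: record the failure and keep going.
            softly.assertEquals(getWelcomeText(), "Welcome back!",
                    "Welcome message is incorrect");
            softly.assertTrue(isForgotPasswordLinkVisible(),
                    "'Forgot password' link is missing");

            // Hard assertion: no point continuing if login itself fails.
            Assert.assertTrue(login("testUser", "secret"), "Login failed");

            softly.assertEquals(getAccountName(), "Test User",
                    "Account name shown after login is incorrect");

            // Reports all collected soft failures (if any) and fails the test once.
            softly.assertAll();
        }

        // Stand-ins for real page / driver interactions.
        private String getWelcomeText() { return "Welcome back!"; }
        private boolean isForgotPasswordLinkVisible() { return true; }
        private boolean login(String user, String password) { return true; }
        private String getAccountName() { return "Test User"; }
    }

If you extend a base class like the one sketched earlier, you would use the shared softly field instead of creating one inside the test, and skip the explicit assertAll() call.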


Soft Assertions in any other tech stack?

What if you are not using TestNG, or Java - rather, what if you are using a completely different programming language / toolset / test-runner? Can you still use Soft Assertions? 

Absolutely YES! All you need to understand is the concept, and then figure out the best way to implement it, if an out-of-the-box solution does not exist in that tech stack. 
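
At its heart, the concept is just a failure collector. Here is a minimal, framework-agnostic sketch (written in Java here, but the idea translates directly to any language; the class and method names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative soft-assertion collector: record failures as you go,
    // then fail once at the end with the complete list.
    public class SoftVerifier {

        private final List<String> failures = new ArrayList<>();

        // Soft check: remember the failure, let the test continue.
        // (This is also where a screenshot could be captured.)
        public void verifyTrue(boolean condition, String message) {
            if (!condition) {
                failures.add(message);
            }
        }

        // Hard check: stop immediately, but include earlier soft failures too.
        public void assertTrue(boolean condition, String message) {
            if (!condition) {
                failures.add(message);
                failAll();
            }
        }

        // Call at the end of the test to surface all collected failures at once.
        public void assertAll() {
            if (!failures.isEmpty()) {
                failAll();
            }
        }

        private void failAll() {
            throw new AssertionError(failures.size() + " assertion failure(s):\n - "
                    + String.join("\n - ", failures));
        }
    }

This maps directly to the requirements listed earlier: the verify-style calls collate failures, the assert-style calls stop the test, and the final report lists everything in one go.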

Hope this helps you!


Thursday, July 12, 2018

Return of the todo (learning) list

For at least a decade, I have had a list of TODOs which I actively updated and maintained. The items on this list focused on - 

  • what new things I wanted to learn / experiment with
  • new conference talk ideas
  • open source ideas / updates
Unfortunately, due to various reasons (some of which were of my own doing), the focus on learning and experimentation got lost ... and I felt awful about it. I also shared my pain with some friends and colleagues about being unable to find the time / opportunity / focus / support to stay in a continuous learning phase. But not much could be done / changed in those circumstances.

But I am very happy to share that the list is back, and back with a bang!!!

The days of learning and experimentation continue. My list is now overflowing, and continuing to grow with ideas and things I want to learn and experiment with.

I am happy again!!

Monday, April 16, 2018

Essence of Testing - A new beginning

In my career so far, I have been very fortunate to have got great mentors, and a variety of opportunities to learn, add value, and share my experiences with others around me.

Here are some of these experiences:
  • Worked in various sized organisations across the globe in the past couple of decades
  • The teams have been big and small
  • Played a variety of roles - Quality Analyst, SDET (Software Development Engineer in Test), Product Quality Engineer (PQE), Automation Engineer, Consultant, Coach, Project Manager, Director - Quality, Support Engineer, etc.
  • Worked with teams having products in different domains - Health care, eCommerce, Banking / Finance, Retail, Entertainment (OTT), Research, etc. 
  • Within organisations (B2B and B2C space): WebMD, Borland Software, Microsoft (Redmond, USA), AmberPoint (USA / India), ThoughtWorks, Vuclip 
  • Shared experiences with others via Meetups and Conferences world-wide
  • Created open-source tools like WAAT, TTA, TaaS
After working as an employee in these organisations for almost a couple of decades now, I have taken the plunge to do something different.


I am now looking forward to working with Organisations and Teams, to help co-create optimized solutions towards shipping a quality product. Leave me a message on my blog, or send me an email at abagmar@gmail.com to talk more about how we can work together!


As part of this journey, even before I was able to buy my own laptop and warm it up, I got an interesting consulting assignment, thanks to a dear friend.
This assignment was exactly what I needed to get started - it was a Discovery Workshop, with the following objective:
  • Learn and understand the current state of the team and the product they were building
  • Understand current (perceived) challenges
  • Suggest improvements for the team in areas of:
    • Tweaks in the current processes
    • Practices to be adopted / improved / stopped (as anti-patterns not adding value)
    • Identify opportunities to Test better, and early
    • Suggest a Test Automation Pyramid that is fit-for-purpose for the team
    • Suggest strategy, tools and approach for end-2-end (e2e) functional automation

As I usually do, I started off the workshop with my favorite tool for note taking - Mind Maps! As the conversations evolved and got more specific, I moved from a Balanced Mind-Map to a Fishbone Mind-Map.




Eventually I consolidated my thoughts and created the Discovery Workshop Report for the client in the form of a slide deck, using the following approach.


  • Learning from Discovery
    • Talk with the Team members
    • See & understand the project management tool, quality of requirements, etc.
    • See the code - understand complexity, types of tests written, quality of tests, etc.
    • See the Testing related artifacts - test cases, test execution strategy, exploratory testing, etc.
    • See the CI server - how deployments happen, what causes the builds to fail, etc.
  • Recommendations
    • For all the above areas, I created recommendations on what different aspects may help the team move ahead in a better way

Discovery and Recommendations were based on each specific activity:
  • Process
  • Architecture
  • Requirements
  • Development
  • Testing
  • Deployments

Once the findings from the Learning and Recommendation stages were well understood (by me), I then created a Suggested Plan of Execution (in phases / milestones):
  • From the recommendations, I created milestone-based plans on what can be started immediately, and what decisions the team needs to take to move forward in other areas

In retrospect, the whole 2-week engagement was, without consciously planning for it, quite agile. There were periodic demos of the POCs, regular progress sharing, and changes in the direction of discovery and recommendations based on what was being learned.

The Client also appreciated the quality of conversations, and the results that were shared.

All in all, a very satisfying beginning to my new journey with Essence of Testing!