Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for teams on the CI/CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, I often feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech stacks we get to use, or ...

To try and understand this better, can you answer the questions below?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – e.g. Android & iOS native, and Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on every new build, or do you need to “trigger” them manually?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Can you control the level of parallel execution, and switch to sequential execution based on context? (See the sketch after this list.)
  12. How clean & DRY is the code?
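
As an illustration of question 11, here is a minimal sketch of how such a switch could look with TestNG (which comes up in the comments below). The class names, the suite name and the parallelRun system property are purely illustrative assumptions, not taken from any specific project.

import java.util.List;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public class SuiteRunner {

    public static void main(String[] args) {
        // Illustrative flag: a real project would read this from CI / config.
        boolean runInParallel = Boolean.parseBoolean(
                System.getProperty("parallelRun", "true"));

        XmlSuite suite = new XmlSuite();
        suite.setName("functional-suite");

        // Switch between parallel and sequential execution based on context.
        suite.setParallel(runInParallel
                ? XmlSuite.ParallelMode.METHODS
                : XmlSuite.ParallelMode.NONE);
        suite.setThreadCount(runInParallel ? 4 : 1);

        XmlTest test = new XmlTest(suite);
        test.setName("smoke");
        // "com.example.tests.LoginTest" is a placeholder test class.
        test.setXmlClasses(List.of(new XmlClass("com.example.tests.LoginTest")));

        TestNG testng = new TestNG();
        testng.setXmlSuites(List.of(suite));
        testng.run();
    }
}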

In my experience, unfortunately most of the functional automation that is built is:
  • not optimal
  • not fit-for-purpose
  • too slow to run
  • inconsistent in its feedback, and hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


5 comments:

  1. Re: question 10 – test rerun, I think, can be configured in TestNG so that a test must fail 3-4 times before it is actually treated as a bug.
    Questions:
    1. What kind of analysis do you do on your bug report? Can that be automated?


    Replies
    1. @Guruprasad - the functionality of "automatically rerun failed tests" in TestNG is probably one of the most abused features among QA / automation folks. You should never rerun a test blindly. There is a reason the test failed:
      - it could be a poor implementation of the test / test framework,
      - an actual defect in the product code, surfacing in combination with other tests or concurrent usage of the product while the test is running, or
      - a network / infrastructure issue.

      Regardless, there is an issue, and as QA, one should find its root cause and fix it at the source.
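
      For reference, below is a minimal, illustrative sketch of the TestNG retry hook being discussed – the class names and the retry limit are placeholders, not taken from any specific project.

      import org.testng.IRetryAnalyzer;
      import org.testng.ITestResult;
      import org.testng.annotations.Test;

      // Illustrative retry hook; the class name and the limit of 3 are placeholders.
      public class RetryAnalyzer implements IRetryAnalyzer {

          private static final int MAX_RETRIES = 3;
          private int attempts = 0;

          @Override
          public boolean retry(ITestResult result) {
              // Returning true tells TestNG to rerun the failed test.
              return ++attempts < MAX_RETRIES;
          }
      }

      class LoginTest {
          // Attaching the analyzer makes TestNG rerun this test on failure -
          // which is exactly the "blind rerun" cautioned against above.
          @Test(retryAnalyzer = RetryAnalyzer.class)
          public void userCanLogIn() {
              // ... test body ...
          }
      }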

      Regarding bug reports - it depends on who the audience is, and what information will help them make decisions to improve the product.

  2. Good questions. I ask these several times and improve upon them every single day. Value is crucial. Without it, all effort is wasted.

    Replies
    1. Thanks @Mansoor. The question is: what is value, and how do you measure it? And then the main step – how do you make your automated tests give you the value you identified?

  3. Interesting thoughts! Especially in an era where everybody is just chasing “Automation”. I can fully understand where you are coming from, having lived through the era when we were buried in manual activity (even test case management tools were rare!), and now the total focus is on achieving automation rather than pursuing it thoughtfully.
    While most of these pain points are very much true when automating functional tests, here is what I focus on –
    I focus on the suites that have maximum reusability (e.g. Smoke & Sanity), which I can run multiple times without any manual engagement.
    I am not a big fan of pushing to automate regression test cases, since I don’t feel there is enough ROI given the effort it comes with – e.g. technical debt, maintenance, coaching on coding standards, etc.
    The setup I have focused on is to trigger the Smoke suite (to check whether the lights are on!) for every push to the environments prior to Production, and to execute Sanity nightly on all the environments prior to production, i.e. Stage, Trunk, etc.
    Sometimes this automation also helps you provide support at odd hours, or when you are dealing with multiple clouds where the deployment setup differs; by focusing only on Smoke and Sanity, it stays maintainable.
    We engage the core product teams to do failure analysis for their respective products rather than putting an additional burden on the automation team, and maintaining the Smoke and Sanity scripts on a weekly basis is part of their work, so that we always have the latest and, most importantly, a trustworthy set of automated tests!
    We measure the value as follows: for any upgrade of servers, DBs, or migrations, the hours we used to spend on manual intervention just to check whether things still work fine are now saved through automated execution.
