We all know that automation is one of the key enablers for teams on the CI/CD journey.
Most teams are:
- implementing automation
- talking about its benefits
- up-skilling themselves
- talking about tooling
However, many times I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech stacks we get to use, or ...
To try and understand this better, can you answer the questions below?
In your experience, or in your current project:
- Does your functional automation really add value?
- What makes you say it does / or does not?
- How long does it take for tests to run and generate reports?
- In most cases, the product-under-test is available on multiple platforms, e.g. Android & iOS Native, and Web. In such cases, is each scenario that needs to be automated implemented once for all platforms, or once per platform?
- How easy is it to debug and get to the root cause of failures?
- How long does it take to update an existing test?
- How long does it take to add a new test?
- Do your tests run automatically via CI on each new build, or do you need to "trigger" them manually?
- What is the test passing percentage?
- Do you "rerun" the failing tests to see if the failure was intermittent?
- Can you control the level of parallel execution, and switch to sequential execution based on context?
- How clean & DRY is the code?
In my experience, unfortunately most of the functional automation that is built is:
- not optimal
- not fit-for-purpose
- too slow
- inconsistent in its feedback, and hence unreliable
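The "rerun failing tests" question above is closely related to that last point. A blanket retry wrapper, sketched below in Python (all names hypothetical), makes a flaky test "pass" while hiding the instability; at minimum, the report should record how many attempts a pass actually took.

```python
def run_with_retries(test_fn, retries=3):
    """Naive rerun-on-failure: reports a pass if ANY attempt passes.

    This masks intermittent failures instead of surfacing them, so we
    return the attempt count as the one signal worth reporting.
    """
    for attempt in range(1, retries + 1):
        try:
            test_fn()
            return attempt  # number of attempts it took to pass
        except AssertionError:
            if attempt == retries:
                raise  # genuinely failing: exhausted all retries

# A deliberately flaky "test": deterministically fails its first two runs.
class Flaky:
    def __init__(self):
        self.calls = 0

    def test(self):
        self.calls += 1
        assert self.calls >= 3, "intermittent failure"

attempts = run_with_retries(Flaky().test)
print(f"'passed' after {attempts} attempt(s)")  # attempts == 3
```

A green build that quietly needed three attempts per test is exactly the kind of inconsistent, unreliable feedback described above.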
Hence, for the amount of effort invested in implementing automation,
- Are you really getting the value from this activity?
- How can automation truly provide value for teams?