If yes, then read my blog post on Automating Functional / End-2-End Tests Across Multiple Platforms
which shares details on the thought process & criteria involved in
creating a solution, including how to write the tests and run them
across multiple platforms without any code change.
Lastly, the open-sourced solution - teswiz -
also has examples of how to implement a test that orchestrates
multiple devices / browsers to simulate multiple
users interacting with each other as part of the same test.
Wednesday, June 2, 2021
Automating Functional / End-2-End Tests Across Multiple Platforms
Wednesday, December 9, 2020
Getting started with implementing Automation
Getting started with implementing tests for automation (web or native apps) may seem daunting for those who are doing this for the first time.
Assuming you are using open-source tooling like Selenium or Appium, there are multiple ways you can get started.
DIY - Build your own framework by scripting based on the documentation
Use Selenium-IDE for quick record and playback
Use TestProject Recorder for quick record and playback
Use TestProject SDK to build your own custom scripts for automating the tests
Each of the above approaches has its own pros and cons. Let's look at each in some detail:
Approach #1 - DIY - Build your own framework
Selenium: https://www.selenium.dev/documentation/en/
Appium: https://appium.io/docs/en/about-appium/intro/
| Pros | Cons |
| --- | --- |
| You can build all features and capabilities as per your design & requirements | You need to learn a programming language* |
| | You have to build everything on your own (though you can use supporting libraries)* |

\* Depending on the context of the team, the above points can also be considered as an advantage
Approach #2 - Selenium-IDE
https://www.selenium.dev/selenium-ide
| Pros | Cons |
| --- | --- |
| Easy to set up | Basic reports |
| Works in Chrome & Firefox | Works only for automating Web applications |
| Code can be exported in various formats | |
| Recorded tests can be run from the command line | |
| Tests can be run in your own CI | |
| Will always be in sync with the underlying WebDriver | |
Approach #3 - TestProject Recorder
https://testproject.io/easy-test-automation/
| Pros | Cons |
| --- | --- |
| Advanced recorder (lots of actions, validations, self-healing, customisations possible, and a lot of community Addons) | Recorder works only in Chrome, but tests can be executed on all browsers |
| Recorder works for Web applications as well as Native Apps (on real devices or emulators) for Android and iOS (even iOS on a Windows machine) | Generated code is very simple - good as a reference to see how the underlying implementation / interaction is done |
| The TestProject agent automatically determines all available browsers and devices connected to the machine, and execution can be customised accordingly | Each recorded test needs to be exported individually. There is no concept of reuse in this approach |
| Can schedule test runs as a one-time or repeated activity via the built-in scheduler, CI/CD tool integrations, or their RESTful API | |
| Reports are comprehensive with meaningful data, including screenshots and an option to download in PDF format | |
| Code can be generated from the recorded script | |
| Tests can be shared easily using the "Share test" feature | |
Approach #4 - TestProject SDK
https://testproject.io/advanced-scripting-capabilities
| Pros | Cons |
| --- | --- |
| Probably the most powerful of these 4 approaches, as it uses WebDriver / Appium under the hood. You get the power of building your own framework, while reusing out-of-the-box features like driver management, automatic reporting, etc. | You need to learn a programming language |
| Driver management is TestProject's responsibility; the test implementer can focus on automating tests | |
Sunday, December 6, 2020
Long time no see? Where have I been?
Below are the links to all those articles, for which I have received very kind reviews and comments on LinkedIn and Twitter.
Apart from this, I have also been contributing to open source - namely - Selenium, AppiumTestDistribution and building an open-source kickstarter project for API testing using Karate and for end-2-end testing for Android, iOS, Windows, Mac & Web.
Lastly, I have also been speaking in virtual conferences, webinars and last week I also recorded a podcast, which will be available soon.
The end of Smoke, Sanity and Regression
Do we need Smoke, Sanity, Regression suites?
- Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
- Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
- Choose the toolset wisely
- After all the correct (subjective) approaches are taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then you can run your tests in parallel, and subsequently split the test suite into smaller suites which can give you a progressive indication of quality
- Applitools, with its AI-powered algorithms, can make your functional tests lean, simple, and robust, and include UI / UX validation
- Applitools Ultrafast Grid will remove the need for Cross-Browser testing, and instead with a single test execution run, validate functionality & UI / Visual rendering for all supported Browsers & Viewports
Design Patterns in Test Automation
Writing code is easy, but writing good code is not as easy. Here are the reasons why I say this:
- “Good” is subjective.
- “Good” depends on the context & overall objective.
Similarly, implementing automated test cases is easy (as seen from the getting started example shared earlier). However, scaling this up to be able to implement and run a huge number of tests quickly and efficiently, against an evolving product is not easy!
I refer to a few principles when building a Test Automation Framework. They are:
- Based on the context & (current + upcoming) functionality of your product-under-test, define the overall objective of Automation Testing.
- Based on the objective defined above, determine the criteria and requirements from your Test Automation Framework. Refer to my post on “Test Automation in the World of AI & ML” for details on various aspects you need to consider to build a robust Test Automation Framework. Also, you might find these articles interesting to learn how to select the best tool for your requirements:
- Criteria for Selecting the Right Functional Testing Tools
- How to Select the Best Tool – Research Process
- How To Select The Right Test Automation Tool
Stop the Retries in Tests & Reruns of Failing Tests
- Recognise reasons why tests could be flaky / intermittent
- Critique band-aid approach to fixing flakiness in tests
- Discuss techniques to identify reasons for test flakiness
- Fix the root-cause, not the symptoms to make your tests stable, robust and scalable!
Measuring Code Coverage from API Workflow & Functional UI Tests
Why is the Functional Coverage important?
You can choose your own way to implement Functional Coverage – based on your context of team, skills, capability, tech-stack, etc.
Tuesday, July 7, 2020
Does your functional automation really add value?
Most teams are:
- implementing automation
- talking about its benefits
- up-skilling themselves
- talking about tooling
- etc.
However, many a time I feel we are blinded because of the theoretical value test automation provides, or because everyone says it adds value, or because of the shiny tools / tech-stacks we get to use, or ...
- Does your functional automation really add value?
- What makes you say it does / or does not?
- How long does it take for tests to run and generate reports?
- In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
- How easy is it to debug and get to the root cause of failures?
- How long does it take to update an existing test?
- How long does it take to add a new test?
- Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
- What is the test passing percentage?
- Do you “rerun” the failing tests to see if this was an intermittent issue?
- Is there control over the level of parallel execution, and the ability to switch to sequential execution based on context?
- How clean & DRY is the code?
In my experience, unfortunately most of the functional automation that is built is:
- not optimal
- not fit-for-purpose
- does not run fast enough
- gives inconsistent feedback, hence unreliable
- Are you really getting the value from this activity?
- How can automation truly provide value for teams?
Friday, October 11, 2019
Overcoming chromedriver version compatibility issues the right way
So I asked a question on LinkedIn
And I tweeted asking how to manage ChromeDriver version when running WebDriver / Appium tests.
I have a #appium / #java #framework.— Anand Bagmar (@BagmarAnand) September 20, 2019
What is a good way to keep #chromedriver updated with version of #chrome via #scripts? I do not want to do this manually - since I have 3+ #jenkins agents (#linux + #mac mini) and 7+ team members who need to keep doing this #osx #command
However, that was a partial answer for me.
Here is my context and problem statement in detail:
- My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD)
- ATD is open-source, and takes away my pain and effort of managing appium and the devices and also takes care of running the tests in parallel or distributed mode, on android as well as iOS
- In my local lab setup, I have many different android devices connected - which run tests as directed by ATD
- Since you cannot control how Google PlayStore / Apple App Store pushes out new versions of apps for different android / iOS versions on devices, it is easily possible to end up with different versions of chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.
Here is the approach that solved it:
- Query the chrome browser versions on each connected device
- For the highest version of the browser, use WebDriverManager to download the appropriate chromedriver
- Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
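The three steps above can be sketched roughly as follows. This is a minimal illustration, not the framework's actual code: the per-device version query and the WebDriverManager calls appear only as comments (assumed usage), while the version-selection logic is plain Java.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ChromeDriverSelector {

    // Picks the highest Chrome version reported across the connected devices.
    // In a real setup, each entry would come from something like:
    //   adb -s <deviceId> shell dumpsys package com.android.chrome
    public static String highestVersion(List<String> deviceChromeVersions) {
        return deviceChromeVersions.stream()
                .max(Comparator.comparing(ChromeDriverSelector::asComparable))
                .orElseThrow(() -> new IllegalArgumentException("no devices found"));
    }

    // Left-pads each numeric part of "87.0.4280.66" to a fixed width so that
    // plain string comparison orders "100.x" above "99.x".
    private static String asComparable(String version) {
        StringBuilder sb = new StringBuilder();
        for (String part : version.split("\\.")) {
            sb.append(String.format("%10s", part));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> versions = Arrays.asList("86.0.4240.110", "87.0.4280.66", "85.0.4183.127");
        String highest = highestVersion(versions);
        System.out.println("Highest Chrome version on devices: " + highest);
        // With the highest version known, WebDriverManager (assumed usage)
        // can fetch the matching chromedriver:
        //   WebDriverManager.chromedriver().browserVersion(highest).setup();
        //   String driverPath = WebDriverManager.chromedriver().getDownloadedDriverPath();
        // That path is then passed into the AndroidDriver capabilities.
    }
}
```

Only the highest-version chromedriver needs to be downloaded, since chromedriver is backward compatible within reason; a stricter setup could download one chromedriver per distinct browser version found.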
Wednesday, September 25, 2019
Analytics - The Brain of the Software
An Analogy
Each organ has to:
- function correctly (movement, senses, core functions, etc.)
- has to perform as per expectations in different conditions the individual may be going through (walking, running, swimming, etc.)
- has to be secure from external parameters (heat, cold, rain, what we eat / drink, etc.)
- has to have a proper user experience (ex: if human hands had webs like ducks, would we be able to hold a pen correctly to write?)
- I would like to think of the brain as the super computer which keeps track of what is going on in the body, and whether each piece is playing its part correctly, or not. And if there is something unexpected going on, then there are mechanisms to give that feedback internally and externally so that course correction is possible.
How does this relate to software?
Functionality works as expected
- The architecture, testability of the system will allow for various types of testing activities to be performed on the software to ensure everything works as expected
- Test Automation practices will give you quick feedback
There is a plethora of open-source and commercial tools in this space to help in this regard - the most popular open-source tools being Selenium and Appium.
Software is performant
Software is secure
- Building and testing for security is critical, as you do not want user information to be leaked or manipulated, and neither do you want to allow external forces to control / manipulate your product's behaviour
- The Test Automation Pyramid hence also includes NFRs
User experience is validated, and consistent
- In the age of CD (Continuous Delivery & Continuous Deployment), you need to ensure your user experience across all your software delivery means (browsers, mobile-browsers, native apps for mobiles and tablets, etc.) is consistent and users do not face the brunt of UI / look-and-feel issues in the software at the cost of new features
- This is a relatively new domain - but there are already many tools to help in this space as well - the most popular one (in terms of integration, usage and features) being the AI-powered Applitools
What is the brain of the software?
Analytics is that piece in the Software product that functions as the brain. It keeps collecting data about each important piece of software, and provides feedback on the same.
I have come across some extreme examples of Businesses / Organizations that have all their eggs in one basket - in terms of how they:
- understand their Consumers (engagement / usage / patterns / etc.),
- understand usage of product features, and,
- do all revenue-related book-keeping
This is all done purely on Analytics! Hence, to say “Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work” - is not a myth!
What this means is Analytics is more important now, than before.
Unfortunately, Analytics is not well known to the Software Dev + Test community. We know it very superficially - and do what is required to implement it and quickly test it out. But what is analytics? Why is it important? What is the impact of it not working well? Not many think about this.
I have been testing Analytics since 2010 ... and the kind of insights I have been able to get about the product have been huge! I have been able to contribute back to the team and help build better quality software as a result.
But I have to be honest - it is painful to test Analytics. And that is why I created an open-source framework - WAAT - to help automate some of these testing activities.
I also do workshops to help people learn more about Analytics, its importance, and how they can automate this as well.
In the workshop, I do not assume anything; the approach is to discuss, and learn by example and practice, the following:
- How does Analytics work (for Web and Mobile)?
- Test Analytics manually in different ways
- Test Analytics via the final reports
- Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
- We will see a demo of the Automation running for the same.
- Time permitting, we will setup running some Automation scripts on your machine to validate the same
Takeaways from the workshop
- What is Analytics?
- Techniques to test analytics manually.
- How to automate the validation of analytics, via a demo, and if time permits, run the automation from your machine as well.
The next Analytics workshop is at TestBash Australia 2019. Let me know if you would be interested in attending.
Friday, June 14, 2019
Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019
What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting "Quality & Release Strategy for Native Android & iOS Apps"
Abstract:
Experimentation and quick feedback are the key to the success of any product, while of course ensuring a good quality product with new and better features is being shipped out at a decent / regular frequency to the users.
In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product by using a case study of a media / entertainment domain product, used by millions of users across 10+ countries - i.e. - we will discuss the Testing Strategy and the Release process for an Android & iOS Native app - that will help enable CI & CD.
To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps and how that is different than Web / Mobile Web Apps.
The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
- Functional Automation approach - identify and automate user scenarios, across supported regions
- Testing approach - what to test, when to test, how to test!
- Manual Sanity before release - and why it was important!
- Staged roll-outs via Google’s Play Store and Apple’s App Store
- Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
- Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)
Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar
Monday, June 3, 2019
Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant
Agenda:
Below is the abstract of my talk:
The Test Automation Pyramid is not a new concept. While Automation helps validate the functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual. With everyone wanting to be Agile and doing quick releases, this look & feel / UX validation becomes the bottleneck, and is also a very error-prone activity, which hurts brand and revenue, and dilutes your user-base.
In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy and also look at how an AI-powered tool - Applitools Eyes, can solve this problem.
Recording from the talk:
Friday, July 20, 2018
Implementing Soft Assertions
That was a trigger point for me to think about Soft Assertions - what if there was a way to flag a type of failure that I want to know about, while letting the test continue with the remaining set of validations - unless it does not make sense to proceed.
Ex: If the text message of a field is incorrect, I can continue. But if login fails, no point in proceeding with the rest of the test.
This idea seemed interesting - so I came up with the following requirements from such an implementation as listed below:
- Clear distinction between what type of failure I can continue from, or not
- Ex: assert.** is for hard asserts. verify.** is for soft asserts
- All failures that I can continue from (i.e. soft asserts), need to be collated and at the end of the test, the complete list of those soft assert failures should be presented with the test result (and in the report), while the test failed just once
- Ex: "There were 5 soft assertion failures"
- Capture relevant screenshots whenever Soft Assertion failed
- If there was a hard assert along the way of the test execution, the test failure should include the prior soft assert failures along with the hard assert failure, as appropriate
For the actual implementation, I did the following (in 2009/2010):
- I looked into the TestNG code base, and I could not really find any out-of-the-box support for what I wanted to do.
- So for lack of knowledge on better ways of implementation,
- I checked-out the TestNG code,
- added the Soft Assertion implementation, and,
- built a custom TestNG.jar file
- checked-in the jar file as a library artefact in our automation framework.
Implementing Soft Assertions in your test framework
Using Soft Assertions in your tests
Here is how you can use the Soft Assertions in your tests.

Soft Assertions in any other tech stack?

What if you are not using TestNG, or Java - rather, what if you are using a completely different programming language / tools / test-runner? Can you still use Soft Assertions? Absolutely YES! All you need is to understand the concept, and figure out the best way to implement it, if an out-of-the-box solution does not exist in that tech stack.
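The concept can be sketched as a minimal, framework-agnostic collector in plain Java. This is an illustrative sketch with made-up names (`SoftVerify`, `verifyTrue`, `assertAll`), not the custom TestNG-based implementation described earlier:

```java
import java.util.ArrayList;
import java.util.List;

public class SoftVerify {

    private final List<String> failures = new ArrayList<>();

    // Soft assertion: record the failure and let the test continue.
    public void verifyTrue(boolean condition, String message) {
        if (!condition) {
            failures.add(message);
            // A real implementation could also capture a screenshot here.
        }
    }

    // Hard assertion: fail immediately, but include all soft failures so far.
    public void assertTrue(boolean condition, String message) {
        if (!condition) {
            failures.add(message);
            assertAll();
        }
    }

    // Called at the end of the test: fail once, listing all collated failures.
    public void assertAll() {
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion failure(s): " + failures);
        }
    }

    public static void main(String[] args) {
        SoftVerify sv = new SoftVerify();
        sv.verifyTrue("Welcome!".equals("Welcome"), "greeting text is incorrect"); // soft: test continues
        sv.verifyTrue(2 + 2 == 4, "math still works");                             // passes, nothing recorded
        try {
            sv.assertAll(); // fails the test once, listing the collated soft failures
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Calling `assertAll()` at the end of each test makes the test fail just once, with the full list of collated soft failures - matching the requirements listed above.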
Hope this helps you!