Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently while doing Native Android / iOS app automation - the Chrome browser on the devices kept getting updated automatically, and my tests started failing with errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn, and also tweeted, asking how to manage the ChromeDriver version when running WebDriver / Appium tests.

The answer was common and obvious – use WebDriverManager. This is a beautiful, simple and indeed the right solution to the problem.

However, that was a partial answer for me. 

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium, and I use AppiumTestDistribution (ATD)
  • ATD is open-source; it takes away the pain and effort of managing Appium and the devices, and also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different Android devices connected, which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store push out new versions of apps to devices on different Android / iOS versions, you can easily end up with different versions of the Chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.

Once the community very kindly reminded me about WebDriverManager (which I had forgotten about), I knew what needed to be done.

I looked at the ATD code and realised that it was using the default chromedriver that was set up when I had installed Appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the Chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager to download the appropriate chromedriver
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? I was also confused initially, but the answer is simple. On some devices, the Chrome browser is installed by default as a system app, which cannot be removed. So as newer versions of the browser get installed, the default Chrome system app is still there, and when you query for the versions of Chrome on the device, you will see two such versions. My code logic was to get all these versions and pick the highest one from them.
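
To make that "query and pick the highest" step concrete, here is a minimal sketch in Java (assuming adb is available on the PATH; the class and method names are illustrative, not the actual ATD code):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only - not the actual ATD implementation.
public class DeviceChromeVersion {

    // List every versionName reported for com.android.chrome on the device
    // (the pre-installed system app plus any updated copy), then pick the highest.
    public static String highestChromeVersion(String deviceUdid) throws Exception {
        Process adb = new ProcessBuilder("adb", "-s", deviceUdid, "shell",
                "dumpsys", "package", "com.android.chrome")
                .redirectErrorStream(true)
                .start();

        List<String> versions = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(adb.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.startsWith("versionName=")) {
                    versions.add(line.substring("versionName=".length()));
                }
            }
        }

        // The major version is what decides chromedriver compatibility, so compare on that.
        return versions.stream()
                .max(Comparator.comparingInt(v -> Integer.parseInt(v.split("\\.")[0])))
                .orElseThrow(() -> new IllegalStateException(
                        "Chrome not found on device " + deviceUdid));
    }
}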

Here is the code snippet of how I solved the problem:
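(Below is a minimal sketch of the essence of that change, not the exact PR code - it assumes a recent WebDriverManager (5.x) and Appium's java-client, and the identifiers are illustrative.)

import io.appium.java_client.android.AndroidDriver;
import io.github.bonigarcia.wdm.WebDriverManager;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

// Illustrative sketch only - not the exact PR code.
public class ChromedriverForDevice {

    public static AndroidDriver createDriver(String chromeVersionOnDevice, URL appiumServerUrl) {
        // Let WebDriverManager resolve and download the chromedriver that matches
        // the Chrome version found on the device.
        WebDriverManager manager = WebDriverManager.chromedriver()
                .browserVersion(chromeVersionOnDevice);
        manager.setup();
        String chromedriverPath = manager.getDownloadedDriverPath();

        // Point Appium at that binary instead of the one bundled when Appium was installed.
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("chromedriverExecutable", chromedriverPath);
        // ... plus the usual capabilities: deviceName, udid, automationName, app, etc.

        return new AndroidDriver(appiumServerUrl, capabilities);
    }
}

The key piece is the chromedriverExecutable capability, which tells Appium to use the freshly downloaded chromedriver for webview / Chrome sessions on that device.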
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used your suggestion for WebDriverManager to solve the problem.


Wednesday, September 25, 2019

Analytics - The Brain of the Software



An Analogy 



I am not a doctor, nor did I enjoy biology much as a student. However, I do know that the body has many organs and each organ plays a vital role in the well-being of the individual.

Each organ has to:

  • function correctly (movement, senses, core functions, etc.)
  • perform as per expectations in the different conditions the individual may be going through (walking, running, swimming, etc.)
  • be secure from external parameters (heat, cold, rain, what we eat / drink, etc.)
  • provide a proper user experience (ex: if human hands had webs like ducks, would we be able to hold a pen correctly to write?)



I would like to think of the brain as the supercomputer which keeps track of what is going on in the body - whether each piece is playing its part correctly or not. And if there is something unexpected going on, there are mechanisms to give that feedback internally and externally so that course correction is possible.

How does this relate to software?

Software is similar in some ways. For any software product to work, the following needs to be done:

Functionality works as expected


  • The architecture and testability of the system should allow various types of testing activities to be performed on the software, to ensure everything works as expected
  • Test Automation practices will give you quick feedback



There is a plethora of open-source and commercial tools in this space to help in this regard - the most popular open-source tools being Selenium and Appium.


Software is performant


  • We can do performance testing at various levels to ensure that, under different loads and conditions, users will be able to use the product in a seamless fashion
  • There are many tools to assist in Performance Testing - some popular ones being JMeter and Gatling.

Software is secure


  • Building and testing for security is critical - you do not want user information to be leaked or manipulated, nor do you want to allow external forces to control / manipulate your product's behaviour
  • The Test Automation Pyramid hence also includes NFRs





User experience is validated, and consistent


  • In the age of CD (Continuous Delivery & Continuous Deployment), you need to ensure that the user experience across all your delivery channels (browsers, mobile browsers, native apps for mobiles and tablets, etc.) is consistent, and that users do not face the brunt of UI / look-and-feel issues as the price of new features
  • This is a relatively new domain - but there are already many tools to help in this space as well - the most popular one (in terms of integration, usage and features) being the AI-powered Applitools

Visual Validation is the new tip of the Test Automation Pyramid!





What is the brain of the software?

The above is all good, and known in various ways. But what is the "brain" of the software? How does one know if everything is working fine or not? Who will receive the feedback and how do we take corrective action on this?

Analytics is that piece in the Software product that functions as the brain. It keeps collecting data about each important piece of software, and provides feedback on the same.

I have come across some extreme examples of Businesses / Organizations who have all their eggs in one basket - in terms of how they:

  • understand their Consumers (engagement / usage / patterns / etc.),
  • understand the usage of product features, and,
  • do all revenue-related book-keeping

All of this is done purely through Analytics! Hence, the saying “Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work” - is not a myth!

What this means is Analytics is more important now, than before.

Unfortunately, Analytics is not well known in the Software Dev + Test community. We know it very superficially - we do what is required to implement it and quickly test it out. But what is Analytics? Why is it important? What is the impact of it not working well? Not many think about this.

I have been testing Analytics since 2010 ... and the kind of insights I have been able to get about the product have been huge! I have been able to contribute back to the team and help build better quality software as a result.

But I have to be honest - it is painful to test Analytics. And that is why I created an open-source framework - WAAT - to help automate some of these testing activities.
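
To give a flavour of what that automation involves: analytics tags are ultimately HTTP requests carrying parameters, so the general idea is to capture the outgoing request (for example via a proxy or a HAR file) and assert on its parameters. Below is a minimal sketch of the assertion side in Java - the general idea only, not WAAT's actual API, and the names and example URL are illustrative:

import java.net.URI;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only - not WAAT's actual API.
public class AnalyticsTagValidator {

    // Parse the query parameters of a captured analytics request URL.
    static Map<String, String> parameters(String capturedUrl) {
        Map<String, String> params = new HashMap<>();
        String query = URI.create(capturedUrl).getQuery();
        if (query == null) {
            return params;
        }
        for (String pair : query.split("&")) {
            String[] keyValue = pair.split("=", 2);
            params.put(keyValue[0], keyValue.length > 1 ? keyValue[1] : "");
        }
        return params;
    }

    // Assert that the captured tag carries the expected parameters.
    static void assertTag(String capturedUrl, Map<String, String> expected) {
        Map<String, String> actual = parameters(capturedUrl);
        expected.forEach((key, value) -> {
            if (!value.equals(actual.get(key))) {
                throw new AssertionError("Analytics parameter '" + key
                        + "' expected '" + value + "' but was '" + actual.get(key) + "'");
            }
        });
    }

    public static void main(String[] args) {
        // Hypothetical page-view hit captured from the network traffic.
        assertTag("https://analytics.example.com/collect?v=1&t=pageview&dp=/home",
                Map.of("t", "pageview", "dp", "/home"));
        System.out.println("Analytics tag validated");
    }
}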

I also do workshops to help people learn more about Analytics, its importance, and how they can automate this as well.

In the workshop, I do not assume any prior knowledge; the approach is to discuss and learn, by example and practice, the following:

  • How does Analytics work (for Web and Mobile)?
  • Test Analytics manually in different ways
  • Test Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)!
  • We will see a demo of the Automation running for the same.
  • Time permitting, we will set up and run some Automation scripts on your machine to validate the same

Takeaways from the workshop

We will learn by practicing the following:
  • What is Analytics?
  • Techniques to test analytics manually.
  • How to automate the validation of analytics, via a demo, and if time permits, run the automation from your machine as well.
Hope this post helps you understand the importance of Analytics and why you need to know more about it. Do reach out to me if you want to learn more about it.

The next Analytics workshop is at TestBash Australia 2019. Let me know if you would be interested in attending the same.


Friday, June 14, 2019

Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019


What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting up a "Quality & Release Strategy for Native Android & iOS Apps".

Abstract:
Experimentation and quick feedback are the key to the success of any product - while, of course, ensuring that a good quality product with new and better features is being shipped out to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and Release process of an Android & iOS Native app, which will help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps, and how that is different from testing Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
  • Functional Automation approach - identify and automate user scenarios, across supported regions
  • Testing approach - what to test, when to test, how to test!
  • Manual Sanity before release - and why it was important!
  • Staged roll-outs via Google’s Play Store and Apple’s App Store
  • Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
  • Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)

Slides:



Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar

Monday, June 3, 2019

Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant

I spoke about Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT event organised by Globant India Pvt. Ltd.




The event was very well organised and I had the opportunity to interact with a full house, and also later meet and talk with a lot of interesting people - curious about the current state of testing and test automation, and how AI can impact it in the future.

Agenda:



Below is the abstract of my talk:

The Test Automation Pyramid is not a new concept. While Automation helps validate functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual.

With everyone wanting to be Agile and do quick releases, this look & feel / UX validation becomes the bottleneck; it is also a very error-prone activity, and mistakes here hurt your brand and revenue, and end up diluting your user base.

In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy and also look at how an AI-powered tool - Applitools Eyes, can solve this problem.


Recording from the talk:




Some pictures:







Tuesday, March 19, 2019

Collaboration - A Taboo!

In AgileIndia 2019 in Bangalore, as part of the Agile Mindset theme, I played a tweaked version of the Taboo game - turning it into a Collaboration game.

Abstract: 

When one has fun at work, work becomes fun. However, daily pressures, metrics, KPIs, and what not, have dissolved the fun, and made work drudgery in various ways. 

This creates stress for individuals and within teams; across teams, there is mistrust, unnecessary competition, blame, finger-pointing ….

What better way to learn, and re-learn the basics of life, work, team-work - than to play a game, have fun, and correlate it with how life and work indeed should be treated as a game, and we should have fun in this journey. Only then can people truly succeed, and so can organisations.

Here, we will play a game – “Collaboration - A Taboo!” – where you will 

  • Re-learn collaboration techniques via a game! 
  • Take away learnings applicable to individuals and teams, in small or big organisations
  • Re-live your childhood while playing this game

Be prepared for a twist which will leave you thinking!

Slides:



Saturday, March 16, 2019

Visual validation - The Missing Tip of the Automation Pyramid


At yet-another-vodQA at ThoughtWorks, this time in the Pune edition on 16th March 2019, I spoke about Visual validation - The Missing Tip of the Automation Pyramid


Abstract:

The Test Automation Pyramid is not a new concept. The top of the pyramid is our UI / end-2-end functional tests - which should cover the breadth of the product.

What the functional tests cannot capture, though, are the aspects of UX validation that can only be seen - and in some cases, caught - by the human eye. This is where the new buzzwords of AI & ML can truly help.


In this session, we will explore why Visual Validation is an important cog in the wheel of Test Automation and also different tools and techniques that can help achieve this. We will also see a demo of Applitools Eyes - and how it can be a good option to close this gap in automation!
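
To give a flavour of what that demo looks like, here is a minimal visual check with Applitools Eyes in Java - a sketch assuming the Eyes Selenium Java SDK, an API key in the APPLITOOLS_API_KEY environment variable, and an illustrative URL and test names:

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Illustrative sketch only.
public class VisualCheckSketch {

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // Start a visual test - the app and test names show up in the Applitools dashboard.
            eyes.open(driver, "Demo App", "Home page looks right");
            driver.get("https://example.com");
            // Capture the window; Eyes compares it against the stored baseline image.
            eyes.checkWindow("Home page");
            // close() fails the test if visual differences were found.
            eyes.close();
        } finally {
            // Clean up if the test ended before close() was reached.
            eyes.abortIfNotClosed();
            driver.quit();
        }
    }
}

The functional assertions stay in your existing tests; the checkWindow call adds the look-and-feel comparison on top of them.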



Slides are available from here






Video is available here:








Thanks to Priyank Shah for this pic!






I also received some awesome feedback for the same.





Thanks vodQA Team! Till next time, adios!

Thursday, February 14, 2019

Talks and workshops in Agile India 2019


In the upcoming Agile India 2019 in Bangalore, I will be speaking about:






If you have not yet registered, you can use this code to get a discount on your registration - anand-10di$c-agile 

In addition, there are some great pre- and post-conference workshops as well. I will be participating in the "Facilitating for Effective Collaboration...One Nudge at a Time" workshop, conducted by Deborah Hartmann Preuss and Ellen Grove.


This is going to be one amazing conference to learn, network and share ideas and experiences. See you there!



Monday, February 11, 2019

Test Automation in the World of AI and ML

My article on "Test Automation in the World of AI & ML" recently got published on InfoQ.


Here are the key takeaways mentioned in the article -

  • There are many criteria to be considered before building framework / selecting tools for Functional Test Automation
  • It is very important to prioritise the framework / tool capabilities needed for the software-under-test
  • A good, scalable Test Automation Framework that provides fast and reliable feedback to the team enables collaboration and CI/CD
  • Debugging / RCA (root cause analysis), and support for the libraries / tools used, is an afterthought in most cases. Do not fall into that trap.
  • There are some promising commercial tools that fit seamlessly in the Agile way of working. Depending on the complete context, these tools may be a good choice over building your own framework for Functional Automation.

You can read the full article from here

Looking forward to comments on the same!

