Wednesday, June 2, 2021
Automating Functional / End-2-End Tests Across Multiple Platforms
If yes, then read my blog post on Automating Functional / End-2-End Tests Across Multiple Platforms,
which shares details on the thought process & criteria involved in
creating a solution: how to write the tests, and how to run them
across multiple platforms without any code change.
Lastly, the open-sourced solution - teswiz -
also has examples of how to implement a test that orchestrates
multiple devices / browsers to simulate multiple
users interacting with each other as part of the same test.
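To illustrate the underlying idea (this is not teswiz's actual API - just a minimal, hypothetical Selenium sketch), a multi-user test drives two independent sessions from the same test and orchestrates them against each other:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class MultiUserTestSketch {
    public static void main(String[] args) {
        // Each simulated "user" gets its own browser session within the same test.
        WebDriver host = new ChromeDriver();
        WebDriver guest = new ChromeDriver();
        try {
            host.get("https://example.com/chat");   // hypothetical app under test
            guest.get("https://example.com/chat");

            // The single test orchestrates both sessions: an action performed by the
            // "host" user is verified from the "guest" user's session, and vice versa.
            // e.g. sendMessage(host, "hello");  assertMessageVisible(guest, "hello");
        } finally {
            host.quit();
            guest.quit();
        }
    }
}
```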
Sunday, December 6, 2020
Long time no see? Where have I been?
Below are the links to all those articles, for which I have received very kind reviews and comments on LinkedIn and Twitter.
Apart from this, I have also been contributing to open source - namely Selenium and AppiumTestDistribution - and building open-source kick-starter projects for API testing (using Karate) and for end-2-end testing of Android, iOS, Windows, Mac & Web apps.
Lastly, I have also been speaking at virtual conferences and webinars, and last week I recorded a podcast, which will be available soon.
The end of Smoke, Sanity and Regression
Do we need Smoke, Sanity, Regression suites?
- Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
- Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
- Choose the toolset wisely
- After all the correct (subjective) approaches have been taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then run your tests in parallel, and subsequently split the test suite into smaller suites which can give you a progressive indication of quality (see the sketch after this list)
- Applitools, with its AI-powered algorithms, can make your functional tests lean, simple and robust, and include UI / UX validation
- Applitools Ultrafast Grid removes the need for Cross-Browser testing: with a single test execution run, it validates functionality & UI / Visual rendering across all supported Browsers & Viewports
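As a flavour of what parallel execution of a single, lean suite can look like (instead of carving out separate smoke / sanity suites), here is a minimal JUnit 5 sketch; the class, tag and test names are hypothetical, and parallel execution additionally requires junit.jupiter.execution.parallel.enabled=true in junit-platform.properties:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.parallel.Execution;
import org.junit.jupiter.api.parallel.ExecutionMode;

// Run the tests in this class concurrently; tags still allow a selective,
// progressive run when a faster signal is needed.
@Execution(ExecutionMode.CONCURRENT)
class CheckoutJourneyTest {

    @Test
    @Tag("checkout")
    void guestUserCanCheckOut() {
        // ... end-2-end scenario for a guest user (placeholder)
    }

    @Test
    @Tag("checkout")
    void registeredUserCanCheckOut() {
        // ... end-2-end scenario for a registered user (placeholder)
    }
}
```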
Design Patterns in Test Automation
Writing code is easy, but writing good code is not as easy. Here are the reasons why I say this:
- “Good” is subjective.
- “Good” depends on the context & overall objective.
Similarly, implementing automated test cases is easy (as seen from the getting started example shared earlier). However, scaling this up to be able to implement and run a huge number of tests quickly and efficiently, against an evolving product is not easy!
I refer to a few principles when building a Test Automation Framework. They are:
- Based on the context & (current + upcoming) functionality of your product-under-test, define the overall objective of Automation Testing.
- Based on the objective defined above, determine the criteria and requirements from your Test Automation Framework. Refer to my post on “Test Automation in the World of AI & ML” for details on various aspects you need to consider to build a robust Test Automation Framework. Also, you might find these articles interesting to learn how to select the best tool for your requirements:
- Criteria for Selecting the Right Functional Testing Tools
- How to Select the Best Tool – Research Process
- How To Select The Right Test Automation Tool
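Beyond tool selection, the "design patterns" in the title are about structuring the test code itself so it scales with the product. As one common illustration, here is a minimal Page Object sketch (class names and locators are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A Page Object hides locators and low-level WebDriver calls behind
// intention-revealing methods, so tests read as business flows and a UI
// change needs fixing in exactly one place.
class LoginPage {
    private final WebDriver driver;
    private final By userField = By.id("username");   // hypothetical locators
    private final By passwordField = By.id("password");
    private final By loginButton = By.id("login");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    HomePage loginAs(String user, String password) {
        driver.findElement(userField).sendKeys(user);
        driver.findElement(passwordField).sendKeys(password);
        driver.findElement(loginButton).click();
        return new HomePage(driver);
    }
}

class HomePage {
    HomePage(WebDriver driver) {
        // verify the landing page has loaded, keep the driver for further actions
    }
}
```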
Stop the Retries in Tests & Reruns of Failing Tests
- Recognise reasons why tests could be flaky / intermittent
- Critique the band-aid approach to fixing flakiness in tests
- Discuss techniques to identify reasons for test flakiness
- Fix the root-cause, not the symptoms, to make your tests stable, robust and scalable! (a minimal sketch follows after this list)
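As one concrete example of fixing the root cause rather than the symptom: instead of wrapping a flaky step in a retry loop, wait explicitly for the condition that step depends on. A minimal Selenium sketch, with a hypothetical locator:

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class StableStep {
    // Band-aid: retry the whole test (or this step) and hope the button is ready.
    // Root-cause fix: the button appears asynchronously, so wait for exactly
    // that condition before acting on it.
    static void addItemToCart(WebDriver driver) {
        WebElement addToCart = new WebDriverWait(driver, Duration.ofSeconds(10))
                .until(ExpectedConditions.elementToBeClickable(By.id("add-to-cart")));
        addToCart.click();
    }
}
```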
Measuring Code Coverage from API Workflow & Functional UI Tests
Why is Functional Coverage important?
You can choose your own way to implement Functional Coverage – based on your context of team, skills, capability, tech-stack, etc.
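As one deliberately simple illustration of that idea - assuming each automated scenario is tagged with the business features it exercises - functional coverage can then be reported per feature; all names below are hypothetical:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Each executed scenario declares the business features it exercises;
// coverage is then reported per feature, rather than per line of code.
class FunctionalCoverageReport {
    record Scenario(String name, Set<String> featuresExercised) {}

    static Map<String, Long> coverageByFeature(List<Scenario> executed) {
        Map<String, Long> counts = new TreeMap<>();
        for (Scenario scenario : executed) {
            for (String feature : scenario.featuresExercised()) {
                counts.merge(feature, 1L, Long::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        var report = coverageByFeature(List.of(
                new Scenario("guest checkout", Set.of("Login", "Payments")),
                new Scenario("search by keyword", Set.of("Search"))));
        System.out.println(report);   // {Login=1, Payments=1, Search=1}
    }
}
```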
Saturday, September 5, 2020
Questions at Taquelah - Does your functional automation really add value?
I spoke at Taquelah Lightning Talks on one of my favorite topics -
Does your functional automation really add value?
You can find the slides here - https://www.slideshare.net/abagmar/does-your-functional-automation-really-add-value
Some references:
https://essenceoftesting.blogspot.com/2020/07/does-your-functional-automation-really.html
https://essenceoftesting.blogspot.com/2020/03/tracking-functional-coverage.html
Friday, June 14, 2019
Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019
What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting "Quality & Release Strategy for Native Android & iOS Apps"
Abstract:
Experimentation and quick feedback are the key to the success of any product, while of course ensuring that a good quality product, with new and better features, is being shipped out to the users at a decent / regular frequency.
In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and the Release process for an Android & iOS Native app - that will help enable CI & CD.
To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps and how that is different from Web / Mobile Web Apps.
The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
- Functional Automation approach - identify and automate user scenarios, across supported regions
- Testing approach - what to test, when to test, how to test!
- Manual Sanity before release - and why it was important!
- Staged roll-outs via Google’s Play Store and Apple’s App Store
- Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
- Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)
Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar
Saturday, March 16, 2019
Visual validation - The Missing Tip of the Automation Pyramid
At yet-another-vodQA at ThoughtWorks, this time in the Pune edition on 16th March 2019, I spoke about Visual validation - The Missing Tip of the Automation Pyramid
Abstract:
The Test Automation Pyramid is not a new concept. The top of the pyramid is our UI / end-2-end functional tests - which should cover the breadth of the product. What the functional tests cannot capture, though, are the aspects of UX validation that can only be seen and, in some cases, captured by the human eye. This is where the new buzzwords of AI & ML can truly help.
In this session, we will explore why Visual Validation is an important cog in the wheel of Test Automation and also different tools and techniques that can help achieve this. We will also see a demo of Applitools Eyes - and how it can be a good option to close this gap in automation!
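As a flavour of what this looks like in a test, here is a minimal sketch using the Applitools Eyes Java SDK (the application URL and test names are placeholders, and the exact API can vary across SDK versions):

```java
import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualValidationSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            // App name + test name identify the visual baseline to compare against.
            eyes.open(driver, "Demo App", "Home page renders correctly");
            driver.get("https://example.com");   // placeholder application URL
            // Capture the window; Eyes compares it against the baseline using its
            // visual AI rather than brittle pixel-by-pixel assertions.
            eyes.checkWindow("Home page");
            eyes.close();                        // fails the test on visual differences
        } finally {
            eyes.abortIfNotClosed();             // clean up if the test errored out early
            driver.quit();
        }
    }
}
```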
Slides are available from here
Video is available here:
Thanks to Priyank Shah for this pic!
I also received some awesome feedback for the same.
Thanks vodQA Team! Till next time, adios!
Thursday, February 14, 2019
Talks and workshops in Agile India 2019
In the upcoming Agile India 2019 in Bangalore, I will be speaking about:
If you have not yet registered, you can use this code to get a discount on your registration - anand-10di$c-agile
In addition, there are some great pre and post conference workshops as well. I will be participating in "Facilitating for Effective Collaboration...One Nudge at a Time" workshop - conducted by Deborah Hartmann Preuss and Ellen Grove.
Monday, November 5, 2018
Upcoming webinar - The Missing Feedback Loop
You can register for the webinar from here (https://hubs.ly/H0fp4by0).
Date & Time:
Thursday, November 21, 2018 at 02:00 PM New-York (EST), 11:00 AM San-Francisco (PST) and 08:00 PM Amsterdam (CET, UTC+1)
Tuesday, September 11, 2018
Testing in the Agile World
- Agile
- Blogging
- Speaking at conferences
- Contributing to Open-source
- ...
- ...
Monday, July 23, 2018
A few thoughts on Test Automation
Deepanshu Agarwal and Brijesh Deb asked some very interesting questions on a LinkedIn post. Since I have some verbose thoughts on this, I thought it better to respond via a blog post instead.
- Why is Test Automation still considered suitable only for regression testing? What about writing automation tests sooner, as in the case of Test Driven Development?
- [Anand] - It depends on what you call test automation. If ONLY functional, then it is better to explore the product first, investigate / have conversations with developers on what lower-level tests are already automated, and then, based on a cost / risk-value analysis, decide what else needs to be automated at the Functional layer.
  A tangential rant ... There is a big reason why we think about classifications such as SMOKE, SANITY, REGRESSION in Functional Automation ONLY. These tests are inherently very slow and brittle, and even with a lot of effort they give poor feedback on the exact point / reason of failure. I have never seen any other form of tests - say Unit tests, which would (hopefully) be magnitudes larger in number than the functional tests - ever have any such classification. We all just say the unit tests ran, not the smoke unit tests ran. We need to grow up and understand the reason behind this. We need to make our top-of-the-pyramid tests as few in number as possible. We need to ensure we use good programming / development practices and get quick and reliable feedback from these tests. Else we will keep focussing on the symptoms, and never get to the root cause. --- Rant ends
  Once we understand this, it is a matter of understanding, in the context, what can and needs to happen first, and what next. In most cases TDD will work. But TDD as a Functional Spec may, or may not, be an overkill ... the team has to decide that.
- Why do the automated tests always have to be derived from manual tests?
- [Anand] - What is a manual test? Something that a machine is not performing? How do you do "manual testing"? Is Exploratory Testing a subset of Manual Testing, or the other way around, or any other thoughts on that?
  From the perspective of "automated tests" - I read it as "automated functional tests" here. In that case, the answer to the above question holds true here as well.
  Continuing from that thought - I think the approach (of deriving automated tests from so-called manual tests) is better than deciding upfront what tests I am going to automate and then proceeding with the implementation without any thought or regard to other learnings along the way.
- The tests classified as manual tests are only focused on ensuring certain checks. What about actually running some tests to discover the unknown?
- [Anand] I don't want to get into the 'checks' debate. It is futile!
  All I have understood is - you cannot just spend time looking at the requirements / specs and write down (in your mind / bullet points / story cards / some fancy ALM tool) your test cases / scenarios. That list is just the starting point of your journey of exploration and experimentation with the product-under-test. If you think that what you have identified is your actual scope of testing, then ALL THE BEST to you, your team and your product - because there are going to be so many opportunities you have missed to make the product better and more usable for the end-user. Unfortunately, a lot of organisations still look for "regression" testing cycles - where (you think) you execute all the tests that were identified a long time ago. However, everyone knows it is best case / best effort, IF AT ALL, to actually follow each and every step of that regression cycle. Such a waste of time and effort - when more meaningful testing could have been performed during that time.
- Why is it that exploratory tests are still considered suitable only for manual testing? How about automating exploratory tests using AI?
- [Anand] What is the meaning of "exploration"? A quick online search suggests it is about investigating the unknown / unfamiliar.
  Now - how can you automate the unknown / unfamiliar? You can use tools to help figure out what is unknown / unfamiliar ... but once you know it, then it does not remain 'unknown'. I think buzzwords like AI and ML are tools to help bridge the gap between the known and the unknown. But we would still need to guide and use these tools and technologies to our advantage, to aid in our exploration.
Monday, April 16, 2018
Essence of Testing - A new beginning
Here are some of these experiences:
- Worked in various-sized organisations across the globe over the past couple of decades
- The teams have been big and small
- Played a variety of roles - Quality Analyst, SDET (Software Development Engineer in Test), Product Quality Engineer (PQE), Automation Engineer, Consultant, Coach, Project Manager, Director - Quality, Support Engineer, etc.
- Worked with teams having products in different domains - Health care, eCommerce, Banking / Finance, Retail, Entertainment (OTT), Research, etc.
- Within organisations (in the B2B and B2C space): WebMD, Borland Software, Microsoft (Redmond, USA), AmberPoint (USA / India), ThoughtWorks, Vuclip
- Shared experiences with others via Meetups and Conferences world-wide
- Created open-source tools like WAAT, TTA, TaaS
I am now looking forward to working with Organisations and Teams, to help co-create optimized solutions towards shipping a quality product. Leave me a message on my blog, or send me an email at abagmar@gmail.com to talk more on how we can work together!
- Learn and understand the current state of the team and the product they were building
- Understand current (perceived) challenges
- Suggest improvements for the team in areas of:
  - Tweaks in the current processes
  - Practices to be adopted / improved / or stopped (as anti-patterns, not adding value)
- Identify opportunities to Test better, and early
- Suggest a Test Automation Pyramid that is fit-for-purpose for the team
- Suggest strategy, tools and approach for end-2-end (e2e) functional automation
- Learning from Discovery
  - Talk with the Team members
  - See & understand the project management tool, quality of requirements, etc.
  - See the code - understand complexity, types of tests written, quality of tests, etc.
  - See the Testing related artifacts - test cases, test execution strategy, exploratory testing, etc.
  - See the CI server - how deployments happen, what causes the builds to fail, etc.
- Recommendations
  - For all the above areas, I created recommendations on what different aspects may help the team move ahead in a better way:
    - Process
    - Architecture
    - Requirements
    - Development
    - Testing
    - Deployments
  - From the recommendations, I created milestone-based plans on what can be started immediately, and what decisions the team needs to take to move forward in other areas
Saturday, March 17, 2018
Measuring Consumer Quality - The Missing Feedback Loop
This talk addresses the why and how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video for this talk.
Abstract:
How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices and collaboration techniques can yield amazing results for the team, the organisation, and for the end-users of your product. While there is a lot of emphasis on the processes and practices side, one aspect that is still spoken about only "loosely" is the feedback loop from your end-users for making better decisions.
So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?
In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:
- The importance of knowing your Consumers
- How do you know your product is working well?
- How do you know your Consumers are engaged with your product?
- Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behaviour, before making any code change?
Video:
Pictures:
Friday, March 9, 2018
MAD-LAB - Capabilities & Features - Agile India 2018
Though I have spoken on a similar topic before - answering the question "Why did I need to build my own MAD-LAB?" at vodQA in July 2017 at Vuclip - quite a few things have changed since then.
Knowing the value of "being agile", a day before my scheduled talk in Agile India 2018, I decided to revamp the content substantially. To add to my challenges (and thanks to "testing" my slides before the talk in the conference room), I also realised the slide size format I was using was incorrect, and that the projector was not "setup / configured" correctly, making all my slide colours go haywire.
So, after scrambling for the last 10 minutes before the talk, I managed to get it all working correctly (at least that is what I think now, in hindsight).
Moral of the above story - do a test / dry-run of your slides before your audience comes in!
That said, here is the abstract of the talk.
Abstract
The slides are part of the discussion on the Why, What and How I built my own MAD-LAB (Mobile Automation Devices LAB). The discussion also includes the Automation Strategy, Tech Stack, Capabilities & Features of MAD-LAB and the learnings from successful & failed experiments in the journey.
Slides
Below are the slides from my talk. The link to the video will be shared once available.
Some pictures:
Friday, January 26, 2018
Agile Testing & Patterns for a good Test Automation Framework
Patterns of a "good" Test Automation Framework
The TechnoWise meetup on 13th Jan on Patterns of a "good" Test Automation Framework went very well, with a lot of interaction and discussions along the way.
What is Agile Testing? How does Automation Help?
There were many amazing experiences from this meetup -
- The ISQA community is very active. The meetup was set up in literally a few days and there were 150+ attendees
- The GO-JEK office space is a very fun place. They actually have an auditorium in the office to host meetups and similar activities, apparently, once a week!!
- The questions / interactions with the attendees were very insightful
Here are the slides shared in the meetup. I will share the link to the video as well, once available.
Friday, December 29, 2017
Understanding, Measuring and Building Consumer Quality
In the past few months, I have been in deep water taking on a new and very exciting initiative. Before I share what that is - here is a traditional approach to Quality.
Typically practices, processes, tools are chosen and implemented to help build a good Quality Product for the end-user. Evolving from Waterfall methodology to Agile methodology has been challenging for many (organizations and individuals), but has proven to be a huge step forward to achieving the goal of building a good and usable product.
In this course of time, we have (thankfully) changed the thought process of considering QA to be the "gate-keeper of Quality" to QA being a "Quality Advocate and Quality Enabler" for the team and the product. A very important change as a result has been changing the focus of QA from "finding defects" to "preventing defects".
And rightly so! After all, why should the QA be the gate-keeper and -
- take the responsibility and blame of someone giving poor / incomplete requirements? or,
- someone writing bad code during development?
Instead, as a Quality Advocate and Quality Enabler, the QA -
- helps bring all stakeholders together through the life-cycle of the product - from conceptualization to end-delivery,
- asks a lot of questions to find gaps, clarify assumptions, etc.
- helps find and radiate information including risks, and,
- is an active part of doing whatever it takes to prevent defects coming into the system
But this is nothing new, at least for me. After all, during my fantastic journey at ThoughtWorks, I would say that these were basic tenets of why and how we worked.
That said, my eye-opener in the past few months has been to take this thought process many steps forward.
My agenda has been - how can I help influence and raise the bar of quality in such a way that we not only build a quality product, but also be in a position to predict how our millions of consumers will be able to use it.
We are calling this initiative Consumer Quality -
- how do we understand Quality (= value) of the product as perceived by our Consumers,
- what data can be relevant to understand this, how can we be proactive about looking at this data while building a quality product, and,
- the Nirvana stage - how can we predict which actions will have the desired impact on Consumer Quality!
Happy New Year everyone! Keep Learning, Keep Sharing!
Friday, April 21, 2017
Introducing MAD LAB - for Mobile Automation
- All infrastructure management is implemented now in groovy (instead of gradle as shared earlier).
- All automated test execution is managed via a Jenkinsfile
- Actual test implementation is done in cucumber-jvm / java
- Device management (selection, cleanup, app install and uninstall)
- Parallel test execution (at Cucumber scenario level) - maximising device utilisation
- Appium server management (a minimal sketch follows at the end of this post)
- ADB utilities
- Managing periodic ADB server disconnects
- Custom reporting using cucumber-reports
- Video recording of each scenario and embedding in the custom reports
- 1 Mac Mini - running various Jenkins Agents
- 2 Powered USB hubs
- 8 Android devices
- Analytics Testing
- Trend and Failure Analysis
- iOS
- Web
- A transformed MAD LAB
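As a flavour of the device management and Appium server management capabilities listed above, here is a minimal sketch using the Appium Java client; the class name is hypothetical, and capability names / constructors vary across client versions:

```java
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.service.local.AppiumDriverLocalService;
import org.openqa.selenium.remote.DesiredCapabilities;

public class MadLabDriverFactory {
    // Start a dedicated Appium server and create a driver pinned to one device,
    // so scenarios can run in parallel, each against its own device.
    public static AndroidDriver createDriverFor(String udid, String appPath) {
        AppiumDriverLocalService service = AppiumDriverLocalService.buildDefaultService();
        service.start();

        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("automationName", "UiAutomator2");
        caps.setCapability("udid", udid);       // pin the session to a specific device
        caps.setCapability("app", appPath);     // (re)install the app under test

        return new AndroidDriver(service.getUrl(), caps);
    }
}
```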
Monday, March 6, 2017
Analytics Testing
I was also part of a panel discussion having the theme - "What's not changed since moving to Agile Testing - The Legacy Continues!" There were some very interesting perspectives in this discussion.
The great part was that the audience was very involved and vocal throughout the day. This made it very interactive, with good sharing of information and experiences for all!
Below is some information about the talk. I will try to add the link to the video soon.
Abstract
Analytics is changing the way products and services are being created and consumed.
In this session, we will learn
- What is Analytics?
- Why is it important to use Analytics in your product?
- The impact of Analytics not working as expected
We will also see some techniques to test Analytics manually, and also how to automate that validation (a minimal sketch of one such automated check is shown below). But just knowing about Analytics is not sufficient for business now.
There are new kids in town - IoT and Big Data - two of the most used and heard-of buzzwords in the Software Industry!
With IoT, with a creative mindset looking for opportunities and ways to add value, the possibilities are infinite. With each such opportunity, there is a huge volume of data being generated - which if analyzed and used correctly, can feed into creating more opportunities and increased value propositions.
There are 2 types of analysis that one needs to think about.
1. How is the end-user interacting with the product? This will give some level of understanding into how to re-position and focus on the true value add features for the product.
2. With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns from what has happened, and find out new product / value opportunities based on usage patterns.
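As a minimal sketch of automating the analytics validation mentioned above - routing the browser through a proxy and asserting that the expected analytics beacon was fired - using BrowserMob Proxy with Selenium (the page URL and beacon endpoint are placeholders):

```java
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;

public class AnalyticsBeaconCheck {
    public static void main(String[] args) {
        BrowserMobProxyServer proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

        ChromeOptions options = new ChromeOptions();
        options.setProxy(seleniumProxy);
        WebDriver driver = new ChromeDriver(options);
        try {
            proxy.newHar("analytics");
            driver.get("https://example.com");   // placeholder page under test

            // Assert that at least one analytics beacon was fired on page load.
            Har har = proxy.getHar();
            boolean beaconFired = har.getLog().getEntries().stream()
                    .anyMatch(entry -> entry.getRequest().getUrl()
                            .contains("google-analytics.com/collect"));   // placeholder endpoint
            if (!beaconFired) {
                throw new AssertionError("Expected an analytics beacon on page load");
            }
        } finally {
            driver.quit();
            proxy.stop();
        }
    }
}
```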
Slides
The What, Why and How of (Web) Analytics Testing (Web, IoT, Big Data) from Anand Bagmar