Showing posts with label testing. Show all posts

Wednesday, January 19, 2022

Using a fake SMS service in Functional Test Automation

How do you automate OTP-related test scenarios? Do you use a fake SMS service? Does it have a REST API to query the SMS messages? Does it support multiple geographies?

To clarify - this needs to be done as part of my functional test automation, where,

  • the test could be running against a browser, where the browser does not have access to the phone, or,
  • the test could be running against a real mobile device (without SIM), so no way to receive the SMS, or,
  • the test could be running against an emulator (no SIM), so no way to receive the SMS

Scenarios include: login, payment, and SMS content validation.

Hence, I am thinking about using a fake SMS service that provides API access to retrieve the SMS messages. This will help when running automation on a browser, or on devices / emulators without a SIM.
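To make this concrete, below is a minimal Java sketch of how a test could poll such a service for the OTP. The base URL, the /messages endpoint and its 'to' query parameter are hypothetical stand-ins for whichever provider is chosen; the OTP is assumed to be a 4-6 digit number in the message body.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OtpRetriever {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Polls the (hypothetical) fake SMS provider's REST API until an OTP arrives, or times out.
    public static String waitForOtp(String baseUrl, String phoneNumber, int timeoutSeconds) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutSeconds * 1000L;
        Pattern otpPattern = Pattern.compile("\\b(\\d{4,6})\\b"); // most OTPs are 4-6 digits
        while (System.currentTimeMillis() < deadline) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(baseUrl + "/messages?to=" + phoneNumber)) // assumed endpoint shape
                    .GET()
                    .build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            Matcher matcher = otpPattern.matcher(response.body());
            if (response.statusCode() == 200 && matcher.find()) {
                return matcher.group(1); // first 4-6 digit number found in the messages
            }
            Thread.sleep(2000); // poll every 2 seconds until the SMS shows up
        }
        throw new RuntimeException("No OTP received for " + phoneNumber + " within " + timeoutSeconds + "s");
    }
}

The test would then type the returned OTP into the app / browser under test, regardless of whether a real SIM is present.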

Note:
  • There is no access to a DB or an API to query the OTP.
  • I don't mind using a paid service.


Thanks in advance for your help!

Friday, June 14, 2019

Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019


What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India! I spoke about my experiences in setting a "Quality & Release Strategy for Native Android & iOS Apps".

Abstract:
Experimentation and quick feedback are the key to the success of any product - while, of course, ensuring a good quality product with new and better features is shipped to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and Release process for an Android & iOS Native app - which will help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps, and how that is different from testing Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
  • Functional Automation approach - identify and automate user scenarios, across supported regions
  • Testing approach - what to test, when to test, how to test!
  • Manual Sanity before release - and why it was important!
  • Staged roll-outs via Google’s Play Store and Apple’s App Store
  • Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases (a small sketch of such a comparison follows this list)
  • Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)
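To illustrate the metrics-comparison idea from the list above, here is a small, hypothetical Java sketch - the metric names, numbers and the 5% threshold are all invented for illustration.

import java.util.Map;

public class ReleaseMetricsCheck {
    // Returns true if the new release's metric dropped more than allowedDropPercent
    // compared to the prior release.
    static boolean hasRegressed(double priorValue, double newValue, double allowedDropPercent) {
        double dropPercent = (priorValue - newValue) / priorValue * 100.0;
        return dropPercent > allowedDropPercent;
    }

    public static void main(String[] args) {
        // Hypothetical engagement numbers pulled from an analytics dashboard
        Map<String, Double> prior = Map.of("avg_session_minutes", 14.2, "crash_free_rate", 99.4);
        Map<String, Double> current = Map.of("avg_session_minutes", 13.1, "crash_free_rate", 99.5);

        for (String metric : prior.keySet()) {
            if (hasRegressed(prior.get(metric), current.get(metric), 5.0)) {
                // In a real pipeline this would alert the team / halt the staged roll-out
                System.out.println("Possible regression in " + metric + " - hold the roll-out");
            }
        }
    }
}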

Slides:



Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar

Monday, June 3, 2019

Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT at Globant

I spoke about Visual Validation - The Missing Tip of the Automation Pyramid at QuaNTA NXT event organised by Globant India Pvt. Ltd.




The event was very well organised, and I had the opportunity to interact with a full house, and also to later meet and talk with a lot of interesting people - curious about the current state of testing, test automation, and how AI can impact it in the future.

Agenda:



Below is the abstract of my talk:

The Test Automation Pyramid is not a new concept. While Automation helps validate the functionality of your product, the look & feel / user-experience (UX) validation is still mostly manual.

With everyone wanting to be Agile and do quick releases, this look & feel / UX validation becomes the bottleneck. It is also a very error-prone activity - mistakes here hurt your brand and revenue, and lead to a diluted user-base.

In this session, we will explore why Automated Visual Validation is now essential in your Automation Strategy, and also look at how an AI-powered tool - Applitools Eyes - can solve this problem.
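For context, a basic visual check with the Applitools Eyes Java SDK looks roughly like the sketch below. Treat it as illustrative (the app name, test name and URL are placeholders), not as the exact code from the talk.

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualCheck {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            eyes.open(driver, "My App", "Login page visual check");
            driver.get("https://example.com/login"); // placeholder URL
            eyes.checkWindow("Login page"); // captures a screenshot and compares against the baseline
            eyes.close(); // fails the test if visual differences were found
        } finally {
            eyes.abortIfNotClosed(); // clean up if the test aborted midway
            driver.quit();
        }
    }
}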


Recording from the talk:




Some pictures:


Friday, November 30, 2018

Recording from webinar on The Missing Feedback Loop now available

On 21st Nov, TestCraft.io hosted me in a webinar where I spoke about - "The Missing Feedback Loop - The Tools, Techniques, and Automation to Solve It". 

You can get the recording from here (https://hubs.ly/H0fBDN50).

Tuesday, November 6, 2018

Is the Future of Test Automation I predicted already here?

Today, almost at the end of 2018, I have come across many tools focused on making Test Automation easier, faster, more reliable and more valuable to teams & products - like testim.io, testcraft.io, Katalon, cypress.io, mabl, test.ai, etc. These tools are very interesting, and very promising for the value proposition they bring to the table.

As I reflect on these shiny new tools, my mind wanders back to 2009 / 2010 when I was toying with the idea of what would be next in Test Automation Tools & Infrastructure space. I had penned my thoughts and published an article on ThoughtWorks Insights with the title - "Future of Test Automation Tools & Infrastructure" (https://www.thoughtworks.com/insights/blog/future-test-automation-tools-infrastructure). 

If we look deeper into my post, the tools I mentioned above (and many others that I am probably unaware of) are conceptually along the lines of what I had envisioned in 2009 / 2010. They are using a very interesting blend of past experiences, in some cases advanced technology like AI & ML, in some cases leveraging the cloud / SaaS model, and more importantly - pushing the boundaries to do things differently! I am personally very happy to see this happen.

That brings another set of questions to my mind: if what I had thought of back then is now true, and a reality, then what is next? What will the next generation of new, interesting, shiny tools look like in the next 5 years?


Monday, November 5, 2018

Upcoming webinar - The Missing Feedback Loop

I am very excited to share that I am going to conduct a webinar hosted by testcraft.io on "The Missing Feedback Loop - The Tools, Techniques, and Automation to Solve It". 

You can register for the webinar from here (https://hubs.ly/H0fp4by0).

Date & Time:
Thursday, November 21, 2018 at 02:00 PM New York (EDT), 11:00 AM San Francisco (PDT) and 08:00 PM Amsterdam (UTC+2)


Tuesday, September 11, 2018

Testing in the Agile World

Thanks to ThoughtWorks, I was introduced to many things - 


The list is actually quite long - but that is not the intention of this post.

The main takeaway in my learning at ThoughtWorks though, is how to Test better, and be more effective in that for the end-user. 

Even before my time at ThoughtWorks, I never agreed with the thought process that Functional Automation can / should be done only when the feature is stable. At ThoughtWorks, though, I learned many more tips, tricks, techniques and processes for doing Functional Automation in a better way as the product evolves.

On 9th April 2011, I wrote a detailed blog post / article on how we can test better in the Agile world.

This post was titled "Agile QA Process", and the document was uploaded to SlideShare with the same name. I am very pleasantly surprised that, to date, that document has had over 74K views and almost 2.7K downloads, and is still my most viewed post on SlideShare.

When I look back at the document, it still seems very relevant and applicable to me!

What do you think?

Monday, July 23, 2018

A few thoughts on Test Automation


Deepanshu Agarwal and Brijesh Deb asked some very interesting questions on a LinkedIn post. Since I have some verbose thoughts on this, I thought it better to respond via a blog post instead.

  • Why is Test Automation still considered suitable only for regression testing? What about writing automation tests sooner, as in the case of Test Driven Development?
    • [Anand] - It depends on what you call test automation. If it is ONLY FUNCTIONAL, then it's better to explore the product first, investigate / have conversations with developers on what lower-level tests are already automated, and then, based on a cost / risk-value analysis, decide what else needs to be automated at the Functional layer.

      A tangential rant ....
      There is a big reason we have classifications such as SMOKE, SANITY, REGRESSION in Functional Automation ONLY: these tests are inherently slow and brittle, and unless a lot of effort is invested, they give poor feedback on the exact point / reason of failure.

      I have never seen any other form of tests - say Unit tests, which would (hopefully) be orders of magnitude larger in number than the functional tests - ever have any such classification. We all just say the unit tests ran, not the smoke unit tests ran.

      We need to grow up and understand the reason behind this. We need to keep our top-of-the-pyramid tests as few in number as possible. We need to ensure we use good programming / development practices and get quick and reliable feedback from these tests. Else we will keep focusing on the symptoms, and never get to the root cause.

      --- Rant ends

      Once we understand this, it is then a matter of understanding, in context, what can and needs to happen first, and what next. In most cases TDD will work. But TDD as a Functional Spec may or may not be overkill .... the team has to decide that.
  • Why do automated tests always have to be derived from manual tests?
    • [Anand] - What is a manual test? Something that a machine is not performing? How do you do "manual testing"? Is Exploratory Testing a subset of Manual Testing, or is it the other way around, or do you have any other thoughts on that?

      From the perspective of "automated tests" - I read it as "automated functional tests" here. In that case, the answer for the above question holds true here as well.

      Continuing from that thought - I think this approach (of deriving automated tests from so-called manual tests) is better than deciding upfront which tests I am going to automate and then proceeding with the implementation without any thought or regard for other learnings along the way.
  • The tests classified as manual tests are only focused at ensuring certain checks. What about actually running some tests to discover the unknown?
    • [Anand] I don't want to get into the 'checks' debate. It is futile!

      All I have understood is - you cannot just spend time looking at the requirements / specs and write down (in your mind / bullet points / story cards / some fancy ALM tool) your test cases / scenarios. 

      That list is just the starting point of your journey of exploration and experimentation with the product-under-test. If you think that what you have identified is your actual scope of testing, then ALL THE BEST to you, your team and your product - because there are going to be so many opportunities you have missed to make the product better and more usable for the end-user. Unfortunately, a lot of organisations still look for "regression" testing cycles - where (you think) you execute all the tests that were identified a long time ago. However, everyone knows it is best case / best effort, IF AT ALL, to actually follow each and every step of that regression cycle. Such a waste of time and effort - when more meaningful testing could have been performed during that time.
  • Why is it that exploratory tests are still considered suitable only for manual testing? How about automating exploratory tests using AI?
    • [Anand] What is the meaning of "exploration"?
      As per a quick online search, this is what it means:


      Now - how can you automate the unknown / unfamiliar? You can use tools to help figure out what is unknown / unfamiliar ... but once you know it, it no longer remains 'unknown'. I think buzzwords like AI and ML are tools to help bridge the gap between the known and the unknown. But we would still need to guide and use these tools and technologies to our advantage, to aid in our exploration.

Thursday, July 12, 2018

Return of the todo (learning) list

For at least a decade, I have had a list of TODOs which I actively updated and maintained. The items on this list focused on -

  • what new things I wanted to learn / experiment with
  • new conference talk ideas
  • open source ideas / updates
Unfortunately, due to various reasons (some of which were of my own doing), the focus on learning and experimentation got lost ... and I felt awful about it. I also shared with some friends and colleagues my pain of being unable to find the time / opportunity / focus / support to be in a continuous learning phase. But not much could be done / changed in those circumstances.

But I am very happy to share that the list is back, and back with a bang!!!

The days of learning and experimentation continue. My list is now overflowing, and continuing to grow with ideas and things I want to learn and experiment with.

I am happy again!!

Monday, April 16, 2018

Essence of Testing - A new beginning

In my career so far, I have been very fortunate to have had great mentors, and a variety of opportunities to learn, add value, and share my experiences with others around me.

Here are some of these experiences:
  • Worked in organisations of various sizes across the globe over the past couple of decades
  • The teams have been big and small
  • Played a variety of roles - Quality Analyst, SDET (Software Development Engineer in Test), Product Quality Engineer (PQE), Automation Engineer, Consultant, Coach, Project Manager, Director - Quality, Support Engineer, etc.
  • Worked with teams having products in different domains - Health care, eCommerce, Banking / Finance, Retail, Entertainment (OTT), Research, etc. 
  • Within organisations (in the B2B and B2C space): WebMD, Borland Software, Microsoft (Redmond, USA), AmberPoint (USA / India), ThoughtWorks, Vuclip
  • Shared experiences with others via Meetups and Conferences world-wide
  • Created opensource tools like WAAT, TTA, TaaS
After working as an employee in these organisations for almost a couple of decades, I have taken the plunge to do something different.


I am now looking forward to working with Organisations and Teams, to help co-create optimized solutions towards shipping a quality product. Leave me a message on my blog, or send me an email at abagmar@gmail.com to talk more on how we can work together!


As part of this journey, even before I was able to buy my own laptop and warm it up, I got an interesting consulting assignment, thanks to a dear friend.
This assignment was exactly what I needed to get started - it was a Discovery Workshop, with the following objectives:
  • Learn and understand the current state of the team and the product they were building
  • Understand current (perceived) challenges
  • Suggest improvements for the team in areas of:
    • Tweaks in the current processes
    • Practices to be adopted / got better at / or stopped (as anti-patterns and not adding value)
    • Identify opportunities to Test better, and early
    • Suggest a Test Automation Pyramid that is fit-for-purpose for the team
    • Suggest strategy, tools and approach for end-2-end (e2e) functional automation

As I usually do, I started off the workshop with my favorite tool for note taking - Mind Maps! As the conversations evolved and got specific, I moved from a Balanced Mind-Map to a Fishbone Mind-Map.




Eventually I consolidated my thoughts and created the Discovery Workshop Report for the client in the form of a slide deck, using the following approach.


  • Learning from Discovery
    • Talk with the Team members
    • See & understand the project management tool, quality of requirements, etc.
    • See the code - understand complexity, types of tests written, quality of tests, etc.
    • See the Testing related artifacts - test cases, test execution strategy, exploratory testing, etc.
    • See the CI server - how deployments happen, what causes the builds to fail, etc.
  • Recommendations
    • For all the above areas, I created recommendations on what different aspects may help the team move ahead in a better way
The Discovery and the Recommendations above were organised around each specific activity:
    • Process
    • Architecture
    • Requirements
    • Development
    • Testing
    • Deployments

Once the learnings from the Discovery and Recommendation stages were well understood (by me), I created a Suggested Plan of Execution (in phases / milestones):
  • From the recommendations, I created milestone-based plans covering what could be started immediately, and what decisions the team needed to take to move forward in other areas

In retrospect, without us planning it that way, the whole 2-week engagement was quite agile. There were periodic demos of the POCs, regular progress sharing, and changes in the direction of discovery and recommendations based on learnings.

The Client also appreciated the quality of the conversations, and the results that were shared.

All in all, a very satisfying beginning to my new journey with Essence of Testing!


Saturday, March 17, 2018

Measuring Consumer Quality - The Missing Feedback Loop

I spoke in vodQA at ThoughtWorks, Pune on "Measuring Consumer Quality - the Missing Feedback Loop". 

This talk addresses the why and how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video of this talk.


Abstract:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices, collaboration techniques can yield amazing results for the team, the organisation, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect is still only spoken about "loosely" - the feedback loop from your end-users that drives better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:

  • The importance of knowing your Consumers 
  • How do you know your product is working well? 
  • How do you know your Consumers are engaged with your product? 
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behaviour, before making any code change?

Video:


Slides can be found here.

Pictures:



Friday, March 9, 2018

MAD-LAB - Capabilities & Features - Agile India 2018

I spoke about "Build your own MAD-LAB - for Mobile Test Automation for CD" at Agile India 2018.

Though I have spoken on a similar topic before - answering the question "Why did I need to build my own MAD-LAB?" - at vodQA in July 2017 at Vuclip, quite a few things have changed since then.

Knowing the value of "being agile", a day before my scheduled talk at Agile India 2018, I decided to revamp the content substantially. To add to my challenges (and thanks to "testing" my slides in the conference room before the talk), I also realised that the slide size format I was using was incorrect, and that the projector was not set up / configured correctly, making all my slide colours go haywire.

So, after 10 minutes of scrambling right before the talk, I managed to get this sorted out (at least that is what I think now, in hindsight).

Moral of the above story - do a test / dry-run of your slides before your audience comes in!

That said, here is the abstract of the talk.


Abstract

In this age of a variety of cloud-based-services for virtual Mobile Test Labs, building a real-(mobile)-device lab for Test Automation is NOT a common thing – it is difficult, high maintenance, expensive! Yet, I had to do it! 

The slides are part of the discussion on the Why, What and How of building my own MAD-LAB (Mobile Automation Devices LAB). The discussion also includes the Automation Strategy, Tech Stack, Capabilities & Features of MAD-LAB, and the learnings from successful & failed experiments along the journey.

Slides

Below are the slides from my talk. The link to the video will be shared once available.




Some pictures



Friday, January 26, 2018

Agile Testing & Patterns for a good Test Automation Framework

2018 started with a bang! I got an opportunity to speak and share my experiences in 2 rocking meetups.

Patterns of a "good" Test Automation Framework

The TechnoWise meetup on 13th Jan, on Patterns of a "good" Test Automation Framework, went very well, with a lot of interaction and discussion along the way.



What is Agile Testing? How does Automation Help?


Then, there was an impromptu meetup set up by a very proactive ISQA community at the GO-JEK office in Jakarta, Indonesia on 16th Jan. The topic there was "What is Agile Testing? How does Automation help?" In this meetup, I also covered aspects of the Career Path of a Tester, and QA Skills and Capabilities.

There were many amazing experiences from this meetup -

  • The ISQA community is very active. The meetup was set up in literally a few days, and there were over 150 attendees
  • The GO-JEK office space is a very fun place. They actually have an auditorium in the office to host meetups and similar activities - apparently, once a week!!
  • The questions / interactions with the attendees were very insightful

Here are the slides shared in the meetup. I will share the link to the video as well, once available.





Friday, December 29, 2017

Understanding, Measuring and Building Consumer Quality

It has been a long time since I posted anything on my blog. For those who don't know, I am working at Vuclip, a B2C company in the OTT space, where we have millions of consumers using our product via Android and iOS native apps, and the browser too.

In the past few months, I have been in deep water taking on a new and very exciting initiative. Before I share what that is - here is a traditional approach to Quality.


Typically, practices, processes and tools are chosen and implemented to help build a good Quality Product for the end-user. Evolving from the Waterfall methodology to the Agile methodology has been challenging for many (organizations and individuals alike), but has proven to be a huge step forward towards the goal of building a good and usable product.

In this course of time, we have (thankfully) changed the thought process of considering QA to be the "gate-keeper of Quality" to QA being a "Quality Advocate and Quality Enabler" for the team and the product. A very important change as a result has been changing the focus of QA from "finding defects" to "preventing defects".

And rightly so! After all, why should the QA be the gate-keeper and -
  • take the responsibility and blame of someone giving poor / incomplete requirements? or,
  • someone writing bad code during development?
The QA is not a scavenger meant to clean up the mess created by others. The QA instead is an enabler who -
  • helps bring all stakeholders together through the life-cycle of the product - from conceptualization to end-delivery, 
  • asks a lot of questions to find gaps, clarify assumptions, etc.
  • helps find and radiate information including risks, and,
  • is an active part of doing whatever it takes to prevent defects coming into the system
The Agile practices help do this in a collaborative way - getting features to completion incrementally, and iterating / pivoting based on the feedback received. This is also what the practices related to Continuous Delivery enable us to do well.

But this is nothing new, at least for me. After all, during my fantastic journey at ThoughtWorks, I would say that these were basic tenets of why and how we worked.

That said, my eye-opener in the past few months has been to take this thought process many steps forward.

My agenda has been this: how can I help influence and raise the bar on quality in such a way that we not only build a quality product, but are also in a position to predict how our millions of consumers will be able to use it?

We are calling this initiative Consumer Quality -
  • how do we understand Quality (= value) of the product as perceived by our Consumers, 
  • what data can be relevant to understand this, how can we be proactive about looking at this data while building a quality product, and,
  • the Nirvana stage - how can we predict which actions will have the desired impact on Consumer Quality!
I hope to be able to share with you more of this in 2018!

Happy New Year everyone! Keep Learning, Keep Sharing!

Tuesday, October 10, 2017

Analytics - the forgotten child!

After a long time, I spoke about the What, Why and How of Analytics Testing at Selenium Conference, Berlin 2017.

This talk was initially supposed to focus on Web Analytics only, along with the impact on / of IoT (Internet of Things) and Big Data. But my recent experiences made me realise that the learnings could easily be applied to Analytics from Mobile native apps as well.

So, against better judgement, a full 30 minutes before I was supposed to go on stage, I started a revamp of the slides to include more content, which also meant a complete change in the flow of the talk / slides. Talk about making stupid decisions - but thankfully, it turned out pretty ok!!

Abstract of the talk:

What is Web Analytics and why is it important? We'll walk through techniques for manually testing your analytics data, and for automating the validation process (a small sketch of such a validation follows the list below).
Just knowing about Analytics is not sufficient for business now. There are new kids in town - IoT and Big Data - two of the most used and well-known buzzwords in the software industry! With a creative mindset looking for opportunities to add value, the possibilities for IoT are infinite. With each such opportunity, there is a huge volume of data being generated which, if analysed and used correctly, can feed into creating more opportunities and increased value propositions.
There are 2 types of analysis that one needs to think about:
  1. How is the end-user interacting with the product? - This will give some level of understanding into how to re-position and focus on the true value-add features of the product.
  2. What are the patterns in the data? - With the huge volume of data being generated by end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns, and to find new product and value opportunities based on them.
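As a small illustration of automating point 1 above (the sketch promised earlier): validate one captured analytics call. The endpoint and parameter names are invented; in practice, the request would be captured via a proxy or network sniffer while the UI test drives the product.

import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class AnalyticsCallValidator {
    // Parses the query string of a captured analytics request into key / value pairs.
    static Map<String, String> parseParams(String capturedUrl) {
        Map<String, String> params = new HashMap<>();
        String query = capturedUrl.substring(capturedUrl.indexOf('?') + 1);
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            params.put(URLDecoder.decode(kv[0], StandardCharsets.UTF_8),
                       kv.length > 1 ? URLDecoder.decode(kv[1], StandardCharsets.UTF_8) : "");
        }
        return params;
    }

    public static void main(String[] args) {
        // A request captured (e.g. via a proxy) while the test clicked "play" - hypothetical format
        String captured = "https://analytics.example.com/collect?event=video_play&content_id=m123&country=IN";
        Map<String, String> params = parseParams(captured);
        if (!"video_play".equals(params.get("event"))) throw new AssertionError("wrong event name");
        if (!"m123".equals(params.get("content_id"))) throw new AssertionError("wrong content id");
    }
}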

Video from the talk:



Slides from the talk:



Friday, March 17, 2017

Patterns in Test Automation Framework at STPCon

I spoke about Patterns of a "good" Test Automation Framework at STPCon 2017. Here are the details from the talk.


Abstract

Building a Test Automation Framework is easy – there are so many resources / guides / blogs / etc. available to help you get started, and to help solve the issues you encounter along the journey.
However, building a "good" Test Automation Framework is not very easy. There are a lot of principles and practices you need to apply, in the right context, along with a good set of skills, to make the Test Automation Framework maintainable, scalable and reusable.
Design Patterns play a big role in helping achieve this goal of building a good and robust framework.
In this talk, we will talk about, and see examples of various types of patterns you can use for:
  • Building your Test Automation Framework
  • Test Data Management
Using these patterns, you will be able to build a good framework that helps keep your tests running fast and reliably in your CI / CD setup!
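As one illustrative Test Data Management pattern (a sketch, not necessarily the exact example from the talk): a Builder with sensible defaults lets each test state only the data it actually cares about.

public class User {
    private final String name;
    private final String country;
    private final boolean premiumSubscriber;

    private User(Builder builder) {
        this.name = builder.name;
        this.country = builder.country;
        this.premiumSubscriber = builder.premiumSubscriber;
    }

    public static class Builder {
        // Sensible defaults - tests override only what matters to them
        private String name = "default-user";
        private String country = "IN";
        private boolean premiumSubscriber = false;

        public Builder name(String name) { this.name = name; return this; }
        public Builder country(String country) { this.country = country; return this; }
        public Builder premium() { this.premiumSubscriber = true; return this; }
        public User build() { return new User(this); }
    }
}

A test that only cares about subscription status then reads as new User.Builder().premium().build() - everything else stays at its default, which keeps tests short and resilient to data-model changes.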

Session Takeaways:


  • Patterns for building Test Automation Framework.
  • Patterns for Test Data Management, with pros and cons of each.

Slides



Pictures




Monday, February 20, 2017

Sharing implementation of cucumber-jvm - Appium test framework

I recently shared the Features of my Android Test Automation Framework, and also the challenges - and how we overcame them - to make parallel test execution work well with Android 7.0 devices as well.

In this blog post, I will share the details (including code) of the implementation. If you have not read my post on Features of my Android Test Automation Framework, I highly recommend you read that first.



Implementation Details

Tech Stack Summary

To recap - here is the tech stack that we currently have:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

1. Configure Jenkins Node (in Jenkins Server)

We currently have 5 Jenkins Nodes set up, as shown below.

Each node is configured like this:

2. Setup Jenkins Job (in Jenkins Server)

Once the Nodes are set up, we can configure the Jenkins Jobs.

We have set up the following jobs in Jenkins for our test executions.

Each job is configured as a Jenkins Pipeline Project, and we use the Jenkinsfile available here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/2%20-%20Setup%20Jenkins%20Jobs/2.2%20Jenkinsfile) as a sample from git to configure what the job is supposed to do.


3. Setup Jenkins Agent

Once the Jenkins Nodes and Jenkins Jobs are configured, we need to get the Jenkins Agents themselves set up and configured to service requests from the Jenkins server.

We use the JNLP way to connect the Jenkins Agent to the Jenkins server. For this, we have a template .sh script, which we need to copy and update 2 values in. This is needed for each new Jenkins Node that we connect.

The template .sh script can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.1%20start-e2e-moto.sh).

Now, our Jenkins setup is done. But a big piece is still missing. 

In order to run our tests on the Agent, we need some basic software to be installed. To do this, we created a shell script that helps provision the machine. This is required just once per machine - but since we plan to have multiple Mac Mini host machines, each running some number of Jenkins agents, the script helps keep the same software (including versions) on all our machines - which means the same test execution environment everywhere.

This shell script can be found here - (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.2%20JenkinsAgentMachineFirstTimeSetup.sh)


4. Manage Test Infrastructure & Test Execution

By this stage, our Jenkins Server and Jenkins Agent setup is done, including the software required to run the tests. The next piece is at the Test Framework level.

Our build tool is Gradle. All infrastructure-related work is handled via the build.gradle file.

Before we get into the details of the gradle file, it is important to understand the code structure.

Via Groovy / Gradle, we managed to solve the complexity of (a sketch of the device-matching part follows this list):

  • Finding matching devices based on the CONNECTED_DEVICE_IDS
  • Downloading the apk file from wherever it is available
    • This is done just once per test run - regardless of how many devices the tests are going to run on
    • The URL to download from is passed as an environment variable - APP_URL
    • For local testing, you can give a local absolute path to the apk file via the APP_PATH environment variable instead of specifying APP_URL
  • Finding the list of scenarios to be run (based on the cucumber tags specified via the environment variable - 'run')
  • Managing the start / stop of Appium Servers
  • Cleaning up the device before test runs
  • Executing Cucumber scenarios in parallel
  • Building consolidated reports locally (cucumber-reports) - IF not using the Jenkins cucumber-reports plugin
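
The repository does this in Groovy inside build.gradle; purely to illustrate the first item in the list above, here is a hedged Java sketch of discovering connected devices and filtering them by the CONNECTED_DEVICE_IDS environment variable.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DeviceFinder {
    // Lists device ids reported by `adb devices`, optionally filtered by CONNECTED_DEVICE_IDS.
    public static List<String> connectedDevices() throws Exception {
        List<String> devices = new ArrayList<>();
        Process adb = new ProcessBuilder("adb", "devices").start();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(adb.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.endsWith("\tdevice")) {      // skips the header and offline / unauthorized entries
                    devices.add(line.split("\t")[0]); // first column is the device id
                }
            }
        }
        String wanted = System.getenv("CONNECTED_DEVICE_IDS"); // e.g. "emulator-5554,deviceId2"
        if (wanted != null && !wanted.isEmpty()) {
            devices.retainAll(Arrays.asList(wanted.split(",")));
        }
        return devices;
    }
}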



5. Run Tests

The last step in this process is to manage the Android Driver. We use cucumber-jvm's @Before and @After hooks to set the right capabilities for instantiating the AndroidDriver, and to quit it after test execution is complete.
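
Here is a simplified sketch of such hooks, using the cucumber-jvm 1.2.5 annotations from the tech stack above. The DEVICE_ID environment variable and the fixed Appium port are placeholders - the actual framework assigns these per device (see the helper files linked below).

import cucumber.api.java.After;
import cucumber.api.java.Before;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.AndroidElement;
import io.appium.java_client.remote.MobileCapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class DriverHooks {
    private static AndroidDriver<AndroidElement> driver;

    @Before
    public void startDriver() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability(MobileCapabilityType.DEVICE_NAME, System.getenv("DEVICE_ID")); // placeholder env var
        caps.setCapability(MobileCapabilityType.UDID, System.getenv("DEVICE_ID"));
        caps.setCapability(MobileCapabilityType.AUTOMATION_NAME, "uiautomator2"); // matches appium-uiautomator2-driver
        caps.setCapability(MobileCapabilityType.APP, System.getenv("APP_PATH"));
        // One Appium server per device; the port is allocated by the infrastructure scripts
        driver = new AndroidDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), caps);
    }

    @After
    public void stopDriver() {
        if (driver != null) {
            driver.quit();
        }
    }
}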

These helper files can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/tree/master/5%20-%20Run%20Tests).


Sample Code

All the sample code can be found in my github repository cucumber-jvm-appium-infra - https://github.com/anandbagmar/cucumber-jvm-appium-infra


Happy Testing!