
Sunday, May 22, 2022

Automating real-user scenarios across multiple apps and devices

Simulating real-user scenarios as part of your automation is a solved problem. You need to understand the domain, the product, the user, and then define and implement your scenario.

But there are some types of scenarios that are complex to implement. These are real-world scenarios in which multiple personas (users) interact with each other to use some business functionality. These personas may be on the same platform or on different ones (web / mobile-web / native apps / desktop applications).

Example scenarios:

  • How do you check that more than 1 person is able to join a Zoom / Teams meeting? And that they can interact with each other?
  • How do you check that an end-2-end scenario that involves multiple users, across multiple apps, works as expected?
    • Given user places order on Amazon (app / browser)
    • When delivery agent delivers the order (using Delivery app)
    • Then user can see the order status as "Delivered"

Even though we will automate and test each application in such interactions independently, or test each persona's scenarios independently, we need a way to build confidence that these multiple personas and applications can work together. These scenarios are critical to automate!

teswiz, an open-source framework, can easily automate these multi-user, multi-app, multi-device scenarios.
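To make this concrete, here is a minimal sketch (plain Selenium / Java, and deliberately not teswiz's actual API) of the underlying pattern: one test orchestrating a separate driver session per persona, where one persona's action is verified through the other persona's session. The meeting URL and locators are hypothetical.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class MultiUserMeetingTest {
    public static void main(String[] args) {
        // One independent driver session per persona
        WebDriver host = new ChromeDriver();
        WebDriver participant = new ChromeDriver();
        try {
            String meetingUrl = "https://meetings.example.com/room/1234"; // hypothetical URL
            host.get(meetingUrl);
            participant.get(meetingUrl);

            // The host performs an action ...
            host.findElement(By.id("start-meeting")).click(); // hypothetical locator

            // ... and the participant's session is used to verify its effect
            String status = participant.findElement(By.id("meeting-status")).getText(); // hypothetical locator
            if (!status.contains("In progress")) {
                throw new AssertionError("Participant does not see the meeting as started: " + status);
            }
        } finally {
            host.quit();
            participant.quit();
        }
    }
}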

Example: Multi-user, Multi-device test scenario

 

Example: Multi-user, Multi-app, Multi-device test scenario

See teswiz and getting-started-with-teswiz projects for information, or contact me.
 

Wednesday, January 19, 2022

Using fakesms service in Functional Test Automation

How do you automate OTP-related test scenarios? Do you use a fake SMS service? Does it have a REST API to query the SMS messages? What about geography support?

To clarify - this needs to be done as part of my functional test automation, where,

  • the test could be running against a browser, where the browser does not have access to the phone, or,
  • the test could be running against a real mobile device (without SIM), so no way to receive the SMS, or,
  • the test could be running against an emulator (no SIM), so no way to receive the SMS
Scenarios include: login, payment, SMS content 

Hence I am thinking about using a fake SMS service which provides API access to retrieve the SMS. This will help when running automation on a browser, or on devices / emulators without a SIM.
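For illustration, here is a minimal sketch of what the test-side code could look like, assuming a hypothetical fake-SMS provider that exposes a REST endpoint listing messages received by a virtual number. The endpoint, auth header and 6-digit OTP format are all assumptions, not any specific vendor's API.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OtpFetcher {
    // Polls the (hypothetical) fake-SMS provider's REST API for the latest message
    // sent to the given virtual number and extracts a 6-digit OTP from it.
    public static String fetchOtp(String virtualNumber) throws Exception {
        String endpoint = "https://fake-sms.example.com/api/v1/messages?to=" + virtualNumber; // hypothetical endpoint
        for (int attempt = 0; attempt < 10; attempt++) {
            HttpURLConnection connection = (HttpURLConnection) new URL(endpoint).openConnection();
            connection.setRequestProperty("Authorization", "Bearer " + System.getenv("FAKE_SMS_API_KEY"));
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            Matcher matcher = Pattern.compile("\\b(\\d{6})\\b").matcher(body.toString());
            if (matcher.find()) {
                return matcher.group(1);
            }
            Thread.sleep(3000); // give the SMS some time to arrive before polling again
        }
        throw new RuntimeException("OTP not received for " + virtualNumber);
    }
}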

Note:
  • There is no access to DB or API to query the OTP. 
  • I don't mind using a paid service

 

Thanks in advance for your help!

Friday, June 14, 2019

Quality & Release Strategy for Native Android & iOS Apps at AppiumConf 2019


What an amazing time speaking at the first AppiumConf 2019 in Bangalore, India. I spoke about my experiences in setting up the "Quality & Release Strategy for Native Android & iOS Apps".

Abstract:
Experimentation and quick feedback are the key to the success of any product, while of course ensuring that a good-quality product with new and better features is shipped out to users at a decent / regular frequency.

In this session, we will discuss how to enable experimentation, get quick feedback and reduce risk for the product, using a case study of a media / entertainment domain product used by millions of users across 10+ countries - i.e. we will discuss the Testing Strategy and the Release process of an Android & iOS Native app - which will help enable CI & CD.

To understand these techniques, we will quickly recap the challenges and quirks of testing Native Apps, and how that is different from Web / Mobile Web Apps.

The majority of the discussion will focus on different techniques / practices related to Testing & Releases that can be established to achieve our goals, some of which are listed below:
  • Functional Automation approach - identify and automate user scenarios, across supported regions
  • Testing approach - what to test, when to test, how to test!
  • Manual Sanity before release - and why it was important!
  • Staged roll-outs via Google’s Play Store and Apple’s App Store
  • Extensive monitoring of the release as users come on board, and comparing the key metrics (ex: consumer engagement) with prior releases
  • Understanding Consumer Sentiments (Google’s Play Store / Apple’s App Store review comments, Social Media scans, Issues reported to / by Support, etc.)

Slides:



Quality & Release Strategy for Native Android & iOS Apps from Anand Bagmar

Saturday, March 16, 2019

Visual validation - The Missing Tip of the Automation Pyramid


At yet-another-vodQA at ThoughtWorks, this time in the Pune edition on 16th March 2019, I spoke about Visual validation - The Missing Tip of the Automation Pyramid


Abstract:

The Test Automation Pyramid is not a new concept. At the top of the pyramid are our UI / end-2-end functional tests - which should cover the breadth of the product.

What the functional tests cannot capture, though, are the aspects of UX validation that can only be seen and, in some cases, captured by the human eye. This is where the new buzzwords of AI & ML can truly help.


In this session, we will explore why Visual Validation is an important cog in the wheel of Test Automation and also different tools and techniques that can help achieve this. We will also see a demo of Applitools Eyes - and how it can be a good option to close this gap in automation!
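As a flavour of what this looks like in code, here is a minimal sketch using the Applitools Eyes Java SDK with Selenium WebDriver. The application name, test name and page URL are illustrative, and the exact SDK surface may vary across versions.

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class VisualValidationTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));
        try {
            eyes.open(driver, "My App", "Home page renders correctly"); // illustrative names
            driver.get("https://example.com"); // hypothetical page under test
            eyes.checkWindow("Home page"); // capture the window and compare against the baseline
            eyes.close(); // fails the test if visual differences were found
        } finally {
            eyes.abortIfNotClosed(); // clean up if the test exited before close()
            driver.quit();
        }
    }
}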



Slides are available from here






Video is available here:








Thanks to Priyank Shah for this pic!






I also received some awesome feedback for the same.





Thanks vodQA Team! Till next time, adios!

Friday, March 9, 2018

MAD-LAB - Capabilities & Features - Agile India 2018

I spoke about "Build your own MAD-LAB - for Mobile Test Automation for CD" at Agile India 2018.

Though I have spoken on a similar topic before - answering the question "Why did I need to build my own MAD-LAB?" at vodQA in July 2017 at Vuclip - quite a few things have changed since then.

Knowing the value of "being agile", a day before my scheduled talk at Agile India 2018, I decided to revamp the content substantially. To add to my challenges (and thanks to "testing" my slides in the conference room before the talk), I also realised that the slide size format I was using was incorrect, and that the projector was not set up / configured correctly, making all my slide colours go haywire.

So, after 10 minutes of scrambling before the talk, I managed to get this sorted out (at least that is what I think now, in hindsight).

Moral of the above story - do a test / dry-run of your slides before your audience comes in!

That said, here is the abstract of the talk.


Abstract

In this age of a variety of cloud-based-services for virtual Mobile Test Labs, building a real-(mobile)-device lab for Test Automation is NOT a common thing – it is difficult, high maintenance, expensive! Yet, I had to do it! 

The slides are part of the discussion on the Why, What and How I built my own MAD-LAB (Mobile Automation Devices LAB). The discussion also includes the Automation Strategy, Tech Stack, Capabilities & Features of MAD-LAB and the learnings from successful & failed experiments in the journey. 

Slides

Below are the slides from my talk. The link to the video will be shared once available.




Some pictures



Friday, December 29, 2017

Understanding, Measuring and Building Consumer Quality

It has been a long time since I posted anything on my blog. For those who don't know, I am working at Vuclip, a B2C company in the OTT space, where we have millions of consumers using our product via Android and iOS native apps, and the browser too.

In the past few months, I have been in deep water taking on a new and very exciting initiative. Before I share what that is - here is a traditional approach to Quality.


Typically, practices, processes and tools are chosen and implemented to help build a good Quality Product for the end-user. Evolving from the Waterfall methodology to the Agile methodology has been challenging for many (organizations and individuals), but has proven to be a huge step forward towards the goal of building a good and usable product.

Over this time, we have (thankfully) changed the thought process from considering QA to be the "gate-keeper of Quality" to QA being a "Quality Advocate and Quality Enabler" for the team and the product. A very important change as a result has been shifting the focus of QA from "finding defects" to "preventing defects".

And rightly so! After all, why should the QA be the gate-keeper and -
  • take the responsibility and blame of someone giving poor / incomplete requirements? or,
  • someone writing bad code during development?
The QA is not a scavenger meant to clean up the mess created by others. The QA instead is an enabler who -
  • helps bring all stakeholders together through the life-cycle of the product - from conceptualization to end-delivery, 
  • asks a lot of questions to find gaps, clarify assumptions, etc.
  • helps find and radiate information including risks, and,
  • is an active part of doing whatever it takes to prevent defects coming into the system
The Agile practices help do this in a collaborative way, getting features to completion in an incremental fashion, and iterating / pivoting based on the feedback received. This is also what the practices related to Continuous Delivery enable us to do well.

But this is nothing new, at least for me. After all, during my fantastic journey at ThoughtWorks, I would say that these were basic tenets of why and how we worked.

That said, my eye-opener in the past few months has been to take this thought process many steps forward.

My agenda has been - how can I help influence and raise the bar of quality in such a way that we not only build a quality product, but are also in a position to predict how our millions of consumers will be able to use it.

We are calling this initiative Consumer Quality -
  • how do we understand Quality (= value) of the product as perceived by our Consumers, 
  • what data can be relevant to understand this, how can we be proactive about looking at this data while building a quality product, and,
  • the Nirvana stage - how can we predict which actions will have the desired impact on Consumer Quality!
I hope to be able to share with you more of this in 2018!

Happy New Year everyone! Keep Learning, Keep Sharing!

Tuesday, October 10, 2017

Analytics - the forgotten child!

After a long time, I spoke about the What, Why and How of Analytics Testing at Selenium Conference, Berlin 2017.

This talk was initially supposed to be focussed on Web Analytics only, with the impact on / of IoT (Internet of Things) and Big Data, but my recent experiences made me realise that the learnings could easily be applied to Analytics from Mobile native apps as well.

So against better judgement, a full 30 minutes before I was supposed to go on stage, I started a revamp of the slides to include more content, which also meant a complete change of flow of the talk / slides. Talk about making stupid decisions, but thankfully, it turned out pretty ok!!

Abstract of the talk:

What is Web Analytics and why is it important? We'll walk through techniques for manually testing your data and automating the validation process.
Just knowing about Analytics is not sufficient for business now. There are new kids in town - IoT and Big Data - two of the most used and well-known buzz words in the software industry! With a creative mindset looking for opportunities to add value, the possibilities for IoT are infinite. With each such opportunity, there's a huge volume of data being generated which, if analysed and used correctly, can feed into creating more opportunities and increased value propositions.
There are 2 types of analysis that one needs to think about:
  1. How is the end-user interacting with the product? - This will give some level of understanding into how to re-position and focus on the true value add features for the product.
  2. What are the patterns in the data? - With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns and find out new product and value opportunities based on these.
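For the first type, the "automating the validation" part often boils down to capturing the analytics request fired for a user action (for example, via a proxy) and asserting that the expected key-value tags were sent. Below is a minimal sketch of just the assertion step, against a hypothetical captured request URL.

import java.net.URLDecoder;
import java.util.HashMap;
import java.util.Map;

public class AnalyticsTagValidator {
    public static void main(String[] args) throws Exception {
        // A hypothetical analytics call captured for a "video play" action
        String capturedRequest = "https://analytics.example.com/collect?event=video_play&title=MyShow&region=IN";

        // Parse the query string into key-value tags
        Map<String, String> actualTags = new HashMap<>();
        String query = capturedRequest.substring(capturedRequest.indexOf('?') + 1);
        for (String pair : query.split("&")) {
            String[] keyValue = pair.split("=", 2);
            actualTags.put(URLDecoder.decode(keyValue[0], "UTF-8"), URLDecoder.decode(keyValue[1], "UTF-8"));
        }

        // Assert that the tags we expect for this action were actually sent
        Map<String, String> expectedTags = new HashMap<>();
        expectedTags.put("event", "video_play");
        expectedTags.put("region", "IN");
        for (Map.Entry<String, String> expected : expectedTags.entrySet()) {
            String actual = actualTags.get(expected.getKey());
            if (!expected.getValue().equals(actual)) {
                throw new AssertionError("Tag mismatch for '" + expected.getKey() + "': expected '"
                        + expected.getValue() + "' but got '" + actual + "'");
            }
        }
        System.out.println("All expected analytics tags were sent.");
    }
}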

Video from the talk:



Slides from the talk:



Tuesday, August 22, 2017

NullPointerException from RemoteWebElement in Selenium via Appium Java-Client 5.0.0-BETA9

As you may be aware from my previous posts about MAD-LAB, we are using Appium with Java-Client 5.0.0-BETA9 to automate user journeys of the VIU app on Android & iOS devices.

Last week, suddenly, while in the middle of doing another round of significant changes to support more capability in the test framework for the Android app, the tests started failing. All infrastructure pieces were working fine, but when the App launched, I started getting this error:

ERROR AndroidLanguageScreen:16 - [5203bb1ae2771425] - ERROR in clicking on androidElement - 'By.id: tv_one' - exception - 'null'
java.lang.NullPointerException

The code in question was - driver.findElement(myElementLocator).click()

On further investigation, it seemed that there was a problem in doing any interaction with the app, not just "click".

After a lot of racking my brains, I asked a colleague to see if the problem reproduced on her machine. She had not run the tests on her machine for a few days, and as soon as she ran the test execution command, the same error happened on her machine as well. Interestingly though, we observed the following trace in her machine's console logs:

------------
Packages that were updated:


Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-support/3.5.1/selenium-support-3.5.1.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-api/3.5.1/selenium-api-3.5.1.pom
Download https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.pom
Download https://repo1.maven.org/maven2/com/google/guava/guava-parent/23.0/guava-parent-23.0.pom
Download https://repo1.maven.org/maven2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.pom
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.pom
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_parent/2.0.18/error_prone_parent-2.0.18.pom
Download https://repo1.maven.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-parent/1.14/animal-sniffer-parent-1.14.pom
Download https://repo1.maven.org/maven2/org/codehaus/mojo/mojo-parent/34/mojo-parent-34.pom
Download https://repo1.maven.org/maven2/org/codehaus/codehaus-parent/4/codehaus-parent-4.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-remote-driver/3.5.1/selenium-remote-driver-3.5.1.pom
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-support/3.5.1/selenium-support-3.5.1.jar
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-api/3.5.1/selenium-api-3.5.1.jar
Download https://repo1.maven.org/maven2/com/google/guava/guava/23.0/guava-23.0.jar
Download https://repo1.maven.org/maven2/com/google/code/findbugs/jsr305/1.3.9/jsr305-1.3.9.jar
Download https://repo1.maven.org/maven2/com/google/errorprone/error_prone_annotations/2.0.18/error_prone_annotations-2.0.18.jar
Download https://repo1.maven.org/maven2/com/google/j2objc/j2objc-annotations/1.1/j2objc-annotations-1.1.jar
Download https://repo1.maven.org/maven2/org/codehaus/mojo/animal-sniffer-annotations/1.14/animal-sniffer-annotations-1.14.jar
Download https://repo1.maven.org/maven2/org/seleniumhq/selenium/selenium-remote-driver/3.5.1/selenium-remote-driver-3.5.1.jar
:buildSrc:compileJava UP-TO-DATE
------------

This trace meant that something had changed in the dependencies, and Gradle was automatically fetching newer versions of them.

This was the smoking gun we were looking for. A search for Selenium 3.5.1 with Appium java-client 5.0.0-BETA9 quickly showed only 1 hit - a bug reported against Java-Client 5.0.0-BETA9 - Warning: Selenium 3.5.1 breaks java client 5.0.0-BETA9

The solution / workaround was also already provided by QAutomatron

configurations.all {
    resolutionStrategy {
        force 'org.seleniumhq.selenium:selenium-support:3.4.0',
                'org.seleniumhq.selenium:selenium-api:3.4.0'
    }
}

This resolved our issue for now.


Tuesday, May 2, 2017

Criteria for setting up a Mobile Test Automation LAB

I recently got asked this question related to the MAD LAB (Mobile Automation Devices LAB) - "Would like to understand how can we setup something similar in our organisation?"

Since this question is applicable to all those thinking of setting up, or who have already set up, their own lab, I thought I would share my answer here.

To set up your own LAB for Mobile Test Automation, multiple things need to align:


Supportive management who -
  • allows experiments (within reason of course) and encourages learning through failure, 
  • willing to invest in infrastructure ($$)

Skilled and Passionate team members who -
  • understand the domain well, 
  • willing to learn, experiment, re-learn and fail fast, 
  • keep looking for innovative solutions to solve problems on hand, 
  • do not reinvent the wheel. 

Philosophy aside, our MAD LAB has the following: 
  • Mac Minis (8-12 devices per Mac Mini), 
  • Powered USB Hubs (I use the ones shown below - and they are working pretty well)

  • High-quality USB cables (I use the ones shown below - and they are working pretty well)
  • CI (Jenkins) set up correctly to keep running tests continuously, with proper reporting in place (else what's the use of running tests if you do not look at the results?)

You could start with a similar setup IF it fits your product-under-test context.

After I answered this on LinkedIn, I realised there are more parameters to think about than just the above.
  • Knowing which devices to use in your Lab
  • Having good, reliable Internet connection
  • Devices should be "seen" easily
  • Should be easy to work on / with the devices as and when required
  • Know how the devices will be placed in the lab. We tried the following:
    • 2-way (double-sided) tape - that didn't work. Devices used to stay up for a few days, then "drop" suddenly. Of course, that also depends on the back surface of the devices.
    • We tried many mobile stands / hangers (shown below) - but each had their own limitations



    • Finally I found an industrial-strength velcro (1" velcro tape that could take a couple of pounds of weight) - and my devices have not budged since. PS: Please be careful when putting this velcro on the devices. If it gets on your hand, you will have a velcro tattoo for a long, long time.

What other parameters would you consider for setting up your own Lab? Looking forward to the comments below.


Friday, April 21, 2017

Introducing MAD LAB - for Mobile Automation

The past few months I have been heads-down in stabilising my Real-Device Mobile Test Lab - which we now call MAD LAB (Mobile Automation Devices LAB) .

For those who may not recollect, see my past posts for reference -

My colleagues and I have put in a lot of effort in setting up MAD LAB, and have now added a lot of rich features to make running tests, seeing the results and making sense of them easier.
  • All infrastructure management is now implemented in Groovy (instead of Gradle, as shared earlier).
  • Actual test implementation is done in cucumber-jvm / Java

List of features currently implemented:
  • Device management (selection, cleanup, app install and uninstall)
  • Parallel test execution (at Cucumber scenario level) - maximising device utilisation
  • Appium server management
  • Adb utilities 
  • Managing periodic ADB server disconnects
  • Custom reporting using cucumber-reports
  • Video recording of each scenario and embedding in the custom reports
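As an example of the kind of building block behind features like "Device management" and "Adb utilities" above, here is a minimal sketch (class and method names are mine, not MAD LAB's actual code) that discovers connected devices by parsing the output of adb:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class AdbUtils {
    // Returns the serial numbers (UDIDs) of devices currently visible to adb,
    // by parsing the output of "adb devices".
    public static List<String> connectedDevices() throws Exception {
        Process process = new ProcessBuilder("adb", "devices").start();
        List<String> udids = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.trim().split("\\s+");
                // lines for ready devices look like: "<serial>    device"
                if (parts.length == 2 && "device".equals(parts[1])) {
                    udids.add(parts[0]);
                }
            }
        }
        process.waitFor();
        return udids;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("Connected devices: " + connectedDevices());
    }
}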

Contents of MAD LAB:
  • 1 Mac Mini - running various Jenkins Agents
  • 2 Powered USB hubs
  • 8 Android devices

Here are some pictures from the setup.








There are many more features, in various stages of implementation, being added to make MAD LAB more powerful.

Sneak peek into what's coming:
  • Analytics Testing
  • Trend and Failure Analysis 
  • iOS
  • Web
  • A transformed MAD LAB

Finding MAD LAB interesting? Some very interesting changes are coming in soon. Watch out for my next blog post for that. 

Want to contribute and be part of this journey? Even better! Reach out to me!

Thursday, February 16, 2017

Finding my way out of a bottomless pit with Appium & Android 7.0 for parallel test runs

As mentioned in my earlier post - I designed and implemented a cucumber-jvm-Appium-based test framework to run automated tests against Android Mobile Devices.

We were using:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.3.0
  • appium - v1.6.3
  • appium-java-client - v4.1.2

    All was good, tests were running via CI, in parallel (based on scenarios), against devices having Android v5.x and v6.x.

    Then the challenges started. We got some new Motorola G4 Plus devices for our Test Lab - which have Android 7.0 installed.

    First, the tests refused to run. I figured out that we would probably need to upgrade the appium java-client library version to v5.0.0-BETA1. By the time we figured that out, appium-java-client v5.0.0-BETA2 was out. We also needed to change the instrumentation to UiAutomator2. This was all fine. Our tests started working (after some more changes in how locators were defined and used).

    However, the tests refused to run in parallel on the Motorola devices with Android 7. The app would launch correctly, but the tests would run as expected on only 1 of the devices - causing our test job to fail miserably, and without any clue.

    These same tests continued to work correctly with all other devices having Android 5.x and 6.x. Very confusing indeed, not to mention highly frustrating too!

    By this time, appium-java-client v5.0.0-BETA3 was out, but I refused to upgrade to that - as the difference was iOS-specific. Likewise, Appium v1.6.4 BETA is now available - but I did not feel like upgrading so fast and battling the new surprises, if any.

    After digging through Appium's open issues, I figured out that many people had faced, and got resolved, a similar issue. The solution seemed to be to upgrade appium-uiautomator2-driver to a version > v0.2.6.

    So - the next question, which had an easier answer - how to upgrade this uiautomator2-driver. However, after the upgrade, my issue did not get fixed. In fact, the Android Driver could now not be instantiated at all. I was getting the errors shown below.

    1. [MJSONWP] Encountered internal error running command: Error: Command '/Users/IT000559/Library/Android-SDK/build-tools/25.0.2/aapt dump badging /usr/local/lib/node_modules/appium/node_modules/appium-uiautomator2-driver/uiautomator2/appium-uiautomator2-server-v0.1.1.apk' exited with code 1 at ChildProcess. (../../lib/teen_process.js:70:19) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:192:7) at maybeClose (internal/child_process.js:890:16) at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

    2. org.openqa.selenium.SessionNotCreatedException: Unable to create new remote session. desired capabilities = Capabilities [{appPackage=com.vuclip.viu, noReset=false, appWaitActivity=com.vuclip.viu.ui.screens.IndianProgrammingPreferenceActivity, deviceName=motorola, fullReset=false, appWaitDuration=60000, appActivity=com.vuclip.viu.ui.screens.MainActivity, newCommandTimeout=600, platformVersion=7.0, automationName=UIAutomator2, platformName=Android, udid=ZY223V2H8R, systemPort=6658}], required capabilities = Capabilities [{}] Build info: version: '3.0.1', revision: '1969d75', time: '2016-10-18 09:49:13 -0700'

    Eventually, I found a workaround. I had to make the following 2 changes:
    • When initialising the Android Driver, I had to pass an additional capability - "systemPort" - and set its value to the port of the Appium server the test was connecting to.
      • capabilities.setCapability("systemPort", Integer.parseInt(APPIUM_PORT)); 
    • Before the test run starts, I do a cleanup, which includes:
      • killing any prior / orphan Appium server for that particular port, if still running,
      • uninstalling the app from the device. I had to add another step to also uninstall the following:
        • io.appium.uiautomator2.server, and, 
        • io.appium.uiautomator2.server.test 
    Post this, my tests are now working, as expected (from the beginning), sequentially or in parallel, against all supported Android versions.
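    For reference, here is a minimal sketch of what such a driver initialisation could look like with java-client 5.x. The factory name, app path and server URL are illustrative, not our actual framework code.

import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.AndroidElement;

public class AndroidDriverFactory {
    // Creates an AndroidDriver for one device / Appium-server pair, passing a unique
    // systemPort so that parallel UiAutomator2 sessions do not clash with each other.
    public static AndroidDriver<AndroidElement> createDriver(String udid, String appiumServerUrl, int systemPort) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("automationName", "UIAutomator2");
        capabilities.setCapability("udid", udid);
        capabilities.setCapability("app", "/path/to/app.apk"); // hypothetical path to the app under test
        // unique per parallel session - in our setup it was derived from the Appium server's port
        capabilities.setCapability("systemPort", systemPort);
        return new AndroidDriver<>(new URL(appiumServerUrl), capabilities);
    }
}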

    Current stack:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

    After the dust settled, my colleague Priyank Shah and I were wondering why not many people have encountered this problem.

    My thought is that most people are probably managing the Appium Server and the Android Driver from the test run, instead of from a build script. As a result, they would not have encountered the systemPort-related challenge the way we did.

    PS: Note that Appium Server is started / stopped via our build.gradle file and the AndroidDriver is instantiated (based on parameters passed via a combination of environment variables & properties file) from within each cucumber-jvm scenario (@Before hook).

    Hope our learning helps others who may encounter similar issues.



    How to upgrade the appium-uiautomator2-driver version for appium 1.6.3?

    I am using appium v1.6.3 - which comes with appium-uiautomator2-driver@0.2.3 and appium-uiautomator2-server@0.0.8.

    I need to upgrade to the newer appium-uiautomator2-driver@0.2.9 (which has a fix for an issue I am seeing - https://github.com/appium/appium/issues/7527).

    Any idea how I can upgrade the uiautomator2 driver (while using the same appium@1.6.3)?




    Friday, December 16, 2016

    How to enable seamless running of appium tests on developer machines?

    I am implementing a cucumber-jvm based framework to drive mobile apps (using Appium).

    Here is what I need to be able to do -
    1. Run tests on local machine for quick validation. This is mainly for developers to be able to run the tests before pushing code changes in git.
    2. Trigger and Run the tests in the cloud to run against emulators / real devices. 
    To achieve point #1, I need the setup to be simple. I do not want the team to go through massive steps to get the environment (Appium, emulators, etc.) set up.

    Can / should the whole setup be put inside a Docker container - providing a single command to set up and run the tests?

    Any other approach you recommend?

    Of course, whatever approach is taken should potentially extend seamlessly to address point #2.

    Friday, December 2, 2016

    A new beginning - entertainment on mobile

    After 7+ years, I finally took the heavy step and moved out of ThoughtWorks.

    The past 7+ years have been awesome. I had loads of fun, learnt many new things, made a lot of friends and found inspiration and guidance from a lot of mentors.

    Thank you ThoughtWorks and ThoughtWorkers! Wouldn’t have been who I am today without you and you all will always be a huge part of me!

    Taking the decision was tougher than I thought it would be ... but new challenges were waiting for me, and the time had come.

    On 1st December, 2016, I started my next stint as "Director - Quality" at Vuclip, Inc for the Viu product. You can also find us via the PlayStore or AppStore.

    Day 1 at Vuclip, barring the first 2 hours of paperwork, was about getting right into action. With the planning for 2017 in full swing, there was no time to settle - I had to hit the ground running instead.

    The charter starting Day 1 for me was:


    • Define & execute test strategy for Viu - for multiple platforms, for multiple regions & partners
    • Build team to help execute the above (see section below on what I am looking out for)
    • In scope - functional testing, automation, performance, analytics, benchmarking, infrastructure, tooling, etc.
    • Out of scope - nothing

    And so the fun has begun.

    So, here is what I need to learn and execute immediately (looking forward to suggestions, links, feedback on how you have done it in the past)

    • Is it worth setting up a mobile lab (real devices + simulators) in-house, or using external services, for running automated tests / exploratory tests?
    • If the latter, what services have you used in the past? What have the results been?
    • Is it possible (and worth) automating the checks for memory / processor / battery usage when running tests against the native app (on Android & iOS)?
    • How to do native app performance testing (client-side) for Android & iOS?


    Also, I am looking to build a strong testing team with team members having the primary skills & capabilities -
    • Open-minded, quick learner
    • A good Testing-mindset
    • Mobile Testing experience (non-automated + test automation)
    • Performance Testing (client-side & server-side)

    Contact me if you are interested in being part of my team to work on this challenging product.

    Tuesday, September 13, 2016

    Slides from vodQA Pune - Less Talk, Only Action! now available

    vodQA-Pune - Less Talk, Only Action! was held on Saturday, 27th Aug 2016, 8.30am - 5.30pm at ThoughtWorks, Pune.

    Agenda



    Abstracts with Slides

    1. Automating Web Analytics - Why? How?

    Do you know –

    • What is Web Analytics? How does Web Analytics work?
    • Why is it important? How to test Web Analytics?
    • How can we ensure correct data is sent to the Web Analytics server, every time, for all the actions?

    Attend this workshop to learn ‘What is Web Analytics?’ and why it is an extremely important aspect of Software Development & Testing for your product / service to succeed!

    We will share some techniques for testing Web Analytics - in a non-automated way - and why that is very challenging and error-prone.

    We will learn, via hands-on activity, about WAAT - Web Analytics Automation Testing Framework (https://essenceoftesting.blogspot.com/search/label/waat) - an open-source solution, to automate validation of correct information / tags being sent to the Web Analytic server for different user actions as part of your regular Selenium-WebDriver Test Automation Framework.

    Lastly, we will see how the impact of Analytics has changed dramatically with more adoption and spread of IoT (Internet of Things) and Big Data, and what we need to do to be part of the change, if not influencers of change!

    Slides: automating-web-analytics


    2. Performance testing with Gatling for Beginners

    Gatling is a server-side performance testing tool. This workshop aims at giving an introduction to Gatling and helping participants write their first performance tests using Gatling.
    • Brief intro to Gatling
    • Using Postman to check the stub server (created using mountebank for the workshop)
    • Write a sample test using Gatling (pre-set-up machines are provided)

    Slides: gatling-performance-workshop

    3. Game of Test Automation

    We are going to use a game to work out the WHY, WHAT and HOW of test automation within the context of consumers, application, skillset, mindset, etc.

    4. Security Testing - Operation Vijay

    These are the days of the web. All businesses are moving their applications online for their customers to use.

    Many of these applications contain critical customer data such as credit card details, personal details and so on. This data is very valuable. If it falls into the wrong hands, the consequences can be disastrous.

    Attacks on these systems can destroy the trust that customers have in the business, cause great losses to the customers as well as the business, and so on.

    The motivation behind attacks could differ - earning money, gaining popularity, damaging a competitor company, etc.

    No matter what the intention of the attack is, we need to develop safe applications and we need to know the various vulnerabilities and the consequences of our decisions when we develop applications.

    Similarly, we should be aware of the various vulnerabilities before we test the applications, so that we can try to exploit them during the testing phase and ensure better-quality, safer applications.

    Slides: security-testing-operation-vijay

    5. Automate your Mobile tests with Appium

    • Introduction to Appium.
    • Appium design for Android and iOS
    • How to locate elements in Android and iOS applications (Inspector).
    • Hands-on code snippet for Android and iOS (WordPress as the sample app)
    • Generating reports (ExtentReports)
    • Evolve the code snippet written above into a Page Object framework.
    Slides: mobile-automation-using-appium

    6. Increase Automation to REST

    • What are web services and why do we use them?
    • How to test a web service in multiple ways?
    • Increased familiarity with automation

    Tools used: rest client, Postman (mention alternatives), Unirest (Java) and Requests (Python)
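    As a taste of the workshop content, here is a minimal sketch of hitting a public test endpoint with Unirest for Java and asserting on the response; the endpoint and checks are illustrative.

import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.JsonNode;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.exceptions.UnirestException;

public class RestApiCheck {
    public static void main(String[] args) throws UnirestException {
        // Simple GET request with a query parameter
        HttpResponse<JsonNode> response = Unirest.get("https://httpbin.org/get")
                .queryString("user", "test")
                .asJson();

        // Basic assertions on status code and body
        if (response.getStatus() != 200) {
            throw new AssertionError("Expected HTTP 200 but got " + response.getStatus());
        }
        System.out.println(response.getBody());
    }
}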

    Slides: increase-automation-to-rest, api-webservice-setup-instructions

    7. Let's cook Cucumber

    In this workshop we will be covering:
    • Advantages of BDD through cucumber example.
    • Framework setup along with JAVA and Selenium.
    • Writing one end-to-end test case, as in the real world.
    • If time permits - will be covering basic Refactoring
    Slides: lets-cook-cucumber
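    For a flavour of what the workshop builds towards, here is a minimal cucumber-jvm (1.2.x) step definition sketch in Java with Selenium; the URL and locators are hypothetical and the glue code is deliberately simplified.

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchStepDefs {
    private WebDriver driver;

    @Given("^the user is on the home page$")
    public void theUserIsOnTheHomePage() {
        driver = new FirefoxDriver();
        driver.get("https://www.example.com"); // hypothetical application URL
    }

    @When("^the user searches for \"([^\"]*)\"$")
    public void theUserSearchesFor(String term) {
        driver.findElement(By.name("q")).sendKeys(term); // hypothetical locator
        driver.findElement(By.name("q")).submit();
    }

    @Then("^the page title contains \"([^\"]*)\"$")
    public void thePageTitleContains(String term) {
        String title = driver.getTitle();
        driver.quit();
        if (!title.contains(term)) {
            throw new AssertionError("Expected title to contain '" + term + "' but was '" + title + "'");
        }
    }
}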
     

    8. Mobile Automation using Espresso

    Imagine a situation where every commit spits out a build that can be deployed to production with confidence. In today's startup era, this can be a huge boost to business as it will reduce the time to market. UI Automation for mobile apps, be it native or hybrid, has been painful for a long time. But with mature frameworks coming up and Google / Apple realizing the importance of such tools, UI Automation is gaining traction in the mobile space.

    This talk is basically about understanding the what and why of Espresso, along with automating a simple scenario using Espresso.
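    As a minimal illustration (not the exact scenario from the talk), an Espresso test that taps a button and checks the result could look like the sketch below - MainActivity and the view ids are hypothetical placeholders for your app's own.

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginScreenTest {
    // MainActivity and the R.id values below are hypothetical - replace them with your app's own
    @Rule
    public ActivityTestRule<MainActivity> activityRule = new ActivityTestRule<>(MainActivity.class);

    @Test
    public void tappingLoginShowsTheHomeScreen() {
        onView(withId(R.id.login_button)).perform(click()); // tap the login button
        onView(withId(R.id.home_container)).check(matches(isDisplayed())); // home screen is now visible
    }
}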

    Slides: getting-high-on-espresso

    Friday, September 2, 2016

    WAAT 2.0 (BETA) available for use

    Very excited to share that I, along with a few collaborators of WAAT, did a workshop at vodQA, Pune on Sat, 27th Aug, about Automating Web Analytics - Why? How?
    We implemented the automation using WAAT.

    Also, I am very excited to announce that WAAT v2.0 BETA for Java is now available from the WAAT project.

    The slides from the workshop are available here



    Here are some details of the workshop:

    Abstract

    Do you know –
    What is Web Analytics?
    How does Web Analytics work?
    Why is it important? How to test Web Analytics?
    How can we ensure correct data is sent to the Web Analytics server, every time, for all the actions?
    Attend this workshop to learn ‘What is Web Analytics?’ and why it is an extremely important aspect of Software Development & Testing for your product / service to succeed!
     
    We will share some techniques for testing Web Analytics - in a non-automated way - and why that is very challenging and error-prone.
     
    We will learn, via hands-on activity, about WAAT - Web Analytics Automation Testing Framework (https://essenceoftesting.blogspot.com/search/label/waat) - an open-source solution, to automate validation of correct information / tags being sent to the Web Analytic server for different user actions as part of your regular Selenium-WebDriver Test Automation Framework.
     
    Lastly, we will see how the impact of Analytics has changed dramatically with more adoption and spread of IoT (Internet of Things) and Big Data, and what we need to do to be part of the change, if not influencers of change!

    Pre-Reqs

    • JDK 1.8
    • IDE (Eclipse / IntelliJ Idea / etc.)
    • Git (and add it to path)
    • Gradle 2.3 (and add it to path)
    • Firefox browser compatible with Selenium v2.53.1 (v46)
    • Also refer here for steps to run Appium tests using WAAT 

     

    Sample Code

    - Create a directory WAAT-Workshop and run
    git clone git@github.com:anandbagmar/waat-sample-java.git
    - If using any existing framework: download WAAT-all-v2.0_BETA.jar
    http://goo.gl/8bfyaJ
    - To run sample code - from waat-sample-java directory - run the command -
    gradle clean build 

    Please reach out if you need help in using WAAT!