Thursday, March 16, 2017

Workshop - Client-Side Performance Testing at STPCon 2017

I conducted a 4-hour workshop on Client-Side Performance Testing at STPCon 2017 on 15th March 2017.


Workshop Abstract

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing.
Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and put that in the context of the product under test. We will see, using a case study, the impact of caching on performance - the good & the bad! We will then experiment with some tools like WebPageTest and PageSpeed to understand how to measure client-side performance.
Lastly – just understanding the performance of the product is not sufficient. We will look at how to automate the testing for this activity – using WebPageTest (private instance setup), and experiment with yslow – as a low-cost, programmatic alternative to WebPageTest.
We will also look at the different dimensions of Client-side Performance Testing for native mobile applications.
PS: This workshop will be a combination of presentation and hands-on activities, with lots of discussion throughout. You should bring your laptop with you.
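
One low-cost, programmatic way to collect client-side metrics (complementary to WebPageTest and yslow) is to read the browser's Navigation Timing API from an automated test. Below is a minimal, illustrative sketch - not the workshop's actual material - using Selenium's JavascriptExecutor in Java; the URL is a placeholder:

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class NavigationTimingExample {
        public static void main(String[] args) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("https://example.com");  // placeholder - the page under test
                JavascriptExecutor js = (JavascriptExecutor) driver;
                // Navigation Timing API: epoch milliseconds for key milestones of the page load
                long navigationStart = (Long) js.executeScript("return window.performance.timing.navigationStart;");
                long responseStart = (Long) js.executeScript("return window.performance.timing.responseStart;");
                long domComplete = (Long) js.executeScript("return window.performance.timing.domComplete;");
                System.out.println("Time to first byte (ms): " + (responseStart - navigationStart));
                System.out.println("Page load time (ms): " + (domComplete - navigationStart));
            } finally {
                driver.quit();
            }
        }
    }

Numbers like these can then be asserted against agreed budgets as part of a CI run.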

Workshop Takeaways:

  • Understand the difference between Performance Testing and Performance Engineering.
  • Hands-on experience with some open-source tools to monitor, measure and automate Client-side Performance Testing.
  • Examples / code walk-through of some ways to automate Client-side Performance Testing.

Slides


Client-Side Performance Testing from Anand Bagmar

Some pictures from the workshop 

(Thanks to Mike Lyles and Curtis Stuehrenberg for the pictures)





Monday, March 6, 2017

Analytics Testing

I recently spoke at an Agile Testing Conference on - The What, Why and How of (Web) Analytics Testing.

I was also part of a panel discussion with the theme - "What's not changed since moving to Agile Testing - The Legacy Continues!" There were some very interesting perspectives in this discussion.

The great part was that the audience was very involved and vocal throughout the day. This made it very interactive, with good sharing of information and experiences for all!

Below is some information about the talk. I will try to add the link to the video soon.
 

Abstract

Analytics is changing the way products and services are being created and consumed.

In this session, we will learn

  • What is Analytics?
  • Why is it important to use Analytics in your product?
  • The impact of Analytics not working as expected

We will also see some techniques to test Analytics manually, and how to automate that validation. But just knowing about Analytics is no longer sufficient for business.
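
One common way to automate that validation is to capture the analytics requests (beacons) a page sends, and assert on them. Here is a minimal, illustrative Java sketch - not necessarily the approach shown in the talk - assuming BrowserMob Proxy and a Google Analytics-style endpoint; the URL and expected parameters are placeholders:

    import java.util.List;

    import net.lightbody.bmp.BrowserMobProxy;
    import net.lightbody.bmp.BrowserMobProxyServer;
    import net.lightbody.bmp.client.ClientUtil;
    import net.lightbody.bmp.core.har.HarEntry;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class AnalyticsBeaconCheck {
        public static void main(String[] args) {
            // Route the browser through a local proxy so we can inspect outgoing requests
            BrowserMobProxy proxy = new BrowserMobProxyServer();
            proxy.start(0);
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

            DesiredCapabilities capabilities = DesiredCapabilities.firefox();
            capabilities.setCapability(CapabilityType.PROXY, seleniumProxy);
            WebDriver driver = new FirefoxDriver(capabilities);

            try {
                proxy.newHar("analytics-check");
                driver.get("https://example.com");  // placeholder - the page whose analytics tags we want to verify

                // Check that at least one pageview beacon was fired with the expected parameters
                List<HarEntry> entries = proxy.getHar().getLog().getEntries();
                boolean beaconFired = entries.stream()
                        .map(entry -> entry.getRequest().getUrl())
                        .anyMatch(url -> url.contains("google-analytics.com/collect") && url.contains("t=pageview"));
                System.out.println(beaconFired ? "Analytics pageview beacon fired" : "MISSING analytics pageview beacon");
            } finally {
                driver.quit();
                proxy.stop();
            }
        }
    }

The same idea extends to any analytics provider - only the requests you look for and the parameters you assert on change.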

There are new kids in town - IoT and Big Data - two of the most used and most heard-of buzzwords in the Software Industry!

With IoT, and a creative mindset looking for opportunities and ways to add value, the possibilities are infinite. With each such opportunity, there is a huge volume of data being generated - which, if analyzed and used correctly, can feed into creating more opportunities and increased value propositions.

There are 2 types of analysis that one needs to think about.

1. How is the end-user interacting with the product? This will give some insight into how to re-position the product and focus on its true value-add features.

2. With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns from what has happened, and find out new product / value opportunities based on usage patterns.


Slides


The What, Why and How of (Web) Analytics Testing (Web, IoT, Big Data) from Anand Bagmar

Pictures







Monday, February 20, 2017

Sharing implementation of cucumber-jvm - Appium test framework

I recently shared the Features of my Android Test Automation Framework, and also the challenges - and how we overcame them - to make parallel test execution work well with Android 7.0 devices as well.

In this blog post, I will be sharing the details (including code) of the implementation. If you have not read my post on - 
Features of my Android Test Automation Framework - I highly recommend you read that first.



Implementation Details

Tech Stack Summary

To recap - here is the tech stack that we currently have:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

1. Configure Jenkins Node (in Jenkins Server)






We currently have 5 Jenkins Nodes set up, as shown below.

Each node is configured like this:

2. Setup Jenkins Job (in Jenkins Server)

Once the Nodes are set up, we can configure the Jenkins Jobs.

We have set up the following jobs in Jenkins for our test executions.

Each job is configured as a Jenkins Pipeline project, and we use the sample Jenkinsfile available here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/2%20-%20Setup%20Jenkins%20Jobs/2.2%20Jenkinsfile) to configure what the job is supposed to do.


3. Setup Jenkins Agent

Once the Jenkins Nodes and Jenkins Jobs are configured, we need to get the Jenkins Agents themselves set up and configured to service requests from the Jenkins server.

We use JNLP to connect each Jenkins agent to the Jenkins server. For this, we have a template .sh script, which we copy and update 2 values in. This is needed for each new Jenkins Node that we connect.

The template .sh script can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.1%20start-e2e-moto.sh).

Now, our Jenkins setup is done. But a big piece is still missing. 

In order to run our tests on the Agent, we need some basic software to be installed. To do this, we created a shell script that helps provision the machine. This needs to be done just once - but since we plan to have multiple Mac Mini host machines running varying numbers of Jenkins agents, the script helps keep the same software (including versions) on all our machines - which means the same test execution environment everywhere.

This shell script can be found here - (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.2%20JenkinsAgentMachineFirstTimeSetup.sh)


4. Manage Test Infrastructure & Test Execution

By this stage, our Jenkins Server and Jenkins Agent setup is done, including the software required to run the tests. The next piece is at the Test Framework level.

Our build tool is Gradle. All infrastructure-related work is handled via this build.gradle file.

Before we get into the details of the Gradle file, it is important to understand the code structure.

Via Groovy / Gradle, we managed to solve the complexity of:

  • Finding matching devices based on the CONNECTED_DEVICE_IDS environment variable (a minimal sketch of this device matching follows this list)
  • Downloading the apk file from wherever it is available
    • This is done just once per test run - regardless of how many devices the tests are going to run on
    • The URL to download from is passed as an environment variable - APP_URL
    • For local testing, you can give a local absolute path to the apk file via the APP_PATH environment variable instead of specifying APP_URL
  • Finding the list of scenarios to be run (based on the cucumber tags specified via the environment variable 'run')
  • Managing start / stop of Appium Servers
  • Cleaning up the device before test runs
  • Executing Cucumber scenarios in parallel
  • Building consolidated reports locally (cucumber-reports) - IF not using the Jenkins cucumber-reports plugin
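
The actual implementation of the above lives in the build.gradle (Groovy) in the repository shared at the end of this post. Purely to illustrate the device-matching idea, here is a minimal, hypothetical Java sketch that filters the output of adb devices against CONNECTED_DEVICE_IDS (it assumes adb is on the PATH; the class and method names are made up):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    public class DeviceMatcher {
        // Returns the serials reported by `adb devices` that are also listed in CONNECTED_DEVICE_IDS
        public static List<String> matchingDevices() throws IOException, InterruptedException {
            List<String> matched = new ArrayList<>();
            String ids = System.getenv("CONNECTED_DEVICE_IDS");
            if (ids == null || ids.trim().isEmpty()) {
                return matched;  // no devices allocated to this node
            }
            List<String> allowed = Arrays.stream(ids.split(","))
                    .map(String::trim)
                    .collect(Collectors.toList());

            Process adb = new ProcessBuilder("adb", "devices").start();
            try (BufferedReader reader = new BufferedReader(new InputStreamReader(adb.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Device lines look like "<serial><TAB>device"; skip the header and offline devices
                    if (line.endsWith("\tdevice")) {
                        String serial = line.split("\t")[0].trim();
                        if (allowed.contains(serial)) {
                            matched.add(serial);
                        }
                    }
                }
            }
            adb.waitFor();
            return matched;
        }

        public static void main(String[] args) throws Exception {
            System.out.println("Devices allocated to this node: " + matchingDevices());
        }
    }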



5. Run Tests

The last step in this process is to manage the AndroidDriver. We use cucumber-jvm's @Before and @After hooks to set the right capabilities and instantiate the AndroidDriver, and to stop it after test execution is complete.
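
The actual hook implementations are in the helper files linked below. As an illustration only, here is a minimal, hypothetical sketch of such hooks for cucumber-jvm 1.2.x and appium-java-client 5.0.0-BETA2 - the capability values and environment variable names are placeholders, not the project's real configuration:

    import java.net.URL;

    import cucumber.api.java.After;
    import cucumber.api.java.Before;
    import io.appium.java_client.android.AndroidDriver;
    import io.appium.java_client.android.AndroidElement;
    import org.openqa.selenium.remote.DesiredCapabilities;

    public class DriverHooks {
        private AndroidDriver<AndroidElement> driver;

        @Before
        public void startDriver() throws Exception {
            // Placeholders - in the real framework these come from environment variables and a properties file
            String appiumPort = System.getenv("APPIUM_PORT");
            String deviceId = System.getenv("DEVICE_ID");

            DesiredCapabilities capabilities = new DesiredCapabilities();
            capabilities.setCapability("platformName", "Android");
            capabilities.setCapability("automationName", "UIAutomator2");
            capabilities.setCapability("udid", deviceId);
            capabilities.setCapability("app", System.getenv("APP_PATH"));
            // Each parallel run needs its own UiAutomator2 server port (see the Android 7.0 post below)
            capabilities.setCapability("systemPort", Integer.parseInt(appiumPort));

            driver = new AndroidDriver<>(new URL("http://127.0.0.1:" + appiumPort + "/wd/hub"), capabilities);
        }

        @After
        public void stopDriver() {
            if (driver != null) {
                driver.quit();
            }
        }
    }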

These helper files can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/tree/master/5%20-%20Run%20Tests).


Sample Code

All the sample code can be found in my github repository cucumber-jvm-appium-infra - https://github.com/anandbagmar/cucumber-jvm-appium-infra


Happy Testing!



Thursday, February 16, 2017

Finding my way out of a bottomless pit with Appium & Android 7.0 for parallel test runs

As mentioned in my earlier post - I designed and implemented a cucumber-jvm-Appium-based test framework to run automated tests against Android Mobile Devices.

We were using:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.3.0
  • appium - v1.6.3
  • appium-java-client - v4.1.2

    All was good: tests were running via CI, in parallel (based on scenarios), against devices running Android v5.x and v6.x.

    Then the challenges started. We got some new Motorola G4 Plus devices for our Test Lab - which have Android 7.0 installed.

    First, the tests refused to run. We figured out that we would probably need to upgrade the appium-java-client library to v5.0.0-BETA1. By the time we figured that out, appium-java-client v5.0.0-BETA2 was out. We also needed to change the instrumentation to UiAutomator2. This was all fine - our tests started working (after some more changes in how locators were defined and used).

    However, the tests refused to run in parallel on the Motorola devices with Android 7. The app would launch correctly, but the tests ran as expected on only 1 of the devices - causing our test job to fail miserably, and without any clue as to why.

    These same tests continued to work correctly on all other devices running Android 5.x and 6.x. Very confusing indeed, not to mention highly frustrating!

    By this time, appium-java-client v5.0.0-BETA3 was out, but we refused to upgrade to it, as the differences were iOS-specific. Likewise, Appium v1.6.4 BETA is now available - but we did not feel like upgrading so fast and battling any new surprises.

    After digging through Appium's open issues, we figured out that many people had faced a similar issue and got it resolved. The solution seemed to be to upgrade appium-uiautomator2-driver to a version > v0.2.6.

    So - the next question, which had an easier answer - was how to upgrade this uiautomator2 driver. However, after the upgrade, my issue was not fixed. In fact, the AndroidDriver now could not be instantiated at all. I was getting the errors shown below.

    1. [MJSONWP] Encountered internal error running command: Error: Command '/Users/IT000559/Library/Android-SDK/build-tools/25.0.2/aapt dump badging /usr/local/lib/node_modules/appium/node_modules/appium-uiautomator2-driver/uiautomator2/appium-uiautomator2-server-v0.1.1.apk' exited with code 1 at ChildProcess. (../../lib/teen_process.js:70:19) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:192:7) at maybeClose (internal/child_process.js:890:16) at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

    2. org.openqa.selenium.SessionNotCreatedException: Unable to create new remote session. desired capabilities = Capabilities [{appPackage=com.vuclip.viu, noReset=false, appWaitActivity=com.vuclip.viu.ui.screens.IndianProgrammingPreferenceActivity, deviceName=motorola, fullReset=false, appWaitDuration=60000, appActivity=com.vuclip.viu.ui.screens.MainActivity, newCommandTimeout=600, platformVersion=7.0, automationName=UIAutomator2, platformName=Android, udid=ZY223V2H8R, systemPort=6658}], required capabilities = Capabilities [{}] Build info: version: '3.0.1', revision: '1969d75', time: '2016-10-18 09:49:13 -0700'

    Eventually, I found a workaround. I had to make the following 2 changes:
    • When initialising the AndroidDriver, pass an additional capability - "systemPort" - set to the port of the Appium server that the test connects to.
      • capabilities.setCapability("systemPort", Integer.parseInt(APPIUM_PORT));
    • Before the test run starts, do a cleanup (a minimal sketch of this appears below), which includes:
      • killing any prior / orphan Appium server still running on that particular port
      • uninstalling the app from the device. I had to add another step to also uninstall the following:
        • io.appium.uiautomator2.server, and
        • io.appium.uiautomator2.server.test
    Post this, my tests now work as expected (as they should have from the beginning), sequentially or in parallel, against all supported Android versions.
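
    For illustration, here is a minimal, hypothetical Java sketch of the per-device uninstall part of that cleanup (assuming adb is on the PATH; the package name and serial are placeholders - killing orphan Appium processes is handled separately, from the build script):

        import java.io.IOException;

        public class DeviceCleanup {
            // Uninstall the app under test and the UiAutomator2 server packages from one device,
            // so that a fresh UiAutomator2 server gets installed for the next run
            public static void cleanDevice(String deviceId, String appPackage) throws IOException, InterruptedException {
                uninstall(deviceId, appPackage);
                uninstall(deviceId, "io.appium.uiautomator2.server");
                uninstall(deviceId, "io.appium.uiautomator2.server.test");
            }

            private static void uninstall(String deviceId, String packageName) throws IOException, InterruptedException {
                // `adb uninstall` exits non-zero if the package is not installed; that is fine for a cleanup step
                new ProcessBuilder("adb", "-s", deviceId, "uninstall", packageName)
                        .inheritIO()
                        .start()
                        .waitFor();
            }

            public static void main(String[] args) throws Exception {
                cleanDevice("<device-serial>", "com.example.app");  // placeholder serial and package
            }
        }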

    Current stack:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

    After the dust settled, my colleague Priyank Shah and I were wondering why not many people have encountered this problem.

    My guess is that most people manage the Appium Server and the AndroidDriver from within the test run itself, instead of from a build script. As a result, they would not have encountered the systemPort-related challenge the way we did.

    PS: Note that the Appium Server is started / stopped via our build.gradle file, and the AndroidDriver is instantiated (based on parameters passed via a combination of environment variables and a properties file) from within each cucumber-jvm scenario's @Before hook.

    I hope our learnings help others who may encounter similar issues.



    How to upgrade the appium-uiautomator2-driver version for appium 1.6.3?

    I am using appium v1.6.3 - which comes with appium-uiautomator2-driver@0.2.3 and appium-uiautomator2-server@0.0.8.

    I need to upgrade to the newer appium-uiautomator2-driver@0.2.9 (which has a fix for an issue I am seeing - https://github.com/appium/appium/issues/7527).

    Any idea how I can upgrade the uiautomator2 driver (while using the same appium@1.6.3)?




    Tuesday, February 14, 2017

    Features of my Android Test Automation Framework

    [UPDATED - Added link to implementation details at the end of the post]

    As I have shared in my previous few blog posts (A new beginning - entertainment on mobile, How to enable seamless running of appium tests on developer machines?), a few months ago, I embarked on a new journey as "Director - Quality" for the Viu product at Vuclip.


    Here are a few details about our Viu app:
    Viu offers high-quality, popular, regional video content in various languages for consumers in different regions - Indonesia, Malaysia, India, the Middle East, Egypt ....
    Consumers on the move could be using Android or iOS devices to watch their favourite content - either streaming it directly via their mobile data plan, or downloading their preferred video at home / office and watching the saved video later.
    The very interesting, and challenging, bit from a testing perspective is that the app behaves differently based on which part of the world the user is using it from. This logic is not based on Geolocation.

    Objective

    My objectives for functional test automation are:
    • Get quick feedback from every build, and know if the app is working well. If NOT, be able to get to the root cause ASAP.
    • Provide feedback and confidence to stakeholders (Product team, Business team, etc.) on the "true" state of the product. There should be no surprises, for anyone!
    • Run tests seamlessly by simulating different regions

    Approach

    To achieve this, I chose to go with the following tech stack:
    • Test Intent specification - cucumber-jvm - to specify the business intent, in the business terminology (expected to be) understood by all.
    • Reports - cucumber-reports provide good, rich and easily understandable reports, which give meaningful information about the state of the product.
    • Device interaction - Appium / Java - implement the intent specified by cucumber-jvm. This also gives us the flexibility to test against Android, iOS and Web. (A minimal sketch of a step definition follows this list.)
    • Continuous Integration (CI) server - Jenkins - to be able to run tests automatically when a new build is ready. Also, we integrated the cucumber-reports via the cucumber-reports plugin directly in Jenkins - so now all stakeholders interested in the reports just need to go to one place and get all information, in real time. No more need for Test Reports to be generated manually and sent across for everyone.
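
    As an illustration of how the business intent maps to Appium / Java code, here is a minimal, hypothetical cucumber-jvm step definition - the step text, locators, package name and driver wiring are made-up placeholders, not Viu's actual code:

        import cucumber.api.java.en.Given;
        import cucumber.api.java.en.Then;
        import io.appium.java_client.android.AndroidDriver;
        import io.appium.java_client.android.AndroidElement;
        import org.junit.Assert;
        import org.openqa.selenium.By;

        public class HomeScreenSteps {
            // In the real framework the driver is created in a @Before hook and shared; this is a placeholder
            private static AndroidDriver<AndroidElement> driver;

            public static void setDriver(AndroidDriver<AndroidElement> androidDriver) {
                driver = androidDriver;
            }

            @Given("^I launch the app for the region \"(.*)\"$")
            public void launchAppForRegion(String region) {
                // Placeholder flow: region simulation is driven via the app / configuration, not Geolocation
                driver.findElement(By.id("com.example.app:id/region_selector")).click();
                driver.findElement(By.xpath("//*[@text='" + region + "']")).click();
            }

            @Then("^I should see the home screen$")
            public void verifyHomeScreen() {
                Assert.assertTrue("Home screen is not displayed",
                        driver.findElement(By.id("com.example.app:id/home_container")).isDisplayed());
            }
        }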

    Test Execution environment

    I looked at various cloud-based service providers (SauceLabs, Test Object, pCloudy, AWS Device Farm, etc.) that could run our tests on real devices in their data centers. These tests would need to be triggered automatically via Jenkins when a new build (apk) is available, or as we add more tests. However, none of these providers could satisfy my requirements (without having to compromise significantly on my objectives).

    Hence, I finally decided to set up my own private Test Lab for this. Oh, fun!

    Framework Architecture

    Below is the architecture of the framework that I came up with. This is based on the architecture I posted in my earlier blog post on “Assertion & Validations in Page Object” (https://essenceoftesting.blogspot.com/2012/01/assertions-and-validations-in-page.html)
    (Figure: Viu Test Framework Architecture)

    Approach, Features & Capabilities of the Test Lab

    • For Jenkins configuration, we configure the job using a Jenkinsfile - to ensure our Jenkins configuration is also under version control. Also, since this is the Groovy scripting language, we can have good logic and processing done as part of the job execution.
    • This helped us keep the configuration consistent and common across any type of job we had to run.
    • The Jenkins job will trigger a set of commands on the Jenkins Agent.
    • Jenkins Agents are setup on Mac Mini machines. Each Agent has only 1 executor. This is essential to prevent using a device that is already in use by another job / executor / agent.
    • The Mac Mini (more to be added on demand) has a powered USB hub which connects up to 8 devices.
    • Depending on the types of devices connected, we have a corresponding number of Jenkins Agents (each with only 1 executor) allocated to them.
    Example:
      • Samsung devices (with API Level 22) allocated to Jenkins agent - viu-e2e-samsung
      • Motorola devices allocated to Jenkins agent - viu-e2e-moto

    Infrastructure

    • To manage and use the devices allocated to each node - and, more importantly, prevent other nodes from using devices not allocated to them - each node in Jenkins has an environment variable in its configuration, CONNECTED_DEVICE_IDS: a comma-separated list of the device IDs allocated to that node.
      • This approach makes it easy to change / add / remove devices on the fly. All we need to do in such cases is update the device IDs in the appropriate node's CONNECTED_DEVICE_IDS environment variable.
      • Our Gradle file reads the CONNECTED_DEVICE_IDS environment variable and finds the devices connected to the Mac Mini that match the provided device IDs. This simple technique allows specific device allocation from the pool of devices to each node.
      • PS: If anyone makes a mistake and provides the same device ID for multiple nodes, we all know what will happen. This is an area where we need to be very careful in our execution and maintenance.
    • The URL for the Android APK file is also passed to the Gradle file as an environment variable. We download the APK file once before test execution starts.
    • Functional (end-2-end) automation is painfully slow to run. To get feedback quickly, we have to run it in parallel. All existing approaches to running cucumber-jvm tests in parallel failed to meet our requirements. I wanted to run each scenario in parallel, on whichever device is free (from the matching devices list). I eventually ended up writing a small script that allows me to run scenarios in parallel (a minimal sketch of this idea follows this list).
    • Managing Appium Server (start / stop) is another important activity - which is required to be done just once per device. Gradle manages that as well.
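
    The actual parallel-run logic is implemented in Gradle / Groovy (shared later). Purely to illustrate the idea - a pool of free devices feeding parallel scenario runs - here is a minimal, hypothetical Java sketch; runScenario is a placeholder for however a single scenario is actually launched:

        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.TimeUnit;

        public class ParallelScenarioRunner {
            public static void runAll(List<String> scenarios, List<String> deviceIds) throws InterruptedException {
                // Pool of free devices: a scenario takes one, runs, and then returns it
                BlockingQueue<String> freeDevices = new LinkedBlockingQueue<>(deviceIds);
                // Never run more scenarios concurrently than there are devices
                ExecutorService pool = Executors.newFixedThreadPool(deviceIds.size());

                for (String scenario : scenarios) {
                    pool.submit(() -> {
                        String deviceId = null;
                        try {
                            deviceId = freeDevices.take();    // wait for a free device
                            runScenario(scenario, deviceId);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            if (deviceId != null) {
                                freeDevices.offer(deviceId);  // return the device to the pool
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(2, TimeUnit.HOURS);
            }

            private static void runScenario(String scenario, String deviceId) {
                // Placeholder: in the real setup this would start a cucumber-jvm run for one scenario,
                // pointed at the Appium server / device identified by deviceId
                System.out.println("Running '" + scenario + "' on device " + deviceId);
            }
        }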

    Next steps

    • Stabilize parallel test execution (problems with Android 7)
    • Start with iOS app automation
    • Start with Web automation (www.viu.com)
    • Share the Gradle file with the community, if anyone is interested.
    • [UPDATED] You can find the details of the implementation here

    Friday, December 16, 2016

    How to enable seamless running of appium tests on developer machines?

    I am implementing cucumber-jvm based framework to drive mobile apps (using Appium).

    Here is what I need to be able to do -
    1. Run tests on a local machine for quick validation. This is mainly for developers to be able to run the tests before pushing code changes to git.
    2. Trigger and run the tests in the cloud against emulators / real devices.
    To achieve point #1, I need the setup to be simple. I do not want the team to go through massive steps to get the environment (Appium, emulators, etc.) set up.

    Can / should the whole setup be put inside a docker container - and provide single command to setup and run the tests?

    Any other approach you recommend?

    Of course, whatever approach is taken should potentially extend seamlessly to address point #2.

    Thursday, December 8, 2016

    Career Path of a Tester!

    PS: This is a long post. You can also get / read a PDF version of it from here via SlideShare.

    I asked in various forums - "Does a Tester need a career path? Yes / No - Why?"

    Thanks to all who replied and have shared deeper insights into different perspectives.

    I agree with most of the thoughts and also think that the answer to the question is "It Depends!"

    On what, you ask? On the various parameters used to reach the answer. The parameters could be:
    • Your past experiences
    • Your current work - and how much you are enjoying it, or not. More important (IMHO) are the activities you have been doing, not the titles / roles you have been playing.
    • Who has inspired you in the past and present - and what was their role at the time

    Career Path options for a Tester

    Before I proceed further, I want to share my perspective on what I think are the options for "testers" in the Software Industry. In terms of Career Path options, I think the options are fairly clear, as depicted in the mind map below.


    Career Path for a Tester
    Note - this article focusses on all aspects except the Managerial Track

    Capabilities and Skills for a Tester

    Moving along the Career Path is almost natural as you spend more time in the industry. However, to become successful along the way, there is a lot of hard work required.

    There are various capabilities and skills that a Tester needs to be successful in his / her career - including but not limited to the few options shown below. 

    Note:
    • These skills are more from an Individual Contributor / Technical Track perspective.
    • The mind map does not easily convey the levels of expertise that are possible for each of the items. The levels can be categorised as:
      • Don't Know
      • Beginner
      • Intermediate
      • Expert
      • Master

    Organization Focus

    The Capabilities and Skills would differ "slightly" based on the nature / operational model of the organization you are working for.


    Coaching and Consulting Focus

    Coaching and Consulting focussed roles would predominantly require strong soft skills, but the importance and value of Technical Understanding should not be underestimated.



    Often, unfortunately, I have come across 'consultants' who know only theory, with very little practical experience of implementation. In such cases, you are not doing justice to your role.

    At the same time, I have come across brilliant Consultants who bring in a wealth of experience and knowledge (what has worked, and not worked) and apply it in the correct way in the present context. It was indeed a pleasure to work with such Consultants!

    Analysis Focus

    The shift-left path is extremely important, and also not easy. You need to apply a different mindset to accomplish activities that help build better software even before any code is written.


    Test Analyst Focus

    A Tester cannot be complete without doing well in the core skill - Testing!

    Testing is complex, frustrating, tedious, slow, and negative to a certain extent - finding mismatches between actual and expected behaviour - and you can never say it is 100% complete! Yet, it is a lot of fun. You get to learn a lot of things, and use a variety of tools to help your Testing efforts.

    When the software is built right, and the end-user is able to do what is expected from the software you have helped build and test, Testing is also very rewarding!



    There are a few things that limit the impact of Testing - 
    • Lack of curiosity
    • Lack of willingness to learn, explore and get to the deep core of the product
    • Lack of willingness to be technical (understand / contribute to product architecture, read and understand code and the automated tests)
    • Lack of willingness to experiment
    • Lack of willingness to pair with different roles - Product Owners, Business Analysts, Developers, etc
    • Unable to work as ONE TEAM!
    • ....

    Technical Focus

    Software Development is a highly technical activity. There is no reason why Testing should be considered a non-technical activity.


    Incubation and Initiatives

    There are other activities that a Tester can be part of - these are related to Learning, Sharing, Mentoring and Contributing, to make the Testing space richer!




    The Pendulum Visualization!

    Another way to visualize these skills and capabilities is to think of this as a pendulum.
    • The arc created by the swinging pendulum contains all the items listed above
    • As your experience in the industry grows, you should get some level of experience in each of these aspects
    • Your passion and interests should determine what area(s) you want to get a deep-level of expertise in
    • So eventually you will end up with a T-model - the horizontal line representing the breadth of coverage across all aspects of how you can help build a 'good' quality product, and the vertical line representing the 'deep' level of mastery and expertise you have.

    Summary

    The sad part!

    Now that this is out of the bag - I want to highlight a few things that I have observed in the Software industry, and this is probably not limited to just the Tester role.
    • People are impatient and quickly want to move up the chain along the career path track. The motivation could be, but is not limited to, Money and Power (to control). I would ask these people to do some serious introspection to figure out if they are truly doing justice to their role and their peers. Are they able to be good role models for the people "reporting" to them?
    • Career Path is necessary, but overrated. Too much emphasis is given to the "role" and not enough to the activities required to make the team successful, and to build a product that will help the end-user. Focus on the big picture, and your career will grow automatically.
    • We try to copy what others in the industry have done in their careers. This is an easy way out. Instead, I recommend you understand their journey - the hard work your role models have put in to get to where they are. I encourage each of you to spend some time understanding this, and then work with your mentors and people you trust to see how you can create your own path. That way, your life, journey and destination will be unique!

    The Happy Ending!

    Testing is a lot of fun. I have been in the Testing field for quite some time. Here are some things (in no particular order) that have kept me interested in the field:
    • Understand the domain
    • Always keep learning (domains, technologies, tools, practices, soft-skills, ...)
    • Experiment - that is the best way to learn
    • Don’t be afraid of failure. Learn from the mistakes, and ensure you do whatever possible to avoid the same mistake again
    • Be curious
    • Ask “relevant” questions
    • Challenge status-quo
    • Do NOT assume
    • Understand risk of functionality not working - will help in proper prioritization
    • “Why” before “How?”
    • Know the end-users of the product-under-test. Will help test better
    • Testing is NOT only about finding / reporting defects.
    • Use metrics that help make meaningful decisions to make the product better
    • Be patient!
    • Help build a “ONE TEAM” culture!