Friday, June 9, 2017

Changing logcat buffer size in Android devices ... almost works

My (debug build of the) app under test logs extra information about test execution to the system logs, which are accessible via logcat on Android devices. This is very powerful: I can run my cucumber-jvm / Appium tests, copy the logcat output after test execution completes, parse it for relevant information, and make the appropriate assertions on it.

The default buffer size on the Android devices I have seen is 256 KB. This is too small for me - I end up losing the earlier information, and hence my assertions fail.

Thankfully, there is a programmatic way to change the logcat buffer size on the device before running tests. The command is:

adb logcat -G 3M
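
To confirm the change took effect, the lowercase -g flag prints the current ring buffer size:

adb logcat -g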

This adb command works on the Motorola devices in my MAD LAB, but does not work on Samsung devices. The error I see on running the above command is: "failed to set the log size"

Any idea why this does not work on Samsung devices? Or rather, what do I need to do to change their logcat buffer size?

[UPDATE] - Interestingly, this works on the Samsung Galaxy S7, but NOT on the Samsung J5 Prime or J7 Prime.

Tuesday, May 2, 2017

Criteria for setting up a Mobile Test Automation LAB

I recently got asked this question related to the MAD LAB (Mobile Automation Devices LAB) - "Would like to understand how can we setup something similar in our organisation?"

Since this question applies to everyone thinking of setting up, or who has already set up, their own lab, I thought I would share my answer here.

To set up your own LAB for Mobile Test Automation, multiple things need to align:


Supportive management who -
  • allow experiments (within reason, of course) and encourage learning through failure,
  • are willing to invest in infrastructure ($$)

Skilled and Passionate team members who -
  • understand the domain well,
  • are willing to learn, experiment, re-learn and fail fast,
  • keep looking for innovative solutions to the problems at hand,
  • do not reinvent the wheel.

Philosophy aside, our MAD LAB has the following: 
  • Mac Minis (8-12 devices per Mac Mini), 
  • Powered USB Hubs (I use the ones shown below - and they are working pretty well)

  • High-quality USB cables (I use the ones shown below - and they are working pretty well)
  • CI (Jenkins) set up correctly to keep running tests continuously, with proper reporting in place (else what's the use of running tests if you do not look at the results?)

You could start with a similar setup IF it fits your product-under-test context.

After I answered this on LinkedIn, I realised there are more parameters to think about than just the above.
  • Knowing which devices to use in your Lab
  • Having good, reliable Internet connection
  • Devices should be "seen" easily
  • Should be easy to work on / with the devices as and when required
  • Know how the devices will be placed in the lab. We tried the following:
    • Double-sided tape - that didn't work. Devices would stay up for a few days, then "drop" suddenly. Of course, that also depends on the back surface of the device.
    • We tried many mobile stands / hangers (shown below) - but each had its own limitations



    • Finally I found an industrial-strength velcro (1" velcro tape that can hold a couple of pounds of weight) - and my devices have not budged since. PS: Please be careful when putting this velcro on the devices. IF it gets on your hand, you will have a velcro tattoo for a long, long time.

What other parameters would you consider for setting up your own Lab? Looking forward to the comments below.


Friday, April 21, 2017

Introducing MAD LAB - for Mobile Automation

The past few months I have been heads-down stabilising my Real-Device Mobile Test Lab - which we now call MAD LAB (Mobile Automation Devices LAB).

For those who may not recollect, see my past posts for reference -

Along with my colleagues, we have put in a lot of effort setting up MAD LAB, and have now added a lot of rich features that make running tests, seeing the results and making sense of them easier.
  • All infrastructure management is now implemented in groovy (instead of gradle, as shared earlier).
  • Actual test implementation is done in cucumber-jvm / java

List of features currently implemented:
  • Device management (selection, cleanup, app install and uninstall)
  • Parallel test execution (at Cucumber scenario level) - maximising device utilisation
  • Appium server management
  • Adb utilities 
  • Managing periodic ADB server disconnects
  • Custom reporting using cucumber-reports
  • Video recording of each scenario and embedding in the custom reports

Contents of MAD LAB:
  • 1 Mac Mini - running various Jenkins Agents
  • 2 Powered USB hubs
  • 8 Android devices

Here are some pictures from the setup.

There are many more features, in various stages of implementation, being added to make MAD LAB more powerful.

Sneak peek into what's coming:
  • Analytics Testing
  • Trend and Failure Analysis 
  • iOS
  • Web
  • A transformed MAD LAB

Finding MAD LAB interesting? Some very interesting changes are coming soon. Watch out for my next blog post for those.

Want to contribute and be part of this journey? Even better! Reach out to me!

Friday, March 17, 2017

Patterns in Test Automation Framework at STPCon

I spoke about Patterns of a "good" Test Automation Framework at STPCon 2017. Here are the details from the talk.


Abstract

Building a Test Automation Framework is easy – there are so many resources / guides / blogs / etc. available to help you get started and to solve the issues you encounter along the journey.
However, building a “good” Test Automation Framework is not very easy. There are a lot of principles and practices you need to apply, in the right context, and a good set of skills is required to make the Test Automation Framework maintainable, scalable and reusable.
Design Patterns play a big role in helping achieve this goal of building a good and robust framework.
In this talk, we will discuss, and see examples of, the various types of patterns you can use for:
  • Building your Test Automation Framework
  • Test Data Management
Using these patterns, you will be able to build a good framework that keeps your tests running fast and reliably in your CI / CD setup!
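
As a tiny illustration of the kind of pattern involved (this example is mine, not necessarily one from the talk), a minimal Page Object in Java keeps locators and page interactions out of the tests themselves:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Minimal Page Object: tests talk to this class instead of
// using locators or the WebDriver API directly.
public class LoginPage {
    private final WebDriver driver;

    // Locators here are illustrative - the point is that they live in one place.
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By submit = By.id("login");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public void loginAs(String user, String pass) {
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(submit).click();
        // In a fuller framework this would return the next Page Object.
    }
}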

Session Takeaways:


  • Patterns for building Test Automation Framework.
  • Patterns for Test Data Management, with pros and cons of each.

Slides



Pictures




Thursday, March 16, 2017

Workshop - Client-Side Performance Testing at STPCon 2017

I conducted a 4-hour workshop on Client-Side Performance Testing at STPCon 2017 on 15th March 2017.


Workshop Abstract

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing.
Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and put that in the context of the product under test. We will see, using a case study, the impact of caching on performance - the good & the bad! We will then experiment with some tools like WebPageTest and PageSpeed to understand how to measure client-side performance.
Lastly – just understanding the performance of the product is not sufficient. We will look at how to automate this testing – using WebPageTest (private instance setup), and experiment with yslow – as a low-cost, programmatic alternative to WebPageTest.
We will also look at the different dimensions of Client-side Performance Testing for native mobile applications.
PS: This workshop will be a combination of presentation and hands-on-activity with lots of discussion throughout. You should bring your laptop with you.

Workshop Takeaways:

  • Understand the difference between Performance Testing and Performance Engineering.
  • Hands-on experience with some open-source tools to monitor, measure and automate Client-side Performance Testing.
  • Examples / code walk-through of some ways to automate Client-side Performance Testing.

Slides


Client-Side Performance Testing from Anand Bagmar

Some pictures from the workshop 

(Thanks to Mike Lyles and Curtis Stuehrenberg for the pictures)

Monday, March 6, 2017

Analytics Testing

I recently spoke at an Agile Testing conference on - The What, Why and How of (Web) Analytics Testing.

I was also part of a panel discussion with the theme - "What's not changed since moving to Agile Testing - The Legacy Continues!" There were some very interesting perspectives in this discussion.

The great part was that the audience was very involved and vocal throughout the day. This made it very interactive, with good sharing of information and experiences for all!

Below is some information about the talk. I will try to add the link to the video soon.
 

Abstract

Analytics is changing the way products and services are being created and consumed.

In this session, we will learn

  • What is Analytics?
  • Why is it important to use Analytics in your product?
  • The impact of Analytics not working as expected

We will also see some techniques to test Analytics manually, and to automate that validation. But just knowing about Analytics is not sufficient for businesses now.

There are new kids in town - IoT and Big Data - two of the most used and most heard-of buzzwords in the Software Industry!

With IoT, and a creative mindset looking for opportunities and ways to add value, the possibilities are infinite. With each such opportunity, there is a huge volume of data being generated - which, if analyzed and used correctly, can feed into creating more opportunities and increased value propositions.

There are 2 types of analysis that one needs to think about.

1. How is the end-user interacting with the product? This gives some understanding of how to re-position and focus on the true value-add features of the product.

2. With the huge volume of data being generated by end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns from what has happened, and find new product / value opportunities based on those usage patterns.


Slides


The What, Why and How of (Web) Analytics Testing (Web, IoT, Big Data) from Anand Bagmar

Pictures

Monday, February 20, 2017

Sharing implementation of cucumber-jvm - Appium test framework

I recently shared the features of my Android Test Automation Framework, and also the challenges - and how we overcame them - to make parallel test execution work well with Android 7.0 devices as well.

In this blog post, I will share the details (including code) of the implementation. If you have not read my post on -
Features of my Android Test Automation Framework - I highly recommend you read that first.



Implementation Details

Tech Stack Summary

To recap - here is the tech stack that we currently have:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

1. Configure Jenkins Node (in Jenkins Server)

We currently have 5 Jenkins Nodes set up as shown below.

Each node is configured like this:

2. Setup Jenkins Job (in Jenkins Server)

Once the Nodes are set up, we can configure the Jenkins Jobs.

We have set up the following jobs in Jenkins for our test executions.

Each job is configured as a Jenkins Pipeline project, and we use the sample Jenkinsfile available here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/2%20-%20Setup%20Jenkins%20Jobs/2.2%20Jenkinsfile) from git to configure what the job is supposed to do.


3. Setup Jenkins Agent

Once the Jenkins Nodes and Jenkins Jobs are configured, we need to get the Jenkins Agents themselves set up and configured to service requests from the Jenkins server.

We use the JNLP way to connect each Jenkins Agent to the Jenkins server. For this, we have a template .sh script in which we need to update 2 values. This is needed for each new Jenkins Node that we connect.

The template .sh script can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.1%20start-e2e-moto.sh).
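
The actual command lives in that template; as a rough illustration only (the Jenkins URL, node name and secret below are placeholders, not our real values), a JNLP agent launch of that Jenkins era typically looks like this:

java -jar slave.jar \
  -jnlpUrl http://<jenkins-server>/computer/<node-name>/slave-agent.jnlp \
  -secret <node-secret>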

Now, our Jenkins setup is done. But a big piece is still missing. 

In order to run our tests on the Agent, we need some basic software installed. To do this, we created a shell script that provisions the machine. This needs to be done just once per machine - but since we plan to have multiple Mac Mini host machines running various numbers of Jenkins Agents, the script helps keep the same software (including versions) on all machines - which means the same test execution environment everywhere.

This shell script can be found here - (https://github.com/anandbagmar/cucumber-jvm-appium-infra/blob/master/3%20-%20Setup%20Jenkins%20Agent/3.2%20JenkinsAgentMachineFirstTimeSetup.sh)


4. Manage Test Infrastructure & Test Execution

By this stage, our Jenkins Server and Jenkins Agent setup is done, including the software required to run the tests. The next piece is at the Test Framework level.

Our build tool is gradle. All infrastructure-related work is handled via this build.gradle file.

Before we get into the details of the gradle file, it is important to understand the code structure.

Via groovy / gradle, we managed to solve the complexity of:

  • Finding matching devices based on the CONNECTED_DEVICE_IDS (a rough sketch of this idea is shown after this list)
  • Downloading the apk file from where ever it is available
    • This is done just once per test run - regardless of how many devices the test is going to run on
    • The URL to download is passed as an environment variable - APP_URL
    • For local testing, you can give a local absolute path to the apk file via the APP_PATH environment variable instead of specifying APP_URL
  • Finding the list of scenarios to be run (based on the cucumber tags specified via the environment variable - 'run')
  • Managing start / stop of Appium Servers
  • Cleaning up the device before test runs
  • Executing Cucumber scenarios in parallel
  • Building consolidated reports locally (cucumber-reports) - IF not using the Jenkins cucumber-reports plugin
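
The actual implementation of the above lives in the groovy / gradle files in the repo; purely to illustrate the device-matching idea (written in Java rather than groovy, with a hypothetical class name), it boils down to intersecting CONNECTED_DEVICE_IDS with the output of adb devices:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class ConnectedDeviceFilter {

    // Returns the subset of CONNECTED_DEVICE_IDS that "adb devices"
    // currently reports as attached to this machine.
    public static List<String> matchingDevices() throws Exception {
        List<String> allowed = new ArrayList<>();
        for (String id : System.getenv("CONNECTED_DEVICE_IDS").split(",")) {
            allowed.add(id.trim());
        }

        Process adb = new ProcessBuilder("adb", "devices").start();
        List<String> matched = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(adb.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Lines for ready devices look like "<serial>\tdevice"
                if (line.contains("\t") && line.trim().endsWith("device")) {
                    String serial = line.split("\t")[0].trim();
                    if (allowed.contains(serial)) {
                        matched.add(serial);
                    }
                }
            }
        }
        adb.waitFor();
        return matched;
    }
}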



5. Run Tests

The last step in this process is to manage the AndroidDriver. We use cucumber-jvm's @Before and @After hooks to set the right capabilities for instantiating the AndroidDriver, and to stop it after test execution is complete. A simplified sketch of these hooks is shown after the link below.

These helper files can be found here (https://github.com/anandbagmar/cucumber-jvm-appium-infra/tree/master/5%20-%20Run%20Tests).
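
The full hooks live in that repo; as a simplified sketch only (the class name and the environment-variable names here are illustrative, not the actual ones), the @Before / @After hooks boil down to something like this:

import cucumber.api.java.After;
import cucumber.api.java.Before;
import io.appium.java_client.MobileElement;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

import java.net.URL;

public class DriverHooks {

    private static AndroidDriver<MobileElement> driver;

    @Before
    public void createDriver() throws Exception {
        // DEVICE_ID, APPIUM_PORT and APK_PATH are illustrative names - the real
        // values come from the environment variables / properties described above.
        String deviceId = System.getenv("DEVICE_ID");
        String appiumPort = System.getenv("APPIUM_PORT");

        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("udid", deviceId);
        capabilities.setCapability("deviceName", deviceId);
        capabilities.setCapability("automationName", "UIAutomator2");
        capabilities.setCapability("app", System.getenv("APK_PATH"));
        // Needed for parallel runs on Android 7 devices (see the Feb 16, 2017 post).
        capabilities.setCapability("systemPort", Integer.parseInt(appiumPort));

        driver = new AndroidDriver<>(
                new URL("http://127.0.0.1:" + appiumPort + "/wd/hub"),
                capabilities);
    }

    @After
    public void quitDriver() {
        if (driver != null) {
            driver.quit();
        }
    }
}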


Sample Code

All the sample code can be found from my github repository cucumber-jvm-appium-infra - https://github.com/anandbagmar/cucumber-jvm-appium-infra


Happy Testing!



Thursday, February 16, 2017

Finding my way out of a bottomless pit with Appium & Android 7.0 for parallel test runs

As mentioned in my earlier post - I designed and implemented a cucumber-jvm-Appium-based test framework to run automated tests against Android Mobile Devices.

We were using:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.3.0
  • appium - v1.6.3
  • appium-java-client - v4.1.2

All was good - tests were running via CI, in parallel (at scenario level), against devices with Android v5.x and v6.x.

Then the challenges started. We got some new Motorola G4 Plus devices for our Test Lab - which have Android 7.0 installed.

First, the tests refused to run. We figured out that we would probably need to upgrade the appium java-client library to v5.0.0-BETA1. By the time we figured that out, appium-java-client v5.0.0-BETA2 was out. We also needed to change the instrumentation to UiAutomator2. This was all fine - our tests started working (after some more changes in how locators were defined and used).

However, the tests refused to run in parallel on the Motorola devices with Android 7. The app would launch correctly, but tests ran as expected on only 1 of the devices - causing our test job to fail miserably, and without any clue why.

These same tests continued to work correctly on all other devices with Android 5.x and 6.x. Very confusing indeed, not to mention highly frustrating!

By this time, appium-java-client v5.0.0-BETA3 was out, but we chose not to upgrade to it, as the differences were iOS-specific. Likewise, Appium v1.6.4 BETA is now available - but I am in no rush to upgrade and battle new surprises, if any.

After digging through Appium's open issues, I found that many people had faced a similar issue and got it resolved. The solution seemed to be to upgrade appium-uiautomator2-driver to a version > v0.2.6.

So - the next question, which had an easier answer - was how to upgrade this uiautomator2 driver. However, even after the upgrade, my issue was not fixed. In fact, now the AndroidDriver could not be instantiated at all. I was getting the errors shown below.

    1. [MJSONWP] Encountered internal error running command: Error: Command '/Users/IT000559/Library/Android-SDK/build-tools/25.0.2/aapt dump badging /usr/local/lib/node_modules/appium/node_modules/appium-uiautomator2-driver/uiautomator2/appium-uiautomator2-server-v0.1.1.apk' exited with code 1 at ChildProcess. (../../lib/teen_process.js:70:19) at emitTwo (events.js:106:13) at ChildProcess.emit (events.js:192:7) at maybeClose (internal/child_process.js:890:16) at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

    2. org.openqa.selenium.SessionNotCreatedException: Unable to create new remote session. desired capabilities = Capabilities [{appPackage=com.vuclip.viu, noReset=false, appWaitActivity=com.vuclip.viu.ui.screens.IndianProgrammingPreferenceActivity, deviceName=motorola, fullReset=false, appWaitDuration=60000, appActivity=com.vuclip.viu.ui.screens.MainActivity, newCommandTimeout=600, platformVersion=7.0, automationName=UIAutomator2, platformName=Android, udid=ZY223V2H8R, systemPort=6658}], required capabilities = Capabilities [{}] Build info: version: '3.0.1', revision: '1969d75', time: '2016-10-18 09:49:13 -0700'

Eventually, I found a workaround. I had to make the following 2 changes (the uninstall commands are sketched below):
  • When initialising the AndroidDriver, pass an additional capability - "systemPort" - set to the port of the Appium server the test connects to:
    • capabilities.setCapability("systemPort", Integer.parseInt(APPIUM_PORT));
  • Before the test run starts, do a cleanup, which includes:
    • killing any prior / orphan Appium server still running on that particular port,
    • uninstalling the app from the device. I had to add another step to also uninstall the following:
      • io.appium.uiautomator2.server, and
      • io.appium.uiautomator2.server.test
Post this, my tests are now working as expected (as they should have from the beginning), sequentially or in parallel, against all supported Android versions.
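
For reference, the uninstall portion of the cleanup is plain adb (the orphan Appium server kill is port-specific and not shown here; <device-id> is a placeholder, and com.vuclip.viu is our app's package as seen in the error above):

adb -s <device-id> uninstall com.vuclip.viu
adb -s <device-id> uninstall io.appium.uiautomator2.server
adb -s <device-id> uninstall io.appium.uiautomator2.server.test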

Current stack:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

After the dust settled, my colleague Priyank Shah and I were wondering why not many people have encountered this problem.

My thought is that most people probably manage the Appium Server and the AndroidDriver from within the test run, instead of from a build script. As a result, they would not have encountered the systemPort-related challenge that we did.

PS: Note that the Appium Server is started / stopped via our build.gradle file, and the AndroidDriver is instantiated (based on parameters passed via a combination of environment variables & a properties file) from within each cucumber-jvm scenario (@Before hook).

Hope our learning helps others who may encounter similar issues.



    How to upgrade the appium-uiautomator2-driver version for appium 1.6.3?

    I am using appium v1.6.3 - which comes with appium-uiautomator2-driver@0.2.3 and appium-uiautomator2-server@0.0.8.

    I need to upgrade to the newer appium-uiautomator2-driver@0.2.9 (which has a fix for an issue I am seeing - https://github.com/appium/appium/issues/7527).

    Any idea how I can upgrade the uiautomator2 driver (while using the same appium@1.6.3)?




    Tuesday, February 14, 2017

    Features of my Android Test Automation Framework

    [UPDATED - Added link to implementation details at the end of the post]

    As I have shared in my previous few blog posts (A new beginning - entertainment on mobile, How to enable seamless running of appium tests on developer machines?), a few months ago, I embarked on a new journey as "Director - Quality" for the Viu product at Vuclip.


    Here are a few details about our Viu app:
    Viu offers high-quality, popular, regional video content in various languages, for consumers in different regions - Indonesia, Malaysia, India, the Middle East, Egypt ....
    Consumers on the move could be using Android or iOS devices to watch their favourite content - either streaming it directly over their mobile data plan, or downloading their preferred videos at home / office and watching them later.
    The very interesting, and challenging, bit from a Testing perspective is that the app behaves differently based on which part of the world it is being used from. This logic is not based on Geolocation.

    Objective

    My objective for functional test automation is:
    • Get quick feedback from every build and know if the app is working well. If NOT, then be able to get to the root cause ASAP.
    • Provide feedback and confidence to stakeholders (Product team, Business team, etc.) on the "true" state of the product. There should be no surprises, for anyone!
    • Run tests seamlessly by simulating different regions

    Approach

    To achieve this, I chose to go with the following tech-stack:
    • Test Intent specification - cucumber-jvm - to specify the business intent, in the business terminology (expected to be) understood by all.
    • Reports - cucumber-reports provides rich, easily understandable reports which give meaningful information about the state of the product.
    • Device interaction - Appium / Java - implement the intent specified by cucumber-jvm. This also gives us the flexibility to test against Android, iOS and Web.
    • Continuous Integration (CI) server - Jenkins - to be able to run tests automatically when a new build is ready. Also, we integrated cucumber-reports via the cucumber-reports plugin directly in Jenkins - so all stakeholders interested in the reports just need to go to one place to get all the information, in real time. No more Test Reports generated manually and sent around to everyone.

    Test Execution environment

    I looked at various cloud-based service providers (SauceLabs, TestObject, pCloudy, AWS Device Farm, etc.) who could run our tests on real devices in their data centers. These tests would need to be triggered automatically via Jenkins when a new build (apk) is available, or as we add more tests. However, none of these providers could satisfy my requirements (without having to compromise significantly on my objectives).

    Hence, finally decided to setup my own private Test Lab for this. Oh fun!

    Framework Architecture

    Below is the architecture of the framework that I came up with. This is based on the architecture I posted in my earlier blog post on “Assertion & Validations in Page Object” (https://essenceoftesting.blogspot.com/2012/01/assertions-and-validations-in-page.html)
    ViuTestFrameworkArchitecture.png

    Approach, Features & Capabilities of the Test Lab

    • For Jenkins configuration, we configure the jobs using a Jenkinsfile - to ensure our Jenkins configuration is also under version control. Also, since this is the groovy scripting language, we can have good logic and processing done as part of the job execution.
    • This helped us keep the configuration consistent and common across any type of job we had to run.
    • The Jenkins job will trigger a set of commands on the Jenkins Agent.
    • Jenkins Agents are set up on Mac Mini machines. Each Agent has only 1 executor. This is essential to prevent using a device that is already in use by another job / executor / agent.
    • The Mac Mini (more to be added on demand) has a powered USB hub which connects up to 8 devices.
    • Depending on the types of devices connected, we have a corresponding Jenkins Agent (with only 1 executor) allocated for them.
    Example:
      • Samsung devices (with API Level 22) allocated to Jenkins agent - viu-e2e-samsung
      • Motorola devices allocated to Jenkins agent - viu-e2e-moto

    Infrastructure

    • To manage and use the devices allocated to each node - and, more importantly, to prevent other nodes from using those devices - each node in Jenkins has an environment variable in its configuration - CONNECTED_DEVICE_IDS - a comma-separated list of the devices allocated to that node.
      • This approach makes it easy to change / add / remove devices on the fly. All we need to do in such cases is update the device IDs in the appropriate node’s CONNECTED_DEVICE_IDS environment variable.
      • Our gradle file reads the CONNECTED_DEVICE_IDS environment variable and finds the devices connected to the Mac Mini that match the provided device IDs. This simple technique allows specific devices to be allocated from the pool of devices to each node.
      • PS: If anyone makes a mistake and provides the same device ID for multiple nodes, we all know what will happen. These are areas where we need to be very careful in our execution and maintenance.
    • The URL for the Android APK file is also passed to the gradle file as an environment variable. We download the APK file once before test execution starts.
    • Functional (end-2-end) automation is painfully slow to run. To get feedback quickly, we have to run it in parallel. All existing approaches to running cucumber-jvm tests in parallel failed to meet our requirement - I wanted to run each scenario in parallel, on whichever device is free (from the matching devices list). Eventually I ended up writing a small script that allows me to run scenarios in parallel.
    • Managing the Appium Server (start / stop) is another important activity - which needs to be done just once per device. Gradle manages that as well (an illustrative sketch of this idea in code is shown below).
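
    In our setup, gradle owns the Appium server lifecycle; purely as an illustration of the same idea in code (and not what our gradle file actually does), appium-java-client's AppiumDriverLocalService can start / stop one Appium server per device on a dedicated port:

    import io.appium.java_client.service.local.AppiumDriverLocalService;
    import io.appium.java_client.service.local.AppiumServiceBuilder;

    public class AppiumServerPerDevice {
        public static void main(String[] args) {
            // One Appium server per device, each on its own port (4723 here is arbitrary).
            AppiumDriverLocalService service = AppiumDriverLocalService.buildService(
                    new AppiumServiceBuilder().usingPort(4723));
            service.start();
            System.out.println("Appium server running at " + service.getUrl());

            // ... point the AndroidDriver for this device at service.getUrl() ...

            service.stop();
        }
    }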

    Next steps

    • Stabilize parallel test execution (problems with Android 7)
    • Start with iOS app automation
    • Start with Web automation (www.viu.com)
    • Share the gradle file with the community, if anyone is interested.
    • [UPDATED] You can find the details of the implementation here