
Thursday, February 16, 2017

Finding my way out of a bottomless pit with Appium & Android 7.0 for parallel test runs

As mentioned in my earlier post, I designed and implemented a cucumber-jvm + Appium based test framework to run automated tests against Android mobile devices.

We were using:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.3.0
  • appium - v1.6.3
  • appium-java-client - v4.1.2

All was good - tests were running via CI, in parallel (based on scenarios), against devices with Android v5.x and v6.x.

Then the challenges started. We got some new Motorola G4 Plus devices for our Test Lab - which have Android 7.0 installed.

First, the tests refused to run. We figured out that we would probably need to upgrade the appium java-client library to v5.0.0-BETA1. By the time we figured that out, appium-java-client v5.0.0-BETA2 was out. We also needed to change the instrumentation to UiAutomator2. This was all fine. Our tests started working (after some more changes in how locators were defined and used).

However, the tests refused to run in parallel on the Motorola devices with Android 7. The app would launch correctly, but the tests would run as expected on only 1 of the devices - causing our test job to fail miserably, and without any clue as to why.

These same tests continued to work correctly on all other devices with Android 5.x and 6.x. Very confusing indeed, not to mention highly frustrating!

By this time, appium-java-client v5.0.0-BETA3 was out, but we declined to upgrade to that, as the differences were iOS-specific. Likewise, Appium v1.6.4 BETA is now available, but we did not feel like upgrading so fast and battling new surprises, if any.

After digging through Appium's open issues, we figured out that many people had faced a similar issue and got it resolved. The solution seemed to be to upgrade the appium-uiautomator2-driver to a version > v0.2.6.

So, the next question - which had an easier answer - was how to upgrade this uiautomator2-driver. However, after the upgrade, my issue was not fixed. In fact, now the AndroidDriver could not be instantiated at all. I was getting the errors shown below.

1. [MJSONWP] Encountered internal error running command: Error: Command '/Users/IT000559/Library/Android-SDK/build-tools/25.0.2/aapt dump badging /usr/local/lib/node_modules/appium/node_modules/appium-uiautomator2-driver/uiautomator2/appium-uiautomator2-server-v0.1.1.apk' exited with code 1
   at ChildProcess.<anonymous> (../../lib/teen_process.js:70:19)
   at emitTwo (events.js:106:13)
   at ChildProcess.emit (events.js:192:7)
   at maybeClose (internal/child_process.js:890:16)
   at Process.ChildProcess._handle.onexit (internal/child_process.js:226:5)

    2. org.openqa.selenium.SessionNotCreatedException: Unable to create new remote session. desired capabilities = Capabilities [{appPackage=com.vuclip.viu, noReset=false, appWaitActivity=com.vuclip.viu.ui.screens.IndianProgrammingPreferenceActivity, deviceName=motorola, fullReset=false, appWaitDuration=60000, appActivity=com.vuclip.viu.ui.screens.MainActivity, newCommandTimeout=600, platformVersion=7.0, automationName=UIAutomator2, platformName=Android, udid=ZY223V2H8R, systemPort=6658}], required capabilities = Capabilities [{}] Build info: version: '3.0.1', revision: '1969d75', time: '2016-10-18 09:49:13 -0700'

Eventually, I found a workaround. I had to make the following 2 changes (see the sketch after this list):
• When initialising the AndroidDriver, pass an additional capability - "systemPort" - set to the port of the Appium server the test connects to:
  • capabilities.setCapability("systemPort", Integer.parseInt(APPIUM_PORT));
• Before the test run starts, do a cleanup, which includes:
  • killing any prior / orphan Appium server left running on that particular port
  • uninstalling the app from the device. I had to add another step to also uninstall the following packages:
    • io.appium.uiautomator2.server, and
    • io.appium.uiautomator2.server.test
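To make the first change concrete, here is a minimal sketch of how such a driver factory can look, assuming appium-java-client 5.x. Apart from the capability keys, everything here (class, method and parameter names) is illustrative, not our framework's actual code:

```java
import java.net.URL;

import org.openqa.selenium.remote.DesiredCapabilities;

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.AndroidElement;

public class DriverFactory {

    // appiumUrl: e.g. "http://127.0.0.1:4723/wd/hub"; appiumPort: e.g. "4723"
    public static AndroidDriver<AndroidElement> createDriver(
            String udid, String appiumUrl, String appiumPort) throws Exception {
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability("platformName", "Android");
        capabilities.setCapability("automationName", "UIAutomator2");
        capabilities.setCapability("udid", udid);
        // The crucial part: give each device's session its own systemPort so
        // parallel UiAutomator2 sessions do not clash with each other.
        capabilities.setCapability("systemPort", Integer.parseInt(appiumPort));
        return new AndroidDriver<>(new URL(appiumUrl), capabilities);
    }
}
```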
Post this, my tests are now working as expected (as they should have from the beginning), sequentially or in parallel, against all supported Android versions.

Current stack:
  • cucumber-jvm - v1.2.5
  • cucumber-reporting - v3.5.1
  • appium - v1.6.3
  • appium-java-client - v5.0.0-BETA2
  • appium-uiautomator2-driver - v0.2.3

After the dust settled, my colleague Priyank Shah and I wondered why not many people have encountered this problem.

My guess is that most people manage the Appium Server and the AndroidDriver from the test run itself, instead of from a build script. As a result, they would not have encountered the systemPort-related challenge the way we did.

PS: Note that the Appium Server is started / stopped via our build.gradle file, and the AndroidDriver is instantiated (based on parameters passed via a combination of environment variables & a properties file) from within each cucumber-jvm scenario (@Before hook), as sketched below.
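A minimal sketch of such a hook, assuming cucumber-jvm 1.x and the illustrative DriverFactory from the earlier sketch (the environment-variable names are hypothetical):

```java
import cucumber.api.Scenario;
import cucumber.api.java.After;
import cucumber.api.java.Before;

import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.AndroidElement;

public class Hooks {

    private AndroidDriver<AndroidElement> driver;

    @Before
    public void createDriver(Scenario scenario) throws Exception {
        // build.gradle starts one Appium server per device and exports these
        String port = System.getenv("APPIUM_PORT");
        String udid = System.getenv("DEVICE_UDID");
        driver = DriverFactory.createDriver(
                udid, "http://127.0.0.1:" + port + "/wd/hub", port);
    }

    @After
    public void quitDriver(Scenario scenario) {
        if (driver != null) {
            driver.quit();
        }
    }
}
```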

    Hope our learning helps others who may encounter similar issues.



    How to upgrade the appium-uiautomator2-driver version for appium 1.6.3?

I am using appium v1.6.3 - which comes with appium-uiautomator2-driver@0.2.3 and appium-uiautomator2-server@0.0.8.

I need to upgrade to the newer appium-uiautomator2-driver@0.2.9 (which has a fix for an issue I am seeing - https://github.com/appium/appium/issues/7527).

    Any idea how I can upgrade the uiautomator2 driver (while using the same appium@1.6.3)?




    Wednesday, July 6, 2016

    Any browsermob-proxy users facing issues with some requests not getting fired?

    Is there anyone using browsermob-proxy who is having issues with some requests not getting fired?

    I have integrated browsermob-proxy with my protractor tests. This works wonderfully when I run my tests from Mac (against local environment, or any other test environment).
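For context, the proxy sits between the browser and the server and records every request into a HAR file. Our tests drive the proxy from protractor, but the same capture flow is perhaps easiest to show with browsermob-proxy's own Java API - a minimal sketch, assuming browsermob-core 2.x and Selenium on the classpath (URL and HAR names are illustrative):

```java
import java.io.File;

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;
import net.lightbody.bmp.core.har.Har;
import org.openqa.selenium.Proxy;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.remote.CapabilityType;
import org.openqa.selenium.remote.DesiredCapabilities;

public class HarCaptureExample {
    public static void main(String[] args) throws Exception {
        // Start the proxy on any free port and wire it into the browser.
        BrowserMobProxy proxy = new BrowserMobProxyServer();
        proxy.start(0);
        Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);
        DesiredCapabilities capabilities = new DesiredCapabilities();
        capabilities.setCapability(CapabilityType.PROXY, seleniumProxy);
        WebDriver driver = new FirefoxDriver(capabilities);

        // Record all traffic for this page into a named HAR.
        proxy.newHar("user-action-under-test");
        driver.get("https://example.com");
        // ... perform the UI action that fires the batched requests ...

        // Dump the HAR to inspect which requests actually got fired.
        Har har = proxy.getHar();
        har.writeTo(new File("captured.har"));

        driver.quit();
        proxy.stop();
    }
}
```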

    However, when I run my tests from CI (agent is SUSE Enterprise 11.4) - my tests fail. 

    I narrowed down the problem to the following scenario:

On some specific user actions in the UI, a lot (>100) of requests are fired from the browser in parallel (in batches). There are a couple of scenarios like this in my application - and the test fails in all these cases.

    Here is a screenshot of what the captured HAR file shows -



The same test works when I run it locally from my Mac.

    Any idea how to fix this? Thank you in advance!

    See this issue for more details - (https://github.com/lightbody/browsermob-proxy/issues/492)


    Tuesday, July 5, 2016

    Any WAAT (Web Analytics Automation Testing Framework) users out there?

It has been over 2 years since any update to WAAT - Java or Ruby. Over the years, I have realised a lot myself, and also received a lot of thoughts / feedback from users of WAAT, around where it helps and what challenges exist.

Also, given the widespread IoT & Big Data based work going on around the world, (Web) Analytics now plays a much bigger role in helping businesses make better decisions.

WAAT (again) fits into the grand scheme of things very nicely, as a framework to automate validation of the correct reporting of tags to any Web Analytics solution provider.

Hence, it's a no-brainer for me - it is high time I work on some of the feedback and limitations of WAAT to make it usable again!

At the recently concluded Selenium Conference 2016 held in Bangalore, India, I got an idea of how to overcome a lot of the challenges and pain points in using WAAT (listed below).


    What's next?

    To implement my new idea, this does mean a couple of things:

• The existing plugins have limited use - and need to be deleted.
• A new plugin would need to be created - which may mean a different set of APIs, and also a different way to specify the test data.

    Questions for you

    Before I go ahead making these changes - I would like to get answers to the below questions (please add your answers directly in the comments):
    • Is anyone currently using WAAT? If yes - 
      • which version (Java / Ruby)?
  • which plugin?
      • Using HTTP / HTTPS?
      • Which Web Analytic solution are you using? (ex: Google Analytics, WebTrends, etc?)
    • Would you be interested in using the new WAAT? If yes - 
      • Which language? Java / Ruby / JavaScript / Python / etc?
    • Would you like to contribute to implementing this new WAAT? If yes - contact me! :)
    -----------------------------------------------------------------------------------------------------------------

    Current plugins available in WAAT:

    • Omniture Debugger (WAAT-Java)
      • Pros:
        • OS independent
        • Run using regular-test-user 
      • Cons:
        • Browser dependent - need to implement ScriptRunner for the UI-driver in use
    • Web-Analytic solution dependent - only for Adobe Marketing Cloud / Omniture SiteCatalyst
    • HTTPSniffer (WAAT-Java, WAAT-Ruby)
      • Pros
        • Web-Analytic solution independent
        • Browser independent
        • UI-Driver independent
      • Cons
        • 3rd party libraries are OS dependent
    • HTTPS is not supported out-of-the-box
    • Tests must run as "root"
    • JSSniffer (WAAT-Java, WAAT-Ruby)
      • Pros
        • Web-Analytic solution independent
        • Browser independent
    • HTTPS supported out-of-the-box
        • No 3rd party library dependency
      • Cons
        • Need to write JavaScript to get the URL from the browser context
        • UI-Driver dependent
    -----------------------------------------------------------------------------------------------------------------

    Sunday, November 29, 2015

    Patterns in Test Automation - Framework, Data, Locators at Agile Noida

On 28th November 2015, I spoke at Agile Noida on "Patterns in Test Automation - Framework, Data, Locators".

I had spoken on the same topic (Patterns in Test Automation) at vodQA Hyderabad - but that was a Testing conference, and I knew the attendees were Testing and Test Automation focused. Here, I was skeptical about how this topic would be received - given that the conference was focused on Agile, while this topic was core Testing related and very technical, with me showing various Java / Ruby code samples.

My skepticism was, thankfully, unfounded. A good number of attendees came to the talk, and based on conversations afterwards, I realized that I was able to get the message across.

Below are the abstract, slides and video from the talk.

    Abstract

Building a Test Automation Framework is easy - there are so many resources / guides / blogs / etc. available to help you get started and help solve the issues you encounter along the journey.
However, building a "good" Test Automation Framework is not very easy. There are a lot of principles and practices you need to use, in the right context, and a good set of skills is required to make the Test Automation Framework maintainable, scalable and reusable.
Design Patterns play a big role in helping achieve this goal of building a good and robust framework.
In this talk, we will talk about, and see examples of, various types of patterns you can use for:
1. Building your Test Automation Framework
2. Test Data Management
3. Locators / IDs (for finding / interacting with elements in the browser / app)
Using these patterns, you will be able to build a good framework that will help keep your tests running fast and reliably in your CI / CD setup!

    Slides


    Video


    Saturday, August 22, 2015

    Patterns in Test Automation

I spoke at vodQA Hyderabad on Sat, 22nd August 2015 about Patterns in Test Automation - Frameworks, Data & Locators.

    The slides are available on SlideShare:


    The video is available on YouTube:



    Abstract

Building a Test Automation Framework is easy - there are so many resources / guides / blogs / etc. available to help you get started and help solve the issues you encounter along the journey.

However, building a "good" Test Automation Framework is not very easy. There are a lot of principles and practices you need to use, in the right context, and a good set of skills is required to make the Test Automation Framework maintainable, scalable and reusable.

Design Patterns play a big role in helping achieve this goal of building a good and robust framework.

In this talk, we will talk about, and see examples of, various types of patterns you can use for:

• Building your Test Automation Framework
• Test Data Management
• Locators / IDs (for finding / interacting with elements in the browser / app)

Using these patterns, you will be able to build a good framework that will help keep your tests running fast and reliably in your CI / CD setup!

    Learning outcome


    • Patterns for building Test Automation Framework
    • Patterns for Test Data Management, with pros and cons of each
    • Patterns for managing locators / IDs for interaction with UI



    Sunday, September 7, 2014

    Perils of Page-Object Pattern

    I spoke at Selenium Conference (SeConf 2014) in Bangalore on 5th September, 2014 on "The Perils of Page-Object Pattern".

The Page-Object pattern is very commonly used when implementing Automation frameworks. However, as the scale of the framework grows, there is a limit on how much re-usability really happens. It inherently becomes very difficult to separate the test intent from the business domain.

I want to talk about this problem, and the solution I have been using - the Business Layer over Page-Objects pattern - which has helped me keep my code DRY.
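To make the idea concrete, here is a minimal sketch of a Business Layer sitting on top of Page-Objects. This is not the code from the talk; all class names, locators and flows here are illustrative:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Plain page-objects: they know about locators and page mechanics, nothing else.
class LoginPage {
    private final WebDriver driver;
    LoginPage(WebDriver driver) { this.driver = driver; }

    HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
        return new HomePage(driver);
    }
}

class HomePage {
    private final WebDriver driver;
    HomePage(WebDriver driver) { this.driver = driver; }

    boolean hasWelcomeBanner() {
        return !driver.findElements(By.cssSelector(".welcome")).isEmpty();
    }
}

// The Business Layer expresses domain intent. Tests talk ONLY to this layer,
// so UI flow changes stay localized here and the test code stays DRY.
class AuthenticationBusinessLayer {
    private final WebDriver driver;
    AuthenticationBusinessLayer(WebDriver driver) { this.driver = driver; }

    HomePage loginAsRegisteredBuyer() {
        // If the login flow changes (e.g. an extra OTP page appears), only
        // this method changes - not every test that needs a logged-in buyer.
        return new LoginPage(driver).loginAs("buyer@example.com", "secret");
    }
}
```

A test then reads as business intent - new AuthenticationBusinessLayer(driver).loginAsRegisteredBuyer() - rather than as a sequence of page manipulations.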

    The slides from the talk are available here. The video is available here

    Video taken by professional:


    Video taken from my laptop:


    Slides:




    If you want to see other slides and videos from SeConf, see the SeConf schedule page.


    Thursday, April 10, 2014

    Sample test automation framework using cucumber-jvm

I wanted to learn and experiment with cucumber-jvm. My approach was to think of a real **complex scenario that needs to be automated, and then build a cucumber-jvm based framework to achieve the following goals:
• Learn how cucumber-jvm works
• Create a bare-bones framework, with all the basic requirements, that can be reused
Since I already knew the basics and fundamentals of building scalable and maintainable Test Automation frameworks, it was really easy to apply my past learning and experience to learn cucumber-jvm and build a framework from scratch.

    So, without further ado, I introduce to you the cucumber-jvm-sample Test Automation Framework, hosted on github. 

The following functionality is implemented in this framework:

    • Tests specified using cucumber-jvm
    • Build tool: Gradle
    • Programming language: Groovy (for Gradle) and Java
• Test Data Management: Samples to use data specified in feature files, AND to use data from separate json files
    • Browser automation: Using WebDriver for browser interaction
    • Web Service automation: Using cxf library to generate client code from web service WSDL files, and invoke methods on the same
    • Take screenshots on demand and save on disk
    • Integrated cucumber-reports to get 'pretty' and 'meaningful' reports from test execution
    • Using apache logger for storing test logs in files (and also report to console)
• Using aspectJ to do byte-code injection to automatically log the test trace to file. It also creates a separate benchmarks file to track the time taken by each method (see the sketch after this list). This information can be analyzed separately in other tools like Excel to identify patterns of test execution.
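For a flavor of how such a benchmarking aspect can look, here is a minimal sketch (not the repo's actual aspect; the pointcut package is illustrative):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class BenchmarkAspect {

    // Wrap every method in the (illustrative) step-definition package.
    @Around("execution(* com.example.steps..*(..))")
    public Object logTiming(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return joinPoint.proceed();
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            // In the framework this would go to a benchmarks log file instead.
            System.out.println(joinPoint.getSignature() + " took " + elapsed + " ms");
        }
    }
}
```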

    Feel free to fork and use this framework on your projects. If there are any other features you think are important to have in a Test Automation Framework, let me know. Even better would be to submit pull requests with those changes, which I will take a look at and accept if it makes sense.

    ** Pun intended :) The complex test I am talking about is a simple search using google search.
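For a flavor of what the glue code for such a scenario looks like in cucumber-jvm 1.x, here is an illustrative step definition (not copied from the repo; locators and names are mine):

```java
import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SearchSteps {

    private WebDriver driver;

    @Given("^I am on the Google search page$")
    public void openSearchPage() {
        driver = new FirefoxDriver();
        driver.get("https://www.google.com");
    }

    @When("^I search for \"(.*)\"$")
    public void searchFor(String term) {
        driver.findElement(By.name("q")).sendKeys(term + "\n");
    }

    @Then("^search results are displayed$")
    public void verifyResults() {
        if (driver.findElements(By.id("search")).isEmpty()) {
            throw new AssertionError("No search results found");
        }
        driver.quit();
    }
}
```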

    Friday, March 28, 2014

    WAAT Java v1.5.1 released today

After a long time, and with a lot of push from collaborators and users of WAAT, I have finally updated WAAT (Java) and made a new release today.

    You can get this new version - v1.5.1 directly from the project's dist directory.

    Once I get some feedback, I will also update WAAT-ruby with these changes.

    Here is the list of changes in WAAT_v1.5.1:

    Changes in v1.5.1

• Engine.isExpectedTagPresentInActualTagList in the Engine class is made public
• Updated Engine to work without creating a testData.xml file, by directly sending the expectedSectionList for the tags. Added a new method (see the usage sketch after this list):
      Engine.verifyWebAnalyticsData(String actionName, ArrayList<Section> expectedSectionList, String[] urlPatterns, int minimumNumberOfPackets)
• Added an empty constructor to Section.java to prevent a marshalling error
• Support fragmented packets
• Updated Engine to support Pattern comparison, instead of String contains
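An illustrative usage of the new method (the action name, URL pattern and data here are hypothetical):

```java
import java.util.ArrayList;

public class AnalyticsVerificationExample {

    // 'engine' is assumed to be an initialized WAAT Engine;
    // Section is WAAT's container for expected tags.
    public void verifyBannerClickTags(Engine engine) throws Exception {
        ArrayList<Section> expectedSectionList = new ArrayList<Section>();
        // ... populate each Section with the tags expected for this user action ...

        engine.verifyWebAnalyticsData(
                "ClickHomePageBanner",         // actionName (hypothetical)
                expectedSectionList,           // expected tags
                new String[] {"dcs.gif"},      // urlPatterns to match (hypothetical)
                1);                            // minimumNumberOfPackets
    }
}
```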
    Do let me know if you see any problems / issues with this update.

    Thanks.

    Sunday, October 6, 2013

    Offshore Testing on Agile Projects



    Reality of organizations

Organizations are now spread across the world. With this spread, having distributed teams is a reality. Reasons could be a combination of various factors, including:

• Globalization
• Cost
• 24x7 availability
• Team size
• Mergers and Acquisitions
• Talent

    The Agile Software methodology talks about various principles to approach Software Development. There are various practices that can be applied to achieve these principles. 

The choice of practices is very significant and important in ensuring the success of the project. Some of the parameters to consider, in no particular order, are:

• Skillset of the team
• Capability of the team
• Delivery objectives
• Distributed teams
• Working with partners / vendors?
• Organization security / policy constraints
• Tools for collaboration
• Time overlap between teams
• Mindset of team members
• Communication
• Test Automation
• Project Collaboration Tools
• Testing Tools
• Continuous Integration

    ** The above list is from a Software Testing perspective.

    This post is about what practices we implemented as a team for an offshore testing project.

    Case Study - A quick introduction

    An enterprise had a B2B product providing an online version of a physically conducted auction for selling used-vehicles, in real-time and at high-speed. Typical participation in this auction is by an auctioneer, multiple sellers, and potentially hundreds of buyers. Each sale can have up to 500 vehicles. Each vehicle gets sold / skipped in under 30 seconds - with multiple buyers potentially bidding on it at the same time. Key business rules: only 1 bid per buyer, no consecutive bids by the same buyer.

Analysis and Development were happening across 3 locations - 2 teams in the US, and 1 team in Brazil. Only Testing was happening from Pune, India.

    “Success does not consist in never making mistakes but in never making the same one a second time.”

We took that to heart, very sincerely. We applied all our learning and experience in picking the practices that would make us succeed. We consciously sought to be creative and innovative, and applied out-of-the-box thinking to how we approached testing (in terms of strategy, process, tools, techniques) for this unique, interesting and extremely challenging application, ensuring we did not go down the same path again.

    Challenges

We had to overcome many challenges on this project:
• Creating a common DSL that would be understood by ALL parties - i.e. Clients / Business / BAs / PMs / Devs / QAs.
• All examples / forums discuss trivial problems - whereas we had a lot of data and complex business scenarios to take care of.
• Cucumber / capybara / WebDriver / ruby do not offer an easy way to do concurrency / parallel testing.
• We needed to simulate "n" participants at a time in our manual + automated tests, interacting with the sale / auction.
• A typical sale / auction can contain 60-500 buyers, 1-x sellers, and 1 auctioneer. The sale / auction can contain anywhere from 50-1000 vehicles to sell. There can be multiple sales going on in parallel. So how do we test these scenarios effectively?
• Data creation / usage was a huge problem (ex: a production subset snapshot is >10GB compressed, and a refresh takes a long time too).
• Getting a local environment in Pune to continue working effectively - all pairing stations / environment machines use RHEL Server 6.0 and are auto-configured using puppet. These machines are registered to the Client account on the RedHat Satellite Server.
• Communication - we were working from 10K miles away, with a time difference of 9.5 / 10.5 hours (depending on DST) - which means almost 0 overlap with the distributed team. To add to that complexity, our BA was in another city in the US - so another time difference to take care of.
• End-to-end Performance / Load testing was not even part of our scope - but something we were very wary of, in terms of what can go wrong at that scale.
• We needed to be agile - i.e. testing stories and functionality in the same iteration.

    All the above-mentioned problems meant we had to come up with our own unique way of tackling the testing.

    Our principles - our North Star

    We stuck to a few guiding principles as our North Star:
    • Keep it simple
    • We know the goal, so evolve the framework - don't start building everything from step 1
    • Keep sharing the approach / strategy / issues faced on regular basis with all concerned parties and make this a TEAM challenge instead of a Test team problem!
    • Don't try to automate everything
    • Keep test code clean

    The End Result

At the end of the journey, here are some interesting outcomes from the offshore testing project:
• Tests were specified in the form of user journeys, following the Behavior Driven Testing (BDT) philosophy - specified in Cucumber.
• Created a custom test framework (Cucumber, Capybara, WebDriver) that tests a real-time auction - in a very deterministic fashion.
• We had 65-70 tests, in the form of user journeys, that covered the full automated regression for the product.
• Our regression completed in less than 30 minutes.
• We had no manual tests to execute as part of regression.
• All tests (= user journeys) are documented directly as Cucumber scenarios and are automated.
• Anything that is not part of the user journeys is pushed down to the dev team to automate (or we try to write automation at that lower level).
• Created a 'special' long-running test suite that simulates a real sale with 400 vehicles, >100 buyers, 2 sellers and an auctioneer.
• Created special concurrent (high-speed parallel) tests that ensure that, even at the highest possible load, the system behaves correctly.
• Since there was no separate performance and load test strategy, created special utilities in the automation framework to benchmark "key" actions.
• No separate documentation or test cases were ever written / maintained - and we never missed them.
• A separate, special sanity test runs in production after a deployment is done, to ensure all the integration points are set up properly.
• Changed our work timings (for most team members) to 12pm - 9pm IST to get some more overlap, and remote pairing time, with the onsite team.
• Set up an ice-cream meter - for those who come late to standup.

    Innovations and Customizations

    Necessity breeds innovation! This was so true in this project.

Below are the different areas, and specifics, of the customizations we made in our framework.

    Dartboard

Created a custom board, the "Dartboard", to quickly visualize the testing status in the Iteration. See this post for more details: "Dartboard - Are you on track?"

    TaaS

To automate the last mile of Integration Testing between different applications, we created an open-source product - TaaS. This provides a platform / OS / tool / technology / language agnostic way of automating the Integration Tests between applications.

    Base premise for TaaS:

Enterprise-sized organizations have multiple products under their belt. The technology stack used for each of the products is usually different - for various reasons.

Most such organizations like to have a common Test Automation solution across these products, in an effort to standardize the test automation framework.

However, this is not a good idea! If products in the same organization can be built using different / varied technology stacks, then why should you impose this restriction on the Test Automation environment?

    Each product should be tested using the tools and technologies that are “right” for it.

“TaaS” is a product that allows you to achieve the “correct” way of doing Test Automation.

    See my blog for all information related to TaaS.

    WAAT - Web Analytics Automation Testing Framework

I had created the WAAT framework for Java and Ruby in 2010/2011. However, this framework had a limitation - it did not work with products that are configured to work only in HTTPS mode.

For one of the applications, we needed to test WebTrends reporting. Since this application worked only in HTTPS mode, I created a new plugin for WAAT - JS Sniffer - that can work with HTTPS-only applications. See my blog for more details about WAAT.