Tuesday, November 29, 2011

Effective Strategies for Distributed Testing - slides available

I finally figured out a way around the problem PowerPoint was giving me in converting the slides for this presentation to pdf format.


The slides are now uploaded to slideshare.net and available here.


The video recording of the webinar is available here.

Wednesday, November 16, 2011

vodQA - NCR

I am very happy and proud to share that vodQA conference is going national. 


ThoughtWorks will be hosting its first vodQA outside Pune ... this time in Gurgaon on 3rd December 2011.


I will be pairing with Manish Kumar on the topic "Distributed Agile Testing for Enterprises".


See this page for more information.

Thursday, November 10, 2011

Effective Strategies for Distributed Testing - recording available

On 9th November, 2011, I presented my first webinar with my colleague - Manish Kumar, on the topic "Effective Strategies for Distributed Testing".


We spoke based on our experiences, sharing tips and strategies on various themes, like 
- agile testing principles that matter!
- need of distributed teams,
- testing challenges in distributed teams,
- mindset, 
- communication, 
- collaboration, 
- tools & technologies,
- ATDD,
- test automation,
- and many others ...


All distributed teams are working on the same product ... hence it is extremely important to treat all these distributed teams as ONE TEAM!


Watch the video recording of this webinar available here. 


(http://testing.thoughtworks.com/events/effective-strategies-distributed-testing).

Sunday, November 6, 2011

Effective Strategies for Distributed Testing - webinar

Come, join Manish Kumar and me for a webinar on 9th November, 2011 on "Effective Strategies for Distributed Testing". 


We will be sharing tips and techniques on how you can make testing in distributed teams more effective.


More details on the webinar are available here.

Thursday, September 22, 2011

Asking the right question


This Dilbert strip says it all!!


For those not able to see the link correctly, here it is:






If you don't ask the right question, the topic can digress in any direction ... resulting in a waste of time and effort for all involved.


So remember - it's not just important to ask questions, it is more important to ask the right questions!!

Monday, August 29, 2011

Does extrapolation in estimation work?

How do you estimate test automation efforts for building a regression suite for functionality that is already in Production?


The approach I am following is:
> Get some level of understanding of the domain
> Look at a subset of existing test cases of varying complexities and sizes
> Estimate the effort for each, and identify the complexity involved, along with the risks and assumptions
> Identify the obvious spikes / tech tasks that will be needed
> Extrapolate the estimates
> Add a 10-15% buffer to the estimates for unknowns
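The steps above can be sketched as a quick calculation. The complexity buckets, hours, and counts below are made-up numbers, not from any real project:

```ruby
# Hypothetical extrapolation of estimates from a sampled subset of tests.
# Sampled estimates (in hours) per complexity bucket - all numbers invented.
sampled = {
  simple:  { avg_hours: 1.0, remaining_count: 40 },
  medium:  { avg_hours: 2.5, remaining_count: 25 },
  complex: { avg_hours: 6.0, remaining_count: 10 },
}

spikes_hours = 16    # obvious spikes / tech tasks identified up front
buffer_pct   = 0.15  # the 10-15% buffer for unknowns

# Extrapolate: average effort per bucket * number of remaining tests in it
extrapolated = sampled.values.sum { |b| b[:avg_hours] * b[:remaining_count] }
total        = (extrapolated + spikes_hours) * (1 + buffer_pct)

puts "Extrapolated: #{extrapolated} hours; with spikes + buffer: #{total.round(1)} hours"
```

The buffer and the spike list are exactly where the "is extrapolation valid?" question bites: the sampled subset has to be representative of the whole suite for the multiplication to mean anything.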


And presto!! .... I have my estimates for the task on hand.


Is this approach right? 
Is extrapolating the estimates going to give valid results?
What can be different / better ways of getting the results required?

Thursday, August 18, 2011

To test or not to test ... do you ask yourself this question?

For people involved in Test Automation, I have seen that quite a few of us get carried away and start automating anything and everything possible. This inherently happens at the expense of ignoring / de-prioritizing other equally important activities like manual (exploratory / ad-hoc) testing.

Also, as a result, the test automation suite gets very large and unmaintainable, and in all probability not very usable, with a long feedback cycle.

I have found a few strategies work well for me when I approach Test Automation:
  • Take a step back and look at the big picture.
  • Ask yourself the question - "Should I automate this test or not? What value will the product get by automating this?"
  • Evaluate which tests, when automated, will truly provide good and valuable feedback.
  • Based on the evaluation, build and evolve your test automation suite.
One simple and quick technique to decide which tests should be automated is to do a Cost vs Value analysis of your identified tests using the graph shown below.


This is very straightforward to use.

To start off, analyze your tests and categorize them in the different quadrants of the above graph.
  1. First automate tests that provide high value, and low cost to build / maintain = #1 in the graph. This is similar to the 80/20 rule.
  2. Then automate tests that provide high value, but have a high cost to build / maintain = #2 in the graph.
  3. Beyond this, IF there is more time available, then CONSIDER automating tests that have low value and low cost = #3 in the graph. I would rather utilize my time at this juncture to do manual exploratory testing of the system.
  4. DO NOT automate tests that have low value and a high cost to build / maintain = #4 in the graph.
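The quadrant categorization above can be sketched in code. The test names, scores, and the threshold below are illustrative assumptions, not a prescription:

```ruby
# Cost vs Value quadrant analysis - a minimal sketch.
# value / cost are scored 1..10; names and scores are made up.
Test = Struct.new(:name, :value, :cost)

tests = [
  Test.new("login happy path",        9, 2),
  Test.new("funds transfer",          9, 8),
  Test.new("tooltip text",            2, 2),
  Test.new("legacy report rendering", 2, 9),
]

def quadrant(t, threshold = 5)
  case [t.value > threshold, t.cost > threshold]
  when [true,  false] then 1  # high value, low cost  -> automate first
  when [true,  true]  then 2  # high value, high cost -> automate next
  when [false, false] then 3  # low value,  low cost  -> only if time permits
  else                     4  # low value,  high cost -> do NOT automate
  end
end

tests.each { |t| puts "#{t.name}: quadrant #{quadrant(t)}" }
```

Sorting the suite by quadrant number then gives you the order in which to build (or skip) the automation.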

    Thursday, August 11, 2011

    How do you create / generate Test Data?

    Testing (manual or automation) depends upon data being present in the product.


    How do you create Test Data for your testing?


    • Manually create it using the product itself?
    • Directly use SQL statements to create the data in the DB (required knowledge of the schema)?
• Use product APIs / developer help (e.g. product factory objects) to seed data directly?
    • Use production snapshot / production-like data?
    • Any other way?

    How do you store this test data?


    • Along with the test (in code)?
    • In separate test data files - eg: XML, yml, other types of files?
    • In some specific tools like Excel?
    • In a separate test database?
    • Any other way?
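As an example of one of these options - keeping test data in a separate YAML file - here is a minimal sketch. The keys and values are made up, and in practice the YAML would live in its own file (say, users.yml) rather than inline:

```ruby
require 'yaml'

# Test data that would normally sit in a separate users.yml file,
# inlined here so the sketch is self-contained. All names are invented.
yaml = <<~YAML
  valid_user:
    username: alice
    password: s3cret
  locked_user:
    username: bob
    password: pa55word
YAML

data = YAML.safe_load(yaml)
puts data["valid_user"]["username"]   # => alice
```

The trade-off versus data-in-code is the usual one: the YAML is editable by non-programmers and shared across tests, but a rename in the file silently breaks every test that looks up the old key.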

    Do you use the same test data that is also being used by the developers for their tests?


    What are the advantages / disadvantages you have observed in the approach you use (or any other methods)?


    Looking forward to knowing your strategy on test data creation and management!

    Thursday, August 4, 2011

    Lessons from a 1-day training engagement - a Trainer perspective

I was a Trainer for a bunch of smart QAs recently. The training was about "Agile QA". This term is very vague, and if you think about it more deeply, it is also a very vast topic.

    The additional complexity was that this training was to be done in 1 day.

So we broke it down into what really needed to be covered, and what could realistically be covered, in this duration.

    We came down to 2 fundamental things:
    1. How can the QA / Test team be agile and contribute effectively to Agile projects?
    2. How can you be more effective in Test Automation - and that too start this activity in parallel with Story development?

    So we structured our thoughts, discussions and presentations around this.

    At the end of the exhausting day (both, for the Trainers as well as the Trainees), a couple of things stood out (not in any particular order):
    • What is the "right" Agile QA process?
    • Roles and responsibilities of a QA on an Agile team
    • Sharing case studies and the good & not-so-good experiences around them
    • Effectiveness of process and practices
    • Value of asking difficult questions at the right time
    • Taboo game - playing it, and reflecting on its learnings
    • What to automate and what NOT to automate?
    • Discussion around - How do you create / generate Test Data?

    Tuesday, July 26, 2011

    WAAT-Ruby - ready for use

    WAAT-Ruby is now ready for use. 

    Project hosted on github - http://github.com/anandbagmar/WAAT-Ruby
    WAAT-Ruby gem available for download from here.
Documentation is available (on WAAT-Ruby wiki pages) here.

    Since WAAT-Ruby uses WAAT-Java under the covers, I have kept the same version numbers for both platforms. The latest version is 1.4.

    I have not yet pushed it out on rubygems.org. Will update once that is done.

    So far I have tested this on the following environments:
    • Windows 7 - 64-bit with Ruby 1.8.6
• RHEL 6 - 64-bit with Ruby 1.8.6 (I had difficulty getting jpcap deployed on this environment, but once that was done, WAAT worked smoothly out of the box)
    • Ubuntu 10.x - 32-bit with Ruby 1.8.7
    • Ubuntu 10.x - 32-bit with Ruby 1.9.1
    One important note:
    If you are using WAAT (Java or Ruby) on any Linux-based environment, please note the Jpcap requirement for execution.
    WAAT uses Jpcap for capturing network packets. Jpcap needs administrative privileges to do this work. See the Jpcap FAQs for more information about the same.


    For all WAAT related blog posts, click here.

    Wednesday, July 20, 2011

    WAAT - Ruby .... are we there yet?

The WAAT ruby gem is almost ready. My colleagues are helping test it out in different environments, and I am updating the documentation accordingly.

    Once done, this will be available as a Ruby gem from WAAT-Ruby github project, and also from rubygems.org.

    Contact me if you are interested in trying this out before release.

    Thursday, July 14, 2011

    What is your expiry date?

Recently, while doing an online transaction using my credit card, something struck me ... I realized that the form asking for my credit card information was quite weird, and probably incorrect.

    Here is a sample layout of what I am talking about:


    Here, I am asked to enter the details in this order:
    • Credit card number 
    • Card holder's name
    • Expiry date 
    • And so on ...
    As I was entering the information, I ended up questioning myself ... whose Expiry Date??? The card's or mine??? 

    Simply based on the flow of information asked for, it is quite easy to associate the Expiry Date with the earlier field - the Card holder's name. Right?

    Wouldn't the layout be better this way instead:
    • Card Holder name
    • Card number
    • Expiry date
    • CVV number

    Or, another way can be:
    • Card number
    • Expiry date
    • CVV number
    • Card Holder name


I checked all 10-15 cards (credit / debit / membership) that I have. All of them have the issue date / expiry date / validity period associated with the card number, and not the card holder's name.

    This leads me to believe that no one did a usability check, or, in this context, shall we call it a reality check when designing the credit card form like the one shown above. 

    I would have not let this design / layout get through. What would you do?

    Thursday, July 7, 2011

    Ruby Test Automation Framework Technology Stack Survey

WAAT - Web Analytics Automation Testing Framework - is currently available for Java-based Test Automation Frameworks. (http://essenceoftesting.blogspot.com/search/label/waat)

    I am now working on making this available as a Ruby gem.

In order to support WAAT for a good mix of test environments, I would like to understand the different environments and technology stacks that are typically used by teams in their Test Automation Framework.

    Thank you for your time and help in providing this information.



    Wednesday, July 6, 2011

    RubyMine (and Cucumber) caching issue

    I use RubyMine to write and implement my Cucumber features on Linux.
    I have noticed one weird behavior at times.

    Though my step definition is correct, and the test also runs fine, RubyMine flags the step as not implemented. For some reason, it is not able to find the corresponding implementation in the .rb step definition file.

On a hunch, I selected "Invalidate Cache" in RubyMine's File menu, and selected the "Invalidate and Restart" option. Presto .... things started working properly again. 

Now I am wondering why the RubyMine cache got messed up in the first place .....

    Monday, June 27, 2011

    WAAT for Ruby on its way

I have started work on creating a Ruby gem for WAAT. This is going to sit on top of the version created for Java. Hopefully I will be able to get it out soon.

    Watch this space for more information.

    Friday, June 24, 2011

    Test Trend Analyzer (TTA)

    There are many tools and utilities that provide ways to do test result reporting and analysis of those results. However, I have not found a good, generic way of doing some Trend Analysis of those results. 


Why do I need Trend Analysis of the test results?

Long(er)-duration projects / enterprise products need to know the state of the quality of the product over time. One may also need to know various other metrics around the testing - like the number of tests, pass %, failure %, etc. over time.

The reports I have seen are very good at analyzing the current test results. However, I have not really come across a good generic tool that can be used in most environments for Test Trend Analysis over a period of time.

    I am thinking about developing a tool which can address this space. I call this - the Test Trend Analyzer (TTA).

Here is what I think TTA should do:

Supports:
• Works with reports generated by common unit-test frameworks (jUnit, nUnit, TestNG, TestUnit style of reports)
• Provides a Web Service interface to upload results to TTA
• Test results uploaded will be stored in a db
• Will work on Windows and Linux

Dashboard:
• Creates a default TTA dashboard
• Customizable TTA dashboard
• Dashboard will be accessible via the browser

My questions to you:
• Do you think this will help?
• What other features would you like to see in TTA?
• What other type of support would you like to have in TTA?
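As a sketch of the kind of metric TTA would extract from a jUnit-style report before storing it for trending - the XML below is a made-up sample, though the attribute names follow the common jUnit report format:

```ruby
require 'rexml/document'

# A minimal jUnit-style testsuite report (invented sample data).
report = <<~XML
  <testsuite name="login" tests="10" failures="2" errors="1" skipped="0"/>
XML

suite    = REXML::Document.new(report).root
tests    = suite.attributes["tests"].to_i
failures = suite.attributes["failures"].to_i + suite.attributes["errors"].to_i
pass_pct = ((tests - failures) * 100.0 / tests).round(1)

puts "pass %: #{pass_pct}"
```

Storing one such (date, suite, tests, pass %) row per upload is all the db needs for the dashboard to plot the trend over time.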

        Tuesday, June 21, 2011

        WAAT and HTTPS

        While most sites use http to report tags to the web analytic tool, there are some cases where http is disabled and all traffic is using https only.

        In such cases, there may be a problem in using the generic solution provided by WAAT.

        I did some research, analysis and experimentation and here are my findings:
1. jpcap captures raw packets. It does not differentiate between http / https.
2. There is no problem in WAAT. All it does is match packets based on the patterns you specify in the tests.
3. Since the requests are https-based, WAAT is not able to match the packets, unless you specify encrypted packet identifiers and encrypted data in the xml file. Tools like firebug / fiddler / ethereal / wireshark / charles / burp do something extra in this regard to decode the packet information and show the raw content in the browser / tool.

        So the question is what can be done next?
1. If it is possible for you to get the configuration in the test environments changed so that the web analytics requests are sent out on http (maybe along with https), that can resolve the issue. Once in a while you can then verify manually that requests are also going out on https.
2. You can use Omniture Debugger - but the limitation is that it will be available for Omniture only, and not for the other web analytics tools.
3. You can extend the HttpSniffer class (say, HttpsSniffer), and provide an implementation to decode the captured packets before doing the validation. However, note that this will be an expensive operation, as you will be decoding all the captured packets on the network interfaces on your machine, and the packet(s) of interest will be a fraction of those captured.
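Finding #2 above - that WAAT simply matches captured packets against the patterns you specify - can be illustrated like this. The payloads and patterns are invented, and the point of the second payload is that with https the capture is encrypted bytes, so plain-text patterns never match:

```ruby
# Illustrative packet matching - not WAAT's actual code.
captured_payloads = [
  "GET /b/ss/report?pageName=home&events=event1 HTTP/1.1",
  "\x16\x03\x01...encrypted TLS record...",  # what an https capture looks like
]

patterns = [/pageName=home/, /events=event1/]

# A payload "matches" only if every expected pattern is found in it.
matched = captured_payloads.select do |payload|
  patterns.all? { |p| payload =~ p }
end

puts "matched #{matched.size} packet(s)"  # only the plain http payload matches
```

This is why decoding (option 3) is needed for https: the matching itself is trivial, but it only works on decrypted content.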

        Tuesday, April 19, 2011

        WAAT (Web Analytics Automation Testing Framework) is alive!!!

        [UPDATE]  Check related post here (http://essenceoftesting.blogspot.com/2011/04/waat-web-analytics-automation-testing.html)

        I am very happy to announce that the first release of WAAT is available for general use.

        WAAT is hosted on github (https://github.com/anandbagmar/WAAT)

WAAT v1.3 can be used to test almost any type of Web Analytics solution. Tested with Google Analytics and Omniture. This is platform dependent.

        Binaries:
        You can either get the code base from git@github.com:anandbagmar/WAAT.git, or, get the binaries available in the dist folder.

        Documentation:
        Documentation for using WAAT is available in various different formats:
        > WAAT Readme.docx
        > WAAT Readme.doc
        > WAAT Readme.pdf
        > WAAT Readme.html

        These files are available in the docs folder.

        The documentation is also part of the binary file downloads.

I am looking forward to your usage and comments to make this better and more usable.

        Saturday, April 16, 2011

        WAAT release update

I am almost ready with my first public release of WAAT. A few finishing touches remain, which is causing the delay.

        For those not aware, here is what WAAT is:
        > WAAT stands for Web Analytics Automation Testing Framework
        > Developed as a Java jar to be used in existing testing frameworks to do web analytics automation
        > Phase 1: Implemented for Omniture using Omniture Debugger -> Status: Completed
> Phase 2: Can be used to test almost any type of Web Analytics solution. Tested with Google Analytics and Omniture. This is platform dependent.
        -> Status: In progress. Documentation to be updated.
        > Phase 3: Make WAAT available for Ruby / .Net testing frameworks. -> Status: To be started.


        Look at my earlier post for more details on WAAT.

        Wednesday, April 13, 2011

        Interesting webinar coming up ... "Where Exploration and Automation meet: Leveraging..."

There is a very interesting and informative webinar coming up on how to utilize automated functional testing within your organization.

        This is scheduled for Thursday, April 21.

        See this link for more information.

        Saturday, April 9, 2011

        Agile QA Process

        After doing testing on multiple Agile projects, I have come to realize certain aspects about the process and techniques that are common across projects. Some things I have learned along the way, some, by reflection on the mistakes / sub-optimal things that I did.

        I have written and published my thoughts around the "Agile QA Process", more particularly what techniques can be used to test effectively in the Iterations. The pdf is available here for your reading. (http://www.slideshare.net/abagmar/agile-qa-process)

        Note: A process is something that should be tweaked and customized based on the context you are in. The process mentioned in the document should be taken in the same light.

        Friday, April 8, 2011

        WAAT - Web Analytics Automation Testing Framework

        [UPDATE]  See my post about how you can get WAAT here (http://essenceoftesting.blogspot.com/2011/04/waat-is-alive.html).

        Problem statement:

        On one of the projects that I worked on, I needed to test if Omniture reporting was done correctly.

The client relied a lot on Omniture reports to understand and determine the direction of their business. They have a bunch of Omniture tags reported for a lot of different actions on the site. Manual testing was the only way this functionality could be verified. But given the huge number of tags, it was never possible to be sure that all tags were being reported correctly on a regular basis.

        So I came up with a strategy to remove this pain-point.

        Approach:
        I created a framework in our existing automation framework to do Omniture testing. The intention of creating this framework was:
        1. There is minimal impact on existing tests.
        2. There should be no need to duplicate the tests just to do Omniture testing.
        3. Should be easy to use (specify Omniture tags for various different actions, enable testing, etc.)

How it helped us:
        1. We provided a huge and reliable safety net to the client and the development team by having Omniture testing automated.
        2. Reduced the manual testing effort required for this type of testing, and instead got some bandwidth to focus on other areas.

        Next Steps:
        I am making this into a generic framework - a.k.a. WAAT - Web Analytics Automation Testing Framework to enable others doing Omniture testing to easily automate this functionality. This project will be hosted on github.

        Phase 1 of this implementation will be for Omniture Debugger and input data provided in xml format. This framework will be available as a jar file.
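Purely as an illustration of what such an xml input could look like - the element names below are invented, and the actual WAAT input schema may well differ - here is a hypothetical mapping of a site action to its expected tags, parsed with Ruby's stdlib REXML for brevity:

```ruby
require 'rexml/document'

# Hypothetical tag-specification xml - NOT the real WAAT schema.
spec = <<~XML
  <webAnalyticTags>
    <action name="homePageLoad">
      <tag>pageName=home</tag>
      <tag>events=event1</tag>
    </action>
  </webAnalyticTags>
XML

doc  = REXML::Document.new(spec)
tags = doc.get_elements("//action[@name='homePageLoad']/tag").map(&:text)
puts tags.inspect   # the patterns expected for the "homePageLoad" action
```

The idea is that a test only names the action it performs; the framework looks up the expected tags from the xml, keeping requirement 2 above intact (no duplicated tests just for Omniture testing).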

Phase 2, now also complete, includes support for any Web Analytics tool. I have tested this with Google Analytics as well as Omniture (NOT using Omniture Debugger). This uses a generic mechanism to capture packets from the network layer and process them appropriately. Given this generic approach to working with any Web Analytics tool, the framework does become OS dependent.

        Watch this space for more information (instructions, links to github, etc). Also, please do contact me with ideas / suggestions / comments about the same.

        Tuesday, January 18, 2011

        My article on "Future of Test Automation Tools and Infrastructure" featured in The Smart Techie magazine

        My article on the Future of Test Automation Tools and Infrastructure is featured in "The Smart Techie" magazine. You can check it out here or download the pdf from here.

        Wednesday, December 8, 2010

        MEDIA ALERT: Live Web Event: The End of the Free Internet

        Paul Jay, senior editor of The Real News Network, will moderate a virtual panel discussion promoting a dialogue for the technology community about the technological and legal ramifications of the WikiLeaks shutdown.
           
        Topics to be discussed include:

            * What does the cut off of service to WikiLeaks mean for the future of the Internet? 

            * Will digital journalism be less protected?

            * Was WikiLeaks afforded procedural protections before its website and DNS entries were shut down? What process should be required?

            * The Internet is vulnerable to internal threats. What technical innovations are needed?

            * Can leaks ever be stopped? Is it worth the price?


        Click here (http://bit.ly/ejXRNX) for more information.

        Tuesday, November 23, 2010

        Focus and Positivity is the key!

There are certain things I have observed quite a few times in various situations while testing. These things seem more like a pattern than exceptions, and many a time demotivate me from doing the right thing, for myself and the organization.


        What are these things?

• Testing is treated as a step child - at times not given the "listening ear" it requires.
        • Testing usually seems lower in priority (compared to other issues faced).
• Clients / organizations seem OK with bugs / defects going live - in other words, they are OK with pushing code to Production without testing it properly.
• The QA team is not involved in all stages of the life cycle - leading to disconnects between team members, invalid / incorrect assumptions, etc.
        • QA capacity is not adequate for the amount of work actually needed to be done.
        • Team members say - "This [issue] is not my responsibility. Talk to someone else."
        • Team members say - "This [issue] is not your [QA] responsibility. You don't need to provide suggestions / solutions for it. Let the appropriate owners think / talk about it."

        Tips to remain focused on the right thing
        • Don't be hesitant to raise issues (time and again).
        • Identify and keep highlighting risks - in current situation, their implications and the potential mitigation points.
        • Keep doing what is right for the project / client.
        • Identify and engage with the "correct" people to ensure issues are understood, highlighted and prioritized.
        • Know what is in your control, and what is not. Strive to effect a change in what is in your control. For things out of your control, raise it with the relevant people, and then feel satisfied that you have done your part.
        • Be positive.
        I strongly believe that the last 2 items in the above list are the most crucial to remaining focused and being successful in whatever one does ... in personal life and in professional life.


        Saturday, November 20, 2010

        Generic Test Automation Frameworks - Good or Bad?

Many a time I have been part of discussions about building a generic test automation framework that all other project teams can leverage. Another thought on this has been that building a test automation framework is not really a QA job, but a developer job.

        I have different thoughts on this.
1. From a capability-building perspective, if we build a generic framework, how is the QA team going to learn to build things from the ground up, and also make it better?
2. From a business perspective, if I as an organization have a generic framework, am I going to give it to the client for free? If not, then it is as good as saying I have built the automation framework as a standalone product - which is fine. But if I am giving it to the client for free, then what value has designing and building this framework brought me as an organization?
3. Developers can build a great framework - it is another product for them. However, the users of this framework are QAs. More often than not, a framework built by the developers will be difficult for the QAs to use.
• I have seen an example on a client project where our developers and QAs built the framework together. However, only the developers were able to make any core framework changes because of its complexity. Also, when the time came to hand over the framework to the client, it was the developer(s) who had to provide the training - the QAs were not even involved, because ...
        So to cut the answer short, I don't like the idea of generic test automation frameworks!

        Monday, October 11, 2010

vodQA 2 - plan so hard for it, and it's over so soon!

We had been planning vodQA 2 for about 6-7 weeks. But we suffered our first hiccup when I had to go to the US on a QA Coaching assignment for 3 weeks. Fortunately, I returned a few days before vodQA (7th October 2010).

It was a tough time being away ... constantly thinking about whether all things were taken care of or not, whether the core organizers were getting adequate help, what blockers existed, and so on.

        However, our core team here in ThoughtWorks office really slogged it out, at times pulling people by their ears :) to ensure we are on track.

        All this meant that I didn't have to do much when I came back.

On THE day, just before the event, we suffered another hiccup - our audio-visual vendor did not turn up until a good few guests had already arrived. So it was an awkward start to the event ... but things settled down pretty fast, and the event went through quite smoothly.

        Pictures from vodQA 2 are available here

        A lot of positives from the event:
1. More than 101 external guests arrived. Among them were a significant number of people who had also attended our first vodQA event on June 10, 2010.
        2. The talks were very interesting, and very apt to the changing dynamics of the industry.
3. In the initial feedback that we have gone through, people seem to be craving such events - which is a very positive sign for us that we are doing something right, giving something to the community that will help them share, learn, grow and connect.
        4. We incorporated the feedback received in our first vodQA to make this more tuned to what people wanted to experience.

        A few people mentioned in the event - "this is our QA family, and we want to see it grow".

Finally, the event came and went - leaving behind a lot of good memories. I will post the presentation and video links as soon as they are available.

        Meanwhile, here are the topics that were covered in vodQA 2:

        Speaker / topic list:

        Full Length talks
        1. Deepak Gole & Saager Mhatre - Sapna Solutions
        Automated acceptance testing for iPhone

2. Parul Mody & Vijay Khatke - BMC Software
        Cloud Testing

        3. Ashwini Malthankar - ThoughtWorks
        Effective use of Continuous integration by QA

        4. Vijay and Supriya - ThoughtWorks
        Test your service not your UI

        5. Ananthapadmanabhan R - ThoughtWorks
        Twist : Evolving test suites over test cases

        Lightning Talks
        1. Satish Agrawal - Persistent Systems
        Leadership and Innovation in a commoditized industry

        2. Anay Nayak - ThoughtWorks
Fluent Interfaces and Matches

Fish bowl topic: DNA of Test Automation


        Monday, September 6, 2010

        vodQA2 - You are invited to Pune's Quality Analyst Meet!

        vodQA - THE TESTING SPIRIT

        announces its 2nd event in ThoughtWorks, Pune on October 7, 2010 at 5.30pm.

Refer here for more details of the event.

        Friday, August 27, 2010

        vodQA - THE TESTING SPIRIT !

        'vodQA - THE TESTING SPIRIT!' - is a platform created by ThoughtWorks for our peers in the software testing industry to strengthen the QA community by sharing and learning new practices and ideas.

        'vodQA' offers a unique opportunity to interact with people who are equally passionate about software testing and continuously strive to better the art.

        Our first vodQA event held in Pune on June 10, 2010 was a huge success.

        Highlights Of The Event


• The format was such that each speaker had 10-12 minutes to present, followed by a couple of minutes of "open" discussion with participants.
• The topics covered in this session were appropriate for any level of testing experience.
• The speakers got to share their knowledge and gain recognition as subject experts.
• The attendees got to connect with like-minded peers and gain insight from industry practitioners.
• This was a free-to-attend event.
• We had more than 225 registrations, and based on industry trends, we expected around 30-40% of them to show up. So we were right on track with our estimates.
• Due to the overwhelming response, we had to close registrations within 2 weeks of sending out the invites.
        • There were more than 70 attendees from the Pune QA community, 4 external speakers and 4 TW speakers present for the event.
        • Total attendance for the event 110+ (including ThoughtWorkers).
        • The pictures from the event are available here.

        Presenters

        1. Swapnil Kadu & Harshad Khaladkar - Synechron Technologies - Transitioning from Waterfall to Agile - Challenges
        2. Manish Kumar - ThoughtWorks - Be Agile, Get Rapid Feedback
        3. Sagar Surana - Amdocs Dev. Ltd - Intelligent test design
        4. Chirag Doshi - ThoughtWorks - Write tests in the end-user's lingo
        5. Srinivas Chillara - etakey - Trials & Tribulations
        6. Anand Bagmar - ThoughtWorks - Future of Test Automation Tools & Infrastructure. Video available here. Presentation available here.
        7. Sudeep Somani & Krishna Kumar Sure - ThoughtWorks - Decoupled Automation Frameworks
        8. Sumit Agrawal - Automation Execution results Dashboard

        The next 'vodQA - THE TESTING SPIRIT!' event will be held in ThoughtWorks, Pune on Thursday, 7th October 2010


        More information to follow soon. Watch this space!

        Critical Test Failures

        What are Critical Test Failures?

        Imagine a Banking application you are testing.

        The majority of your automation suite relies on a successful login to the application. So you have written various login tests to ensure this functionality is well tested.

        Apart from other basic tests, all core business transaction tests are going to use the login functionality implicitly to verify the proper functioning of the application, e.g. account balance, fund transfers, bill payments, etc.

        Problem scenario

        • If there is a problem in the authentication module of the application, all your tests that rely on login are going to fail.
        • The test reports show an inflated, misleading number of failures.
        • There could be various manifestations of the same failure, which in turn means more time is required to get to the root cause of the issues.
        • A massive number of test failures creates unnecessary panic.

        Wouldn't it be nice, in such cases, to be able to tag your tests in a way that defines the "critical" dependencies that must PASS before the test in question is attempted?

        Defining Critical Dependencies

        • It should be straightforward, flexible (to define complex dependencies), and easy to add, maintain and update over time.
        • Each test / fixture / suite can depend on multiple test(s) / fixture(s) / suite(s).
        • Each test / fixture / suite can have multiple test(s) / fixture(s) / suite(s) dependent on itself.
        Given that this dynamic / flexible structure is possible, the following questions should be asked in the context of the project:
        1. What is the level of flexibility needed for me to implement critical dependencies effectively?
        2. What is the value I am trying to achieve by having this functionality?

        Based on the answers to the above questions, you can define your dependencies in any type of resource - a text / properties file, an xml / yaml file, csv files, a database, etc.
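        As an illustration, a yaml resource expressing the banking example above might look like this (the file name and all test names are hypothetical, not from any particular framework):

```yaml
# dependencies.yaml - illustrative only
login_suite:
  depends_on: []
account_balance_test:
  depends_on: [login_suite]
transfer_funds_test:
  depends_on: [login_suite, account_balance_test]
bill_payment_test:
  depends_on: [login_suite]
```

        The same structure translates directly to a properties file or a database table with "test" and "depends_on" columns; pick whichever resource type your team already maintains comfortably.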

        Test Execution time processing

        • Determine the test execution order based on the Critical Test Dependencies defined above. Needless to say, the depended-upon items should be executed first.
        • When executing a test / fixture / suite, get the list of critical test(s) / fixture(s) / suite(s) it depends on. Since those have already been executed (because of the point above), fetch their results too.
        • If the result of any of the critical dependencies is fail / skip, do NOT execute this test / fixture / suite. Instead, mark it as fail / skip*, and set the reason as "Critical Test dependency failed" with more specifics.
        * - Whether to mark the test as fail or skip depends on how you want to see the end report.
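        The processing above can be sketched in a few lines. This is an illustrative, framework-agnostic sketch (not from any real tool), assuming dependencies are acyclic and that a test runner is supplied as a plain function:

```python
# Illustrative sketch: execute tests in dependency order, and skip any
# test whose critical dependency did not pass, recording the reason.
def run_in_dependency_order(tests, deps, run_test):
    """tests: iterable of test names.
    deps: dict mapping a test name to its critical dependencies.
    run_test: function taking a name, returning "pass" or "fail".
    Assumes the dependency graph has no cycles."""
    results = {}

    def execute(name):
        if name in results:              # already run (or skipped)
            return results[name]
        for dep in deps.get(name, []):
            if execute(dep) != "pass":   # run dependencies first
                # Point to the exact dependency that caused the skip
                results[name] = "skip (critical dependency %s failed)" % dep
                return results[name]
        results[name] = run_test(name)
        return results[name]

    for test in tests:
        execute(test)
    return results
```

        With a failing login, every dependent test is skipped with a pointer to the cause, instead of producing its own misleading failure:

```python
deps = {"balance": ["login"], "transfer": ["login"]}
results = run_in_dependency_order(
    ["balance", "transfer", "login"], deps,
    lambda name: "fail" if name == "login" else "pass")
# results["balance"] -> "skip (critical dependency login failed)"
```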

        How to implement Critical Test Failures in your framework?

        You can then implement the handling of Critical Test Failures using different approaches. A few possible ideas (in order of priority) are listed below:
        1. Create a generic utility with full flexibility and use it as an external utility in your project.
        2. Create a custom listener for your unit testing framework (e.g. TestNG, jUnit, nUnit, etc.) and add this listener to the runtime when executing tests.
        3. Create a specific custom solution in your test automation infrastructure to implement this functionality.

        Value proposition

        • We see the correct number of failures, and the correct total number of tests, in the test reports.
        • There are no false failures reported. Failures / skips caused by Critical Test dependencies point to the exact test / fixture / suite that caused them.
        • Since time is NOT spent running tests whose critical dependencies have failed, we get quicker feedback.
        • This solution can be applied to any framework.

        Wednesday, August 25, 2010

        Future of Test Automation Tools & Infrastructure


        There are some noticeable trends in the way we do UI-based test automation. Technology has advanced, new interfaces have been created, and in response, new tools have been created that changed the way we do test automation.

        Evolution

        Let us go back in time a little to see how the test automation tools and frameworks have evolved.
        • The crux of any automation framework is its core engine.
        • The traditional record-and-playback set of tools sit on top of this core framework.
        • The rigidity and difficulty (amongst other factors) in customizing the standard record and playback scripts resulted in the new layer being added – that of the Custom Frameworks.


        What are these Custom Frameworks? They are essentially customized scripts written to make record and playback more effective and maintainable. We know these frameworks by various names, most commonly as depicted in the picture below.





        I am not going to get into the specifics of the above-mentioned frameworks. But it is important to note that, most often, when one starts to build a Custom Framework using any of the 4 mentioned types, one eventually ends up with a Hybrid solution – a combination of the different frameworks.

        The Custom Frameworks have been around for a considerable time now, and there are plenty of tools and utilities to support them. However, there has been a need for writing tests in a new lingo – something that is easier for non-coders (for example, Business Analysts) to read, understand, and maybe also contribute to.

        Thus arose a new type of methodology and framework for building our Automated Tests - BDD – Behavior Driven Development. There are many tools in the market that allow BDD, namely, Cucumber, JBehave, RSpec, Twist, etc.

        An interesting point to note is that the BDD layer sits on top of the Customized frameworks. So essentially we are building layer upon layer. This is important, because we don’t want to reinvent the wheel. Instead, we want to keep reusing what we have (as much as possible), till we reach a point where a new design and rewrite becomes necessary. But that is a separate discussion.

        The BDD frameworks have also been around for some time now. When thinking about this pattern, the question that comes to my mind is – WHAT IS NEXT?


        UI Advancements

        To answer the question – “WHAT IS NEXT?” we need to understand the nature of UI advancements that have been happening in the past decade or two.

        How many of us remember the CRT monitors we used to work on a few years ago? These monitors themselves went through a big change over the past 2 decades. Then arrived the amazing, sleek, flat-panel LCDs. The benefits of using LCD monitors over CRTs are well known.

        What about the first generation of big, clunky, power-hungry laptops? Compare them with the laptops available today: the change in processing speed, portability, battery life, and of course, in the context of this discussion, the high color depth and resolution available to us. Following this came the tablet PCs, which probably did not take off as well as one would have thought. Still, that is a huge change in a pretty short time, isn’t it?

        The latest in this portable computer generation is the Netbook PC – ultra-portable, pretty powerful, with long battery life, and still the same good UI capabilities.

        Another category of devices has started changing the way we work. 

        For example, in the image shown below, a woman is browsing a wrist watch catalog with the help of a completely different interactive interface – one controlled (browse, zoom, select, etc.) using her hand gestures.
        Source

        In another example, the person in the image shown below is editing images directly with his hand, instead of using any special device.


        Source

        In yet another example, the child shown below is drawing an image with the help of a completely different interactive interface – again controlled using her hand gestures.

        Source


        You might ask: how does this affect the end user? And how is it related to Test Automation?

        Well, the answer is simple. These changes in UI interfaces have resulted in a boom in the software industry. Writing new software for mobile phones and portable devices has become a new vertical in software development and testing.

        Look at the smart phones (iPhones, Androids, etc.). So many things are possible on portable devices today that the possibilities seem limitless. You can interact with them using regular buttons, touch-based gestures, or a stylus.

        See how the Internet has evolved. On all major portals, you can now create your own customized page based on your preferences. And all this is done not through major configuration changes or by talking to a sys-admin, but simply with some mouse gestures and actions. Example: in the image below, the Yahoo page has widgets which you can configure and arrange in the order of your preference, so that you see exactly what you want to see.

        WHAT IS NEXT?

        The whole world appears to be moving towards providing content or doing actions based on “interactions”.

        If you recall the movie “Minority Report”, the technology depicted there is simply amazing. The movie, set in the year 2054, shows the actors interacting with images, videos and voices, all using gestures. This technology was developed at MIT for the movie, and with the work that has happened in the past few years, it was demonstrated in a TED talk by John Underkoffler. He in fact believes this technology will become mainstream, for everyone’s use, within the next couple of years. He calls it the “Spatial Operating Environment”.

        In simpler terms, I call this “Gesture Based Technology”. This is the future that we are already very close to!

        How does this affect the software test automation?

        Well, this affects us in a major way.
        • We eventually will be developing software to support this technology.
        • If we are developing software, that means we need to test it.
        • This also means that we need to do automation for it.
        It is imperative for us to start thinking about how we, as testers, will test in this new environment.

        What tool support do we need to test this effectively?

        Lastly, let’s think BIG - why can’t we create / write our automation tests using similar interfaces?

        UDD – UI Driven Development

        If a user of a system can interact with it using gestures, why can’t we testers change the way we write automated tests? Why do we have to rely on coding, or on writing tests in BDD format? If a picture speaks a thousand words, why can’t we raise the bar and write tests in a different, interactive format?



        I envision the UDD framework to have the following components:



        Some of these components are self-explanatory. However, there are some key components here which I would like to talk about.

        Plugin Manager

        This complete framework would be built on plugins. There would be a set of core plugins that make up this environment, and various other plugins developed and contributed by the community based on their needs, requirements and vision.

        Another important aspect of this environment is that adding a new plugin would not require restarting the complete framework. A ‘hot-deployment’ mechanism would be available to enable the addition of new plugins to the environment.



        Sample plugins include:
        • xPath utilities
        • Recording engine – generate code in the language specified
        • Custom reporters / trend analysis
        • Test data generators
        • Schedulers / integration with CI (Continuous Integration) systems
        • Language / driver support – I believe it should be easy to change the underlying framework at the click of a button (provided the necessary plugins are available). This way the admin user can choose to change from say using Selenium to Sahi just by choosing which UI framework is to be used. Similarly, it should be possible to select which language is used for the code generation.
        • Integration with external tools and repositories – example: file diff / compare tools, etc.

        Discovery

        This to me is a very essential and critical piece because we want to make sure we do not need to reinvent the wheel. We would like to reuse our existing frameworks as much as possible and make the transition to UDD as seamless as possible.

        This component should be able to reverse-engineer the existing code base, and create a UI object hierarchy that is made available in a palette / repository.

        Example: After running the discovery tool against the existing source repository, the UI objects will be created like this:



        Author

        To create new objects / test scripts, the test author would take UI objects from the palette / repository and ‘simply’ drag-&-drop them to compose new objects / test scripts. All the ‘intelligent’ code refactoring and restructuring would happen automatically in the backend. Refer to the picture below for reference.

        Note: We can already do this to a certain extent. Using reverse-engineering tools, we can create class / UML diagrams from an existing code base.

        In the context of UDD, these are at present dummy objects. We need to make them proper UI-driven objects which, when moved around, would result in the framework making the appropriate modifications in the underlying code base, without the user having to intervene manually.





        This provides a higher-level, pictorial view for the people looking at these tests.

        That said, when new functionality needs to be added to the code base, the test author can simply write code for it, and the UDD framework will create appropriate UI objects out of it and publish them to the repository for everyone’s use.

        Execution Engine

        The execution engine provides a lot of flexibility in terms of how the tests should be run. There are various options:
        • Run the tests within the UDD framework.
        • Generate a command for the chosen set of tests, which the user can simply copy, paste into a command prompt and execute, without having to worry about what command needs to be run.
        • Execute the tests on the same machine, on remote machines, or on any combination desired.
        • Trigger runs via CI tools.
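        The second option above – generating a copy-paste-ready command – can be sketched very simply. This is an illustrative sketch only; the runner name, flag and host handling are hypothetical, not from any real tool:

```python
# Illustrative sketch: build a ready-to-paste command line for a chosen
# set of tests. "run-tests" is a placeholder runner name.
def build_run_command(tests, runner="run-tests", remote_host=None):
    """Return a shell command string that runs the given tests,
    optionally on a remote machine via ssh."""
    cmd = [runner] + sorted(tests)       # stable, predictable order
    if remote_host:
        cmd = ["ssh", remote_host] + cmd # hand off to a remote machine
    return " ".join(cmd)
```

        The same string can be surfaced in the UDD UI for the user to copy, or handed directly to a CI tool as the build step, covering the remaining execution options with one mechanism.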

        Reporting Engine

        We are used to the default, yet quite comprehensive, reports generated by the various unit testing frameworks (jUnit, nUnit, TestNG, etc.).

        However, what is lacking is the ability to consolidate reports from different runs, archive them, and create trend analyses and charts of various types to track the health of the system.

        There should be a default set of Reporting plugins which provide this kind of mechanism out of the box. Also, since this is a plugin-based architecture, the community can contribute customized reporters to cater to specific requirements.
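        The core of such trend analysis is tiny once results from several runs are consolidated. Here is a minimal sketch, assuming each run has already been reduced to pass/fail counts (the dict shape is an assumption for illustration, not any framework's report format):

```python
# Illustrative sketch: turn consolidated per-run counts into a
# pass-rate trend that can be charted to track system health.
def pass_rate_trend(runs):
    """runs: list of dicts like {"passed": 10, "failed": 2},
    oldest run first. Returns pass rates as percentages."""
    trend = []
    for run in runs:
        total = run["passed"] + run["failed"]
        # Guard against an empty run so we never divide by zero
        trend.append(round(100.0 * run["passed"] / total, 1) if total else 0.0)
    return trend
```

        A custom reporter plugin would feed archived run results into a function like this and plot the output; a declining trend is then visible at a glance, long before any single run looks alarming.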

        How do we get there?

        I have shared my vision for the Future of Test Automation. The next important question is: what can we do to get ready for the future, whatever it may be?

        If we follow a few practices when we do test automation, we will be in a good state to adopt whatever the future has to offer.



        • Test code should be of Production quality!
        • Use private / protected member variables / methods. Make them public only when absolutely essential.
        • Import only those classes that you need. Avoid import abc.*
        • Keep test intent separate from implementation.
        • Use xPaths with caution. Do NOT use indexes.
        • Do not simply copy / paste code from other sources without understanding it completely.
        • Keep test data separate from test scripts.
        • Duplicating code is NOT OK.