Wednesday, December 8, 2010

MEDIA ALERT: Live Web Event: The End of the Free Internet

Paul Jay, senior editor of The Real News Network, will moderate a virtual panel discussion promoting a dialogue for the technology community about the technological and legal ramifications of the WikiLeaks shutdown.
   
Topics to be discussed include:

    * What does the cut-off of service to WikiLeaks mean for the future of the Internet?

    * Will digital journalism be less protected?

    * Was WikiLeaks afforded procedural protections before its website and DNS entries were shut down? What process should be required?

    * The Internet is vulnerable to internal threats. What technical innovations are needed?

    * Can leaks ever be stopped? Is it worth the price?


Click here (http://bit.ly/ejXRNX) for more information.

Tuesday, November 23, 2010

Focus and Positivity are the key!

There are certain things I have observed quite a few times, in various situations, while testing. These things seem more like a pattern than exceptions, and they often demotivate me from doing the right thing for myself and the organization.


What are these things?

  • Testing is treated as a stepchild - at times not given the "listening ear" it requires.
  • Testing usually seems lower in priority (compared to other issues faced).
  • Clients / organizations seem OK with bugs / defects going live - in other words, they are OK with pushing code to Production without testing it properly.
  • QA team is not involved in all stages of the life cycle - leading to disconnect between team members, invalid / incorrect assumptions, etc.
  • QA capacity is not adequate for the amount of work actually needed to be done.
  • Team members say - "This [issue] is not my responsibility. Talk to someone else."
  • Team members say - "This [issue] is not your [QA] responsibility. You don't need to provide suggestions / solutions for it. Let the appropriate owners think / talk about it."

Tips to remain focused on the right thing
  • Don't be hesitant to raise issues (time and again).
  • Identify and keep highlighting risks - their implications in the current situation, and the potential mitigation steps.
  • Keep doing what is right for the project / client.
  • Identify and engage with the "correct" people to ensure issues are understood, highlighted and prioritized.
  • Know what is in your control, and what is not. Strive to effect a change in what is in your control. For things out of your control, raise them with the relevant people, and then feel satisfied that you have done your part.
  • Be positive.
I strongly believe that the last 2 items in the above list are the most crucial to remaining focused and being successful in whatever one does ... in both personal and professional life.


Saturday, November 20, 2010

Generic Test Automation Frameworks - Good or Bad?

Many times I have been part of discussions proposing that we should build a generic test automation framework which all other project teams can leverage. Another thought on this has been that building a test automation framework is not really a QA job, but a developer job.

I have different thoughts on this.
  1. From a capability building perspective, if we build a generic framework, how is the QA team going to learn to do things from ground-up, and also make it better?
  2. From a business perspective, if I as an organization have a generic framework, am I going to give it to the client for free? If not, then it is as good as saying I have built the automation framework as a standalone product - which is fine. But if I am giving it to the client for free, then what value has designing and building this framework brought me as an organization?
  3. Developers can build a great framework - it is another product for them. However, the users of this framework are QAs. More often than not, a framework built by developers will be difficult for the QAs to use.
  • I have seen an example of a client project where our developers and QAs built the framework together. However, only the developers were able to make any core framework changes because of its complexity. Also, when the time came to hand over the framework to the client, it was the developer(s) who had to provide the training - the QAs were not even involved in this, because ...
So to cut the answer short, I don't like the idea of generic test automation frameworks!

Monday, October 11, 2010

vodQA 2 - plan so hard for it, and it's over so soon!

We had been planning vodQA 2 for about 6-7 weeks. But we suffered our first hiccup when I had to go to the US on a QA Coaching assignment for 3 weeks. Fortunately, I returned a few days before vodQA (7th October 2010).

It was a tough time being away ... constantly wondering whether everything was taken care of, whether the core organizers were getting adequate help, what blockers existed, and so on.

However, our core team here in ThoughtWorks office really slogged it out, at times pulling people by their ears :) to ensure we are on track.

All this meant that I didn't have to do much when I came back.

On THE day, just before the event though, we suffered another hiccup - our audio-visual vendor did not turn up until a good few guests had already arrived. So it was an awkward start to the event ... but things settled down pretty fast, and the event then went through quite smoothly.

Pictures from vodQA 2 are available here

A lot of positives from the event:
1. More than 101 external guests arrived. Of these there were a significant number of people who had also attended our first vodQA event on June 10, 2010.
2. The talks were very interesting, and very apt to the changing dynamics of the industry.
3. In the initial feedback that we have gone through, people seem to be craving such events - a very positive sign for us that we are doing something right, giving the community something that will help them share, learn, grow and connect.
4. We incorporated the feedback received in our first vodQA to make this more tuned to what people wanted to experience.

A few people mentioned in the event - "this is our QA family, and we want to see it grow".

Finally, the event came and went - leaving behind a lot of good memories. I will post the presentation and video links as soon as they are available.

Meanwhile, here are the topics that were covered in vodQA 2:

Speaker / topic list:

Full Length talks
1. Deepak Gole & Saager Mhatre - Sapna Solutions
Automated acceptance testing for iPhone

2. Parul Mody & Vijay Khatke - BMC Software
Cloud Testing

3. Ashwini Malthankar - ThoughtWorks
Effective use of Continuous integration by QA

4. Vijay and Supriya - ThoughtWorks
Test your service not your UI

5. Ananthapadmanabhan R - ThoughtWorks
Twist : Evolving test suites over test cases

Lightning Talks
1. Satish Agrawal - Persistent Systems
Leadership and Innovation in a commoditized industry

2. Anay Nayak - ThoughtWorks
Fluent Interfaces and Matchers

Fishbowl topic: DNA of Test Automation


Monday, September 6, 2010

vodQA2 - You are invited to Pune's Quality Analyst Meet!

vodQA - THE TESTING SPIRIT

announces its 2nd event in ThoughtWorks, Pune on October 7, 2010 at 5.30pm.

Refer here for more details of the event.

Friday, August 27, 2010

vodQA - THE TESTING SPIRIT !

'vodQA - THE TESTING SPIRIT!' - is a platform created by ThoughtWorks for our peers in the software testing industry to strengthen the QA community by sharing and learning new practices and ideas.

'vodQA' offers a unique opportunity to interact with people who are equally passionate about software testing and continuously strive to better the art.

Our first vodQA event held in Pune on June 10, 2010 was a huge success.

Highlights Of The Event


  • The format was such that each speaker had 10-12 minutes to present followed by a couple of minutes of "open" discussion with participants.
  • The topics covered in this session were appropriate for any level of testing experience.
  • The speakers got to share their knowledge and gain recognition as subject experts.
  • The attendees got to connect with like-minded peers and gain insight from industry practitioners.
  • This was a free-to-attend event.
  • We had more than 225 registrations, and based on industry trends, we expected around 30-40% of them to show up. So we were right on track in our estimations.
  • Due to the overwhelming response, we had to close off the registrations within 2 weeks of sending out the invites.
  • There were more than 70 attendees from the Pune QA community, 4 external speakers and 4 TW speakers present for the event.
  • Total attendance for the event was 110+ (including ThoughtWorkers).
  • The pictures from the event are available here.

Presenters

  1. Swapnil Kadu & Harshad Khaladkar - Synechron Technologies - Transitioning from Waterfall to Agile - Challenges
  2. Manish Kumar - ThoughtWorks - Be Agile, Get Rapid Feedback
  3. Sagar Surana - Amdocs Dev. Ltd - Intelligent test design
  4. Chirag Doshi - ThoughtWorks - Write tests in the end-user's lingo
  5. Srinivas Chillara - etakey - Trials & Tribulations
  6. Anand Bagmar - ThoughtWorks - Future of Test Automation Tools & Infrastructure. Video available here. Presentation available here.
  7. Sudeep Somani & Krishna Kumar Sure - ThoughtWorks - Decoupled Automation Frameworks
  8. Sumit Agrawal - Automation Execution results Dashboard

The next 'vodQA - THE TESTING SPIRIT!' event will be held in ThoughtWorks, Pune on Thursday, 7th October 2010


More information to follow soon. Watch this space!

Critical Test Failures

What are Critical Test Failures?

Imagine a Banking application you are testing.

The majority of your automation suite relies on a successful login to the application, so you have written various login tests to ensure this functionality is well tested.

Apart from other basic tests, all core business transaction tests are going to use the login functionality implicitly to verify the proper functioning of the application - e.g. account balance, fund transfers, bill payments, etc.

Problem scenario

  • If there is some problem in the authentication module of the application, all your tests that rely on login are going to fail.
  • We see a false number of failures in the test reports.
  • There could be various manifestations of the same failure, which in turn means more time is required to get to the root cause of the issues.
  • There is unnecessary panic created after seeing a massive number of test failures.

Wouldn't it be nice, in such cases, to be able to tag your tests in a way that defines the "critical" dependencies for it to PASS, before the test in question is attempted to be run?

Defining Critical Dependencies

  • It should be straightforward, flexible (to allow defining complex dependencies) and easy to add to, maintain and update over time.
  • Each test / fixture / suite can depend on multiple test(s) / fixture(s) / suite(s).
  • Each test / fixture / suite can have multiple test(s) / fixture(s) / suite(s) dependent on itself.
Given that this dynamic / flexible structure is possible, the following questions should be asked in the context of the project:
  1. What is the level of flexibility needed for me to implement critical dependencies effectively?
  2. What is the value I am trying to achieve by having this functionality?

Based on the answers to the above questions, you can define your dependencies in any type of resource - a text / properties file, an xml / yaml file, csv files, a database, etc.
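As a small illustration, a plain-text resource of this kind could be parsed as sketched below. The file format and all test names here are hypothetical, chosen only to match the banking example above:

```python
# Sketch of loading critical-test dependencies from a plain-text resource.
# Format (hypothetical): each line maps a test to the critical tests it
# depends on, e.g. "transfer_funds_test: login_test".

def parse_dependencies(text):
    """Parse 'test: dep1, dep2' lines into a dict of test -> list of deps."""
    deps = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        test, _, dep_list = line.partition(":")
        deps[test.strip()] = [d.strip() for d in dep_list.split(",") if d.strip()]
    return deps

sample = """
# critical dependencies for the banking suite (illustrative names)
transfer_funds_test: login_test
bill_payment_test: login_test, account_balance_test
"""

print(parse_dependencies(sample))
```

The same idea carries over directly to yaml / csv / database storage; only the parsing step changes.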

Test Execution time processing

  • Determine the test execution order based on the Critical Test Dependencies defined above. Needless to say, the dependent items should be executed first.
  • When executing a test / fixture / suite, get the list of the critical test(s), fixture(s), suite(s) it is dependent on, and since the dependent tests have already been executed (because of the above point), get their results too.
  • If the result of any of the critical test dependencies is fail / skip, then do NOT execute this test / fixture / suite AND instead, mark the current test / fixture / suite as fail / skip*, and set the reason as "Critical Test dependency failed" with more specifics of the same.
* - Marking the test as fail / skip is dependent on how you want to see the end report.
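The three steps above can be sketched as a minimal dependency-aware runner. Everything here is hypothetical - the test names, the pass/fail simulation, and the "skip" policy - and a real implementation would hook into your unit testing framework instead:

```python
# Sketch: execute dependencies first; if any critical dependency does not
# pass, mark the dependent test as skipped with a clear reason, instead of
# running it and reporting a misleading failure.

def run_with_dependencies(tests, deps):
    """tests: dict of name -> callable returning True/False.
    deps: dict of name -> list of critical dependency names."""
    results = {}  # name -> "pass" | "fail" | "skip"

    def run(name):
        if name in results:
            return results[name]
        # Run every critical dependency first (recursion gives us the order).
        for dep in deps.get(name, []):
            if run(dep) != "pass":
                results[name] = "skip"
                print(f"{name}: SKIPPED - critical dependency '{dep}' did not pass")
                return results[name]
        results[name] = "pass" if tests[name]() else "fail"
        return results[name]

    for name in tests:
        run(name)
    return results

tests = {
    "login_test": lambda: False,          # simulate a broken authentication module
    "account_balance_test": lambda: True,
    "transfer_funds_test": lambda: True,
}
deps = {
    "account_balance_test": ["login_test"],
    "transfer_funds_test": ["login_test"],
}
print(run_with_dependencies(tests, deps))
```

With the authentication module broken, the report shows one real failure (login) and two explicit skips, instead of three failures.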

How to implement Critical Test Failures in your framework?

You can then implement the handling of Critical Test Failures using different approaches. A few possible ideas (in order of priority) are listed below:
  1. Create a generic utility with full flexibility and use it as an external utility in your project.
  2. Create a custom listener for your unit testing framework (eg: TestNG, jUnit, nUnit, etc.) and add this listener to the runtime when executing tests.
  3. Create a specific custom solution in your test automation infrastructure to implement this functionality.
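For the listener approach (option 2), the general shape could look like this framework-agnostic sketch. The hook names below are hypothetical; real frameworks expose analogous callbacks (for example, TestNG's ITestListener in Java):

```python
# Minimal sketch of a "critical dependency" listener. The should_run /
# on_finish hook names are hypothetical - map them onto whichever hooks
# your unit testing framework (TestNG, jUnit, nUnit, ...) provides.

class CriticalDependencyListener:
    def __init__(self, deps):
        self.deps = deps      # test name -> list of critical dependency names
        self.results = {}     # test name -> "pass" / "fail" / "skip"

    def should_run(self, name):
        """Called by the runner before each test."""
        failed = [d for d in self.deps.get(name, [])
                  if self.results.get(d) != "pass"]
        if failed:
            self.results[name] = "skip"
            return False, f"Critical Test dependency failed: {failed}"
        return True, ""

    def on_finish(self, name, passed):
        """Called by the runner after each test completes."""
        self.results[name] = "pass" if passed else "fail"

listener = CriticalDependencyListener({"transfer_funds_test": ["login_test"]})
listener.on_finish("login_test", passed=False)
print(listener.should_run("transfer_funds_test"))
```

The listener only records results and answers "should this test run?" - the framework stays in charge of actual execution, which is what keeps this approach portable.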

Value proposition

  • We see the correct number of failures, and the correct total number of tests, in the test reports.
  • There are no false failures reported. The failures / skips caused by Critical Test dependencies point to the exact test / fixture / suite because of which this test / fixture / suite failed.
  • Since time is NOT spent running tests whose critical dependencies have failed, we get quicker feedback.
  • This solution can be applied to any framework.

Wednesday, August 25, 2010

Future of Test Automation Tools & Infrastructure


There are some specific trends noticeable in the way we do UI-based test automation. Technology has advanced, new interfaces have been created, and, to keep pace, new tools have been created that have changed our way of doing test automation.

Evolution

Let us go back in time a little to see how the test automation tools and frameworks have evolved.
  • The crux of any automation framework is its core engine.
  • The traditional record-and-playback set of tools sit on top of this core framework.
  • The rigidity and difficulty (amongst other factors) in customizing the standard record and playback scripts resulted in the new layer being added – that of the Custom Frameworks.


What are these Custom Frameworks? They are nothing more than customized scripts that do record and playback more optimally. We know these frameworks by various different names, most commonly as depicted in the picture below.





I am not going to get into the specifics of the above-mentioned frameworks. But it is important to note that, most often, when one starts to build a Custom Framework using any of the 4 mentioned types, one eventually ends up with a Hybrid solution - a combination of the different frameworks.

The Custom Frameworks have been around for a considerable time now, and there are plenty of tools and utilities to support them. However, there has been a need for writing tests in a new lingo - something that is easier for non-coders (for example, Business Analysts) to read, understand, and maybe also contribute to.

Thus arose a new methodology and type of framework for building our Automated Tests - BDD (Behavior Driven Development). There are many tools in the market that enable BDD, namely Cucumber, JBehave, RSpec, Twist, etc.

An interesting point to note is that the BDD layer sits on top of the Customized frameworks. So essentially we are building layer upon layer. This is important, because we don’t want to reinvent the wheel. Instead, we want to keep reusing what we have (as much as possible), till we reach a point where a new design and rewrite becomes necessary. But that is a separate discussion.

The BDD frameworks have also been around for some time now. When thinking about this pattern, the question that comes in my mind is – WHAT IS NEXT?


UI Advancements

To answer the question – “WHAT IS NEXT?” we need to understand the nature of UI advancements that have been happening in the past decade or two.

How many of us remember the CRT monitors we used to work on a few years ago? These monitors themselves went through a big change over the past 2 decades. Then arrived the amazing, sleek, flat-panel LCDs. The benefits of LCD monitors over CRTs are well known.

What about the first generation of big, clunky, power-hungry laptops? Compare them with the laptops available today: the change in processing speed, the portability, the battery life, and of course, in the context of this discussion, the high color depth and resolution available to us. Following this came the tablet PCs, which probably did not take off as well as one would have thought. Still, this is a huge change in a pretty short time, isn’t it?

The latest in this portable computer generation is the Netbook PC - ultra-portable, pretty powerful, with long battery life, and still the same good UI capabilities.

Another category of devices has started changing the way we work. 

For example, in the images shown below, the woman is browsing a wrist watch catalog with the help of a completely different interactive interface – which is controlled (browse, zoom, select, etc.) using her hand gestures.
 
Source

Another example, the person in the right image shown below is editing the images directly using his hand, instead of any special device in his hand.


Source

Another example, the child shown below is drawing an image with the help of a completely different interactive interface – which is controlled (browse, zoom, select, etc.) using her hand gestures.

Source


You might ask: how does this affect the end user? And how is this related to Test Automation?

Well, the answer is simple. These changes in UI interfaces have resulted in a boom in the software industry. Enabling or writing new software for mobile phones, or portable devices has become a new vertical in software development and testing.

Look at the smart phones (iPhones, Androids, etc.). There are so many more things possible on portable devices today that the possibilities of what you can do are limitless. You can interact with them using regular buttons, touch-based gestures, or a stylus.

See how the Internet has evolved. On all the major portals, you can now create your own customized page based on your preferences. And all this is done not through major configuration changes or by talking to a sys-admin - it is done simply with some mouse gestures and actions. Example: in the image below, the Yahoo page has widgets which you can configure and arrange in the order of your preference, so that you see what you want to see.

WHAT IS NEXT?

The whole world appears to be moving towards providing content or doing actions based on “interactions”.

If you recall the movie “Minority Report”, the technology depicted there is simply amazing. The movie, set in the year 2054, shows the actors interacting with images, videos and voices, all using gestures. This technology was developed at MIT for the movie, and with the work that has happened in the past few years, it was demonstrated in a TED talk by John Underkoffler. He in fact believes this technology will become mainstream, for everyone’s use, in the next couple of years. He calls this technology the “spatial operating environment”.

In simpler terms, I call this “Gesture Based Technology”. This is the future that we are already very close to!

How does this affect the software test automation?

Well, this affects us in a major way.
  • We will eventually be developing software to support this technology.
  • If we are developing software, that means we need to test it.
  • This also means that we need to do automation for it.
It is imperative for us to start thinking: how will we, as testers, test in this new environment?

What tool support do we need to test this effectively?

Lastly, let’s think BIG - why can’t we create / write our automation tests using similar interfaces?

UDD – UI Driven Development

If a user of a system can interact with it using gestures, why can’t we testers change the way we write automated tests? Why do we have to rely on coding, or on writing tests in BDD format? If a picture speaks a thousand words, why can’t we raise the bar and write tests using a different, interactive format?



I envision the UDD framework to have the following components:



Some of these components are self-explanatory. However, there are some key components here which I would like to talk about.

Plugin Manager

This complete framework would be built on plugins. There would be a set of core plugins that make up the environment, and various other plugins developed and contributed by the community based on their needs, requirements and vision.

Another important aspect of this environment is that, if a new plugin needs to be added, we would not need to restart the complete framework. A ‘hot-deployment’ mechanism would be available to enable the addition of new plugins to the environment.



Sample plugins include:
  • xPath utilities
  • Recording engine – generate code in the language specified
  • Custom reporters / trend analysis
  • Test data generators
  • Schedulers / integration with CI (Continuous Integration) systems
  • Language / driver support – I believe it should be easy to change the underlying framework at the click of a button (provided the necessary plugins are available). This way the admin user can switch from, say, Selenium to Sahi just by choosing which UI framework is to be used. Similarly, it should be possible to select which language is used for the code generation.
  • Integration with external tools and repositories – example: file diff / compare tools, etc.
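As a rough illustration of the plugin-based, hot-deployable design described above (every name in this sketch is hypothetical - it only shows the registry mechanics, not a real framework):

```python
# Sketch of a plugin manager that supports registering and replacing
# plugins at runtime ("hot deployment" in the loose sense - no restart
# of the framework is needed). All names are illustrative.

class PluginManager:
    def __init__(self):
        self.plugins = {}

    def register(self, name, plugin):
        """Add or replace a plugin while the framework keeps running."""
        self.plugins[name] = plugin

    def invoke(self, name, *args):
        if name not in self.plugins:
            raise KeyError(f"No plugin registered under '{name}'")
        return self.plugins[name](*args)

manager = PluginManager()
# Swap the underlying UI driver "at the click of a button":
manager.register("ui_driver", lambda: "selenium")
print(manager.invoke("ui_driver"))
manager.register("ui_driver", lambda: "sahi")   # hot-swap, no restart
print(manager.invoke("ui_driver"))
```

The point of the sketch is the indirection: the framework only ever asks the registry for "ui_driver", so which tool sits behind that name can change at any time.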

Discovery

This, to me, is a very essential and critical piece, because we want to make sure we do not reinvent the wheel. We would like to reuse our existing frameworks as much as possible and make the transition to UDD as seamless as possible.

This component should be able to reverse engineer the existing code base and make the resulting UI object hierarchy available in a palette / repository.

Example: After running the discovery tool against the existing source repository, the UI objects will be created like this:



Author

To create new objects / test scripts, the test author would use the UI objects from the palette / repository and ‘simply’ drag-and-drop various UI objects to create new objects / test scripts. All the ‘intelligent’ code refactoring and restructuring would happen automatically in the backend. Refer to the picture below for reference.

Note: We can do this to a certain extent at present. Using reverse engineering tools, we can create class diagrams / UML diagrams from an existing code base.

In the context of UDD, these are at present dummy objects. We need to make them proper UI-driven objects which, when moved around, would result in the framework making the appropriate modifications in the underlying code base, without the user having to intervene manually.





This provides a higher-level, pictorial view for the people looking at these tests.

That said, when new functionality needs to be added to the code base, the test author can simply write the code for it, and the UDD framework will create the appropriate UI objects out of it and publish them to the repository for everyone’s use.

Execution Engine

The execution engine provides a lot of flexibility in terms of how the tests should be run. There are various options:
  • Run the tests within UDD framework
  • Create a command for the set of tests the user wants to run, which the user can simply copy and paste into a command prompt to execute the tests directly, without having to worry / think about what command needs to be run.
  • Provide the ability to execute the tests on the same machine, on remote machines, or on any combination so desired.
  • Can be triggered via CI tools.

Reporting Engine

We are used to seeing the default, yet quite comprehensive reports generated by the various unit testing frameworks (jUnit, nUnit, TestNG, etc.). 

However, what is lacking is the ability to consolidate reports from various runs and archive them, and to create trend analyses and charts of various types that may be useful for tracking the health of the system.

There should be a default set of Reporting plugins which provide this type of mechanism out of the box. Also, since this is a plugin-based architecture, the community can contribute customized reporters to cater to specific requirements.

How do we get there?

I have shared my vision for the Future of Test Automation. The next important question is: what can we do to get ready for the future, whatever it may be?

If we follow a few practices when we do test automation, we will be in a good state to adopt whatever the future has to offer.



Test code should be of Production quality!
  • Use private / protected member variables / methods. Make them public only when absolutely essential.
  • Import only those classes that you need. Avoid import abc.*
  • Keep test intent separate from implementation.
  • Use xPaths with caution. Do NOT use indexes.
  • Do not simply copy / paste code from other sources without understanding it completely.
  • Keep test data separate from test scripts.
  • Duplicating code is NOT OK.
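As a small, hypothetical illustration of "keep test intent separate from implementation": the test states intent only, while a page-object-style class hides the locators and driver calls. All names here (including the fake driver) are invented for the sketch:

```python
# Sketch: test intent (the test function) separated from implementation
# (the page-object-style class). All names are hypothetical.

class LoginPage:
    """Implementation detail: knows HOW to log in (locators, driver calls)."""
    def __init__(self, driver):
        self.driver = driver

    def login_as(self, user, password):
        self.driver.type("username_field", user)
        self.driver.type("password_field", password)
        self.driver.click("login_button")
        return self.driver.visible("account_summary")

class FakeDriver:
    """Stand-in for a real UI driver, so the sketch is runnable."""
    def __init__(self):
        self.actions = []
    def type(self, locator, text):
        self.actions.append(("type", locator))
    def click(self, locator):
        self.actions.append(("click", locator))
    def visible(self, locator):
        return True

def test_valid_user_sees_account_summary():
    """Intent only: a valid user can log in. No locators or driver calls here."""
    page = LoginPage(FakeDriver())
    assert page.login_as("alice", "secret")

test_valid_user_sees_account_summary()
print("test passed")
```

If the login page's locators change, only LoginPage is touched; every test that expresses intent through it stays unchanged.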