Monday, July 30, 2012

Do we have rules for story writing?


I came across this interesting article from Pixar - The 22 rules of storytelling, according to Pixar


It got me thinking - this applies to our Agile world of Software Development too! 
  • Anyone writing story cards - do you have certain rules / criteria in mind? Quite a few things from the Pixar article apply in our world too!
  • For us Testers, do we think about the functionality in the story cards based on these rules? Simply put, do we think about how the new functionality in the story card affects the end product? Do we think about the big picture enough?

Monday, July 23, 2012

How will you test this?

How will you automate the testing of something like this? ... no - not the video, but the wonders they have achieved in the "Miniatur Wunderland"!


WAAT-Ruby crosses 300 downloads

I was very happy to see that WAAT-Ruby (Web Analytics Automation Testing framework) has crossed 300 downloads on rubygems.org.

Tuesday, July 17, 2012

Behavior Driven Testing (BDT) in Agile - slides uploaded

I spoke at SiliconIndia's SoftTec 2012 in Bangalore on 14th July on Behavior Driven Testing (BDT) in Agile.


It was a pretty big conference - with the Test Technologies / Methodologies track having an attendance of about 250 people, and the People / Process / Business track having an attendance of about 150 people. There were good interactions with the attendees and fellow speakers during the break times.

The topic of my talk was Behavior Driven Testing (BDT) in Agile. Of late, I have been speaking on this topic at various places in some form or the other. Click here for all information related to BDT.


Here is the abstract of this talk: 

In this talk, I will explain Agile Testing and how a technique called "Behavior Driven Testing (BDT)" can make your testing more effective. I will also cover the differences between BDD (Behavior Driven Development) and BDT, how BDT affects the Test Pyramid, and the value proposition of using BDT. 


The slides from this talk are available in the vodQA group on Facebook and on SlideShare.net.

Monday, July 9, 2012

vodQA Geek Night - Behavior Driven Testing (BDT) update

We hosted a vodQA Geek Night focussing on Behavior Driven Testing (BDT) at ThoughtWorks, Pune on 5th July 2012. The workshop itself was very intense, and saw great participation and interaction between the attendees and the workshop facilitators.


There were many firsts for this workshop:

  • The first vodQA Geek Night
  • The event was announced, marketed, and registrations handled purely through a Facebook event
  • 95% attendance
Pictures and slides have been uploaded to the vodQA group on Facebook.

Contact me or the vodQA Pune team for more information / questions.

Friday, June 29, 2012

Behavior Driven Testing (BDT) in Agile

I am speaking at SoftTec 2012 in Bangalore on 14th July on Behavior Driven Testing (BDT) in Agile.


Abstract: 
In this talk, I will explain Agile Testing and how a technique called "Behavior Driven Testing (BDT)" can make your testing more effective. I will also cover the differences between BDD (Behavior Driven Development) and BDT, how BDT affects the Test Pyramid, and the value proposition of using BDT. 

Feedback on WAAT

I am considering adding some functionality to WAAT. However, before I do that, I would like to know what your opinion is.


So, to all those who are using, have tried using, or want to use WAAT: can you please provide me some feedback based on the following questions?



  • Which flavor of WAAT do you use? 
    • Java
    • Ruby
    • Both
  • Have you faced any problems using WAAT? 
    • If yes, what problems? How did you resolve them?
  • WAAT using the httpSniffer approach has known limitations (namely: does not support https request capturing, and on non-Windows platform you need to run the tests using root access). 
    • Have you run into these limitations? 
    • How did you resolve the issue?
  • Do you find the WAAT wiki useful?
    • If not, what could be done differently to provide more value?
  • Any other thoughts / comments on how WAAT can be made better?
Looking forward to your comments.

Thanks.

Anand

Thursday, June 21, 2012

Test Driven Development via Agile Testing - slides + audio

I just finished a presentation on Test Driven Development via Agile Testing in the Next Generation Testing Conference in Bangalore. Went pretty well.


In this talk, I covered the following topics and answered questions related to:
  • Overview of Agile Testing
  • The Test Pyramid
  • Different flavors of TDD
    • BDD – Behavior Driven Development
    • ATDD – Acceptance Test Driven Development
    • BDT – Behavior Driven Testing
      • Difference between BDD and BDT
      • Tools that support BDT
      • The value proposition BDT offers
Here is a link to the slides. The audio recording of my talk can be downloaded from the link below. You will be able to listen to the talk using VLC Player or similar.


Wednesday, June 20, 2012

vodQA Geek Night - Behavior Driven Testing (BDT) on 5th July

I had presented a topic on Behavior Driven Testing (BDT) at vodQA - Testing and Beyond.


We are now running a workshop as a follow-up to that session, to provide first-hand experience of BDT and how it can potentially help you in your testing efforts.


Since this is a workshop, seats are limited. If you are interested in attending, please join our vodQA group on Facebook and confirm your presence for the vodQA Geek Night event scheduled at 5.30pm on 5th July 2012 at ThoughtWorks Pune.

Thursday, May 31, 2012

Test Driven Development via Agile Testing

I will be giving a talk in the "Next Generation Testing Conference" held in Bangalore on 21st June 2012. 


The topic and abstract are as mentioned below. See you at the conference!

Title:

Test Driven Development via Agile Testing 

Abstract covering main features of the talk:
In this talk, I will cover the following topics and answer questions related to:
  • Overview of Agile Testing
  • The Test Pyramid
  • Different flavors of TDD
    • BDD – Behavior Driven Development
    • ATDD – Acceptance Test Driven Development
    • BDT – Behavior Driven Testing
      • Difference between BDD and BDT
      • Tools that support BDT
      • The value proposition BDT offers

Thursday, May 17, 2012

Keeping your test suites "green"

My article on Keeping your test suites "green" has been published in SiliconIndia's QA City. Looking forward to your comments.


Same article quoted below:


In these days, when we talk and think more and more about how to achieve "Continuous Delivery" in our software projects, Test Automation plays an even more crucial role.

To reap the benefits of test automation, you want to run it as often as possible. However, just putting your test automation jobs in some CI tool like Hudson / Jenkins / GO / etc. and having them run every so often is of little value unless the tests are passing, or the failures are identified and analyzed immediately AND proper action is taken based on the failures.

If there are quite a few failures / jobs, the test failure analysis and test maintenance activity takes a lot of time. As a result, the development / product / project team may start losing confidence in the automation suite because the CI always shows the jobs in red. Eventually, test automation may lose priority and value, which is not a good sign.

Before I explain a technique that may help keep your test suites "green" - and reduce the test failure analysis and maintenance time, let us understand why we get into this problem.

I have seen functional tests failing for 3 main reasons:

1. The product has undergone some "unexpected change". As a result, the test has caught a regression bug as the product has changed when it was not supposed to.
2. The product has undergone some "expected" change and the test has not yet been updated to keep up with the new functionality.
3. There is an intermittent issue - maybe related to environment / database / browser / network / 3rd party integration / etc.
Regardless of the reason, if there is even 1 failure in your CI job, the whole job fails and turns "red".

This is painful and more importantly, this does not provide the correct picture of the health of the system.

To determine the health of the system, we now need to:

• Spend dedicated time per test run to ensure the failures in the jobs are analyzed and accounted for,
• In case of genuine failures, defects are reported against the product, or,
• In case of test failures based on expected product changes, update the tests to be in accordance with the new functionality, or,
• In case of intermittent failures, rerun the test again to confirm the failure was indeed due to an intermittent issue.

This is not a trivial task to keep doing on every test run. So can something be done to keep your test suites green, and provide a true representation of the health of the product under test?

Here is a strategy which will reduce the manual analysis of your test runs and provide a better understanding of how the product conforms to what it's supposed to do:

Let's make some assumptions:


1. Say, you have 5 jobs of various types in your CI
2. Each job uses a specific tag / annotation to run specific types of tests.

Now here is what you do:

1. Create appropriate commands / tasks in your test framework to execute tests with a new "failing_tests" tag / annotation.
2. Create a new job in CI - "Failing Tests" and point it to run the tests with tag / annotation "failing_tests".
3. Analyze all your existing / earlier jobs; for all tests that have failed for any of the reasons mentioned earlier, comment out the original tag / annotation and instead add the tag / annotation "failing_tests" to such tests (see the sketch below).
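
The article is framework-agnostic, so here is a minimal sketch of step 3, assuming a Java / TestNG suite (Cucumber tags or JUnit categories work the same way); the tests themselves are made up:

import org.testng.annotations.Test;

public class CheckoutTests {

    // A healthy test: keeps its original tag and runs in its usual CI job.
    @Test(groups = {"regression"})
    public void orderTotalIncludesTax() {
        // ... test body ...
    }

    // A known failure (say, an open defect): the original tag is commented out
    // and replaced with "failing_tests", so only the special "Failing Tests"
    // CI job picks this test up.
    // @Test(groups = {"regression"})
    @Test(groups = {"failing_tests"})
    public void discountAppliedToOrderTotal() {
        // ... test body ...
    }
}

The "Failing Tests" CI job then runs only that group - for example, with TestNG under Maven Surefire, something like mvn test -Dgroups=failing_tests can drive it (assuming your build supports group filtering).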

Run all the tests again, and now the following should be seen:

• The above steps ensure that the tests that pass will continue to pass, with the added benefit of making the CI job green
• The tests that fail will continue to fail - but in another, special "Failing Tests" CI job. 
• As a result, all the original 5 jobs you had in CI will now turn GREEN, and you just need to monitor the "Failing Tests" job.

This means your test analysis effort has been reduced from 5 jobs to just 1 job. 

When a failing test passes, replace the "failing_tests" tag with the original tag.

If you want to categorize the failing tests in a better way, you could potentially create separate category "Failing Tests" jobs like:

• "Failing Tests - Open Defects"
• "Failing Tests - Test updates needed"
• "Failing Tests - Intermittent / environment issues"

Regardless of your approach, the solution should be simple to implement, and you should be saving time at the end of the day, to focus on more important testing activities, instead of just analyzing the test failures.

One of my colleagues asked:
"What if a smoke test is failing? Should we move that also to a Failing Tests job?"


My answer was: 


"As with most things, you cannot apply one rule for everything. In this case also, you should not apply one strategy to all problems. As each problem is different in nature, you need to create a correct strategy that solves the problem in the best possible way.

That said, fundamentally, the smoke suite should always be "green". If there is any reason it is not, then we need to stop everything else, and make sure this is a passing test suite.

However, if you have various jobs representing the smoke suite, then you could potentially create a "Smoke - Failing Suite" along the lines mentioned above, IF that helps reduce time wasted in test result analysis and provides the correct product health representation quickly and consistently."

To summarize:

• Create a failing tests CI job and run all the failing tests as part of this job
• All existing CI jobs should turn "green"
• Monitor the failing tests and fix / update them as necessary
• If any of the passing tests fail at any point, first move them to the "Failing Tests" job to ensure the other jobs remain "green"
• When a failing test passes, move that test back from the "Failing Tests" job to the original job.

I have been profiled

SiliconIndia's QA City portal has put up my career profile on their site. You can see that here.

Monday, April 30, 2012

Theoretical Vs Practical knowledge

http://dilbert.com/strips/comic/2012-04-30/

Funny ... but on the other hand, at times this is true. Just because something has been written about, does not necessarily mean it is always true. Things change, evolve, and we need to change and move accordingly. At times, we need to flow against the tide for what we think and believe is the right thing to do.


This definitely applies to what I have seen in my career so far ... so keep thinking in innovative and creative ways - even if at times you have to swim against the tide!

Multi-tasking .... good or bad?

Many a time I end up trying to do too many things at almost the same time. I have had mixed results with this approach.


I think, of late, more often than not, I have not been too successful at juggling many things together ... this could be because of mental fatigue and burnout. 


As a result, I have consciously tried to step away from items of relatively lower priority. This has helped me tremendously. Also, I came across this post (http://blogs.hbr.org/schwartz/2012/03/the-magic-of-doing-one-thing-a.html), which talks about techniques for being more effective in your work. See if this helps you too!

Friday, April 20, 2012

vodqa Pune (17th Mar '12) videos, pictures, slides, feedback ...

Dear Testing enthusiasts,

Our recently concluded vodQA, organized by ThoughtWorks Pune on 17th March 2012 (http://testing.thoughtworks.com/events/testing-and-beyond), was a huge success. Your participation, energy, questions, thoughts, comments and feedback raised the level of this event to great heights! We thank you for that.

You can join the following groups to keep up-to-date with what’s happening in vodQA, to know when the next vodQA is happening, to connect with fellow testing enthusiasts, share thoughts related with testing, etc.

LinkedIn group: vodQA (http://linkd.in/IaHcFG)
Facebook group: vodQA (http://on.fb.me/HUPAX9)
Twitter: @vodqa

Here is what happened in this 7th edition of vodQA (4th in Pune):

Statistics:
375+ attendee registrations
35+ speaker registrations
130+ attendees

All videos from this edition of vodQA are available here: http://bit.ly/JRRTtH, and pictures are available here: http://on.fb.me/Jq4Qyh

Based on feedback received, here are the topics that attendees found most useful:
- Open Space
- Mobile Testing
- BDT  [ Behavior Driven Testing ]

Summary of feedback received (video feedback: http://bit.ly/Jm88Qd):
- Impressive, please continue conducting such events
- I attended vodQA for the first time, it is an amazing experience
- Nice initiative taken by ThoughtWorks
- Have this event once a quarter
- Parallel tracks (Participants found it difficult to choose one session over the other)
- Each speaker should have a Slide with email and contact details
- Should have more experienced speakers
- More time for Open Space
- Make arrangements for Parking
- Food can be improved

Want to hear more of:
- Cloud Computing, Security Testing, Malware, Ethical Hacking, Agile methodologies, mobile tech
- Current industry trends and topics

Suggestions given by attendees for next vodQA: http://bit.ly/HXGZ9I


External blogs by:
Anand Bagmar: http://bit.ly/HU78r5
Savita Munde: http://bit.ly/FOS6ll
Srinivas Chillara: http://bit.ly/IaVRAB

Sessions/Topics details:
- vodQA opening
YouTube: http://bit.ly/Jb3MPl

- Opening note by Chaitanya Nadkarny
YouTube: http://bit.ly/JckWcR

- Quiz
YouTube: http://bit.ly/I8o5sv

- Testing is Dead. Long Live Testing - Shrinivas Kulkarni
Synopsis: Last year, three leading software doctors pronounced testing dead. As we mourn the alleged demise of our craft, questions arise as to what to do next. This talk analyses the meaning and impact of the death of testing, then deliberates on potential next steps and the challenges ahead of us.
YouTube: http://bit.ly/JpXI4O
Slides: http://slidesha.re/IrBL4o

- Testing a Massively Multi-player Online Game Server - Nirmalya Sengupta, Srinivas Chillara
Synopsis: An online game server's functional and non-functional features lead to non-standard challenges for both architecting and testing. This talk starts with an overview and then discusses one testing scenario in depth. We particularly stress testing the asynchronous nature of the application's method calls. A few general approaches to testing such applications are alluded to at the end.
YouTube: http://bit.ly/HWO1NP
Slides: http://slidesha.re/JRZKHQ

- Virtualization Impact on Software Testing - Parthasarthi T
Synopsis: Virtualization Impact on Software Testing
YouTube: http://bit.ly/HWjFrj
Slides: http://slidesha.re/Jq4bNe

- Mobile Testing: Challenges and Solutions - Ashwini Phalle
Synopsis: Different testing requirements that mobile applications have, challenges and solutions.
Challenges:
1. Complex mobile testing matrix, Expensive test environment
2. Repetitive testing
3. Mobile testing for devices located at various locations
Solutions:
1. Risk Based Testing approach
2. Using Mobile device emulators
3. Use of Automation tools
4. Leveraging external services
YouTube: http://bit.ly/Jb4hZw
Slides: http://slidesha.re/Jb9Rer

- Open Space discussions
YouTube: http://bit.ly/Jb4k7O

- The Marshmallow Challenge - Sneha Kadam
Synopsis: This is a fun and instructive design exercise that encourages teams to experience simple but profound lessons in collaboration, innovation & creativity. It challenges you to find hidden assumptions in business requirements & learn to Fail-Fast-Fail-Often! In 18 minutes, teams must build the tallest free-standing structure out of spaghetti, tape, string and one marshmallow which must be on top.
YouTube: Part 1: http://bit.ly/HR4XiX
YouTube: Part 2: http://bit.ly/J7t8zJ

- Mobile Testing: In and Out - Sudeep Somani
YouTube: http://bit.ly/JifEQH

- BDT (Behaviour Driven Testing) - Anand Bagmar
Synopsis: What is Behavior Driven Testing (BDT)? How does it differ from Behavior Driven Development? What tools support this kind of testing? The value proposition BDT offers.
YouTube: http://bit.ly/HUSuuY
Slides: http://slidesha.re/I69BNK

- Code Coverage of Function Testing Automation Scripts - Aakash Tyagi
Synopsis: Challenge: The product is vast, so its regression suite was very big. It was taking about 14 days to execute, and with every release it kept increasing. The main challenge was to keep the regression suite comprehensive yet small enough that it could be executed many times. Solution: Emma was used to find the code coverage of the product code, and the regression suite was then redesigned.
Slides: http://slidesha.re/HWQJCQ

- Negative Testing, in a positive vein - Srinivas Chillara
Synopsis: How to think about "negative testing", and why it may not be truly negative.
YouTube: http://bit.ly/IaKOr4

- Virtual Communication and Testers - Archana Dhingra
Synopsis: What virtual communication is and its importance in the IT industry; the common mistakes we all commit while communicating in a virtual environment; how to effectively manage and communicate with virtual teams; conflict resolution in a virtual setup.
YouTube: http://bit.ly/Jq3uUc

- Automation Reusable Framework based on QC - Vysali Alaparthi
Synopsis: About ART: A hybrid framework named ART (Automation Reusable Test) is used for end-to-end automation; the ART framework supports automation of web, Windows, and AS/400 applications. ART uses HP's Quick Test Professional (QTP) to execute automated keyword-driven test scripts. Key achievement: the effort involved in test case / script integration has been reduced.
YouTube: http://bit.ly/Jlb8MH
Slides: http://slidesha.re/IrBtua

Closing note: Shalabh Varma
YouTube: http://bit.ly/Jb9fFO

Looking forward to your continued comments, feedback, thoughts and support to make vodQA more successful, and the QA community more vibrant and connected!

See you in the next vodQA.

Thank you.

vodQA Team.
vodqa-pune@thoughtworks.com

Thursday, March 22, 2012

vodQA Pune - another big success


ThoughtWorks Pune organized another successful vodQA on 17th March 2012.

Statistics:
375+ attendee registrations
35+ speaker registrations
100+ external attendees
20+ Thoughtworkers

Sessions/Topics covered:
- Code Coverage of Function Testing Automation Scripts - Aakash Tyagi
- Mobile Testing: Challenges and Solutions - Ashwini Phalle
- Business Analysis - Beyond Technical & Communication Skills - Anil Dagia
- Testing is Dead. Long Live Testing - Shrinivas Kulkarni
- Testing a Massively Multi-player Online Game Server - Nirmalya Sengupta, Srinivas Chillara
- Behaviour Driven Testing - Anand Bagmar
- Virtualisation Impact on Software Testing - Parthasarthi T
- Negative Testing, in a positive vein - Srinivas Chillara
- Virtual Communication and Testers - Archana Dhingra
- Automation Reusable Framework based on QC - Vysali Alaparthi

Workshops:
- Mobile Testing: In and Out - Sudeep Somani
- The Marshmallow Challenge - Sneha Kadam

Some feedback received:
  • Impressive, please continue conducting such events
  • I attended vodQA for the first time, it's an amazing experience
  • Some forum about Innovation & "Testing process"
  • Nice initiative taken by ThoughtWorks
  • Have this event once a quarter



Please join our LinkedIn group - vodQA and our Facebook group - vodQA.

We will be uploading the pictures, videos and slides to a common place soon for everyone to see.

Thank you all who attended for making this a successful event.

Monday, January 23, 2012

vodQA Chennai starts off with a century!

I attended vodQA in Chennai on 21st Jan 2012. The event was great. Over 100 passionate testers from the Chennai testing community turned up and made sure the presenters were on their toes with excellent questions and great interactions.


One of the participants already blogged about it here. She raises valid observations - and I wish I had the opportunity to speak with her directly to address some of the questions / concerns she raised. 


In this session, I presented a topic - "What is WAAT?" based on my open-source project - WAAT. The slides used in this session are available here.


I am already looking forward to the next vodQA in Chennai. For now, I am preparing for vodQA Bangalore - "Agile Testing for Teams and Enterprises" on Feb 11, and then vodQA Pune on Mar 17.

Tuesday, January 3, 2012

[Date Updated] Announcing the next vodQA event in Pune

[Date updated to 17th March 2012]


After a long delay, ThoughtWorks is happy to announce the next edition of vodQA - THE TESTING SPIRIT! on 17th March, 2012 in ThoughtWorks Pune office.


Watch this space for more details.


Contact me if you are interested in helping make this a great event!


Thanks.


Anand

Assertions and Validations in Page Objects

A colleague recently asked me a very nice set of questions - 

  • "Have any of you designed tests with assertions happening in page objects and not in the tests? 
  • If yes, have you faced any specific problem with this approach? 
  • What would be the drawback in moving the assertions inside the page's methods?"

Here are my thoughts on this.

Test Framework Design

I follow a few principles when designing my test framework:
  • Test code should be of Production Quality!
  • Since the test code should be of Production Quality, it means it needs to be designed and built using design patterns.
  • This well-designed test framework should have proper abstraction layers. These abstraction layers help in many different ways:
    • Decouple test specification from test implementation
    • Provide a greater level of re-usability
    • Make re-factoring easier
    • Make the framework easier to maintain and evolve
    • Make it easier for all team members to ramp up and work collaboratively on specific abstraction layers.
  • Evolve your framework functionality and implementation. Keeping the end goal in mind, develop the framework as per requirements at that point in time. Do not attempt to build all the functionality in a single shot. More likely than not, you will end up building something that is not going to satisfy the future requirement.
See the diagram below for reference on different possible layers of a Test Framework.



What is Page-Object pattern?


Page-Object pattern is one of the powerful ways of designing a good, reusable, extensible and maintainable test framework. 

This article has a great explanation and examples of Page Objects (http://code.google.com/p/selenium/wiki/PageObjects).

Assertions and validations in Page Objects?

The page object is a code representation of the actual page. It has accessors and modifiers (getters and setters) for various objects in that page. It only knows how to perform actions on the page and retrieve data / values from it. 
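
As a minimal sketch of such a page object, assuming Selenium WebDriver in Java (the page, its locators and its methods are all made up for illustration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// A page object: performs actions on the page and reads values from it,
// but contains no assertions and no business rules.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Modifier: performs an action on the page.
    public void loginAs(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }

    // Accessor: reports what the page shows; the layer above decides
    // whether that value is right or wrong.
    public String errorMessage() {
        return driver.findElement(By.cssSelector(".error")).getText();
    }
}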


If there is any problem in the page under test, or the page object representing the page, then the test will fail automatically (in most cases) because of functionality mismatch. 


Assertions / verifications are essentially business rules for the product under test.


The page object should not have assertions or verifications in it. The business rules of the product do not belong in the page object layer, but in the layer above it - as the sketch below illustrates.
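
Continuing the sketch above, the assertion then lives in the test layer, not in LoginPage (again, the names, URL and expected message are made up):

import static org.junit.Assert.assertEquals;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginTest {
    private WebDriver driver;

    @Before
    public void openLoginPage() {
        driver = new FirefoxDriver();
        driver.get("http://localhost:8080/login"); // hypothetical URL
    }

    @After
    public void closeBrowser() {
        driver.quit();
    }

    @Test
    public void invalidPasswordShowsAnError() {
        LoginPage page = new LoginPage(driver);
        page.loginAs("anand", "wrong-password");

        // The business rule (what the error text must be) is asserted here,
        // in the test layer - not inside LoginPage.
        assertEquals("Invalid username or password", page.errorMessage());
    }
}

With this split, if the error message changes only the test changes, and if the page structure changes only LoginPage changes.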

In many cases the business rules remain the same whereas the underlying implementation evolves. If you have the changes isolated and decoupled, then updating the framework becomes easier and much quicker. This also makes more sense in larger and distributed projects where everyone may not be on the same page.



I refrain from having any form of assertions in the page object. Doing so mixes the pure implementation of the page's functionality with the business logic, which in turn makes both the framework and the tests brittle.

The impact of having these rules and guidelines for how the test framework is structured shows most clearly as the framework matures: when new people come onto the team, the learning curve is not as steep.

If the project is small, or if the test framework is going to be thrown away after some time, then you can probably get away with building the framework any which way you want. 



Thinking "I will take the easy way out for now and come back later to do this right" is a trap!!! More often than not, you are never going to get time to come back and make things right. So you might as well spend a little extra effort in the beginning and build your test framework the right way! Remember - "A stitch in time saves nine"!

Tuesday, November 29, 2011

Effective Strategies for Distributed Testing - slides available

I finally figured out a way around the problem PowerPoint was giving me in converting the slides for this presentation to PDF format. 


The slides are now uploaded on slideshare.net and available here.


The video recording of the webinar is available here.

Wednesday, November 16, 2011

vodQA - NCR

I am very happy and proud to share that the vodQA conference is going national. 


ThoughtWorks will be hosting its first vodQA outside Pune ... this time in Gurgaon on 3rd December 2011.


I will be pairing with Manish Kumar on the topic "Distributed Agile Testing for Enterprises".


See this page for more information.

Thursday, November 10, 2011

Effective Strategies for Distributed Testing - recording available

On 9th November, 2011, I presented my first webinar with my colleague - Manish Kumar, on the topic "Effective Strategies for Distributed Testing".


We spoke based on our experiences and shared tips and strategies on various themes, like:
- agile testing principles that matter!
- need of distributed teams,
- testing challenges in distributed teams,
- mindset, 
- communication, 
- collaboration, 
- tools & technologies,
- ATDD,
- test automation,
- and many others ...


All distributed teams are working on the same product ... hence it is extremely important to treat all these distributed teams as ONE TEAM!


Watch the video recording of this webinar, available here. 


(http://testing.thoughtworks.com/events/effective-strategies-distributed-testing).

Sunday, November 6, 2011

Effective Strategies for Distributed Testing - webinar

Come, join Manish Kumar and me for a webinar on 9th November, 2011 on "Effective Strategies for Distributed Testing". 


We will be sharing tips and techniques on how you can make testing in distributed teams more effective.


More details on the webinar are available here.

Thursday, September 22, 2011

Asking the right question


This Dilbert strip says it all!!


For those not able to see the link correctly, here it is:

If you don't ask the right question, the topic can digress in any direction ... resulting in a waste of time and effort for all involved.


So remember - it's not just important to ask questions; it is more important to ask the right questions!!

Monday, August 29, 2011

Does extrapolation in estimation work?

How do you estimate test automation efforts for building a regression suite for functionality that is already in Production?


The approach I am following is:
> Get some level of understanding about the domain
> Look at a subset of existing test cases of varying complexities and sizes
> Estimate the effort for each of them and also identify the complexity involved in each, with the risks and assumptions.
> Identify some obvious spikes / tech tasks that will be needed
> Extrapolate the estimates (see the toy example below)
> Add 10-15% buffer in the estimates for unknowns.
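
To make the extrapolation step concrete, here is a toy calculation with entirely made-up numbers (the real work, of course, is in estimating a representative sample):

public class AutomationEstimate {
    public static void main(String[] args) {
        int sampledTests = 20;          // subset of test cases actually estimated
        double sampleEffortDays = 15.0; // estimated effort for that subset
        int totalTests = 200;           // size of the full regression suite

        double extrapolated = sampleEffortDays / sampledTests * totalTests; // 150 days
        double withBuffer = extrapolated * 1.15; // add a 15% buffer for unknowns

        System.out.printf("Extrapolated: %.0f days; with buffer: %.1f days%n",
                extrapolated, withBuffer);
    }
}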


And presto!! .... I have my estimates for the task on hand.


Is this approach right? 
Is extrapolating the estimates going to give valid results?
What can be different / better ways of getting the results required?

Thursday, August 18, 2011

To test or not to test ... do you ask yourself this question?

For people involved in Test Automation, I have seen that quite a few of us get carried away and start automating anything and everything possible. This inherently happens at the expense of ignoring / de-prioritizing other equally important activities like manual (exploratory / ad-hoc) testing.

Also, as a result, the test automation suite gets very large and unmaintainable - and, in all likelihood, not very usable, with a long feedback cycle.

I have found a few strategies work well for me when I approach Test Automation:
  • Take a step back and look at the big picture.
  • Ask yourself the question - "Should I automate this test or not? What value will the product get by automating this?"
  • Evaluate which test automation will truly provide good and valuable feedback.
  • Based on the evaluation, build and evolve your test automation suite.
One technique that is simple and quick for evaluating which tests should be automated is to do a Cost vs Value analysis of your identified tests using the graph shown below.


This is very straightforward to use.

To start off, analyze your tests and categorize them in the different quadrants of the above graph.
  1. First automate tests that provide high value, and low cost to build / maintain = #1 in the graph. This is similar to the 80/20 rule.
  2. Then automate tests that provide high value, but have a high cost to build / maintain = #2 in the graph.
  3. Beyond this, IF there is more time available, then CONSIDER automating tests that have low value, and low cost. I would rather better utilize my time at this juncture to do manual exploratory testing of the system.
  4. DO NOT automate tests that have low value and high cost to build / maintain.

Thursday, August 11, 2011

How do you create / generate Test Data?

Testing (manual or automated) depends upon data being present in the product.


How do you create Test Data for your testing?


• Manually create it using the product itself?
• Directly use SQL statements to create the data in the DB (requires knowledge of the schema)?
• Use product APIs / developer help to seed data directly (using product factory objects - see the sketch below)?
• Use a production snapshot / production-like data?
• Any other way?
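
As an illustration of the factory-object approach from the list above, here is a purely hypothetical sketch - none of these names come from a real product:

// Everything here is invented for illustration: "User", "UserFactory" and the
// builder methods are hypothetical stand-ins for your product's own APIs.
class User {
    final String name;
    final String plan;

    User(String name, String plan) {
        this.name = name;
        this.plan = plan;
    }
}

class UserFactory {
    private String name = "test-user";
    private String plan = "basic";

    static UserFactory aUser() {
        return new UserFactory();
    }

    UserFactory withName(String name) {
        this.name = name;
        return this;
    }

    UserFactory withPlan(String plan) {
        this.plan = plan;
        return this;
    }

    User create() {
        // In a real product this would persist the record through the
        // application's own persistence layer, keeping knowledge of the DB
        // schema out of the tests.
        return new User(name, plan);
    }
}

A test can then ask for exactly the data it needs - e.g. UserFactory.aUser().withPlan("premium").create() - without knowing anything about the underlying schema.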

How do you store this test data?


• Along with the test (in code)?
• In separate test data files - e.g. XML, YAML, or other types of files?
• In some specific tools like Excel?
• In a separate test database?
• Any other way?

Do you use the same test data that is also being used by the developers for their tests?


What are the advantages / disadvantages you have observed in the approach you use (or any other methods)?


Looking forward to knowing your strategy on test data creation and management!

Thursday, August 4, 2011

Lessons from a 1-day training engagement - a Trainer perspective

I was a Trainer for a bunch of smart QAs recently. The training was about "Agile QA". This term is very vague, and if you think about it more deeply, it is also a very vast topic.

The additional complexity was that this training was to be done in 1 day.

So we broke it down to what really needed to be covered in this duration, and what could be covered in it.

We came down to 2 fundamental things:
1. How can the QA / Test team be agile and contribute effectively to Agile projects?
2. How can you be more effective in Test Automation - and start this activity in parallel with Story development?

So we structured our thoughts, discussions and presentations around this.

At the end of the exhausting day (both for the Trainers as well as the Trainees), a few things stood out (not in any particular order):
• What is the "right" Agile QA process?
• Roles and responsibilities of a QA on an Agile team
• Sharing case studies and the good & not-so-good experiences around them
• Effectiveness of process and practices
• Value of asking difficult questions at the right time
• Taboo game - playing it, and reflecting on its learnings
• What to automate and what NOT to automate?
• Discussion around - How do you create / generate Test Data?

Tuesday, July 26, 2011

WAAT-Ruby - ready for use

WAAT-Ruby is now ready for use.

Project hosted on github - http://github.com/anandbagmar/WAAT-Ruby
WAAT-Ruby gem available for download from here.
Documentation is available (on the WAAT-Ruby wiki pages) here.

Since WAAT-Ruby uses WAAT-Java under the covers, I have kept the same version numbers for both platforms. The latest version is 1.4.

I have not yet pushed it out on rubygems.org. Will update once that is done.

So far I have tested this on the following environments:
• Windows 7 - 64-bit with Ruby 1.8.6
• RHEL 6 - 64-bit with Ruby 1.8.6 (I had difficulty getting Jpcap deployed on this environment, but once that was done, WAAT worked smoothly out of the box)
• Ubuntu 10.x - 32-bit with Ruby 1.8.7
• Ubuntu 10.x - 32-bit with Ruby 1.9.1

One important note:
If you are using WAAT (Java or Ruby) on any Linux-based environment, please note the Jpcap requirement for execution.
WAAT uses Jpcap for capturing network packets. Jpcap needs administrative privileges to do this work. See the Jpcap FAQs for more information.


For all WAAT related blog posts, click here.