Tuesday, November 29, 2011
Effective Strategies for Distributed Testing - slides available
Wednesday, November 16, 2011
vodQA - NCR
I am very happy and proud to share that vodQA conference is going national.
ThoughtWorks will be hosting its first vodQA out of Pune ... this time in Gurgaon on 3rd December 2011.
I will be pairing with Manish Kumar on the topic "Distributed Agile Testing for Enterprises"
See this page for more information.
Thursday, November 10, 2011
Effective Strategies for Distributed Testing - recording available
On 9th November, 2011, I presented my first webinar with my colleague - Manish Kumar, on the topic "Effective Strategies for Distributed Testing".
We spoke based on our experiences and shared tips and strategies on various themes, like
- agile testing principles that matter!
- need of distributed teams,
- testing challenges in distributed teams,
- mindset,
- communication,
- collaboration,
- tools & technologies,
- ATDD,
- test automation,
- and many others ...
All distributed teams are working on the same product ... hence it is extremely important to treat all these distributed teams as ONE TEAM!
The video recording of this webinar is available here.
(http://testing.thoughtworks.com/events/effective-strategies-distributed-testing).
Sunday, November 6, 2011
Effective Strategies for Distributed Testing - webinar
Come, join Manish Kumar and me for a webinar on 9th November, 2011 on "Effective Strategies for Distributed Testing".
We will be sharing tips and techniques on how you can make testing in distributed teams more effective.
More details on the webinar are available here.
Thursday, September 22, 2011
Asking the right question
This Dilbert strip says it all!!
For those not able to see the link correctly, here it is:
If you don't ask the right question, the topic can digress in any direction ... resulting in waste of time and effort of all involved.
So remember - it's not just important to ask questions, it is more important to ask the right questions!!
Monday, August 29, 2011
Does extrapolation in estimation work?
How do you estimate test automation efforts for building a regression suite for functionality that is already in Production?
The approach I am following is:
> Get some level of understanding about the domain
> Look at a subset of existing test cases of varying complexities and sizes
> Estimate the effort for each of them, and also identify the complexity involved in each, along with the risks and assumptions.
> Identify some obvious spikes / tech tasks that will be needed
> Extrapolate the estimates
> Add a 10-15% buffer to the estimates for unknowns.
And presto!! .... I have my estimates for the task on hand.
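As a rough Ruby sketch of the extrapolation and buffer steps above (all numbers here are hypothetical, purely for illustration):

```ruby
# Hypothetical sample: effort (in hours) estimated for a subset of
# existing test cases, grouped by complexity.
SAMPLE_ESTIMATES = {
  simple:  [1.0, 1.5, 2.0],
  medium:  [3.0, 2.5],
  complex: [6.0, 8.0],
}

# Hypothetical counts of tests of each complexity in the full suite.
TOTAL_COUNTS = { simple: 60, medium: 30, complex: 10 }

BUFFER = 0.15  # the 10-15% buffer for unknowns

# Extrapolate: average sampled effort per complexity, multiplied by the
# total count of tests of that complexity, with the buffer added on top.
def extrapolated_estimate(samples, counts, buffer)
  base = samples.sum do |complexity, efforts|
    average = efforts.sum / efforts.size
    average * counts[complexity]
  end
  base * (1 + buffer)
end

puts extrapolated_estimate(SAMPLE_ESTIMATES, TOTAL_COUNTS, BUFFER)
# => roughly 278.9 hours for these made-up numbers
```

The quality of the extrapolation obviously depends on how representative the sampled subset is of the full suite.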
Is this approach right?
Is extrapolating the estimates going to give valid results?
What can be different / better ways of getting the results required?
Thursday, August 18, 2011
To test or not to test ... do you ask yourself this question?
For people involved in Test Automation, I have seen that quite a few of us get carried away and start automating anything and everything possible. This inherently happens at the expense of ignoring / de-prioritizing other equally important activities, like Manual (exploratory / ad-hoc) testing.
Also, as a result, the test automation suite gets very large and unmaintainable, and in all probability, not very usable, with a long feedback cycle.
I have found a few strategies work well for me when I approach Test Automation:
- Take a step back and look at the big picture.
- Ask yourself the question - "Should I automate this test or not? What value will the product get by automating this?"
- Evaluate which tests, when automated, will truly provide good and valuable feedback.
- Based on the evaluation, build and evolve your test automation suite.
One technique that is simple and quick for evaluating which tests should be automated is to do a Cost Vs Value analysis of your identified tests using the graph shown below.
This is very straightforward to use.
To start off, analyze your tests and categorize them in the different quadrants of the above graph.
- First automate tests that provide high value, and low cost to build / maintain = #1 in the graph. This is similar to the 80/20 rule.
- Then automate tests that provide high value, but have a high cost to build / maintain = #2 in the graph.
- Beyond this, IF there is more time available, then CONSIDER automating tests that have low value, and low cost. I would rather better utilize my time at this juncture to do manual exploratory testing of the system.
- DO NOT automate tests that have low value and high cost to build / maintain.
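The prioritization above can be sketched in code. This is a hypothetical Ruby illustration (the test names, costs, and values are made up, and this is not part of any framework):

```ruby
# Each test is classified by the value its automation provides and the
# cost to build / maintain it, each :high or :low.
Test = Struct.new(:name, :value, :cost)

# Map a test to its quadrant in the Cost Vs Value graph.
def quadrant(test)
  case [test.value, test.cost]
  when [:high, :low]  then 1  # automate first (the 80/20 sweet spot)
  when [:high, :high] then 2  # automate next
  when [:low,  :low]  then 3  # only if time permits
  else                     4  # low value, high cost: do not automate
  end
end

tests = [
  Test.new("login smoke",          :high, :low),
  Test.new("end-to-end checkout",  :high, :high),
  Test.new("footer links",         :low,  :low),
  Test.new("legacy report export", :low,  :high),
]

# Build the automation backlog: quadrants 1 and 2 only, in that order.
to_automate = tests.select { |t| quadrant(t) <= 2 }.sort_by { |t| quadrant(t) }
to_automate.each { |t| puts t.name }
```

Quadrant 3 tests stay on the manual / exploratory list, and quadrant 4 tests are consciously dropped.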
Thursday, August 11, 2011
How do you create / generate Test Data?
Testing (manual or automation) depends upon data being present in the product.
How do you create Test Data for your testing?
- Manually create it using the product itself?
- Directly use SQL statements to create the data in the DB (required knowledge of the schema)?
- Use product APIs / developer help to seed data directly (using product factory objects)?
- Use production snapshot / production-like data?
- Any other way?
How do you store this test data?
- Along with the test (in code)?
- In separate test data files - eg: XML, yml, other types of files?
- In some specific tools like Excel?
- In a separate test database?
- Any other way?
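As an illustration of the "separate test data files" option, here is a small Ruby sketch. The file contents and field names are hypothetical; the YAML is inlined as a string so the sketch is self-contained, but normally it would live in something like test_data/users.yml and be read with YAML.load_file:

```ruby
require "yaml"

# Hypothetical test data, as it might appear in test_data/users.yml.
USERS_YML = <<~YAML
  admin_user:
    username: admin
    password: secret123
    role: admin
  read_only_user:
    username: viewer
    password: secret456
    role: viewer
YAML

test_data = YAML.safe_load(USERS_YML)

# A test can now pick the user it needs without hard-coding credentials.
admin = test_data["admin_user"]
puts admin["role"]  # prints "admin"
```

One advantage of this approach is that the data can change without touching test code; one disadvantage is that the data and the tests that use it can silently drift apart.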
Do you use the same test data that is also being used by the developers for their tests?
What are the advantages / disadvantages you have observed in the approach you use (or any other methods)?
Looking forward to knowing your strategy on test data creation and management!
Thursday, August 4, 2011
Lessons from a 1-day training engagement - a Trainer perspective
I was a Trainer for a bunch of smart QAs recently. The training was about "Agile QA". This term is very vague, and if you think about it more deeply, it is also a very vast topic.
The additional complexity was that this training was to be done in 1 day.
So we broke it down to what was really essential, and what could actually be covered, in this duration.
We came down to 2 fundamental things:
- How can the QA / Test team be agile and contribute effectively to Agile projects?
- How can you be more effective in Test Automation - and that too start this activity in parallel with Story development?
So we structured our thoughts, discussions and presentations around this.
At the end of the exhausting day (both, for the Trainers as well as the Trainees), a couple of things stood out (not in any particular order):
- What is the "right" Agile QA process?
- Roles and responsibilities of a QA on an Agile team
- Sharing case studies and the good & not-so-good experiences around them
- Effectiveness of process and practices
- Value of asking difficult questions at the right time
- Taboo game - playing it, and reflecting on its learnings
- What to automate and what NOT to automate?
- Discussion around - How do you create / generate Test Data?
Thursday, July 28, 2011
WAAT-Ruby - available on RubyGems and opensourcetesting
I am very happy to announce that WAAT is now available on RubyGems.org and is also linked from Open source functional testing tools.
Here are the links for the same:
Tuesday, July 26, 2011
WAAT-Ruby - ready for use
WAAT-Ruby is now ready for use.
Project hosted on github - http://github.com/anandbagmar/WAAT-Ruby
WAAT-Ruby gem available for download from here.
Documentation is available (on WAAT-Ruby wiki pages) here.
Since WAAT-Ruby uses WAAT-Java under the covers, I have kept the same version numbers for both platforms. The latest version is 1.4.
I have not yet pushed it out on rubygems.org. Will update once that is done.
So far I have tested this on the following environments:
- Windows 7 - 64-bit with Ruby 1.8.6
- RHEL 6 - 64-bit with Ruby 1.8.6 (I had difficulty getting jpcap deployed on this environment, but once that was done, WAAT worked smoothly out of the box)
- Ubuntu 10.x - 32-bit with Ruby 1.8.7
- Ubuntu 10.x - 32-bit with Ruby 1.9.1
One important note:
If you are using WAAT (Java or Ruby) on any Linux-based environment, please note the Jpcap requirement for execution.
WAAT uses Jpcap for capturing network packets. Jpcap needs administrative privileges to do this work. See the Jpcap FAQs for more information about the same.
For all WAAT related blog posts, click here.
Wednesday, July 20, 2011
WAAT - Ruby .... are we there yet?
The WAAT Ruby gem is almost ready. My colleagues are helping test it out in different environments, and I am updating the documentation accordingly.
Once done, this will be available as a Ruby gem from WAAT-Ruby github project, and also from rubygems.org.
Contact me if you are interested in trying this out before release.
Thursday, July 14, 2011
What is your expiry date?
Recently, while doing an online transaction using my credit card, something struck me ... I realized that the form asking for my credit card information was quite weird, and probably incorrect.
Here is a sample layout of what I am talking about:
Here, I am asked to enter the details in this order:
- Credit card number
- Card holder's name
- Expiry date
- And so on ...
As I was entering the information, I ended up questioning myself ... whose Expiry Date??? The card's or mine???
Simply based on the flow of information asked for, it is quite easy to associate the Expiry Date with the earlier field - the Card holder's name. Right?
Wouldn't the layout be better this way instead:
- Card Holder name
- Card number
- Expiry date
- CVV number
Or, another way can be:
- Card number
- Expiry date
- CVV number
- Card Holder name
I checked all my 10-15 (credit / debit / membership) cards that I have. All of them have the issue date / expiry date / validity period associated with the card number, and not the card holder's name.
This leads me to believe that no one did a usability check (or, in this context, shall we call it a reality check?) when designing a credit card form like the one shown above.
I would not have let this design / layout get through. What would you do?
Thursday, July 7, 2011
Ruby Test Automation Framework Technology Stack Survey
WAAT - Web Analytics Automation Testing Framework is currently available for Java-based Test Automation Frameworks. (http://essenceoftesting.blogspot.com/search/label/waat)
I am now working on making this available as a Ruby gem.
In order to support WAAT on a good mix of test environments, I would like to understand the different environments and technology stacks that are typically used by teams in their Test Automation Frameworks.
Thank you for your time and help in providing this information.
Wednesday, July 6, 2011
RubyMine (and Cucumber) caching issue
I use RubyMine to write and implement my Cucumber features on Linux.
I have noticed one weird behavior at times.
Though my step definition is correct, and the test also runs fine, RubyMine flags the step as not implemented. For some reason, it is not able to find the corresponding implementation in the .rb step definition file.
On a hunch, I selected "Invalidate Cache" in RubyMine's File menu, and chose the "Invalidate and Restart" option. Presto .... things started working properly again.
Now I am wondering why the RubyMine cache got messed up in the first place .....
Monday, June 27, 2011
WAAT for Ruby on its way
I have started work on creating a Ruby gem for WAAT. This is going to sit on top of the version created for Java. Hopefully will be able to get it out soon.
Watch this space for more information.
Friday, June 24, 2011
Test Trend Analyzer (TTA)
There are many tools and utilities that provide ways to do test result reporting and analysis of those results. However, I have not found a good, generic way of doing some Trend Analysis of those results.
Why do I need Trend Analysis of the test results?
Long(er) duration projects / enterprise products need to know the state of the quality of the product over time. One may also need to know various other metrics around the testing - like number of tests, pass %, failure %, etc. - over time.
The reports I have seen are very good about analyzing the current test results. However, I have not really come across a good generic tool that can be used in most environments for the Test Trend Analysis over a period of time.
I am thinking about developing a tool which can address this space. I call this - the Test Trend Analyzer (TTA).
Here is what I think TTA should do:
Supports:
- Work with reports generated by common unit-test frameworks (JUnit, NUnit, TestNG, Test::Unit style of reports)
- Provides Web Service interface to upload results to TTA
- Test Results uploaded will be stored in db
- Will work on Windows and Linux
Dashboard:
- Creates default TTA dashboard
- Customizable TTA dashboard
- Dashboard will be accessible via the browser
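As an illustration of the kind of trend computation TTA could do over the stored results, here is a rough Ruby sketch. The result rows and field names are hypothetical; in TTA they would come from the database that uploaded reports are parsed into:

```ruby
# One row per uploaded test run (hypothetical data).
RunResult = Struct.new(:date, :total, :passed)

runs = [
  RunResult.new("2011-06-01", 200, 180),
  RunResult.new("2011-06-08", 220, 187),
  RunResult.new("2011-06-15", 240, 228),
]

# Pass % per run over time - exactly the series a dashboard could plot.
trend = runs.map { |r| [r.date, (100.0 * r.passed / r.total).round(1)] }
trend.each { |date, pct| puts "#{date}: #{pct}%" }
```

A dip in this series between two dates is the trigger to go and look at what changed in the product (or in the tests) in that window.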
My questions to you:
- Do you think this will help?
- What other features would you like to see in TTA?
- What other type of support would you like to have in TTA?
Tuesday, June 21, 2011
WAAT and HTTPS
While most sites use http to report tags to the web analytic tool, there are some cases where http is disabled and all traffic is using https only.
In such cases, there may be a problem in using the generic solution provided by WAAT.
I did some research, analysis and experimentation and here are my findings:
- jpcap captures raw packets. It does not differentiate about http / https
- There is no problem in WAAT itself. All it does is match packets based on patterns you specify in the tests.
- Since the requests are https-based, WAAT is not able to match the packets, unless you specify encrypted packet identifiers and encrypted data in the xml file. Tools like Firebug / Fiddler / Ethereal / Wireshark / Charles / Burp do something extra in this regard to decode the packet information and show the raw content in the browser / tool.
So the question is what can be done next?
- If it is possible for you to get the configuration in the test environments changed so that the web analytics request is sent out on http (maybe along with https), that can resolve the issue. Once in a while, you can then verify manually that the requests are going out on https.
- You can use Omniture Debugger - but the limitation in your case is that it will be available for Omniture only and not the other web analytic tools.
- You can extend the HttpSniffer class (say, HttpsSniffer), and provide an implementation to decode the captured packets before doing the validation. However, note that this will be an expensive operation, as you will be decoding all the packets captured on the network interfaces of your machine, while the packet(s) of interest will be only a fraction of those captured.
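To make the second point above concrete, here is a rough Ruby sketch of pattern-based packet matching (this is not WAAT's actual code; the payloads and patterns are made up). An https payload is an encrypted TLS record, so a plain-text pattern can never match it:

```ruby
# Keep only the payloads that contain ALL of the specified patterns.
def matching_packets(payloads, patterns)
  payloads.select { |payload| patterns.all? { |pattern| payload.include?(pattern) } }
end

# A plain http web-analytics request, visible on the wire as-is.
http_payload = "GET /b/ss/?pageName=home&events=event1 HTTP/1.1"
# The "same" request over https: opaque encrypted bytes (binary string).
https_payload = "\x17\x03\x01\x00\x20\x8f\xa2\x1b".b

patterns = ["pageName=home", "events=event1"]

puts matching_packets([http_payload, https_payload], patterns).size  # prints 1
```

This is exactly why WAAT reports a failure on https traffic even though the tags are in fact being sent: the plain-text identifiers simply are not present in the encrypted bytes.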
Tuesday, April 19, 2011
WAAT (Web Analytics Automation Testing Framework) is alive!!!
[UPDATE] Check related post here (http://essenceoftesting.blogspot.com/2011/04/waat-web-analytics-automation-testing.html)
I am very happy to announce that the first release of WAAT is available for general use.
WAAT is hosted on github (https://github.com/anandbagmar/WAAT)
WAAT v1.3 can be used to test almost any type of Web Analytics solution. It has been tested with Google Analytics and Omniture. Note that this approach is platform dependent.
Binaries:
You can either get the code base from git@github.com:anandbagmar/WAAT.git, or, get the binaries available in the dist folder.
Documentation:
Documentation for using WAAT is available in various formats:
> WAAT Readme.docx
> WAAT Readme.doc
> WAAT Readme.pdf
> WAAT Readme.html
These files are available in the docs folder.
The documentation is also part of the binary file downloads.
I look forward to your usage and comments to make this better and more usable.
Saturday, April 16, 2011
WAAT release update
I am almost ready with my first public release of WAAT. A few finishing touches remain, which is causing the delay.
For those not aware, here is what WAAT is:
> WAAT stands for Web Analytics Automation Testing Framework
> Developed as a Java jar to be used in existing testing frameworks to do web analytics automation
> Phase 1: Implemented for Omniture using Omniture Debugger -> Status: Completed
> Phase 2: Can be used to test *almost any type of Web Analytic solution. Tested with Google Analytics and Omniture. This is platform dependent. -> Status: In progress. Documentation to be updated.
> Phase 3: Make WAAT available for Ruby / .Net testing frameworks. -> Status: To be started.
Look at my earlier post for more details on WAAT.
Wednesday, April 13, 2011
Interesting webinar coming up ... "Where Exploration and Automation meet: Leveraging..."
There is a very interesting and informative webinar on how to utilize automated functional testing within your organization.
This is scheduled for Thursday, April 21.
See this link for more information.
Saturday, April 9, 2011
Agile QA Process
After doing testing on multiple Agile projects, I have come to realize certain aspects about the process and techniques that are common across projects. Some things I have learned along the way; some by reflecting on the mistakes / sub-optimal things that I did.
I have written and published my thoughts around the "Agile QA Process", more particularly what techniques can be used to test effectively in the Iterations. The pdf is available here for your reading. (http://www.slideshare.net/abagmar/agile-qa-process)
Note: A process is something that should be tweaked and customized based on the context you are in. The process mentioned in the document should be taken in the same light.
Friday, April 8, 2011
WAAT - Web Analytics Automation Testing Framework
[UPDATE] See my post about how you can get WAAT here (http://essenceoftesting.blogspot.com/2011/04/waat-is-alive.html).
Problem statement:
On one of the projects that I worked on, I needed to test if Omniture reporting was done correctly.
The client relied a lot on Omniture reports to understand and determine the direction of their business. They have a bunch of Omniture tags reported for a lot of different actions on the site. Manual testing was the only way this functionality could be verified. But given the huge number of tags, it was never possible to be sure, on a regular basis, that all tags were being reported correctly.
So I came up with a strategy to remove this pain-point.
Approach:
I created a framework within our existing automation framework to do Omniture testing. The intentions in creating this framework were:
1. There is minimal impact on existing tests.
2. There should be no need to duplicate the tests just to do Omniture testing.
3. Should be easy to use (specify Omniture tags for various different actions, enable testing, etc.)
How did it help us?
1. We provided a huge and reliable safety net to the client and the development team by having Omniture testing automated.
2. Reduced the manual testing effort required for this type of testing, and instead got some bandwidth to focus on other areas.
Next Steps:
I am making this into a generic framework - a.k.a. WAAT - Web Analytics Automation Testing Framework to enable others doing Omniture testing to easily automate this functionality. This project will be hosted on github.
Phase 1 of this implementation will be for Omniture Debugger and input data provided in xml format. This framework will be available as a jar file.
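As an illustration of what the xml input could look like, here is a sketch. The element and attribute names below are hypothetical, not the framework's actual schema; see the WAAT documentation for the real format:

```xml
<!-- Hypothetical sketch: expected web analytics tags per user action -->
<webAnalyticTags>
  <action name="HomePageLoad">
    <tag name="pageName" value="home" />
    <tag name="events" value="event1" />
  </action>
  <action name="AddToCart">
    <tag name="pageName" value="cart" />
    <tag name="events" value="scAdd" />
  </action>
</webAnalyticTags>
```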
Phase 2, also now complete, includes support for any Web Analytics tool. I have tested this with Google Analytics as well as Omniture (NOT using Omniture Debugger). This uses a generic mechanism to capture packets from the network layer and process them appropriately. Given this generic approach to work with any Web Analytics tool, the framework does become OS dependent.
Watch this space for more information (instructions, links to github, etc). Also, please do contact me with ideas / suggestions / comments about the same.