Friday, November 22, 2013
Organising vodQA
My blog on "Organizing a successful meetup - Tips from vodQA" is now available from ThoughtWorks Insights page.
Monday, October 21, 2013
BDT in Colombo Selenium Meetup
[UPDATED again] Feedback and pictures from the virtual session on BDT
[UPDATED]
The slides and audio + slide recording have now been uploaded.
I will be speaking remotely about "Building the 'right' regression suite using Behavior Driven Testing (BDT)" at the Colombo Selenium Meetup on Wednesday, 23rd October 2013, at 6pm IST.
If you are interested in joining virtually, let me know, and if possible, I will get you a virtual seat in the meetup.
Saturday, October 12, 2013
vodQA Pune - Faster | Smarter | Reliable schedule announced
A very impressive and engrossing schedule for vodQA Pune scheduled for Saturday, 19th October 2013 at ThoughtWorks, Pune has now been announced. See the event page for more details.
I am going to be talking about "Real-time Trend and Failure Analysis using Test Trend Analyzer (TTA)".
Sunday, October 6, 2013
Offshore Testing on Agile Projects
Anand Bagmar
Reality of organizations
Organizations are now spread across the world. With this spread, having distributed teams is a reality. The reasons could be a combination of various factors, including:
- Globalization
- Cost
- 24x7 availability
- Team size
- Mergers and Acquisitions
- Talent
The Agile methodology describes various principles for approaching software development, and there are many practices that can be applied to achieve those principles. The choice of practices is significant in ensuring the success of the project. Some of the parameters to consider, in no particular order, are:
- Skillset on the team
- Capability of the team
- Delivery objectives
- Distributed teams
- Working with partners / vendors
- Organization security / policy constraints
- Tools for collaboration
- Time overlap between teams
- Mindset of team members
- Communication
- Test Automation
- Project Collaboration Tools
- Testing Tools
- Continuous Integration
Note: the above list is from a Software Testing perspective.
This post is about the practices we implemented as a team on an offshore testing project.
Case Study - A quick introduction
An enterprise had a B2B product providing an online version of a physically conducted auction for selling used vehicles, in real time and at high speed. A typical auction involves an auctioneer, multiple sellers, and potentially hundreds of buyers. Each sale can have up to 500 vehicles, and each vehicle gets sold or skipped in under 30 seconds, with multiple buyers potentially bidding on it at the same time. Key business rules: only one bid per buyer at a time, and no consecutive bids by the same buyer.
Analysis and Development was happening across 3
locations – 2 teams in the US, and 1 team in Brazil. Only Testing was happening from Pune, India.
George Bernard Shaw said:
“Success does not consist in never making
mistakes but in never making the same one a second time.”
We took that to heart. We applied all our learning and experience in picking the practices that would make us succeed. We consciously sought to be creative and innovative, and applied out-of-the-box thinking to how we approached testing (in terms of strategy, process, tools, and techniques) for this unique, interesting, and extremely challenging application, ensuring we did not go down the same path again.
Challenges
We had to overcome many challenges on this project.
- Challenge in creating a common DSL that would be understood by ALL parties - i.e. Clients / Business / BAs / PMs / Devs / QAs.
- All examples / forums talk in terms of trivial problems - whereas we had a lot of data and complex business scenarios to take care of.
- Cucumber / Capybara / WebDriver / Ruby do not offer an easy way to do concurrent / parallel testing.
- We needed to simulate, in our manual + automated tests, "n" participants at a time interacting with the sale / auction.
- A typical sale / auction can contain 60-500 buyers, 1-x sellers, and 1 auctioneer, with anywhere from 50-1000 vehicles to sell. There can be multiple sales going on in parallel. So how do we test these scenarios effectively? (A minimal concurrency sketch follows this list.)
- Data creation / usage is a huge problem (e.g. a production subset snapshot is > 10GB compressed, and a refresh takes a long time too).
- Getting a local environment in Pune to keep working effectively - all pairing stations / environment machines use RHEL Server 6.0 and are auto-configured using Puppet. These machines are registered to the client account on the Red Hat Satellite Server.
- Communication challenge - we were working from 10K miles away, with a time difference of 9.5 / 10.5 hours (depending on DST) - which means almost zero overlap with the distributed team. To add to that complexity, our BA was in another city in the US - so another time difference to take care of.
- End-to-end Performance / Load testing was not even part of this scope - but something we were very wary of, in terms of what can go wrong at that scale.
- We needed to be agile - i.e. testing stories and functionality in the same iteration.
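To make the concurrency challenge concrete, here is a minimal sketch of how simultaneous bids could be simulated outside the browser with plain Ruby threads. The bid endpoint, payload, and response codes are assumptions for illustration; the real application's API is not shown here.

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical bid endpoint - the real application's API is not public.
BID_URI = URI('http://auction.example.com/api/bids')

# Fire one bid per buyer, all at (roughly) the same instant, and collect
# each buyer's HTTP response code.
def place_concurrent_bids(vehicle_id, buyer_ids)
  latch = Queue.new # simple latch so all threads start together
  threads = buyer_ids.map do |buyer_id|
    Thread.new do
      latch.pop # block until the latch is released
      res = Net::HTTP.post(BID_URI,
                           { vehicle: vehicle_id, buyer: buyer_id }.to_json,
                           'Content-Type' => 'application/json')
      [buyer_id, res.code]
    end
  end
  buyer_ids.size.times { latch.push(:go) } # release all threads at once
  threads.map(&:value)
end

# Example: 50 buyers bid on vehicle 123 at the same time; assuming the API
# answers 201 for the one accepted bid, exactly one winner is expected.
results = place_concurrent_bids(123, (1..50).to_a)
winners = results.count { |_, code| code == '201' }
raise "expected exactly 1 accepted bid, got #{winners}" unless winners == 1
```

A latch like this makes the bids land as close together as possible, which is exactly what race-condition checks such as "no two buyers win the same vehicle" care about.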
All the above-mentioned problems meant we had to come up with our
own unique way of tackling the testing.
Our principles - our North Star
We
stuck to a few guiding principles as our North Star:
- Keep it simple
- We know the goal, so evolve the framework - don't start building everything from step 1
- Keep sharing the approach / strategy / issues faced on a regular basis with all concerned parties, and make this a TEAM challenge instead of a Test team problem!
- Don't try to automate everything
- Keep test code clean
The End Result
At the end of the journey, here are some interesting outcomes from the off-shore testing project:
- Tests were specified in the form of user journeys, following the Behavior Driven Testing (BDT) philosophy - specified in Cucumber. (A minimal sketch of such a journey follows this list.)
- Created a custom test framework (Cucumber, Capybara, WebDriver) that tests a real-time auction in a very deterministic fashion.
- We had 65-70 tests in the form of user journeys that cover the full automated regression for the product.
- Our regression completed in less than 30 minutes.
- We had no manual tests to execute as part of regression.
- All tests (= user journeys) are documented directly as Cucumber scenarios and are automated.
- Anything that is not part of the user journeys is pushed down to the dev team to automate (or we try to write automation at that lower level).
- Created a 'special' long-running test suite that simulates a real sale with 400 vehicles, >100 buyers, 2 sellers, and an auctioneer.
- Created special concurrent (high-speed parallel) tests that ensure that even at the highest possible load, the system behaves correctly.
- Since there was no separate performance and load test strategy, created special utilities in the automation framework to benchmark "key" actions.
- No separate documentation or test cases were ever written / maintained - and we never missed them.
- A separate special sanity test runs in production after each deployment, to ensure all the integration points are set up properly.
- Changed our work timings (for most team members) to 12pm - 9pm IST to get more overlap, and remote pairing time, with the onsite team.
- Set up an ice-cream meter - for those who come late to standup.
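As a flavour of what such a user journey can look like in Cucumber, here is a minimal sketch. The scenario text, routes, CSS selectors, and helpers (`create_sale`, `sign_in_as`) are all illustrative assumptions, not the project's actual code.

```ruby
# features/step_definitions/auction_steps.rb
# Illustrative step definitions (Capybara + WebDriver) for a user-journey
# style scenario such as:
#
#   Scenario: Buyer wins a vehicle in a live sale
#     Given a sale is in progress with 5 vehicles
#     When buyer "B1" bids on the current vehicle
#     Then buyer "B1" is shown as the winning bidder

Given(/^a sale is in progress with (\d+) vehicles$/) do |count|
  @sale = create_sale(vehicle_count: count.to_i) # assumed test-data helper
  visit "/sales/#{@sale.id}"                     # hypothetical route
end

When(/^buyer "([^"]*)" bids on the current vehicle$/) do |buyer|
  sign_in_as(buyer)  # assumed authentication helper
  click_button 'Bid' # Capybara driving the real browser via WebDriver
end

Then(/^buyer "([^"]*)" is shown as the winning bidder$/) do |buyer|
  expect(page).to have_css('.winning-bidder', text: buyer)
end
```

Keeping each scenario at the journey level, with the mechanics hidden behind helpers like these, is what lets a suite of 65-70 journeys double as the living documentation of the product.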
Innovations and Customizations
Necessity breeds innovation! This was so true on this project.
Below are the different areas and specifics of the customizations we made in our framework.
Dartboard
Created a custom board “Dartboard” to quickly visualize
the testing status in the Iteration. See this post for more details: “Dartboard
– Are you on track?”
TaaS
To automate the last mile of Integration Testing between different applications, we created an open-source product - TaaS. It provides a platform / OS / tool / technology / language agnostic way of automating the integration tests between applications.
Base premise for TaaS:
Enterprise-sized organizations have multiple products under their belt. The technology stack used for each product is usually different - for various reasons.
Most such organizations like to have a common Test Automation solution across these products, in an effort to standardize the test automation framework.
However, this is not a good idea! If products in the same organization can be built using different / varied technology stacks, then why should you impose this restriction on the Test Automation environment?
Each product should be tested using the tools and technologies that are "right" for it.
TaaS is a product that allows you to achieve the "correct" way of doing Test Automation.
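As a rough illustration of the premise (this is not TaaS's actual API - see its GitHub wiki for that; the endpoint and payload below are assumptions), an integration test in one stack can verify a message produced by an application in another stack, as long as both agree on a neutral transport and payload such as JSON over HTTP:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Hypothetical integration check: application A (written in any stack) has
# published an "order created" message; this Ruby test fetches it through a
# neutral HTTP endpoint and asserts only on the JSON payload, so neither
# side's technology stack matters.
MESSAGES_URI = URI('http://integration-hub.example.com/messages/orders/latest')

response = Net::HTTP.get_response(MESSAGES_URI)
raise "hub not reachable: #{response.code}" unless response.code == '200'

order = JSON.parse(response.body)
raise 'order id missing'  unless order['id']
raise 'unexpected status' unless order['status'] == 'created'
puts "integration message verified for order #{order['id']}"
```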
WAAT - Web Analytics Automation Testing Framework
I had created the WAAT framework for Java and Ruby in 2010/2011.
However, this framework had a limitation - it did not work with products that are configured to work only in HTTPS mode.
For one of the applications, we needed to test WebTrends reporting. Since this application worked only in HTTPS mode, I created a new plugin for WAAT - JS Sniffer - that can work with HTTPS-only applications. See my blog for more details about WAAT.
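The underlying JS Sniffer idea can be sketched roughly like this (this is not WAAT's actual API; the page URL is made up, and the beacon filter and expected parameter are assumptions): let the browser itself report which requests the page fired, then assert on the analytics beacon's parameters. Because nothing is proxied, HTTPS is not an obstacle.

```ruby
require 'selenium-webdriver'

# Sketch of JS-based analytics sniffing: the browser's own Resource Timing
# API lists every request the page fired; filter out the WebTrends beacon
# and check its query parameters. Nothing is proxied, so HTTPS-only
# applications are not a problem.
driver = Selenium::WebDriver.for :chrome
driver.navigate.to 'https://shop.example.com/checkout' # hypothetical page

beacon_urls = driver.execute_script(<<~JS)
  return window.performance.getEntriesByType('resource')
    .map(function (entry) { return entry.name; })
    .filter(function (name) { return name.indexOf('webtrends') !== -1; });
JS

raise 'no WebTrends beacon fired' if beacon_urls.empty?
# WT.dl is a standard WebTrends parameter; expecting it here is an assumption.
raise 'expected tag parameter missing' unless beacon_urls.first.include?('WT.dl=')
driver.quit
```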
Wednesday, October 2, 2013
Real-time Trend and Failure Analysis using Test Trend Analyzer (TTA)
Anand Bagmar
Summary
Organizations have long-running products / programs. They need to understand the health of their products / projects at a quick glance, instead of having a team of people frantically scrambling to collate and collect the information needed to get a sense of quality about the products they support.
TTA is an open source product that becomes the source of information to give you real-time insights into the health of your product portfolio using Test Automation results, in the form of Trends, Comparative Analysis, Failure Analysis, and Functional Performance Benchmarking.
The Dream
The statement "I have a dream" is a very famous quote by the American activist Martin Luther King Jr. I resonate very closely with it. Here is why and how ...
Sometime in 2011, I had a dream ... a vision of a product that could help those working in large organizations understand the health of their products / projects at a quick glance, instead of having a team of people frantically scrambling to collate and collect the information needed to get a sense of quality about the products they support.
I called this dream - Test Trend Analyzer - TTA
What is TTA?
In a nutshell, given the various types of Test Automation done in your organization, TTA is a product that stores and parses the test run results, and then displays various Trend Analysis charts and does Test Failure analysis for you. Based on the context of the product under test, the viewer can then make more meaning of the data presented and, more importantly, take meaningful actions / next steps.
Why do I need Trend Analysis of the test results?
Automation (Unit / Integration / Functional / etc.) is a key factor in ensuring the success, quality and time-to-market for products.
Since Automated tests are executed via CI (Continuous Integration), a lot of trend analysis and test failure analysis is already done by the CI tool itself.
However, CI's ability to do this is limited, for the following reasons:
- The typical archival duration in CI is in the range of 15-45 days.
- Trends can only be seen after grouping relevant jobs in the CI tool.
- It is difficult to group all related product jobs in CI - because of the sheer volume of tests.
- The grouping of jobs becomes more challenging as the number of products / projects / vendors or partners / environments / etc. grows.
- The projects / products are long-running (many months to years). It is not practical to archive the results for such durations in CI.
I have seen first-hand many of the use cases listed below in real scenarios, where we need a unique and different product to solve some testing-specific problems:
- A Business Manager / Test Director overseeing the development of multiple products in the organization may want to see the overall health of all the products in his / her portfolio, in real time.
- A Product Owner / PM / Test Manager overseeing the development / testing of a specific product in the organization may want to see the overall health of that product, in real time.
- Individual team members (Tech Leads / QAs / Developers / etc.) want to do quick test failure analysis in order to decide the correct priority of the next set of tasks.
Vision for TTA
With the above considerations in mind, I came
up with the following vision statement for TTA:
- A single-point, visual solution to gauge the health of your product portfolio using Test Automation results, by means of:
  - Trends
  - Failure analysis
- And providing:
  - Drill-down reports
  - Customizable reports
- So that:
  - Different stakeholders can get a single-click view of the health status and potential issues
  - A project team can decide if automation is useful or not
  - Data collation and trending are automated, avoiding manual data aggregation and interpretation
- With the stakeholders being:
  - QA Directors / Managers / Leads / hands-on testers
  - Developers
  - Tech-Ops
How does TTA work?
TTA is developed as an independent RoR product. It uses MySQL as the database. You will need to install TTA (instructions are available on the TTA github wiki) on an independent (virtual) machine.
TTA is a decoupled product. It does not depend on any specific CI (Continuous Integration) tool, programming language, test framework, etc.
CI jobs typically call some build tool - for example ant, maven, gradle, etc. The command called by the CI job does the test setup and then executes the tests. After execution, the results are sent back to CI, and the test run is completed.
After the test execution is completed, to integrate automatic reporting of results to TTA, we need to:
- Zip the log folder, and
- Send the results with test meta-data information.
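A minimal sketch of such a hook, assuming a hypothetical TTA upload endpoint and illustrative form-field names (the real instructions are on the TTA github wiki):

```ruby
require 'net/http'
require 'uri'
require 'time'

# Hypothetical post-test-run hook: zip the log folder and push it to TTA
# together with metadata identifying the run. Endpoint and field names are
# illustrative, not TTA's documented interface.
system('zip -r test-logs.zip logs/') or raise 'zipping the log folder failed'

uri = URI('http://tta.example.com/upload')
req = Net::HTTP::Post.new(uri)
File.open('test-logs.zip', 'rb') do |zip|
  req.set_form(
    [['project',     'online-auction'],
     ['run_date',    Time.now.utc.iso8601],
     ['environment', 'qa'],
     ['results',     zip]],
    'multipart/form-data'
  )
  res = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
  raise "upload failed: #{res.code}" unless res.is_a?(Net::HTTPSuccess)
end
```

Because the hook is just a zip-and-POST step at the end of the build command, any CI tool, language, or test framework can feed results into TTA, which is what keeps the product decoupled.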
Current set of Features for TTA
- Test Pyramid view (/pyramid) - to see how your project's automation effort aligns with the Test Automation philosophy
- Comparative Analysis view (/comparative_analysis) - to see the trend of your test automation results over a period of time, and whether any patterns emerge
- Failure Analysis view (/defect_analysis) - to make better meaning of the test failures, and help you prioritize which failures should be fixed first
- Integration of external dashboards (add from the /admin page, see the integration on the /home page) - this allows one to integrate different existing dashboards into TTA, making it a one-stop place for seeing all testing-related information. Example: you can integrate your defect reports from Mingle / Jira / etc., or your Continuous Integration (CI) dashboard from Go / Jenkins / Hudson / Bamboo / etc.
- Test Execution Trend - to see the benchmarking of specific test executions over a time period
- Compare test runs (/compare_runs) - to compare specific test runs (a minimal sketch of the comparison follows this list):
  - what are the common failures
  - what are the unique failures
  - what failed on date 1, but passed on date 2
  - what failed on date 2, but passed on date 1
- Upload Test Run Data manually (/upload) - to manually upload test data in case you have not uploaded it automatically to TTA, but still want to use TTA
- TTA Statistics page (/stats) - to know the usage of TTA by different projects / teams in your organization
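The run comparison itself boils down to set arithmetic over the failed-test names of two runs; here is a minimal sketch with inline illustrative data (TTA reads the real data from its MySQL store):

```ruby
require 'set'

# Failed test names from two runs of the same suite (illustrative data).
failed_on_date1 = Set['buyer_bids', 'seller_lists_vehicle', 'sale_summary']
failed_on_date2 = Set['buyer_bids', 'auctioneer_skips_vehicle']

common_failures   = failed_on_date1 & failed_on_date2 # failed in both runs
failed_1_passed_2 = failed_on_date1 - failed_on_date2 # assuming every test ran in both
failed_2_passed_1 = failed_on_date2 - failed_on_date1
unique_failures   = failed_on_date1 ^ failed_on_date2 # symmetric difference

puts "common failures:        #{common_failures.to_a}"
puts "failed on 1, passed 2:  #{failed_1_passed_2.to_a}"
puts "failed on 2, passed 1:  #{failed_2_passed_1.to_a}"
puts "unique to a single run: #{unique_failures.to_a}"
```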
Refer to my blog or the TTA github wiki for other information, including screenshots of TTA.
Current state
TTA is available as an open-source product
via github. There are a couple
of clients (internal to ThoughtWorks, and external) using TTA in their
projects.
How can you contribute?
Given that we have implemented only a
few basic features right now, and there are many more in the backlog, here is
how you can help:
- Suggest new ideas / features that will help make TTA better
- Use TTA on your project and provide feedback
- More importantly, help in implementing these features
Contact information
Contact Anand Bagmar (anand.bagmar@thoughtworks.com /
abagmar@gmail.com) for more information
about TTA.