I spoke in Mumbai recently about an open-source framework created to assist in end-to-end automated integration testing using TaaS (Test as a Service). The slides for the talk are available here and the audio here.
The session was planned for 1 hour. I managed to finish the talk in 50-ish minutes, and then the Q&A went on for another 30-ish minutes.
The plan is to publish TaaS as a gem on rubygems.org.
Looking forward to your comments and feedback!
Friday, December 21, 2012
Monday, December 17, 2012
Speaking in Bangalore about WAAT and Agile Testing
I had a great time speaking at this conference. My talk was probably the only very technical talk at the conference. Another thing I observed is that not many in the audience knew about Web Analytics. I managed to finish my talk in 40 min, and surprisingly, for an audience that didn't know much about Web Analytics, there were no questions. BUT, in the lunch break and networking session, a lot of people came up to me and said they really enjoyed my talk, and would look forward to "seeing" more of how Web Analytics is used in their organisation.
Also, there was a lot of interest and questions about Agile, and Agile Testing. This is a topic I can talk about for hours - and I controlled myself to a great extent, to let the other esteemed speakers also talk and answer the audience's questions on the same.
I ended up encouraging a lot of fellow speakers and attendees to think with an "open mind", be innovative, and see how service organisations can provide "a good and true value" to customers - and why it is important to provide "good" solutions and ask the right and tough questions about them! I also shared the concepts of TaaS with a few folks who would potentially be in a similar soup with the "common test framework" concept.
All in all, a very good time in Bangalore. Now looking forward to the same in Mumbai on 18th December, where I will be talking about "Integration Testing in Enterprises - using TaaS (Test as a Service)".
Sunday, December 9, 2012
December 2012 conferences - WAAT & TaaS it is!
December seems to be a busy time related to conferences for me.
First, I attended test-ed 2012 conference hosted by moolya where I had the opportunity to meet James Bach and a few other great speakers.
I was a little let down by James' talk. It seemed more like a marketing pitch for moolya - and somehow I expected more from a person of his caliber! Nonetheless, I am sure he inspired a lot of folks in the auditorium to become free and innovative testers!
I also got to talk with him one-on-one about BDD - in what situations it works well, and more importantly, when it does not work well. I also spoke with him about how BDT (Behavior Driven Testing) helps in building the "right automated regression suite", and the challenges facing testers in India in becoming the "free-spirited, creative and innovative testers" he spoke about.
Next, on 13th December, I am speaking at UNICOM's Next Generation Testing Conference in Bangalore about "The What, Why, and How of Web Analytics Testing". I will be talking about my open-source framework - "WAAT - the Web Analytics Automation Testing framework" - and how it can ease the manual drudge of web analytics testing.
To close the year, I will be speaking on 18th December at another UNICOM Next Generation Testing Conference, in Mumbai. Here I will be talking about "Integration testing in Enterprises using TaaS (Test-as-a-Service) - Via Case Study". This is about another open-source framework I have created - TaaS - Test as a Service.
Hope to see you all in Bangalore / Mumbai.
Labels:
agile,
automation,
bdt,
taas,
testing,
thoughtworks,
waat
Monday, November 5, 2012
vodqa - Going Beyond the Usual updates
Our recently concluded vodQA Pune, on 13th October 2012 (http://testing.thoughtworks.com/events/going-beyond-the-usual), was a huge success. Your participation, questions, comments and feedback helped us take this event to new heights. We thank you for that.
The first set of videos from this edition of vodQA are now available here: http://bit.ly/YtNwje, and some of the pictures are available here: http://bitly.com/TEWwjQ
Here are links for some of the talks held at vodQA:
- Opening note by Tarang Baxi
YouTube: http://bitly.com/VN83v0
- Automated Infrastructure Testing - Ranjib Dey
YouTube: http://bitly.com/QhpYgz
Slides: http://bitly.com/UbSBgD
- The World Without Testing - Vikrant Chauhan and Supriya Pawar
YouTube: http://bitly.com/TEXL2u
- The Lean Game - Sneha Kadam
YouTube: http://bitly.com/YtSzQO
- What's Accessibility - Vikrant Chauhan
YouTube: http://bitly.com/TtKBmC
Slides: http://bitly.com/Xd96JR
- Firefox Add-ons Use for Software Testers - Sumit Singhal
YouTube: http://bitly.com/RKi0HZ
Slides: http://bitly.com/VNaa1Y
- Redefining Bugs! - Sneha Kadam
YouTube: http://bit.ly/UtGgyR
- Closing note - Chaitanya Nadkarny
YouTube: http://bit.ly/SHxZd9
You can join the following groups on Facebook and LinkedIn to stay up to date with what's happening in vodQA, get updates, and meet other testing enthusiasts.
LinkedIn group: vodQA
Facebook group: vodQA
WAAT v1.5.0 released to rubygems.org
The newest version of WAAT (Web Analytics Automation Testing framework) for Java and Ruby is now available with a new plugin - JsSniffer.
Following the announcement of the new JsSniffer plugin for WAAT, I have now completed the testing for the same, and released the gem - WAAT-1.5.0.gem to rubygems.org.
Please take a look at the FAQs and the major changes on Ruby / major changes on Java sections for known issues and potential solutions - as some of these changes could break your tests.
Tuesday, October 16, 2012
WAAT (Java & Ruby) with JS_SNIFFER is out of the box
I have pushed in the latest changes to WAAT to get over the limitation of not working in a pure https environment (http://essenceoftesting.blogspot.in/2011/06/waat-and-https.html).
The solution is creating a new type of plugin - called JS_SNIFFER.
This plugin requires the user of WAAT to do a little more work than before.
They need to work with their development team to figure out what JS script to invoke in the browser to get the URL of interest that is sent as a pure https request over the wire. WAAT then takes this request and does the tag matching for you.
Writing this JS script is a one-time effort - unless the way the tags are reported by the product changes. After that, the test framework works as seamlessly as before, consistently verifying the tags in an automated fashion.
Another advantage of this is that with this approach, we do not need to install jpcap or run the tests as a "super-user" - a restriction posed by the network packet capture library.
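To illustrate the tag-matching step, here is a minimal plain-Ruby sketch. This is not WAAT's actual API - the helper name and the analytics URL below are hypothetical; it only shows the idea of parsing the captured request URL and checking that the expected key/value tags are present.

```ruby
require 'uri'
require 'cgi'

# Hypothetical helper (not WAAT's real API): given the analytics request
# URL retrieved via the user-supplied JS script, parse its query string
# and check that every expected tag key/value pair is present.
def tags_present?(analytics_url, expected_tags)
  query  = URI.parse(analytics_url).query || ""
  params = CGI.parse(query) # e.g. {"pageName" => ["Home"]}
  expected_tags.all? { |key, value| params[key].include?(value) }
end

captured = "https://metrics.example.com/b/ss?pageName=Home&channel=sports"
puts tags_present?(captured, "pageName" => "Home", "channel" => "sports") # => true
puts tags_present?(captured, "pageName" => "Checkout")                    # => false
```

The real plugin does more (capturing the request, retries, reporting), but the verification boils down to this kind of query-parameter match.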
The WAAT_v1.5.0.jar is available here (https://github.com/anandbagmar/WAAT/tree/master/dist) on GitHub.
Similarly, I have also updated the WAAT-ruby gem (WAAT-1.5.0.gem). This gem is not yet pushed out to rubygems.org - as I am still testing it out. However, if you are interested, you can download it from here.
As usual, feedback / comments / suggestions most welcome!
Labels:
automation,
java,
ruby,
thoughtworks,
waat
Wednesday, September 26, 2012
Error in building native extensions on mac / ruby?
If you encounter errors when installing a ruby gem on your mac (error in building native extensions), read on for a solution that worked for me.
I was trying to install ffi on my mac and got the following error:
sudo gem install ffi -v '1.0.7'
Building native extensions. This could take a while...
ERROR: Error installing ffi:
ERROR: Failed to build gem native extension.
/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb
mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/ruby.h
Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/ffi-1.0.7 for inspection.
Results logged to /Library/Ruby/Gems/1.8/gems/ffi-1.0.7/ext/ffi_c/gem_make.out
Here is what you need to do to resolve the issue:
- Install xcode
- Install the "Command Line Tools" from xcode -> Preferences -> Downloads -> Components
Now you will be able to install the gem and build its native extensions on your mac.
An important UPDATE from my good friend Oscar Reiken:
Good tip ;) but that only works with newer versions of Mac OSX where the latest version of Xcode is included. You might also want to add that if you don't have access to the latest Xcode, you can use brew (http://apple.stackexchange.com/questions/38222/how-do-i-install-gcc-via-homebrew) or install GCC directly (https://github.com/kennethreitz/osx-gcc-installer).
Tuesday, September 25, 2012
New version of WAAT-ruby gem available
I finally got around to pushing out a new version of WAAT-ruby (1.4.1) on rubygems.org. The only change in this version is the removal of a dependency on a particular version of bundler. See the WAAT-ruby project on github for more information.
Watch this space for a new version of WAAT-ruby that overcomes the limitation of doing web analytics automation for https urls.
Monday, September 10, 2012
vodQA Pune - Going Beyond the Usual
The next edition of vodQA is coming up on Saturday, 13th October 2012 in Pune. The theme for this edition is "Going Beyond the Usual". There is going to be focus on functional and cross-functional areas like Security, Usability, Scalability and Performance, within industries such as Manufacturing and Banking, among others.
You can register as a speaker here, or as an attendee here.
Saturday, September 8, 2012
What's next for WAAT?
It has been quite some time since I updated WAAT. The released version has been working well - but it does have its limitations, as listed in the FAQs on github.
The biggest limitation I feel about the current release version of WAAT is that it does not work in a pure https kind of an environment. (http://essenceoftesting.blogspot.in/2011/06/waat-and-https.html)
Of late, I have been spiking out different ways to overcome this limitation. I experimented with creating an HttpsSniffer, and hit various roadblocks. That has forced me to look at another strategy.
So I have changed direction. I am looking at creating something like a JSInjector / JSSniffer plugin - which executes a JavaScript snippet in the browser from where the action is invoked. This is not as straightforward to use as the earlier approaches. The user of this plugin will need to understand the DOM and some JavaScript better, and maybe take help from the development team - but once the way to retrieve the basic information is known, we are in calm waters again :)
If you are facing this similar issue in a pure https environment for web analytics testing, look out for more information in this space.
My plan is to update WAAT, followed-by WAAT-Ruby and then lastly release a new version of the WAAT-Ruby gem following that.
vodQA - The ABCs of Testing
I am late in writing about this, but if I do not share it, I am sure I will feel very bad later on.
I am very happy to write that the next vodQA is happening in Bangalore on Saturday, 8th Sept 2012. The theme for this event is "The ABCs of Testing (Automation, Big Data Analytics, Cloud Testing)". Also look at our vodQA group on facebook for more information.
There are great topics lined up by great speakers. There will be sessions across multiple tracks, fish bowl sessions, and I am sure a lot of thought-provoking interactions.
I will write an update with links to presentations, pictures and videos after the event. See you all there!
Wednesday, August 22, 2012
Google Has Open Sourced Octane, a New JavaScript Benchmark Suite
Posted from: Google Has Open Sourced Octane, a New JavaScript Benchmark Suite
Should now make it easier to test for performance more seamlessly and regularly.
----------------------------
Google has open sourced Octane, a JavaScript benchmarking suite consisting of 13 tests meant to measure the performance of browsers loading and executing complex and large JavaScript applications such as games, interactive and rich web pages and online tools. Octane consists of 8 tests found in the initial V8 Benchmark Suite plus the addition of 5 new ones – pdf.js, Mandreel, GB Emulator, Code Loading, Box2DWeb - that are meant to measure performance areas not covered yet by other tests:
- Richards - OS kernel simulation benchmark, originally written in BCPL by Martin Richards (539 lines).
- Deltablue - One-way constraint solver, originally written in Smalltalk by John Maloney and Mario Wolczko (880 lines).
- Raytrace - Ray tracer benchmark based on code by Adam Burmister (904 lines).
- Regexp - Regular expression benchmark generated by extracting regular expression operations from 50 of the most popular web pages (1761 lines).
- NavierStokes - 2D NavierStokes equations solver, heavily manipulates double precision arrays. Based on Oliver Hunt's code (387 lines).
- Crypto - Encryption and decryption benchmark based on code by Tom Wu (1698 lines).
- Splay - Data manipulation benchmark that deals with splay trees and exercises the automatic memory management subsystem (394 lines).
- EarleyBoyer - Classic Scheme benchmarks, translated to JavaScript by Florian Loitsch's Scheme2Js compiler (4684 lines).
- pdf.js - Mozilla's PDF Reader implemented in JavaScript. It measures decoding and interpretation time (33,056 lines).
- Mandreel - Runs the 3D Bullet Physics Engine ported from C++ to JavaScript via Mandreel (277,377 lines).
- GB Emulator - Emulate the portable console's architecture and runs a demanding 3D simulation, all in JavaScript (11,097 lines).
- Code loading - Measures how quickly a JavaScript engine can start executing code after loading a large JavaScript program, a social widget being a common example. The source for this test is derived from open source libraries (Closure, jQuery) (1,530 lines).
- Box2DWeb - Based on Box2DWeb, the popular 2D physics engine originally written by Erin Catto, ported to JavaScript. (560 lines, 9000+ de-minified)
The benchmark runs in Chrome 14+, Firefox 13+, IE 10, Opera 12 and Safari 5.1.7+ on the desktop, and in the mobile versions of Chrome, Firefox and Opera. It does not run in IE 9 because Microsoft’s browser does not implement WebGL's Typed Arrays, and in several mobile browsers which fail to execute some of the tests: Android Browser, Chrome on iOS 4 (due to iOS restrictions), and Safari on iOS 4.
Octane is more comprehensive than other JavaScript benchmarking tests such as V8, SunSpider, Kraken or Dromaeo. Google mentioned their intent to keep improving the test suite, inviting users to fill in issues reporting performance areas or applications that can be used as a base for more comprehensive JavaScript testing.
The source code of the Octane benchmark is available under a New BSD License.
Monday, July 30, 2012
Do we have rules for story writing?
I came across this interesting article from Pixar - The 22 rules of storytelling, according to Pixar.
It got me thinking - this applies to our Agile world of Software Development too!
- Anyone writing story cards - do you have certain rules / criteria in mind? I think quite a few things from the Pixar article apply in our world too!
- For us Testers, do we think about the functionality in the story cards based on these rules? Or, simply put, do we think about how the new functionality in the story card affects the end-product? Do we think about the big picture enough?
Friday, July 27, 2012
Interesting way to create graphs and tables
Seems like a great way to create graphs and tables programmatically using ruby:
http://jordanhollinger.com/2012/07/23/introducing-graphene
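As a rough idea of what such a library does, here is a plain-Ruby sketch of a percentage table. Graphene's actual API differs (see the link above); the helper name and sample records here are made up.

```ruby
# Hypothetical helper, not Graphene's API: tally records by a key and
# report each value's share of the total as a percentage.
def percentage_table(records, key)
  total = records.size.to_f
  records.group_by { |r| r[key] }
         .map { |value, group| [value, (group.size / total * 100).round(1)] }
         .to_h
end

visits = [
  { browser: "Firefox" }, { browser: "Firefox" },
  { browser: "Chrome"  }, { browser: "Safari"  },
]
# Firefox => 50.0, Chrome => 25.0, Safari => 25.0
p percentage_table(visits, :browser)
```

A library like Graphene layers formatting and chart generation on top of this kind of aggregation, so you don't hand-roll it for every report.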
Wednesday, July 25, 2012
Monday, July 23, 2012
How will you test this?
How will you automate the testing of something like this? ... no - not the video, but the wonders they have achieved in the "Miniatur Wunderland"!
WAAT-Ruby crosses 300 downloads
I was very happy to see that WAAT-Ruby (Web Analytics Automation Testing framework) has crossed 300 downloads on rubygems.org.
Friday, July 20, 2012
Tuesday, July 17, 2012
Behavior Driven Testing (BDT) in Agile - slides uploaded
I spoke in SiliconIndia's SoftTec 2012 in Bangalore on 14th July on Behavior Driven Testing (BDT) in Agile.
It was a pretty big conference - with the Test Technologies / Methodologies track having an attendance of about 250 people, and the People / Process / Business track having an attendance of about 150 people. There were good interactions with the attendees and fellow speakers during the break times.
The topic of my talk was Behavior Driven Testing (BDT) in Agile. Of late, I have been speaking on this topic at various places in some form or another. Click here for all information related to BDT.
Here is the abstract of this talk:
In this talk, I will explain Agile Testing and how a technique called "Behavior Driven Testing (BDT)" can make your testing more effective. I will also cover the differences between BDD (Behavior Driven Development) and BDT, how BDT affects the Test Pyramid, and the value proposition of using BDT.
The slides from this talk are available in the vodQA group on facebook and on SlideShare.net.
Monday, July 9, 2012
vodQA Geek Night - Behavior Driven Testing (BDT) update
We hosted a vodQA Geek Night focussing on Behavior Driven Testing (BDT) at ThoughtWorks, Pune on 5th July 2012. The workshop itself was very intense and saw great participation and interactions with the attendees and the workshop facilitators.
There were many firsts for this workshop:
- The first vodQA Geek Night
- Event was announced, marketed and registrations done purely as a Facebook event
- 95% attendance
Pictures and slides have been uploaded on the vodQA group in Facebook.
Contact me or the vodQA Pune team for more information / questions.
Friday, June 29, 2012
Behavior Driven Testing (BDT) in Agile
I am speaking in SoftTec 2012 in Bangalore on 14th July on Behavior Driven Testing (BDT) in Agile.
Abstract:
In this talk, I will explain Agile Testing and how a technique called "Behavior Driven Testing (BDT)" can make your testing more effective. I will also cover the differences between BDD (Behavior Driven Development) and BDT, how BDT affects the Test Pyramid, and the value proposition of using BDT.
Feedback on WAAT
I am considering adding some functionality to WAAT. However, before I do that, I would like to know what your opinion is.
So, to all those who are either using, tried using, or, want to use WAAT: Can you please provide me some feedback based on the following questions:
- Which flavor of WAAT do you use?
- Java
- Ruby
- Both
- Have you faced any problems using WAAT?
- If yes, what problems? How did you resolve them?
- WAAT using the httpSniffer approach has known limitations (namely: it does not support https request capturing, and on non-Windows platforms you need to run the tests with root access).
- Have you run into these limitations?
- How did you resolve the issue?
- Do you find the WAAT wiki useful?
- If not, what if done differently will provide more value?
- Any other thoughts / comments on how WAAT can be made better?
Looking forward to your comments.
Thanks.
Anand
Thursday, June 21, 2012
Test Driven Development via Agile Testing - slides + audio
I just finished a presentation on Test Driven Development via Agile Testing in the Next Generation Testing Conference in Bangalore. Went pretty well.
Here is a link to the slides. The audio recording of my talk can be downloaded from the link below. You will be able to listen to the talk using VLC Player or similar.
In this talk, I covered the following topics and answered questions related to:
- Overview of Agile Testing
- The Test Pyramid
- Different flavors of TDD
- BDD – Behavior Driven Development
- ATDD – Acceptance Test Driven Development
- BDT – Behavior Driven Testing
- Difference between BDD and BDT
- Tools that support BDT
- The value proposition BDT offers
Wednesday, June 20, 2012
vodQA Geek Night - Behavior Driven Testing (BDT) on 5th July
I had presented a topic on Behavior Driven Testing (BDT) in vodQA - Testing and Beyond.
We are now running a workshop as a followup to the session to provide a first-hand experience into understanding BDT, and how it can potentially help you in your testing efforts.
Since this is a workshop, seats are limited. If you are interested in attending, please join our vodQA group on facebook and confirm your presence for the vodQA Geek Night event scheduled at 5.30pm on 5th July 2012 in ThoughtWorks Pune.
Thursday, May 31, 2012
Test Driven Development via Agile Testing
I will be giving a talk in the "Next Generation Testing Conference" held in Bangalore on 21st June 2012.
The topic and abstract is as mentioned below. See you at the conference!
Title: Test Driven Development via Agile Testing
Abstract covering main features of the talk:
In this talk, I will cover the following topics and answer questions related to:
- Overview of Agile Testing
- The Test Pyramid
- Different flavors of TDD:
- BDD – Behavior Driven Development
- ATDD – Acceptance Test Driven Development
- BDT – Behavior Driven Testing
- Difference between BDD and BDT
- Tools that support BDT
- The value proposition
Sunday, May 20, 2012
vodQA NCR
vodQA NCR is being held on Saturday, June 23, 2012.
http://testing.thoughtworks.com/events/the-testing-spirit
See you there!
Thursday, May 17, 2012
Keeping your test suites "green"
My article on Keeping your test suites "green" has been published in SiliconIndia's QA City. Looking forward to your comments.
Same article quoted below:
In days where we are talking and thinking more and more on how to achieve "Continuous Delivery" in our software projects, Test Automation plays an even more crucial role.
To reap the benefits of test automation, you want to run it as often as possible. However, just putting your test automation jobs in some CI tool like Hudson / Jenkins / GO / etc. and having them run every so often is of little value - unless the tests are passing, or the failures are identified and analyzed immediately, AND proper action is taken based on the failures.
If there are quite a few failures / jobs, then the test failure analysis and test maintenance activity takes a lot of time. As a result, the development / product / project team may start losing confidence in the automation suite, because CI always shows the jobs in red. Eventually, test automation may lose priority and value, which is not a good sign.
Before I explain a technique that may help keep your test suites "green" - and reduce the test failure analysis and maintenance time, let us understand why we get into this problem.
I have seen the functional tests failing for 3 main reasons:
1. The product has undergone some "unexpected change". As a result, the test has caught a regression bug as the product has changed when it was not supposed to.
2. The product has undergone some "expected" change and the test has not yet been updated to keep up with the new functionality.
3. There is an intermittent issue - maybe related to environment / database / browser / network / 3rd party integration / etc.
Regardless of the reason, if there is even 1 failure in your CI job, it means the whole job fails and turns "red".
This is painful and more importantly, this does not provide the correct picture of the health of the system.
To determine the health of the system, we now need to:
• Spend dedicated time per test run to ensure the failures in the jobs are analyzed and accounted for,
• In case of genuine failures, defects are reported against the product, or,
• In case of test failures based on expected product changes, update the tests to be in accordance with the new functionality, or,
• In case of intermittent failures, rerun the test again to confirm the failure was indeed due to an intermittent issue.
This is not a trivial task to keep doing on every test run. So can something be done to keep your test suites green, and provide a true representation of the health of the product under test?
Here is a strategy which will reduce the manual analysis of your test runs, and provide a better understanding of how the product conforms to what it's supposed to do.
Let's make some assumptions:
1. Say, you have 5 jobs of various types in your CI
2. Each job uses a specific tag / annotation to run specific types of tests.
Now here is what you do:
1. Create appropriate commands / tasks in your test framework to execute tests with a new "failing_tests" tag / annotation.
2. Create a new job in CI - "Failing Tests" and point it to run the tests with tag / annotation "failing_tests".
3. Analyze all your existing / earlier jobs, and for all tests that have failed for any of the reasons mentioned earlier, comment out the original tag / annotation, and instead, add the tag / annotation "failing_tests" to such tests.
Run all the tests again, and now the following should be seen:
• The tests that pass will continue to pass, with the added benefit of keeping their CI jobs green.
• The tests that fail will continue to fail - but in a separate, dedicated "Failing Tests" CI job.
• As a result, all 5 original CI jobs will turn GREEN, and you only need to monitor the "Failing Tests" job.
This means your test-analysis effort has now been reduced from 5 jobs to just 1.
When a failing test starts passing again, replace the "failing_tests" tag with the original tag.
If you want to categorize the failing tests further, you could create separate "Failing Tests" jobs per category, such as:
• "Failing Tests - Open Defects"
• "Failing Tests - Test updates needed"
• "Failing Tests - Intermittent / environment issues"
Regardless of your approach, the solution should be simple to implement, and it should save you time at the end of the day to focus on more important testing activities instead of just analyzing test failures.
One of my colleagues asked:
"What if a smoke test is failing? Should we move that also to a Failing Tests job?"
My answer was:
"As with most things, you cannot apply one rule to everything. Each problem is different in nature, so you need a strategy that solves that particular problem in the best possible way.
That said, fundamentally, the smoke suite should always be "green". If for any reason it is not, we need to stop everything else and make sure it is a passing test suite again.
However, if you have various jobs representing the smoke suite, then you could potentially create a "Smoke - Failing Suite" along the lines mentioned above, IF that reduces time wasted in test-result analysis and provides a correct representation of product health quickly and consistently."
To summarize:
• Create a failing tests CI job and run all the failing tests as part of this job
• All existing CI jobs should turn "green"
• Monitor the failing tests and fix / update them as necessary
• If any of the passing tests fail at any point, first move them to the "Failing Tests" job to ensure the other jobs remain "green"
• When a failing test passes, move that test back from the "Failing Tests" job to the original job.
I have been profiled
SiliconIndia's QA City portal has put up my career profile on their site. You can see that here.
Monday, April 30, 2012
Theoretical Vs Practical knowledge
http://dilbert.com/strips/comic/2012-04-30/
Funny ... but on the other hand, at times this is true. Just because something has been written about does not necessarily mean it is always true. Things change and evolve, and we need to change and move accordingly. At times, we need to go against the tide for what we think and believe is the right thing to do.
This definitely applies to what I have seen in my career so far ... so keep thinking in innovative and creative ways - even if at times you have to swim against the tide!
Multi-tasking .... good or bad?
Many a time I end up trying to do too many things at almost the same time, and I have had mixed results with this approach.
I think of late, more often than not, I have not been too successful at juggling many things together ... this could be because of mental fatigue and burnout.
As a result I have consciously tried to step away from items of relatively lower priority. This has helped me tremendously. Also, I came across this post (http://blogs.hbr.org/schwartz/2012/03/the-magic-of-doing-one-thing-a.html), which talks about techniques for being more effective in your work. See if it helps you too!
Friday, April 20, 2012
vodqa Pune (17th Mar '12) videos, pictures, slides, feedback ...
Dear Testing enthusiasts,
Our recently concluded vodQA, organized by ThoughtWorks Pune on 17th March 2012 (http://testing.thoughtworks.com/events/testing-and-beyond), was a huge success. Your participation, energy, questions, thoughts, comments and feedback raised this event to great heights! We thank you for that.
You can join the following groups to keep up-to-date with what’s happening in vodQA, to know when the next vodQA is happening, to connect with fellow testing enthusiasts, share thoughts related with testing, etc.
LinkedIn group: vodQA (http://linkd.in/IaHcFG)
Facebook group: vodQA (http://on.fb.me/HUPAX9)
Twitter: @vodqa
Here is what happened in this 7th edition of vodQA (4th in Pune):
Statistics:
375+ attendee registrations
35+ speaker registrations
130+ attendees
All videos from this edition of vodQA are available here: http://bit.ly/JRRTtH, and pictures are available here: http://on.fb.me/Jq4Qyh
Based on feedback received, here are the topics that attendees found most useful:
- Open Space
- Mobile Testing
- BDT [ Behavior Driven Testing ]
Summary of feedback received - video feedback (http://bit.ly/Jm88Qd):
- Impressive, please continue conducting such events
- I have attended vodQA for first time, it is an amazing experience
- Nice initiative taken by ThoughtWorks
- Have this event once in Quarter
- Parallel tracks (Participants found it difficult to choose one session over the other)
- Each speaker should have a Slide with email and contact details
- Should have more experienced speakers
- More time for Open Space
- Make arrangements for Parking
- Food can be improved
Want to hear more of:
- Cloud Computing, Security Testing, Malware, Ethical Hacking, Agile methodologies, mobile tech
- Current industry trends and topics
Suggestions given by attendees for next vodQA: http://bit.ly/HXGZ9I
External blogs by:
Anand Bagmar: http://bit.ly/HU78r5
Savita Munde: http://bit.ly/FOS6ll
Srinivas Chillara: http://bit.ly/IaVRAB
Sessions/Topics details:
- vodQA opening
YouTube: http://bit.ly/Jb3MPl
- Opening note by Chaitanya Nadkarny
YouTube: http://bit.ly/JckWcR
- Quiz
YouTube: http://bit.ly/I8o5sv
- Testing is Dead. Long Live Testing - Shrinivas Kulkarni
Synopsis: Last year, three leading software doctors pronounced testing dead. As we mourn the alleged demise of our craft, questions arise as to what to do next. This talk analyses the meaning and impact of the death of testing, then deliberates on potential next steps and the challenges ahead of us.
YouTube: http://bit.ly/JpXI4O
Slides: http://slidesha.re/IrBL4o
- Testing a Massively Multi-player Online Game Server - Nirmalya Sengupta, Srinivas Chilllara
Synopsis: An online game server's functional and non-functional features lead to non-standard challenges for both architecting and testing. This talk starts with an overview and then discusses one testing scenario in depth. We focus particularly on testing the asynchronous nature of the application's method calls. A few general approaches to testing such applications are alluded to at the end.
YouTube: http://bit.ly/HWO1NP
Slides: http://slidesha.re/JRZKHQ
- Virtualization Impact on Software Testing - Parthasarthi T
Synopsis: Virtualization Impact on Software Testing
YouTube: http://bit.ly/HWjFrj
Slides: http://slidesha.re/Jq4bNe
- Mobile Testing: Challenges and Solutions - Ashwini Phalle
Synopsis: Different testing requirements that mobile applications have, challenges and solutions.
Challenges:
1. Complex mobile testing matrix, Expensive test environment
2. Repetitive testing
3. Mobile testing for devices located at various locations
Solutions:
1. Risk Based Testing approach
2. Using Mobile device emulators
3. Use of Automation tools
4. Leveraging external services
YouTube: http://bit.ly/Jb4hZw
Slides: http://slidesha.re/Jb9Rer
- Open Space discussions
YouTube: http://bit.ly/Jb4k7O
- The Marshmallow Challenge - Sneha Kadam
Synopsis: This is a fun and instructive design exercise that encourages teams to experience simple but profound lessons in collaboration, innovation & creativity. It challenges you to find hidden assumptions in business requirements & learn to Fail-Fast-Fail-Often! In 18 minutes, teams must build the tallest free-standing structure out of spaghetti, tape, string and one marshmallow, which must be on top.
YouTube: Part 1: http://bit.ly/HR4XiX
YouTube: Part 2: http://bit.ly/J7t8zJ
- Mobile Testing: In and Out - Sudeep Somani
YouTube: http://bit.ly/JifEQH
- BDT (Behaviour Driven Testing) - Anand Bagmar
Synopsis: What is Behavior Driven Testing (BDT)? How does it differ from Behavior Driven Development? What tools support this kind of testing? The value proposition BDT offers.
YouTube: http://bit.ly/HUSuuY
Slides: http://slidesha.re/I69BNK
- Code Coverage of Function Testing Automation Scripts - Aakash Tyagi
Synopsis: Challenge: The product is vast, so its regression suite was very big. It was taking about 14 days to execute, and it grew with every release. The main challenge was to keep the regression suite comprehensive yet small enough to be executed many times. Solution: Emma was used to find the code coverage of the product code, and the regression suite was then redesigned.
Slides: http://slidesha.re/HWQJCQ
- Negative Testing, in a positive vein - Srinivas Chillara
Synopsis: How to think about "negative testing", and why it may not be truly negative.
YouTube: http://bit.ly/IaKOr4
- Virtual Communication and Testers - Archana Dhingra
Synopsis: What is Virtual communication and its importance in IT industry. - The common mistakes we all commit while communicating in a Virtual environment - How to effectively manage and communicate with virtual teams. - Conflict resolution in a Virtual setup.
YouTube: http://bit.ly/Jq3uUc
- Automation Reusable Framework based on QC - Vysali Alaparthi
Synopsis: About ART: A hybrid framework named ART (Automation Reusable Test) is used for end-to-end automation; ART supports automation of web, Windows, and AS/400 applications. The framework uses HP's Quick Test Professional (QTP) to execute automated keyword-driven test scripts. Key achievements: the effort involved in integrating test cases/scripts has been reduced.
YouTube: http://bit.ly/Jlb8MH
Slides: http://slidesha.re/IrBtua
Closing note: Shalabh Varma
YouTube: http://bit.ly/Jb9fFO
Looking forward to your continued comments, feedback, thoughts and support to make vodQA more successful, and the QA community more vibrant and connected!
See you in the next vodQA.
Thank you.
vodQA Team.
vodqa-pune@thoughtworks.com