Friday, October 26, 2018

Agile Testing, Analytics Testing and Measuring Consumer Quality from Poland and the USA

The last few weeks have been very hectic for me. In between my consulting assignments, I traveled to Krakow, Poland for Agile & Automation Days 2018, and then to Arlington, Virginia, in the USA for STPCon Fall 2018.

At Agile & Automation Days 2018, I spoke about "Measuring Consumer Quality - The Missing Feedback Loop" and conducted a half-day workshop, "Analytics Rebooted - A Workshop".

At STPCon Fall 2018, I conducted two half-day workshops - "Practical Agile Testing Workshop" and "Analytics Rebooted - A Workshop" - and also spoke about "Measuring Consumer Quality - The Missing Feedback Loop".

Overall, I had a very good trip, with amazing conversations and interactions with the attendees and the speakers. I would be lying if I said I am not tired and that my throat is not sore. But would I do this again? Absolutely! Going to conferences and meeting people, sharing my experiences with them, and learning from their experiences gives me a lot of happiness and satisfaction.

Below are the abstracts of the workshops and the talk. 

Contact me via LinkedIn, Twitter, or my site - essenceoftesting.com - if you need any additional information, or if you want help in learning / implementing these or other topics related to Quality / Testing / Automation.



Practical Agile Testing Workshop

Workshop Description:

The Agile Manifesto was published in 2001. It took the software industry a good few years to truly understand what the manifesto means, and the principles behind it. However, choosing and implementing the right set of practices to get the true value from working the Agile way has been the biggest challenge for most!

While Agile is now mainstream, and we are getting better at the development practices behind "being Agile", Testing still lags behind in most cases. A lot of teams still work in a staggered fashion (a.k.a. the iterative-waterfall way of working), where testing happens after development completes, or Automation is done in the next Iteration / Sprint, etc.

In this workshop, we will learn and share various principles and practices which teams should adopt to be successful in testing (in-cycle) in Agile projects.

Workshop Agenda:
  • What is Agile Testing? - Learn what it means to Test on Agile Projects
  • Effective strategies for Distributed Testing - Learn practices that help bridge the Distributed Testing gap!
  • Test Automation in Agile Projects - Why? What? How? - Why Test Automation is important, and how to implement a good, robust, scalable and maintainable Test Automation framework!
  • Build the "right" regression suite using Behavior Driven Testing (BDT) - Behavior Driven Testing (BDT) is an evolved way of thinking about Testing. It helps in identifying the 'correct' scenarios, in the form of user journeys, to build a good and effective (manual & automation) regression suite that validates the Business Goals (a short sketch of such a journey test follows below).
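
To make the idea concrete, here is a minimal sketch of what such a user-journey test can look like - the journey, class and method names are illustrative assumptions, not material from the workshop:

    import org.testng.Assert;
    import org.testng.annotations.Test;

    // A BDT-style test: one user journey validating a business goal,
    // instead of many overlapping field-level UI tests. All names are illustrative.
    public class SubscribeAndWatchJourneyTest {

        @Test
        public void newUserCanSubscribeAndStartPlayback() {
            App app = new App();
            app.signUp("new-user@example.com");
            app.subscribe("MONTHLY");
            Assert.assertTrue(app.play("some-title"),
                    "A new subscriber should be able to start playback");
        }

        // Minimal stand-in for the real driver (Selenium / Appium page objects, API clients, etc.)
        static class App {
            private boolean subscribed;
            void signUp(String email) { /* drive the sign-up flow here */ }
            void subscribe(String plan) { subscribed = true; /* drive the payment flow here */ }
            boolean play(String title) { return subscribed; /* drive the player and verify playback */ }
        }
    }

The point is that the test reads as one business journey, end to end - which is exactly what makes it a good regression candidate.
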
Key learnings for participants in this workshop:
  • Understand the Agile Testing Manifesto.
  • Learn the Testing practices and activities essential for teams to adopt in an Agile way of working.
  • Discover techniques to do effective testing in distributed teams.
  • Find out how Automation plays a crucial role in Agile projects.
  • Learn how to build a good, robust, scalable and maintainable Functional Automation framework.
  • Learn, by practice, how to identify the right types of tests to automate as UI functional tests - to get quick and effective feedback.




Analytics Rebooted – A Workshop

Workshop Description:

I have come across some extreme examples of businesses / organizations that have all their eggs in one basket - relying on Analytics to:
  • understand their Consumers (engagement / usage / patterns / etc.),
  • understand the usage of product features, and,
  • do all revenue-related book-keeping.

This is all done purely on Analytics! Hence, the statement "Business runs on Analytics, and it may be OK for some product / user features to not work correctly, but Analytics should always work" - is not a myth!

What this means is that Analytics is more important now than ever before.

In this workshop, we will not assume anything. We will discuss and learn, by example and practice, the following:
  • How does Analytics work (for Web & Mobile)?
  • Testing Analytics manually, in different ways
  • Testing Analytics via the final reports
  • Why some Automation strategies will work, and some WILL NOT WORK (based on my experience)! - see the sketch after this list
  • A demo of the Automation running for the same
  • Time permitting, we will set up and run some Automation scripts on your machines to validate the same
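
To give a flavour of an Automation strategy that, in my experience, does work - validating the analytics beacons at the network layer instead of scraping the final reports - here is a minimal sketch using Selenium with BrowserMob Proxy. The page URL, the analytics endpoint and the expected parameters are illustrative assumptions, not from any specific product:

    import net.lightbody.bmp.BrowserMobProxyServer;
    import net.lightbody.bmp.client.ClientUtil;
    import net.lightbody.bmp.core.har.HarEntry;
    import org.openqa.selenium.Proxy;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;
    import org.openqa.selenium.remote.CapabilityType;

    public class AnalyticsBeaconCheck {
        public static void main(String[] args) {
            // Route the browser through a local proxy so outgoing analytics calls can be captured
            // (for HTTPS traffic, the proxy's MITM certificate needs to be trusted)
            BrowserMobProxyServer proxy = new BrowserMobProxyServer();
            proxy.start(0);
            Proxy seleniumProxy = ClientUtil.createSeleniumProxy(proxy);

            ChromeOptions options = new ChromeOptions();
            options.setCapability(CapabilityType.PROXY, seleniumProxy);
            ChromeDriver driver = new ChromeDriver(options);

            proxy.newHar("analytics");
            driver.get("https://example.com/home"); // hypothetical page under test

            // Verify that at least one captured request was the expected analytics beacon
            boolean pageViewTracked = proxy.getHar().getLog().getEntries().stream()
                    .map(HarEntry::getRequest)
                    .anyMatch(request -> request.getUrl().contains("google-analytics.com")
                            && request.getUrl().contains("t=pageview"));

            driver.quit();
            proxy.stop();

            if (!pageViewTracked) {
                throw new AssertionError("Expected a pageview analytics beacon, but none was captured");
            }
        }
    }

The same idea carries over to Mobile: point the device's Wi-Fi proxy at the same capturing proxy, and the app's analytics calls become assertable in exactly the same way.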



Measuring Consumer Quality – The Missing Feedback Loop

Session Description:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices and collaboration techniques can yield amazing results for the team, the organization, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect that is still spoken about only "loosely" is the feedback loop from your end-users that helps you make better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:
  • The importance of knowing your Consumers
  • How do you know your product is working well?
  • How do you know your Consumers are engaged with your product?
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behavior, before making any code change?

Attendees will gain a deeper understanding and appreciation of the following:
  • What is Consumer Quality and how does it help shape your business!
  • Ways to measure Consumer Quality
  • Why is understanding Consumer Engagement vital to the success of your product


Friday, October 12, 2018

Conference season is here - talks, workshops, travelling, networking!

September & October 2018 make for a busy conference season for me.

On 27th September, I played a game - "Collaboration - A Taboo!" - at ATA GTR 2018 with an audience of 100+ people. There was absolute chaos in the game - a lot of it self-inflicted ... and thankfully, it was exactly what I wanted it to be. So much fun, energy and enthusiasm in the room meant there was no one feeling drowsy in the post-lunch session!

Typically I play this game over 45 minutes to 1 hour. At ATA GTR 2018, though, I had only 30 minutes to play the game and add my own twist on top of it. But never have I taken more than the allocated time - and I still managed to achieve the objectives of the game in those 30 minutes.

Below are some pictures from the game.




Then on 28th September, I spoke on "Measuring Consumer Quality - The Missing Feedback Loop" at StepIn's PSTC 2018. Slides from that talk can be found here.

In October, I will be off to Agile & Automation Days in Krakow, Poland. There I will be speaking about "Measuring Consumer Quality - The Missing Feedback Loop" and also conducting a workshop - "Analytics Rebooted - A Workshop". See the detailed schedule here.

Then I fly directly to Arlington, VA to participate in STPCon Fall 2018. Here I will be conducting 2 workshops - "Analytics Rebooted - A Workshop" and "Practical Agile Testing Workshop". I am also speaking about "Measuring Consumer Quality - The Missing Feedback Loop".

Will share experiences from these conferences soon!


Tuesday, September 11, 2018

Testing in the Agile World

Thanks to ThoughtWorks, I was introduced to many things - 


The list is actually quite long - but that is not the intention of this post.

The main takeaway from my learning at ThoughtWorks, though, is how to Test better, and how to be more effective at it for the end-user.

Even before my time at ThoughtWorks, I never agreed with the thought process that Functional Automation can / should be done only when the feature is stable. But at ThoughtWorks, I did learn many more tips, tricks, techniques and processes for doing Functional Automation in a better way, as the product evolves.

On 9th April 2011, I wrote a detailed blog post / article regarding how we can test better in the Agile world.

This post was titled "Agile QA Process", and the document was uploaded to slideshare under the same name. I am very pleasantly surprised that, to date, that document has had over 74K views and almost 2.7K downloads, and it is still my topmost viewed post on slideshare.

When I look back at the document, it still seems very relevant and applicable, to me! 

What do you think?

Thursday, September 6, 2018

Some good examples of Data Science, AI & ML

Following up on my earlier post about ODSC - Data Science, AI, ML - Hype, or Reality?, I thought it would be good to also share some of the good examples of work happening in the field.

Here are some of the examples I got to hear about at the ODSC conference, most of which are available to the common human:
  • Amazing work done in the complex field of Speech recognition 
    • Why complex? Think about languages, dialects, multiple conversations at the same time, different speeds of talking, etc.
  • Text to speech
    • Ex: This is especially helpful for people with disabilities
  • Speech to Text
    • Ex: Alexa, Google Voice, etc. type of applications
  • Traffic control / Routes / Navigation
    • Ex: Google Maps
  • Recommendation engines
    • Ex: eCommerce products
  • Preventive maintenance
    • Many advanced vehicles have a number of sensors that can alert the driver / car manufacturer about potential issues coming up / service due for the vehicle
  • Autonomous vehicles
    • Ex: Self driving vehicles
    • Ex: Optimizing cab scheduling / routing - There was a good session on how OLA manages the complexity of its scheduling and routing - which is very applicable to the eCommerce, Aviation and Hotel industries, etc.
    • I recently also saw a video of a Volvo truck driver getting out of the truck on difficult terrain, and walking in front of the truck, controlling its movement using a game-like controller
  • Medical equipment / gadgets for preventive / alerting health-care products

Also, Dr. Ravi Mehrotra from IDeaS made a very powerful statement in his keynote - one that I loved!!

He said - "The best way to learn is to forget what is not important."

This statement resonates a lot with what I think ... one needs to forget what is not (as) important, in order to focus on and prioritize what is important and can add value.

Especially true for Testers to keep in mind!


Monday, September 3, 2018

ODSC - Data Science, AI, ML - Hype, or Reality?

I got a chance to attend ODSC India, held in Bangalore on 31st Aug / 1st Sept. For those who don't know, ODSC is the largest Applied Data Science and AI conference, and it was conducted in India for the first time this year.

I was very excited to attend this for a couple of reasons:

  • I was attending a conference after a long time (i.e. one where I was not speaking). So this was going to be a pure learning and exploring expedition for me.
  • Data Science / AI / ML have become huge buzzwords in the industry now. I had some opinions about them - but those were based on limited knowledge / understanding. I was hungry to learn the specifics behind these buzzwords.


Since I was going to travel to Bangalore for ODSC anyway, I also decided to participate in the pre-conference workshop - Advanced Data Analysis, Dashboards And Visualization. I thought it would be interesting to learn about the What, Why and How of the techniques of Data Analysis, Dashboards and Visualization - which would help me as I rebuild / extend TTA (Test Trend Analyzer). Though the workshop was good, it focused completely on Tableau as a tool, and unfortunately did not meet my objectives / expectations. That said, there is another tool I came across at the conference - KNIME - which seems interesting, and I am going to try it out.

The conference itself was good, though. I attended a lot of sessions and had a lot of hallway conversations with many interesting people. As is typical of attending a conference, I liked some sessions better than others - some were amazing, some were mediocre.

Here is my unstructured assessment of what I now think about what I heard and discussed:
  • Advanced mathematics learnt in college has applications in data science. So if children / kids ask why they should study Statistics - here is an answer!
  • Creating data models without Business Context will not work. If it does, you have been lucky :)
  • There are some interesting case studies and success stories of AI & ML. But these are the same success stories that have been around for quite some time. All the other "noise" of AI & ML seems, so far, to be hype.
  • There is a lot of value in understanding historical data better. Based on that understanding, there can be opportunities to forecast the future. There is a huge risk in doing this forecasting IF the % of uncertainty is not included as part of it - and that uncertainty is very easily ignored.
  • Understanding of Neural Networks, computing, and algorithms is essential to building intelligent solutions for complex problems.
  • It is not sufficient to get better / accurate prediction results. Being able to explain how and why those results are better / same / worse is equally important. In many cases, this would be a regulatory requirement.
  • Data Science is the "art" & "science" of understanding data better. To do this, we need to first cleanse / prep the data, simplify it using various techniques, and learn techniques to visualize the data.
  • There is a "grammar of graphics" and a "grammar of interactive graphics" - which helps in thinking about data visualization.
  • Deploying these AI / ML solutions to production is not a trivial task - mainly due to the high compute and the huge volume of data processing required to make them production-ready. This is a huge opportunity for the general Software Development / Testing / DevOps community to solve problems faced by data scientists / people in the data science / AI / ML domain.
  • With data privacy laws rightly becoming stricter, you need to be careful and use only legally obtained sample datasets for analyzing / training the data models - else there are going to be huge penalties for the companies involved. (This is in reference to GDPR, and to similar laws coming up in the USA and India.)
  • Earlier, only PhD holders were considered qualified to work on Data Science. Nowadays, the trend is to give relevant training to interns, have them work on these problems, and then get the results validated / explained by the PhD specialists.
  • In a nutshell - Data Science, AI and ML use specialized types of tools and technologies to solve different problems. People / organizations were doing these activities before the buzzwords were coined / got popular.
So, what is my core takeaway from this? 
  • As with any new buzzword, there is interesting work happening in Data Science, AI & ML - but the majority of those claiming to be in the field are just creating and riding the hype!

That said, I want to do the following:
  • Find opportunities to investigate and understand Data Science + AI + ML in more detail.
  • Understand the skills and capabilities required from a software developer + QA role perspective to contribute more effectively in solving these newer problem statements.
  • Learn Python / R.
  • Experiment with various tools / libraries related to data visualization.


Wednesday, July 25, 2018

Converting JSON into usable objects

JSON is a great way to specify data / information and, of late, it is the format of my choice for specifying test data.


I find it to be -
  • lightweight
  • easy to understand
  • intuitive enough that you almost always know when you have made an error in the syntax
  • easy to read into code and parse
  • easy to create meaningful custom objects from, and use in code

Recently, thanks to a friend - Abhijeet Vaikar - I came across a tool - quicktype.io - that helps transform raw JSON (from various sources) directly into custom objects, in a variety of languages.

Site: https://quicktype.io/

The tool: https://app.quicktype.io/

I got to know about this tool at the perfect time, as I am building a new tool for dynamic logging in Java - AutoLogJ (but more about AutoLogJ later). quicktype.io does what it promises - and it saved me a lot of time in building the custom POJOs for the same.
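
To illustrate the flow (a minimal sketch - the JSON shape and field names are hypothetical, and the POJO below is hand-written in the style of quicktype's output, not its exact output): paste the test-data JSON into quicktype, get the class back, and deserialize with a library like Gson:

    import com.google.gson.Gson;

    // Hypothetical test data: { "username": "test-user-1", "premium": true, "retries": 3 }
    public class TestData {
        private String username;
        private boolean premium;
        private int retries;

        public String getUsername() { return username; }
        public boolean isPremium() { return premium; }
        public int getRetries() { return retries; }

        public static void main(String[] args) {
            String json = "{\"username\":\"test-user-1\",\"premium\":true,\"retries\":3}";
            // One line to go from raw JSON to a usable, type-safe object
            TestData data = new Gson().fromJson(json, TestData.class);
            System.out.println(data.getUsername() + " / premium=" + data.isPremium());
        }
    }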

Thanks Abhijeet Vaikar and the quicktype team!

Monday, July 23, 2018

A few thoughts on Test Automation


Deepanshu Agarwal and Brijesh Deb asked some very interesting questions on a LinkedIn post. Since I have some verbose thoughts on these, I thought it better to respond via a blog post instead.

  • Why is Test Automation still considered suitable only for regression testing? What about writing automation tests sooner, as in the case of Test-Driven Development?
    • [Anand] - Depends on what you call test automation. If ONLY FUNCTIONAL, then it's better to explore the product first, investigate / have conversations with developers on what lower-level tests are already automated, and then, based on cost / risk-value analysis, decide what else needs to be automated at the Functional layer.

      A tangential rant ....
      There is a big reason why classifications such as SMOKE, SANITY and REGRESSION exist ONLY in Functional Automation. These tests are inherently very slow and brittle, and without a lot of effort they give poor feedback on the exact point / reason of failure.

      I have never seen any other form of tests - say Unit tests, which would (hopefully) be orders of magnitude larger in number than the functional tests - ever have any such classification. We all just say "the unit tests ran", not "the smoke unit tests ran".

      We need to grow up and understand the reason behind this. We need to make our top-of-the-pyramid tests as few in number as possible. We need to ensure we use good programming / development practices and get quick and reliable feedback from these tests. Else we will keep focussing on the symptoms, and never get to the root cause.

      --- Rant ends

      Once we understand this, it becomes a matter of understanding, in the context, what can and needs to happen first, and what next. In most cases TDD will work. But TDD as a Functional Spec may, or may not, be overkill ... the team has to decide that.
  • Why do the automated tests always have to be derived from manual tests?
    • [Anand] - What is a manual test? Something that a machine is not performing? How do you do "manual testing"? Is Exploratory Testing a subset of Manual Testing, or is it the other way around, or do you have any other thoughts on that?

      From the perspective of "automated tests" - I read it as "automated functional tests" here. In that case, the answer for the above question holds true here as well.

      Continuing from that thought - I think the approach (of deriving automated tests from so-called manual tests) is better than deciding upfront what tests I am going to automate and then proceeding with the implementation without any thought or regard for any other learning along the way.
  • The tests classified as manual tests are only focused on ensuring certain checks. What about actually running some tests to discover the unknown?
    • [Anand] - I don't want to get into the 'checks' debate. It is futile!

      All I have understood is - you cannot just spend time looking at the requirements / specs and write down (in your mind / bullet points / story cards / some fancy ALM tool) your test cases / scenarios. 

      That list is just a starting point of your journey of exploration and experimentation with the product-under-test. If you think that what you have identified is your actual scope of testing, then ALL THE BEST to you, your team and your product - because there are going to be so many opportunities you will have missed to make the product better and more usable for the end-user. Unfortunately, a lot of organisations still look for "regression" testing cycles - where (you think) you execute all the tests that were identified a long time ago. However, everyone knows it is, at best, a best-effort attempt, IF AT ALL, at actually following each and every step of that regression cycle. Such a waste of time and effort - when more meaningful testing could have been performed during that time.
  • Why is it that exploratory tests are still considered suitable only for manual testing? How about automating exploratory tests using AI?
    • [Anand] - What is the meaning of "exploration"?
      As per a quick online search, this is what it means:


      Now - how can you automate the unknown / unfamiliar? You can use tools to help figure out what is unknown / unfamiliar ... but once you know it, it no longer remains 'unknown'. I think buzzwords like AI and ML are tools to help bridge the gap between the known and the unknown. But we would still need to guide and use these tools and technologies to our advantage, to aid in our exploration.

Friday, July 20, 2018

Implementing Soft Assertions

Back in 2009 / 2010, I was working on implementing end-2-end tests for a web site using Java / Selenium / TestNG based automation. The challenge I was facing was that the tests used to fail for trivial (but valid) reasons - without even getting to the core validations. How would the team ever know about the main issues in the product if the tests kept failing for trivial issues?

That was the trigger point for me to think about Soft Assertions - what if there was a way to record a type of failure that I want to know about, while letting the test continue with the remaining set of validations - unless it does not make sense to proceed?

Ex: If the text message of a field is incorrect, I can continue. But if login fails, no point in proceeding with the rest of the test.

This idea seemed interesting - so I came up with the following requirements for such an implementation:

  • Clear distinction between what type of failure I can continue from, or not
    • Ex: assert.** is for hard asserts. verify.** is for soft asserts
  • All failures that I can continue from (i.e. soft asserts) need to be collated, and at the end of the test, the complete list of those soft assert failures should be presented with the test result (and in the report), while the test fails just once
    • Ex: "There were 5 soft assertion failures"
  • Capture relevant screenshots whenever a Soft Assertion fails
  • If there was a hard assert along the way of the test execution, the test failure should include the prior soft assert failures along with the hard assert failure, as appropriate

For the actual implementation, I did the following (in 2009/2010):
  • I looked into the TestNG code base, and I could not really find any out-of-the-box support for what I wanted to do. 
  • So, for lack of knowledge of better ways of implementation,
    • I checked out the TestNG code,
    • added the Soft Assertion implementation,
    • built a custom TestNG.jar file, and,
    • checked in the jar file as a library artefact in our automation framework.

In hindsight, I should have sent that functionality as a PR to the project. 

But not all is lost - TestNG now (and for some time now) has out-of-the-box support for Soft Assertions. And it is pretty straightforward to implement / use as well.

Implementing Soft Assertions in your test framework

See this gist for an implementation that you can use with TestNG (I tested with v6.10).

Using Soft Assertions in your tests

Here is how you can use the Soft Assertions in your tests.
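
A minimal sketch of that usage, with TestNG's built-in SoftAssert - the validations and helper methods here are illustrative stubs, not the contents of the gist:

    import org.testng.annotations.Test;
    import org.testng.asserts.SoftAssert;

    public class LoginPageTest {

        @Test
        public void validateLoginPage() {
            SoftAssert softly = new SoftAssert();

            // Soft assertions - failures are recorded, but the test continues
            softly.assertEquals(getFieldMessage(), "Enter your email", "Incorrect field message");
            softly.assertTrue(isLogoDisplayed(), "Logo missing on the login page");

            // Hard assertion - no point proceeding if login itself fails
            if (!login("user", "password")) {
                throw new AssertionError("Login failed - aborting the remaining validations");
            }

            softly.assertEquals(getWelcomeText(), "Welcome, user!", "Incorrect welcome text");

            // Collates ALL recorded soft failures and fails the test once, at the end
            softly.assertAll();
        }

        // Illustrative stubs - replace with real page interactions
        private String getFieldMessage() { return "Enter your email"; }
        private boolean isLogoDisplayed() { return true; }
        private boolean login(String user, String password) { return true; }
        private String getWelcomeText() { return "Welcome, user!"; }
    }

Note how the test fails only once, at assertAll(), with all the collated soft failures - exactly the behavior from the requirements above.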


Soft Assertions in any other tech stack?

What if you are not using TestNG, or Java - rather, what if you are using a completely different programming language / toolset / test-runner? Can you still use Soft Assertions?

Absolutely YES! All you need to understand is the concept, and then figure out the best way to implement the same, if an out-of-the-box solution does not exist in that tech stack (see the sketch below).
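
The concept really is that small - collect the failures you can continue from, and fail once at the end. A bare-bones sketch in plain Java, with no test-runner dependency, to port to whatever stack you use:

    import java.util.ArrayList;
    import java.util.List;

    // The essence of soft assertions, independent of TestNG or any test runner
    public class SoftVerifier {
        private final List<String> failures = new ArrayList<>();

        // Soft check: record the failure and keep going
        public void verify(boolean condition, String message) {
            if (!condition) {
                failures.add(message);
            }
        }

        // Call once at the end of the test: fail once, with all collated failures
        public void assertAll() {
            if (!failures.isEmpty()) {
                throw new AssertionError(failures.size() + " soft assertion failure(s):\n"
                        + String.join("\n", failures));
            }
        }
    }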

Hope this helps you!


Thursday, July 12, 2018

Return of the todo (learning) list

For at least a decade, I have had a list of TODOs which I actively updated and maintained. The items on this list focused on -

  • what new things I wanted to learn / experiment with
  • new conference talk ideas
  • open source ideas / updates
Unfortunately, due to various reasons (some of which were of my own doing), the focus on learning and experimentation got lost ... and I felt awful about it. I also shared with some friends and colleagues my pain at being unable to find the time / opportunity / focus / support to stay in a continuous-learning phase. But not much could be done / changed in those circumstances.

But I am very happy to share that the list is back, and back with a bang!!!

The days of learning and experimentation continue. My list is now overflowing, and it keeps growing with ideas and things I want to learn and experiment with.

I am happy again!!

Monday, April 16, 2018

Essence of Testing - A new beginning

In my career so far, I have been very fortunate to have got great mentors, and a variety of opportunities to learn, add value, and share my experiences with others around me.

Here are some of these experiences:
  • Worked in various-sized organisations across the globe over the past couple of decades
  • The teams have been big and small
  • Played a variety of roles - Quality Analyst, SDET (Software Development Engineer in Test), Product Quality Engineer (PQE), Automation Engineer, Consultant, Coach, Project Manager, Director - Quality, Support Engineer, etc.
  • Worked with teams having products in different domains - Health care, eCommerce, Banking / Finance, Retail, Entertainment (OTT), Research, etc. 
  • Within organisations (in the B2B and B2C space): WebMD, Borland Software, Microsoft (Redmond, USA), AmberPoint (USA / India), ThoughtWorks, Vuclip
  • Shared experiences with others via Meetups and Conferences world-wide
  • Created opensource tools like WAAT, TTA, TaaS
After working as an employee in these organisations for almost a couple of decades, I have now taken the plunge to do something different.


I am now looking forward to working with Organisations and Teams, to help co-create optimized solutions towards shipping a quality product. Leave me a message on my blog, or send me an email at abagmar@gmail.com, to talk more about how we can work together!


As part of this journey, even before I was able to buy my own laptop and warm it up, I got an interesting consulting assignment, thanks to a dear friend.
This assignment was exactly what I needed to get started - it was a Discovery Workshop, with the following objectives:
  • Learn and understand the current state of the team and the product they were building
  • Understand current (perceived) challenges
  • Suggest improvements for the team in areas of:
    • Tweaks in the current processes
    • Practices to be adopted, improved upon, or stopped (as anti-patterns not adding value)
    • Identify opportunities to Test better, and early
    • Suggest a Test Automation Pyramid that is fit-for-purpose for the team
    • Suggest strategy, tools and approach for end-2-end (e2e) functional automation

As I usually do, I started off the workshop with my favorite tool for note-taking - Mind Maps! As the conversations evolved and got more specific, I moved from a Balanced Mind-Map to a Fishbone Mind-Map.




Eventually I consolidated my thoughts and created the Discovery Workshop Report for the client in the form of a slide deck, using the following approach.


  • Learning from Discovery
    • Talk with the Team members
    • See & understand the project management tool, quality of requirements, etc.
    • See the code - understand complexity, types of tests written, quality of tests, etc.
    • See the Testing related artifacts - test cases, test execution strategy, exploratory testing, etc.
    • See the CI server - how deployments happen, what causes the builds to fail, etc.
  • Recommendations
    • For all the above areas, I created recommendations on the different aspects that may help the team move ahead in a better way

Discovery and Recommendations were based on each specific activity:
  • Process
  • Architecture
  • Requirements
  • Development
  • Testing
  • Deployments

Once the learning from the Discovery and Recommendation stages was well understood (by me), I then created a Suggested Plan of Execution (in phases / milestones):
  • From the recommendations, I created milestone-based plans for what could be started immediately, and what decisions the team needed to take to move forward in other areas

In retrospect, this 2-week engagement was, without my planning for it, quite agile. There were periodic demos of the POCs, regular progress sharing, and changes in the direction of discovery and recommendations based on learning.

The Client also appreciated the quality of conversations, and the results that were shared.

All in all, a very satisfying beginning to my new journey with Essence of Testing!


Tuesday, March 27, 2018

Test Driven Development (TDD) and its modern variations

NOTE / RECOMMENDATION - This blog post is to be read with your funny side switched on!

First - let's answer the question - What is TDD?
Directly from https://en.wikipedia.org/wiki/Test-driven_development - Test-driven development (TDD) is a software development process that relies on the repetition of a very short development cycle: requirements are turned into very specific test cases, then the software is improved to pass the new tests, only. This is opposed to software development that allows software to be added that is not proven to meet requirements.
American software engineer Kent Beck, who is credited with having developed or "rediscovered" the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.
Test-driven development is related to the test-first programming concepts of extreme programming, begun in 1999, but more recently has created more general interest in its own right.
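
For those who prefer code to definitions, here is a minimal red-green illustration of that cycle, using TestNG (my own example - the price-formatting requirement is purely illustrative):

    import org.testng.Assert;
    import org.testng.annotations.Test;

    public class PriceFormatterTest {

        // Step 1 (RED): this test is written first, and fails until PriceFormatter exists
        @Test
        public void formatsCentsAsDollarAmount() {
            Assert.assertEquals(PriceFormatter.format(1250), "$12.50");
        }

        // Step 2 (GREEN): the simplest code that makes the test pass
        // Step 3 (REFACTOR): clean up the design, with the test as a safety net
        static class PriceFormatter {
            static String format(int cents) {
                return String.format("$%d.%02d", cents / 100, cents % 100);
            }
        }
    }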

However, over the years, fortunately or unfortunately, I have also come across various different techniques (used consciously / unconsciously) in Software Development, to get the work done. Some of them are listed below - in alphabetical order:

BDD = Behavior / Business Driven Development
BDD = Blame Driven Development
BDD = Buzzword Driven Development
CDD = Calendar Driven Development
CDD = Checklist Driven Development
CDD = Chadi (stick) Driven Development
CDD = Constraint Driven Development
DDD = Date Driven Development
DDD = Defect Driven Development
DDD = Document (PRD) Driven Development
EDD = Escalation Driven Development
EDD = Estimation Driven Development
EDD = Excel Driven Development
FDD = Fashion Driven Development
FDD = Fear Driven Development
FDD = Footwear (punishment) Driven Development
HDD = Hope Driven Development (fingers crossed)
IDD = Instinct Driven Development
IDD = Issue Driven Development
JDD = Jira Driven Development
MDD = Metrics Driven Development
MDD = Manager / Management Driven Development
PDD = Patch(work) Driven Development
PDD = Plan Driven Development
PDD = Prayer Driven Development
PDD = Process Driven Development
RDD = Resource Driven Development
RDD = Resume Driven Development
SDD = Stackoverflow Driven Development
SDD = Stakeholder Driven Development

And the last one -
NDD = No-Drive (towards) Development

Any other style of development you have come across in your experience?
Hope you had fun reading the list! 


Saturday, March 17, 2018

Measuring Consumer Quality - The Missing Feedback Loop

I spoke at vodQA, ThoughtWorks Pune, on "Measuring Consumer Quality - the Missing Feedback Loop".

This talk addresses the why and how from my earlier blog post on "Understanding, Measuring and Building Consumer Quality". I recommend you read that first, before going through the slides and video for this talk.


Abstract:

How to build a good quality product is not a new topic. Proper usage of methodologies, processes, practices and collaboration techniques can yield amazing results for the team, the organisation, and for the end-users of your product.

While there is a lot of emphasis on the processes and practices side, one aspect that is still spoken about only "loosely" is the feedback loop from your end-users that helps you make better decisions.

So, what is this feedback loop? Is it a myth? How do you measure it? Is there a "magic" formula to understand the data received? How do you add value to your product using this data?

In this interactive session, we will use a case study of a B2C entertainment-domain product (having millions of consumers) as an example to understand and also answer the following questions:

  • The importance of knowing your Consumers 
  • How do you know your product is working well? 
  • How do you know your Consumers are engaged with your product? 
  • Can you draw inferences and patterns from the data to reach a point of being able to make predictions on Consumer behaviour, before making any code change?

Video:


Slides can be found here.

Pictures:



Friday, March 9, 2018

MAD-LAB - Capabilities & Features - Agile India 2018

I spoke about "Build your own MAD-LAB - for Mobile Test Automation for CD" at Agile India 2018.

Though I have spoken on a similar topic before - answering the question "Why did I need to build my own MAD-LAB?" at vodQA in July 2017 at Vuclip - quite a few things have changed since then.

Knowing the value of "being agile", a day before my scheduled talk at Agile India 2018 I decided to revamp the content substantially. To add to my challenges (and thanks to "testing" my slides in the conference room before the talk), I also realised that the slide size format I was using was incorrect, and that the projector was not "set up / configured" correctly, making all my slide colours go haywire.

So, after scrambling for the last 10 minutes before the talk, I managed to get this sorted out correctly (at least that is what I think now, in hindsight).

Moral of the above story - do a test / dry-run of your slides before your audience comes in!

That said, here is the abstract of the talk.


Abstract

In this age of a variety of cloud-based-services for virtual Mobile Test Labs, building a real-(mobile)-device lab for Test Automation is NOT a common thing – it is difficult, high maintenance, expensive! Yet, I had to do it! 

The slides are part of the discussion on the Why, What and How of building my own MAD-LAB (Mobile Automation Devices LAB). The discussion also includes the Automation Strategy, Tech Stack, and Capabilities & Features of MAD-LAB, along with the learnings from successful & failed experiments in the journey.

Slides

Below are the slides from my talk. The link to the video will be shared once it is available.




Some pictures



Friday, January 26, 2018

Agile Testing & Patterns for a good Test Automation Framework

2018 started with a bang! I got an opportunity to speak and share my experiences in 2 rocking meetups.

Patterns of a "good" Test Automation Framework

The TechnoWise meetup on 13th Jan, on Patterns of a "good" Test Automation Framework, went very well, with a lot of interaction and discussion along the way.



What is Agile Testing? How does Automation Help?


Then there was an impromptu meetup, set up by the very proactive ISQA community, at the GO-JEK office in Jakarta, Indonesia on 16th Jan. The topic there was "What is Agile Testing? How does Automation help?". In this meetup, I also covered aspects of the Career Path of a Tester, and QA Skills and Capabilities.

There were many amazing experiences from this meetup -

  • The ISQA community is very active. The meetup was set up in literally a few days, and there were 150+ attendees
  • The GO-JEK office space is a very fun place. They actually have an auditorium in the office to host meetups and similar activities - apparently, once a week!!
  • The questions / interactions with the attendees were very insightful

Here are the slides shared in the meetup. I will share the link to the video as well, once available.





Friday, December 29, 2017

Understanding, Measuring and Building Consumer Quality

It has been a long time since I posted anything on my blog. For those who don't know, I am working at Vuclip, a B2C company in the OTT space, where we have millions of consumers using our product via Android and iOS native apps, and the browser too.

In the past few months, I have been neck-deep in a new and very exciting initiative. Before I share what that is - here is a look at the traditional approach to Quality.


Typically, practices, processes and tools are chosen and implemented to help build a good Quality Product for the end-user. Evolving from the Waterfall methodology to the Agile methodology has been challenging for many (organizations and individuals alike), but it has proven to be a huge step forward towards the goal of building a good and usable product.

In this course of time, we have (thankfully) changed the thought process of considering QA to be the "gate-keeper of Quality" to QA being a "Quality Advocate and Quality Enabler" for the team and the product. A very important change as a result has been changing the focus of QA from "finding defects" to "preventing defects".

And rightly so! After all, why should the QA be the gate-keeper and -
  • take the responsibility and blame for someone giving poor / incomplete requirements? or,
  • for someone writing bad code during development?
The QA is not a scavenger meant to clean up the mess created by others. The QA, instead, is an enabler who -
  • helps bring all stakeholders together through the life-cycle of the product - from conceptualization to end-delivery, 
  • asks a lot of questions to find gaps, clarify assumptions, etc.
  • helps find and radiate information including risks, and,
  • is an active part of doing whatever it takes to prevent defects coming into the system
The Agile practices help do this in a collaborative way - getting features to completion in an incremental fashion, and iterating / pivoting based on the feedback received. This is also what the practices related to Continuous Delivery enable us to do well.

But this is nothing new, at least for me. After all, during my fantastic journey at ThoughtWorks, I would say that these were basic tenets of why and how we worked.

That said, my eye-opener in the past few months has been to take this thought process many steps forward.

My agenda has been - how can I help influence and raise the bar of quality in such a way that we not only build a quality product, but are also in a position to predict how our millions of consumers will be able to use it.

We are calling this initiative Consumer Quality -
  • how do we understand the Quality (= value) of the product as perceived by our Consumers,
  • what data can be relevant to understanding this, and how can we be proactive about looking at this data while building a quality product, and,
  • the Nirvana stage - how can we predict which actions will have the desired impact on Consumer Quality!
I hope to be able to share more of this with you in 2018!

Happy New Year everyone! Keep Learning, Keep Sharing!