
Monday, November 23, 2015

TTA in Discuss Agile Day Pune

I spoke at Discuss Agile Day, Pune, on 22nd Nov on "To Deploy, or Not To Deploy - Decide using TTA's Trend & Failure Analysis".

Below are the details of the talk:

Abstract

In a fast-moving environment, where Continuous Integration (CI) and Continuous Delivery (CD) are a necessity and not a luxury, how can teams decide if a product is ready to be deployed to the next environment and go 'live'?

What is the health of your product portfolio at any point in time?
Can you identify patterns over time that help you make better decisions and improve the quality of your product(s)?
Test Automation across all layers of the Test Pyramid enables teams to get quick feedback about the health of the product under test.

However, in an organization with multiple products in its portfolio, how can you get collated quality / health information from all the products, quickly and in real time? Or, for a large program of work with various projects being worked on in parallel by numerous teams across the world, how can the relevant people quickly get consolidated quality / health information for the whole program?

In such cases, how can you:
- figure out any Trends / Patterns in the quality, or,
- do any meaningful Comparative Analysis (say, between the quality of the last release vs. the next release), or,
- do quick Failure Analysis and prioritize the 'fixing' of issues in an efficient fashion, and,
- do some quick Functional Performance Benchmarking.

At present, this needs to be done manually.
Learn an effective way to answer the above questions - with TTA (Test Trend Analyzer), an open-source product.

TTA gives you real-time, visual insights into the health of the product portfolio using the Test Automation results. This allows teams to base decisions about deploying the product to the next environment on actual data points, instead of 'gut feel'.

Slides from the talk


Monday, October 12, 2015

Web Analytics and the new kids in town!

I spoke at the Agile Testing Alliance Global Gathering on 8th Oct in Bangalore on "The What, Why and How of Web Analytics Testing".

This talk was my take on explaining a very important, yet often ignored, aspect of Product / Application Development - Web Analytics. Below is the abstract of the talk, followed by the slides and video.


Topic: The What, Why & How of Web Analytics Testing

Learning Objectives:

The most used and most talked-about buzzwords in the Software Industry today are … IoT and Big Data!

With IoT, and a creative mindset looking for opportunities and ways to add value, the possibilities are infinite. With each such opportunity, there is a huge volume of data being generated - which, if analyzed and used correctly, can feed into creating more opportunities and increased value propositions.

There are 2 types of analysis that one needs to think about.
1. How is the end-user interacting with the product? This gives some understanding of how to re-position the product and focus on its true value-adding features.
2. With the huge volume of data being generated by the end-user interactions, and the data being captured by all devices in the food-chain of the offering, it is important to identify patterns from what has happened, and find out new product / value opportunities based on usage patterns.

Learn what Web Analytics is, why it is important, and see some techniques for testing it manually as well as automating that validation.
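As a flavour of what that automated validation could look like, here is a minimal sketch: it assumes the page's outgoing analytics calls have already been captured (for example via a proxy) into a list of request URLs, and the endpoint and parameter names are illustrative assumptions, not anything from the talk.

```javascript
// Hypothetical example: validate that an expected analytics event was fired.
// Assumes the browser's outgoing requests were captured (e.g. via a proxy)
// into an array of URLs before this Jasmine spec runs.
describe('Web Analytics validation (illustrative)', function () {

  // In a real setup this would be populated by a proxy / network capture;
  // the endpoint and parameter names below are assumptions for illustration.
  var capturedRequests = [
    'https://analytics.example.com/collect?event=add_to_cart&product=sku-123',
    'https://analytics.example.com/collect?event=page_view&page=%2Fcart'
  ];

  function eventWasFired(eventName) {
    return capturedRequests.some(function (url) {
      return url.indexOf('event=' + eventName) !== -1;
    });
  }

  it('fires an add_to_cart event when a product is added', function () {
    expect(eventWasFired('add_to_cart')).toBe(true);
  });
});
```

The same assertion can be done manually by watching the network tab, or automated by pointing the browser at a capturing proxy before running the user journey.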


Slides from the talk

Saturday, August 22, 2015

Patterns in Test Automation

I spoke at vodQA Hyderabad on Sat, 22nd August 2015, about Patterns in Test Automation - Frameworks, Data & Locators.

The slides are available on SlideShare:


The video is available on YouTube:



Abstract

Building a Test Automation Framework is easy - there are so many resources / guides / blogs / etc. available to help you get started and to solve the issues you hit along the journey.

However, building a "good" Test Automation Framework is not very easy. There are a lot of principles and practices you need to apply, in the right context, and a good set of skills is required to make the Test Automation Framework maintainable, scalable and reusable.

Design Patterns play a big role in helping achieve this goal of building a good and robust framework.

In this talk, we will discuss, and see examples of, the various types of patterns you can use for:

  • Building your Test Automation Framework
  • Test Data Management
  • Locators / IDs (for finding / interacting with elements in the browser / app)
Using these patterns, you will be able to build a good framework that helps keep your tests running fast and reliably in your CI / CD setup!
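To make the locator-related patterns concrete, here is a minimal Page Object sketch (not from the talk; the page, URL, element IDs and the use of selenium-webdriver are assumptions for illustration): locators live in one place, and tests interact with the page only through intention-revealing methods.

```javascript
// Minimal Page Object sketch (assumed page and selectors) using selenium-webdriver.
const { Builder, By } = require('selenium-webdriver');

// All locators for the login page live here; tests never use raw selectors.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
    this.locators = {
      username: By.id('username'),               // assumed element IDs
      password: By.id('password'),
      submit:   By.css('button[type="submit"]')
    };
  }

  async open() {
    await this.driver.get('https://example.com/login');  // assumed URL
  }

  async loginAs(user, password) {
    await this.driver.findElement(this.locators.username).sendKeys(user);
    await this.driver.findElement(this.locators.password).sendKeys(password);
    await this.driver.findElement(this.locators.submit).click();
  }
}

// Usage in a test: the test reads as a user intention, not as a set of selectors.
(async function demo() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    const loginPage = new LoginPage(driver);
    await loginPage.open();
    await loginPage.loginAs('demo-user', 'demo-password');
  } finally {
    await driver.quit();
  }
})();
```

With locators kept in one place, a UI change touches a single file rather than every test that exercises that screen - which is exactly what keeps the framework maintainable.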

Learning outcome


  • Patterns for building Test Automation Framework
  • Patterns for Test Data Management, with pros and cons of each
  • Patterns for managing locators / IDs for interaction with UI



Friday, August 14, 2015

Client-side Performance Testing workshop video

As mentioned here, I conducted a Client-side Performance Testing workshop at TechJam.

It was a full house, and it almost turned into a flop show because there was no Wi-Fi available - an essential requirement for the workshop. There were two things that saved me:
1. The attendees, thankfully (in this case), had not read the prerequisites carefully - most of them came without a laptop.
2. Because of that, I could get by using a 3G USB connection and just demo the tools I wanted to show.

At the end of the day, all was good. I got good feedback from the participants - they really enjoyed the workshop and found it very informative and useful. (Thank you all again for the kind words!)

Below is the video from the workshop.


The slides are available here:


Sunday, August 9, 2015

Questions about the Test Pyramid

I recently got asked a couple of questions about the Test Pyramid by people who had watched my presentation on "Enabling Continuous Delivery (CD) in Enterprises with Testing". I thought it would be good to reply publicly - it may help others who have similar doubts.

If you have any other questions, please reach out, or add them as comments on this post.

  • Why do you talk about a JavaScript test? I mean, why don't you consider this type of testing as part of another layer? So, what do you mean by a JavaScript test?
    • JavaScript testing requires a different toolset, not the standard xUnit-based ones, hence I classify it separately. Also, there is potentially a lot of logic that can be built in the JavaScript layer - so it is essential to write tests for that too, say using Jasmine (a small example appears below, after this Q&A).
  • What's the difference between View and UI?
    • A UI test should focus on business / user journey validations. A View test, however, is different. Consider a journey that has a 5-step / screen workflow. To validate some UI change on the 4th step / screen, you would need to go through steps 1 to 4 in sequence and then validate the change - a very slow and costly approach. Instead, if you build the right type of stubs / mocks, you can set up state in your product that simulates steps 1-3 being completed, open the UI directly at step #4, and validate your changes (a sketch of this appears below, after the JavaScript example). That is the difference between View and UI tests.
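As an example of the JavaScript-layer tests mentioned in the first answer, here is a minimal Jasmine spec; the function under test is an assumed piece of client-side logic, not something from the presentation.

```javascript
// Assumed piece of client-side logic: compute the cart total shown in the UI.
function cartTotal(items) {
  return items.reduce(function (sum, item) {
    return sum + item.price * item.quantity;
  }, 0);
}

// Jasmine spec exercising that logic directly, without driving a browser UI.
describe('cartTotal', function () {
  it('sums price times quantity across items', function () {
    var items = [
      { price: 10, quantity: 2 },
      { price: 5,  quantity: 1 }
    ];
    expect(cartTotal(items)).toBe(25);
  });

  it('returns 0 for an empty cart', function () {
    expect(cartTotal([])).toBe(0);
  });
});
```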
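And here is a rough sketch of the View-test idea from the second answer: the state for steps 1-3 is seeded through a hypothetical test-support endpoint, and the test opens step 4 directly. The endpoint, URLs, selectors and use of selenium-webdriver are all assumptions used only to illustrate the approach.

```javascript
// Illustrative View test: seed workflow state via a stub, then validate step 4 directly.
const { Builder, By, until } = require('selenium-webdriver');

// Hypothetical helper: asks a test-support / stub endpoint to mark steps 1-3 as done
// for the given session. In a real setup this could be a stubbed backend or an API call.
// (Uses Node's global fetch; any HTTP client would do.)
async function seedWorkflowState(sessionId) {
  await fetch('https://example.com/test-support/seed?steps=1-3&session=' + sessionId, {
    method: 'POST'
  });
}

(async function viewTestForStepFour() {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await seedWorkflowState('demo-session');                  // steps 1-3 simulated, not clicked through
    await driver.get('https://example.com/checkout/step-4');  // open step 4 directly
    const heading = await driver.wait(
      until.elementLocated(By.css('h1.step-title')), 5000);
    console.log('Step 4 heading:', await heading.getText());  // validate the view change here
  } finally {
    await driver.quit();
  }
})();
```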

Thursday, August 6, 2015

Agile Testing - Metrics can be fun too!

Metrics are meaningless unless seen in the right context. In this case, my "right" context is purely a "feel-good factor".

In April 2011, I published the "Agile QA Process" paper on SlideShare. I am very happy to see it has received over 30000 views and has been downloaded over 1400 times!

On a similar note, I created a mindmap for Test Insane - titled "Agile QA - Capabilities & Skills". That also seems to be striking a chord - with almost 1000 views in under 25 days!

So in this case - Metrics are fun! I don't mind this ego boost to continue writing more, and sharing more!

Saturday, August 1, 2015

Experiences from "Agile Testing" workshop

As mentioned in this post, I conducted an "Agile Testing" workshop on 24th July 2015.

Here are the slides from the workshop:

  • What is Agile Testing? - Learn what it means to test on Agile projects


  • Effective strategies for Distributed Testing - Learn practices that help bridge the Distributed Testing gap!


  • Test Automation in Agile Projects - Why? What? How? - Why is Test Automation important, and how do we implement a good, robust, scalable and maintainable Test Automation framework?


  • Build the "right" regression suite using Behavior Driven Testing (BDT) - Behavior Driven Testing (BDT) is an evolved way of thinking about Testing. It helps in identifying the 'correct' scenarios, in the form of user journeys, to build a good and effective (manual & automated) regression suite that validates the Business Goals.

Thursday, July 16, 2015

Automating Look & Feel, along with Functional validation

Typically we automate the validation of functionality, using various options and approaches based on the Test Automation Pyramid. However, there are various reasons why one may want to automate the Look & Feel validation of their application (web / native).

This post is not about whether or when we should automate the Look & Feel aspects of the product. It assumes you have already had the discussions and done the evaluation needed to decide that, yes, you need to automate the Look & Feel of the product under test.

The next question is - how? What are the alternatives of accomplishing this type of automation?

If you need basic validation, there is an open-source alternative - you can do Visual Testing and Validations using PhantomCSS. Vishnu & Shridhar spoke about this at vodQA Pune - Innovations in Testing. See the slides and videos from their awesome session.
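To give a flavour of this approach, here is a minimal PhantomCSS script of the kind you would run with CasperJS; the URL, selector and directory names are placeholders, and the options shown are assumptions to be checked against the PhantomCSS documentation.

```javascript
// Illustrative PhantomCSS check, run with CasperJS (URL and selector are placeholders).
var phantomcss = require('phantomcss');

casper.test.begin('Visual regression of the page header', function (test) {
  phantomcss.init({
    screenshotRoot: './screenshots',        // where baseline images are kept
    failedComparisonsRoot: './failures'     // where diffs for failed comparisons are written
  });

  casper.start('http://example.com/');      // assumed page under test
  casper.viewport(1280, 800);

  casper.then(function () {
    // The first run records a baseline; later runs are compared against it.
    phantomcss.screenshot('#header', 'page-header');
  });

  casper.then(function () {
    phantomcss.compareAll();                // fails the test if the pixels differ
  });

  casper.run(function () {
    test.done();
  });
});
```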

If you need more flexibility and functionality from the Look & Feel automation, and want to do this cross-browser / on multiple devices, then I recommend you look at the Applitools Eyes product. I have seen a couple of demos of this product from Moshe and Adam - they were kind enough to record one of the demos and allow me to share it with everyone - you can see the demo here on YouTube.



Applitools has a free account for individuals, with, of course, certain limitations on usage (not functionality). You can sign up for your account here and try it out.

Tuesday, July 14, 2015

Client-side Performance Testing Workshop in TechJam, 13th August 2015

I am conducting a Client-side Performance Testing workshop at TechJam on Thursday, 13th August 2015.

You can register for the same from the TechJam page.


Abstract

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing.
Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and how to put that in the context of the product under test. We will see, using a case study, the impact of caching on performance - the good & the bad! We will then experiment with some tools like WebPageTest and Page Speed to understand how to measure client-side performance.
Lastly - just understanding the performance of the product is not sufficient. We will look at how to automate the testing of this aspect - using WebPageTest (with a private instance setup), and experiment with YSlow as a low-cost, programmatic alternative to WebPageTest.
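As a taste of that automation, here is a rough sketch of driving the YSlow PhantomJS port from a Node.js script. It assumes phantomjs is on the PATH and yslow.js has been downloaded locally; the command-line flags and the JSON field used for the score are based on YSlow's documented usage and should be verified against the version you have.

```javascript
// Rough sketch: run YSlow (PhantomJS port) against a URL and fail if the score is low.
// Assumes phantomjs is on the PATH and yslow.js is in the current directory;
// verify the flags and output fields against your YSlow version.
const { execFile } = require('child_process');

const url = 'http://example.com/';            // page under test (placeholder)
const minimumScore = 80;                      // arbitrary threshold for this example

execFile('phantomjs', ['yslow.js', '--info', 'basic', '--format', 'json', url],
  function (error, stdout) {
    if (error) {
      console.error('YSlow run failed:', error.message);
      process.exit(1);
    }
    const result = JSON.parse(stdout);
    const score = result.o;                   // assumed: 'o' is the overall score in YSlow's JSON output
    console.log('Overall YSlow score for ' + url + ': ' + score);
    process.exit(score >= minimumScore ? 0 : 1);
  });
```

Wired into a CI job, a script like this turns client-side performance from something you look at occasionally into a check that runs on every build.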

Expected Learnings

  1. What Performance Testing and Performance Engineering are.
  2. Hands-on experience with some open-source tools to monitor, measure and automate Client-side Performance Testing.
  3. Examples / code walk-through of some ways to automate Client-side Performance Testing.

Prerequisites

  1. Participants are required to bring their own laptop for this workshop.
  2. Also, please install PhantomJS on your machine (http://phantomjs.org/download.html).