
Tuesday, November 6, 2018

Is the Future of Test Automation I predicted already here?

Today, almost at the end of 2018, I have come across many tools focused on making Test Automation easier, faster, more reliable and more valuable to the teams & the product - like testim.io, testcraft.io, Katalon, cypress.io, mabl, test.ai, etc. These tools are very interesting and very promising for the value proposition they bring to the table. 

As I reflect on these shiny new tools, my mind wanders back to 2009 / 2010 when I was toying with the idea of what would be next in Test Automation Tools & Infrastructure space. I had penned my thoughts and published an article on ThoughtWorks Insights with the title - "Future of Test Automation Tools & Infrastructure" (https://www.thoughtworks.com/insights/blog/future-test-automation-tools-infrastructure). 

If we look deeper into my post, the tools I mentioned above (and many others that I am probably unaware of) are conceptually along the lines of what I had envisioned in 2009 / 2010. They use a very interesting blend of past experiences, in some cases advanced technology like AI & ML, in some cases the cloud / SaaS model, and more importantly - they push the boundaries to do things differently! I am personally very happy to see this happen.

That brings another set of questions to my mind now - if what I had thought of back then is now true, and a reality, then what is next? What will the next generation of new, interesting, shiny tools look like in the next 5 years?


Monday, July 23, 2018

A few thoughts on Test Automation


Deepanshu Agarwal and Brijesh Deb asked some very interesting questions in a LinkedIn post. Since I have some verbose thoughts on these, I thought it better to respond via a blog post instead.

  • Why is Test Automation still considered suitable only for regression testing? What about writing automated tests sooner, as in the case of Test Driven Development?
    • [Anand] - Depends on what you call test automation. If ONLY FUNCTIONAL, then it's better to explore the product first, investigate / have conversations with developers about what lower-level tests are already automated, and then, based on a cost / risk-value analysis, decide what else needs to be automated at the Functional layer.

      A tangential rant ....
      There is a big reason why we have classifications such as SMOKE, SANITY and REGRESSION ONLY in Functional Automation. These tests are inherently very slow and brittle, and even with a lot of effort, they give poor feedback on the exact point / reason of failure. 

      I have never seen any other form of tests - say Unit tests, which would (hopefully) be orders of magnitude larger in number than the functional tests - ever have any such classification. We all just say the unit tests ran, not the smoke unit tests ran. 

      We need to grow up and understand the reason behind this. We need to keep our top-of-the-pyramid tests as few in number as possible. We need to ensure we use good programming / development practices and get quick and reliable feedback from these tests. Else we will keep focusing on the symptoms, and never get to the root cause.

      --- Rant ends

      Once we understand this, it is a matter of understanding, in context, what can and needs to happen first, and what next. In most cases TDD will work. But TDD as a Functional Spec may or may not be overkill ... the team has to decide that.
  • Why do the automated tests always have to be derived from manual tests?
    • [Anand] - What is a manual test? Something that a machine is not performing? How do you do "manual testing"? Is Exploratory Testing a subset of Manual Testing, or the other way around, or are there any other thoughts on that?

      From the perspective of "automated tests" - I read it as "automated functional tests" here. In that case, the answer to the above question holds true here as well.

      Continuing from that thought - I think this approach (of deriving automated tests from so-called manual tests) is better than deciding upfront which tests to automate and then proceeding with the implementation without any thought or regard for other learnings along the way.
  • The tests classified as manual tests are focused only on ensuring certain checks. What about actually running some tests to discover the unknown?
    • [Anand] I don't want to get into the 'checks' debate. It is futile!

      All I have understood is - you cannot just spend time looking at the requirements / specs and write down (in your mind / bullet points / story cards / some fancy ALM tool) your test cases / scenarios. 

      That list is just the starting point of your journey of exploration and experimentation with the product-under-test. If you think that what you have identified is your actual scope of testing, then ALL THE BEST to you, your team and your product - because there are going to be so many opportunities you have missed to make the product better and more usable for the end-user. Unfortunately, a lot of organisations still look for "regression" testing cycles - where (you think) you execute all the tests that were identified a long time ago. However, everyone knows it is best case / best effort, IF AT ALL, to actually follow each and every step of that regression cycle. Such a waste of time and effort - when more meaningful testing could have been performed during that time.
  • Why is it that exploratory tests are still considered suitable only for manual testing? How about automating exploratory tests using AI?
    • [Anand] What is the meaning of "exploration"?
      As per a quick online search, it means: the action of travelling in or through an unfamiliar place in order to learn about it.
      Now - how can you automate the unknown / unfamiliar? You can use tools to help figure out what is unknown / unfamiliar ... but once you know it, it no longer remains 'unknown'. I think buzzwords like AI and ML are tools that help bridge the gap between the known and the unknown. But we would still need to guide and use these tools and technologies to our advantage, to aid in our exploration.

Thursday, July 12, 2018

Return of the todo (learning) list

For at least a decade, I have had a list of TODOs which I actively updated and maintained. The items on this list focused on - 

  • what new things I wanted to learn / experiment with
  • new conference talk ideas
  • open source ideas / updates
Unfortunately, due to various reasons (some of which were my own doing), the focus on learning and experimentation got lost ... and I felt awful about it. I also shared my pain with some friends and colleagues about being unable to find the time / opportunity / focus / support to stay in a continuous learning phase. But not much could be done / changed in those circumstances.

But I am very happy to share that the list is back, and back with a bang!!!

The days of learning and experimentation continue. My list is now overflowing, and continuing to grow with ideas and things I want to learn and experiment with.

I am happy again!!

Thursday, June 30, 2016

Learnings from Selenium Conference 2016, Bangalore

The value one gets from attending any conference / training / meetup / etc. depends on various aspects, some of which are mentioned below (in no particular order):

  • Individual skills & capabilities
  • Past experiences
  • Existing knowledge / information / expertise on the subject 
  • Open mindedness
  • Willingness to learn
  • Current work (tools & tech stack, challenges, risks, priorities, backlog, tech debt, team members, etc.)

The above aspects definitely played a part in what takeaways I had from the recently concluded Selenium Conference 2016 in Bangalore as well.

Here are my key takeaways, which I am going to work on learning more about, or implementing in the near future - special thanks to +Dave Haeffner, +Marcus Merrell, +Simon Stewart and +Bret Pettichord for helping me find these takeaways as part of various conversations during these few days.


  • Related to Protractor
    • Use a Proxy Server in tests (Protractor framework) to capture a HAR file on specific actions (AJAX calls) - and capture performance metrics from the same (see the first sketch after this list)
    • Read about and experiment with the Marionette driver for Firefox - maybe it helps me overcome some of my challenges with Firefox & Maps in a CI environment (headless using xvfb)
    • Remove "phantomJS" as a supported browser from my framework by ensuring headless tests work with Chrome & Firefox using xvfb
    • Highlight elements before taking screenshots when running tests - this will help in debugging (see the second sketch after this list)
    • Experiment with different loggers & reporters - Allure, Winston logger
    • Better "promise" handling in framework to keep abstraction layers sane
  • Revive WAAT - Web Analytics Automation Testing Framework - create new plugin using Proxy Server approach. Also remove Omniture Debugger and HttpSniffer plugin.
  • Refocus energy on TTA - Test Trend Analyzer.
  • Keep vodQA going strong - it's a good community initiative
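
On the proxy / HAR takeaway above - here is a minimal sketch of the idea, using BrowserMob Proxy's documented REST API (create a proxy, start HAR capture, perform the action, pull the HAR back). It assumes a BrowserMob Proxy server already running on localhost:8080 and Node 18+ for the global fetch; everything other than the REST endpoints is illustrative.

```javascript
// Sketch only - capture a HAR around a specific test action via
// BrowserMob Proxy's REST API. Assumes the BrowserMob server is already
// running on localhost:8080; names other than the REST endpoints are
// illustrative.
const BMP = 'http://localhost:8080';

async function withHar(action) {
  // Create a proxy instance; BrowserMob responds with the port it opened.
  const { port } = await (await fetch(`${BMP}/proxy`, { method: 'POST' })).json();
  // NOTE: the browser must route traffic through localhost:<port> - e.g. via
  // the standard Selenium 'proxy' capability in the Protractor config.

  // Start HAR capture on that proxy.
  await fetch(`${BMP}/proxy/${port}/har`, { method: 'PUT' });

  await action(); // e.g. click the button that fires the AJAX calls

  // Fetch the HAR and tear the proxy down.
  const har = await (await fetch(`${BMP}/proxy/${port}/har`)).json();
  await fetch(`${BMP}/proxy/${port}`, { method: 'DELETE' });
  return har; // assert on har.log.entries - URLs, timings, payload sizes
}
```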
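And for the element-highlighting takeaway, a tiny sketch of one way to do it in Protractor - inject a style via executeScript, then take the screenshot. The helper name and file handling are illustrative:

```javascript
// Sketch - outline the element under test before capturing a screenshot,
// to make failure screenshots easier to read.
const fs = require('fs');

function highlightAndShoot(el, fileName) {
  return browser.executeScript(
      "arguments[0].style.outline = '3px solid red';", el.getWebElement())
    .then(() => browser.takeScreenshot())
    .then(png => fs.writeFileSync(fileName, png, 'base64'));
}

// Usage inside a spec:
// highlightAndShoot(element(by.css('.submit')), 'before-submit.png');
```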

See you all in Selenium Conference UK in November 2016!


Monday, March 14, 2016

Protractor for Angular apps?

I already asked these questions in the vodQA group on LinkedIn - but thought I would repeat them here as well - in case someone else reads this and has some thoughts.

I am experimenting (again) with Protractor for automation of Angular-based web-apps. This time around, my comfort with JavaScript is better (by a couple more % than before) - so I am better prepped for this challenge. 

That said, I am interested in knowing a few things on this:

  • Has anyone in the group worked with protractor recently? 
  • What have your experiences been in working with it? 
  • What are the roles involved in the automation implementation, execution and maintenance? 
  • What are the typical utilities you built in this framework?
  • How have you been modelling your page-object pattern with JS / protractor based frameworks? Or is there some other, better set of patterns for JS that should be used?
  • How did you build your page objects? How did you build and manage the composition / nesting of pages? Did a method of a page return an appropriate page object? (One possible shape is sketched after this list.)
  • How many tests exist in your framework? 
  • Do you run your tests in parallel?
  • Do your tests run in CI? If yes, which driver do you use? The Protractor site discourages the use of phantomJS. 
  • Would it be possible to share some (non-confidential) examples of how you built your Page Objects? How are your specs written? Any examples of that I could see?
  • Did anyone manage to run their tests against Safari / IE11 as well?
  • What about soft asserts? Did you implement this?
  • I saw a strange issue when running my test against Chrome - I got "element is not clickable at xxx coordinates". However, the same test passed against Firefox and phantomJS. Has anyone seen this before?
  • Given that the Protractor site does not recommend using the phantomJS driver, has anyone used xvfb for running their tests in CI?
  • What reporters do you use?
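
For context on the page-object questions above, here is a minimal sketch of one common way to model this with Protractor - where an action that navigates returns the page object of the destination page. All names and locators are illustrative, not from any specific project:

```javascript
// Sketch of one way to model page objects in Protractor, where a
// navigating action returns the page object of the destination page.
// All locators and names are illustrative.
class LoginPage {
  constructor() {
    this.username = element(by.model('credentials.username'));
    this.password = element(by.model('credentials.password'));
    this.signIn = element(by.buttonText('Sign in'));
  }

  get() {
    return browser.get('/login').then(() => this);
  }

  // Returning the next page object lets specs read as a journey:
  // new LoginPage().get().then(p => p.loginAs('u', 'p')).then(dash => ...)
  loginAs(user, pass) {
    return this.username.sendKeys(user)
      .then(() => this.password.sendKeys(pass))
      .then(() => this.signIn.click())
      .then(() => new DashboardPage());
  }
}

class DashboardPage {
  constructor() {
    this.greeting = element(by.css('.greeting'));
  }
}

module.exports = { LoginPage, DashboardPage };
```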

Thursday, February 18, 2016

Update & Learning from Webinar on Test Automation - Principles & Practices

On request from a very enthusiastic Tester - Buddhini from Sri Lanka - I did a webinar for the Sri Lanka Testing Community on "Test Automation - Principles & Practices".

Below is the flyer they created.



This was a different type of webinar - with all attendees in Sri Lanka in one room, and me, the presenter, speaking over GoToMeeting from ThoughtWorks Pune, India.

The only thing that did not work out well was the interaction - which was expected anyway, since the speaker and the attendees were not really face-2-face in the same room. It was also unfortunate that the Internet connection was not the best - hence I could not really hear the questions / comments the attendees were asking. That said, the attendees were very responsive - and thanks to video being enabled, we could use visual gestures to keep track and have simple yes / no type interactions.

The slides used during the talk can be seen below.




Lastly, there were some questions asked by attendees during registration / the talk. I will do a followup post with my answers to these. In the meantime, please post more questions, if any, in the comments section.

  • What is the best way to do Load testing? If it is for Java projects, then can we use Java Thread classes, or is JMeter a good tool to use? If so, how can we use the JMeter tool in a better way? What are the good tools for performance testing?
  • Is it a good practice to automate the GUI of the screens, or should we always automate server-side testing?
  • Is Perl a good interpreter for server-side testing? If not, what are the good tools we can use?
  • Is Selenium a good tool to automate functionality? Are there any other tools?
  • How to build an automation framework?
  • Are there any other open source tools for Desktop applications (other than Sikuli and AutoIT)?

Tuesday, January 12, 2016

The story of a 'small' vodQA ending up being 'x-large'

We are extremely happy to start the new year with YASV (Yet Another Successful vodQA) event, this time with the theme - Agile Testing Workshop - conducted on 9th January 2016 in the ThoughtWorks Pune office.

Why the theme - "Agile Testing Workshop"?

Over the past few years - after having worked on numerous projects, interacted with a lot of clients (and their partners / vendors), and gained insights from speaking with individuals & teams at conferences & organizations - we (the vodQA Pune team) realized that a decent portion of the Software (Testing) Industry lacks a good understanding of Agile and of effective Testing on Agile projects / teams.

So, we decided to conduct the next vodQA in Pune - focussed on Agile Testing to answer questions like - "What is Agile and what does it mean to Test on Agile projects / teams?"

Highlights

  • When we started planning for this edition of vodQA, the plan was to keep it very lean - in planning, execution and participation as well. For this, we planned to keep this vodQA 'small'. Little did we realize it would end up being a patiala peg.
  • What started out as an event aimed at 30 attendees soon shot up to 180+ RSVPs on Facebook and 160+ confirmations; eventually we had 85+ attendees. Including ThoughtWorkers, we (again) crossed 100+ people for vodQA Pune! There went a lot of our 'being-lean' out of the window!
  • This event was completely driven by the Facebook group (from announcements to registrations to updates).
  • We had quite a few attendees travel from outside Pune for vodQA (e.g. Mumbai, Nagpur)
  • This was one of the most vocal, enthusiastic and interactive audiences vodQA Pune has seen. They shared their experiences and asked a lot of questions as well.
  • True to our objective for this vodQA, we ensured there was sufficient time between sessions / workshops to facilitate discussions and answer specific questions from the attendees.
  • We had impromptu fishbowl discussions on certain Parking Lot questions.
  • After the first session of the day (Agile Game), the attendees celebrated (it was over) by bursting the balloons - early Diwali some would say … :)
  • A huge shoutout to the organisers who were constantly tweaking their execution methods, days before the event as our expected turnout gradually rose from 30 to 100+.

Agenda and Slides

Topic | By | Slides
Welcome note | Anand Bagmar |
Agile Game | Abhay Dalvi, Vardhan Bhatt & Vikrant Chauhan |
Tea break | |
What is Agile Testing? | Amit Gundiyal & Prasad Kalgutkar | http://www.slideshare.net/vodqanite/what-is-agile-testing-56891493
Effective Strategies for Distributed Testing | Preeti Mishra | http://www.slideshare.net/vodqanite/strategies-for-distributed-testing
Lunch | |
Testing the Mysterious Sphere | Anjali Wadhwa, Ashwini Ingle & Preeti Mishra | http://www.slideshare.net/vodqanite/testing-the-mysterious-sphere
Break | |
Test Automation - Principles, Practices | Vardhan Bhatt & Vikrant Chauhan | http://www.slideshare.net/vodqanite/lessons-learnt-from-test-automation-principles-practices
Tea + Snacks break | |
Patterns in Test Automation (Framework + Data) | Anand Bagmar | http://www.slideshare.net/abagmar/patterns-in-test-automation

Feedback

  • Overall workshop was wonderful. Presentation and content was good. Helpful to understand and implement in our current process.
  • Agile testing game taught us to focus more on quality than quantity & take feedback as soon as possible from the PO
  • Though I am not working in Agile env currently, I understood whole session and got to learn something.

The always rocking vodQA Pune team!!
vodQA Pune team

Monday, November 30, 2015

Enabling CD at Agile Noida

On 29th November 2015, I spoke at Agile Noida on "Enabling Continuous Delivery (CD) in Enterprises with Testing".

Below is the abstract, slides and video from the talk.

Abstract

The key objective of any organization is to provide / derive value from the products / services they offer. To achieve this, they need to be able to deliver their offerings in the quickest time possible, and at good quality!
In such a fast moving environment, CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury!
There are various practices that organizations need to implement to enable CD. Changes in requirements (a reality in all projects) need to be managed better. Along with this, processes and practices need to be tuned based on the team's capability, skills and distribution.
Testing (automation) is one of the important practices that needs to be set up correctly for CD to be successful. But this is tricky, and requires a lot of discipline, rigor and hard work by all the team members involved in the product delivery.
All the challenges faced in smaller organizations get amplified when it comes to Enterprises. There are various reasons for this - but the most common are scale, complexity of the domain, complexity of the integrations (to internal / external systems), involvement of various partners / vendors, long product life-cycles, etc.
In such situations, the Testing complexity and challenges also increase exponentially!
Learn, via a case study of an Enterprise - a large Bank - the Testing approach required to take them on the journey to achieving CD.

Slides


Video

 

 

Wednesday, September 23, 2015

Selenium Conference 2015 - it simply came, and went so fast

It's been a crazy summer - the 2nd week of September 2015 just amplified that ...

A good few months ago, we - the Selenium Conference Planning Committee - started on the journey of planning this year's Selenium Conference 2015. We started with debating where to have this year's conference, till Portland magically came up on the radar and became a reality. We met over Google Hangouts every 2 weeks initially, and then, as we got closer to the date, every week.

Can't believe, as I am writing this post, that the conference is already over (it ended a couple of weeks ago) ...

The team put in a lot of hard work - me doing the least of it ... and the turnout (approx 500 people), the interactions and the quality of talks prove the hard work paid dividends.

I traveled from Pune, India on 5th Sept at around 6pm headed to Portland, Oregon. The journey - from home to the hotel took approximately 35 hours.

After a crazy 4 days, a total of around 25-30 hours of sleep across 5 nights (thanks to the jet lag), and having delivered 3 talks as well, it was another 35-hour trip back home ... the only good thing after this hectic trip - I never got adjusted to the US time zone - which meant no jet-lag when I came back home :) This was a first for me :)

Slides & Videos from Selenium Conference 2015:

All the slides and videos for all the talks are available here.

Below is the list of my talks:

To Deploy or Not-to-Deploy - decide using TTA's Trend & Failure Analysis

I got a lot of very good feedback for this talk, and quite a few people also expressed interest in trying it out! Looking forward to feedback from their experiences!


Video of the talk is available on YouTube here:


Slides are available here:


Automate across Platform, OS, Technologies with TaaS

This topic is very relevant to anyone working in large enterprises, or wherever a common test automation framework is being "mandated".

Video of the talk is available here:

Slides are available here:


Say ‘No’ to (more) Selenium Tests

I paired with Bhumika on this talk. We were very agile in preparing for it - a day in advance, to be precise. Also, it was a very bold topic to have at a Selenium Conference - standing in front of 200+ Selenium enthusiasts and telling them: do NOT write more Selenium tests. But it went pretty well ... given that we were able to walk out of the room on our own feet, and that people got the message we were trying to deliver :D

Video of the talk is available here:


Slides are available here:


Friday, August 14, 2015

Client-side Performance Testing workshop video

As mentioned here, I conducted a Client-side Performance Testing workshop in TechJam.

It was a full house, and it almost turned into a flop show because there was no wifi available - an essential requirement for the workshop. There were 2 things that saved me:
1. The attendees, thankfully (in this case), did not read the prerequisites well - most of them came without a laptop.
2. Because of the above, I could get by using a 3G USB connection and just do a demo of the tools I wanted to show.

At the end of the day, all was good. I got good feedback from the participants - they really enjoyed the workshop and found it very informative and useful. (Thank you all again for the kind words!)

Below is the video from the workshop.


The slides are available here:


Sunday, August 9, 2015

Questions about the Test Pyramid

After watching my presentation on "Enabling Continuous Delivery (CD) in Enterprises with Testing", someone recently asked me a couple of questions about the Test Pyramid. I thought it would be good to reply publicly - it may help others who have similar doubts. 

If you have any other questions, please reach out, or add it as comments on this post.

  • Why do you talk about a JavaScript Test? I mean, why don't you consider this type of testing as part of another? So, what do you mean by a JavaScript test?
    • JavaScript testing requires a different toolset, not the standard xUnit-based ones. Hence I classify it separately. Also, there is potentially a lot of logic built in the JavaScript layer - so it is essential to write tests for that too - say using Jasmine (see the first sketch after this list).
  • What's the difference between a View test and a UI test?
    • A UI test should focus on business / user journey validations. A view test, however, is different. Consider a journey which has a 5-step / screen workflow. To validate some UI change on the 4th step / screen, you would need to go through steps 1 to 4 in sequence, and then validate the changes. This is a very slow and costly approach. Instead, if you build the right type of stubs / mocks, you can set up state in your product that simulates steps 1-3 being completed, then directly open the UI at step #4 and validate your changes. This is the difference between View and UI tests (see the second sketch after this list).
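
To make the JavaScript-test point concrete, here is a minimal Jasmine spec for a bit of logic living in the JavaScript layer. The price formatter is purely an illustrative example, not from any real product:

```javascript
// Sketch of a plain Jasmine spec for logic that lives in the JavaScript
// layer. The price formatter is an illustrative example.
function formatPrice(amountInPaise) {
  if (amountInPaise < 0) throw new Error('negative amount');
  return 'Rs. ' + (amountInPaise / 100).toFixed(2);
}

describe('formatPrice', () => {
  it('formats paise as rupees with two decimals', () => {
    expect(formatPrice(12345)).toBe('Rs. 123.45');
  });

  it('rejects negative amounts', () => {
    expect(() => formatPrice(-1)).toThrowError('negative amount');
  });
});
```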
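And a sketch of what a view test for that 4th step could look like - the state is seeded through a hypothetical test-only endpoint instead of walking steps 1-3 through the UI. All endpoints and selectors here are assumptions for illustration:

```javascript
// Sketch of a "view" test for step 4 of a 5-step workflow. Instead of
// driving steps 1-3 through the UI, state is seeded via a hypothetical
// test-only endpoint, and the browser opens step 4 directly.
// Assumes Node 18+ (global fetch); endpoints / selectors are illustrative.
describe('checkout step 4 - payment view', () => {
  beforeEach(async () => {
    // Hypothetical stub endpoint that marks steps 1-3 as complete.
    await fetch('http://localhost:3000/test-api/seed-checkout?completedSteps=3',
                { method: 'POST' });
    await browser.get('/checkout/step/4');
  });

  it('shows the order summary carried over from earlier steps', async () => {
    expect(await element(by.css('.order-summary')).isDisplayed()).toBe(true);
  });
});
```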

Tuesday, July 14, 2015

Client-side Performance Testing Workshop in TechJam, 13th August 2015

I am conducting a Client-side Performance Testing workshop in TechJam on Thursday, 13th August 2015.

You can register for the same from the TechJam page.


Abstract

In this workshop, we will see the different dimensions of Performance Testing and Performance Engineering, and focus on Client-side Performance Testing.
Before we get to doing some Client-side Performance Testing activities, we will first understand how to look at client-side performance, and how to put that in the context of the product under test. We will see, using a case study, the impact of caching on performance - the good & the bad! We will then experiment with tools like WebPageTest and PageSpeed to understand how to measure client-side performance.
Lastly - just understanding the performance of the product is not sufficient. We will look at how to automate the testing of this - using WebPageTest (private instance setup), and experiment with YSlow as a low-cost, programmatic alternative to WebPageTest (a flavour of which is sketched below).
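
As a taste of the YSlow automation, here is a small sketch of driving YSlow's PhantomJS runner from Node and failing the build when the overall score drops below a budget. It assumes phantomjs is on the PATH and yslow.js has been downloaded alongside the script; the URL and threshold are illustrative:

```javascript
// Sketch: run YSlow's PhantomJS runner from Node and fail the build when
// the overall score drops below a budget. Assumes phantomjs is on the
// PATH and yslow.js sits next to this script; URL and threshold are
// illustrative.
const { execFile } = require('child_process');

const URL = 'http://your-app.example.com/';
const MIN_SCORE = 80; // illustrative performance budget

execFile('phantomjs', ['yslow.js', '--info', 'basic', '--format', 'json', URL],
  (err, stdout) => {
    if (err) throw err;
    const report = JSON.parse(stdout);
    // In YSlow's JSON output, 'o' is the overall score and 'u' the URL.
    console.log('YSlow overall score for ' + report.u + ': ' + report.o);
    if (report.o < MIN_SCORE) {
      process.exitCode = 1; // fail the CI step
    }
  });
```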

Expected Learnings

  1. What is Performance Testing and Performance Engineering.
  2. Hands-on experience with some open-source tools to monitor, measure and automate Client-side Performance Testing.
  3. Examples / code walk-through of some ways to automate Client-side Performance Testing.

Prerequisites

  1. Participants are required to bring their own laptop for this workshop.
  2. Also, please install phantomJS on your machine (http://phantomjs.org/download.html)

Tuesday, June 30, 2015

"Agile Testing" workshop in July

[UPDATE] I think I will include some interesting Innovation Games to demonstrate the need for collaboration and more effective ways of working. Or it could be something more specific to Testing on Agile projects. So many options ... 

---------------------------------

As part of Unicom's World Conference Next Generation Testing, I am doing a 1-day workshop on "Agile Testing", on Friday, 24th July 2015 in Bangalore. You can register for the workshop from here, or contact me for more information.

Below are the details of the workshop. 

Agile Testing

Abstract

The Agile Manifesto was published in 2001. It took the software industry a good few years to truly understand what the manifesto means, and the principles behind it. However, choosing and implementing the right set of practices to get true value from working the Agile way has been the biggest challenge for most!

While Agile has now gone mainstream, and we are getting better at the development practices of being Agile, Testing has still been lagging behind in most cases. A lot of teams are still working in a staggered fashion - with testing following after development is completed.

In this workshop, we will learn and share various principles and practices which teams should adopt to be successful in testing on Agile projects. 

Agenda


  • What is Agile testing? - Learn what it means to Test on Agile Projects
  • Effective strategies for Distributed Testing - Learn practices that help bridge the Distributed Testing gap!
  • Test Automation in Agile Projects - Why? What? How? - Why is Test Automation important, and how do we implement a good, robust, scalable and maintainable Test Automation framework!
  • Build the "right" regression suite using Behavior Driven Testing (BDT) - Behavior Driven Testing (BDT) is an evolved way of thinking about Testing. It helps in identifying the 'correct' scenarios, in the form of user journeys, to build a good and effective (manual & automated) regression suite that validates the Business Goals.

Key Learnings for participants in this workshop


  • Understand the Agile Testing Manifesto
  • Learn the Testing practices and activities essential for teams to adopt in an Agile way of working
  • Discover techniques to do effective testing in distributed teams
  • Find out how Automation plays a crucial role in Agile projects
  • Learn how to build a good, robust, scalable and maintainable Functional Automation framework
  • Learn, by practice, how to identify the right types of tests to automate as UI functional tests - to get quick and effective feedback


Pre-requisites


  • At least a basic working knowledge and understanding of Agile

Thursday, June 18, 2015

Enabling CD & TTA in Discuss Agile 2015

I had the opportunity to speak at Discuss Agile Delhi 2015 on 2 topics.

Here are the details on the same:


Enabling Continuous Delivery (CD) in Enterprises with Testing

Abstract:

The key objective of any organization is to provide / derive value from the products / services they offer. To achieve this, they need to be able to deliver their offerings in the quickest time possible, and at good quality!
In such a fast moving environment, CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury!
There are various practices that organizations need to implement to enable CD. Changes in requirements (a reality in all projects) need to be managed better. Along with this, processes and practices need to be tuned based on the team's capability, skills and distribution.

Video:




Slides:






To Deploy, or Not to Deploy? Decide using Test Trend Analyzer (TTA)

Abstract:

The key objective of organizations is to provide / derive value from the products / services they offer. To achieve this, they need to be able to deliver their offerings in the quickest time possible, and at good quality!
For these organizations to understand the quality / health of their products at a quick glance, typically a team of people scrambles to collate and collect the information needed to get a sense of the quality of the products they support. All of this is done manually.
So, in a fast moving environment where CI (Continuous Integration) and CD (Continuous Delivery) are now a necessity and not a luxury, how can teams decide if the product is ready to be deployed to the next environment or not?
Test Automation across all layers of the Test Pyramid is one of the first building blocks to ensure the team gets quick feedback on the health of the product-under-test.
The next set of questions is:
  • How can you collate this information in a meaningful fashion to determine - yes, my code is ready to be promoted from one environment to the next?
  • How can you know if the product is ready to go 'live'?
  • What is the health of your product portfolio at any point in time?
  • Can you identify patterns and do quick analysis of the test results - to help in root-cause-analysis of issues that have occurred over a period of time, and to make better decisions to improve the quality of your product(s)?
The current set of tools is limited, and fails to give a holistic picture of quality and health across the life-cycle of the products.
The solution - TTA - Test Trend Analyzer.
TTA is an open source product that becomes the source of information giving you real-time, visual insights into the health of the product portfolio using Test Automation results - in the form of Trends, Comparative Analysis, Failure Analysis and Functional Performance Benchmarking. This allows teams to make product deployment decisions using actual data points, instead of 'gut-feel'.

Slides: