
Sunday, May 22, 2022

Automating real-user scenarios across multiple apps and devices

Simulating real-user scenarios as part of your automation is a solved problem. You need to understand the domain, the product, and the user, and then define and implement your scenario.

But some types of scenarios are complex to implement: real-world scenarios where multiple personas (users) interact with each other to use some business functionality. These personas may be on the same platform or on different ones (web / mobile-web / native apps / desktop applications).

Example scenarios:

  • How do you check if more than 1 person is able to join a Zoom / Teams meeting? And that they can interact with each other?
  • How do you check if an end-2-end scenario that involves multiple users across multiple apps works as expected?
    • Given user places order on Amazon (app / browser)
    • When delivery agent delivers the order (using Delivery app)
    • Then user can see the order status as "Delivered"

Even though we automate and test each application in such interactions independently, or test each persona's scenarios independently, we need a way to build confidence that these multiple personas and applications can work together. These scenarios are critical to automate!
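To make this concrete, below is a minimal sketch (plain Selenium in Java, not teswiz's actual API) of orchestrating two personas in a single test - the URLs and locators are hypothetical placeholders:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class MultiUserMeetingTest {
    public static void main(String[] args) {
        // Each persona gets its own driver (each could be a different platform / device)
        WebDriver host = new ChromeDriver();
        WebDriver guest = new ChromeDriver();
        try {
            // Persona 1: host starts the meeting
            host.get("https://meeting.example.com/start"); // hypothetical URL
            String meetingId = host.findElement(By.id("meeting-id")).getText();

            // Persona 2: guest joins the same meeting
            guest.get("https://meeting.example.com/join/" + meetingId);

            // Verify, from the host's session, that the guest has joined
            String participants = host.findElement(By.id("participant-list")).getText();
            if (!participants.contains("Guest")) {
                throw new AssertionError("Guest did not join the meeting");
            }
        } finally {
            host.quit();
            guest.quit();
        }
    }
}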

teswiz, an open-source framework, can easily automate these multi-user, multi-app, multi-device scenarios.

Example: Multi-user, Multi-device test scenario

 

Example: Multi-user, Multi-app, Multi-device test scenario

See teswiz and getting-started-with-teswiz projects for information, or contact me.
 

Friday, March 25, 2022

How to handle Select certificate popup from Chrome using Selenium WebDriver?

For one of the applications, when I run some Selenium WebDriver tests, I see a "Select a certificate" popup. I am not able to handle this, and hence my test fails.

Has anyone been able to handle this popup, or rather, avoid this popup being shown?


If I manually select a certificate and click OK, I am then prompted for my Mac credentials.
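One workaround I am considering (untested, so treat this as a sketch) is Chrome's AutoSelectCertificateForUrls policy, which tells Chrome which certificate to pick automatically so the popup never appears. On macOS the policy can be written with the defaults command before starting the tests - the URL pattern and issuer CN below are placeholders:

import java.io.IOException;

public class ChromeCertPolicy {
    // Sketch: write Chrome's AutoSelectCertificateForUrls policy on macOS so
    // the "Select a certificate" popup is auto-answered. Replace the pattern
    // and issuer CN with your application's values. Chrome must be restarted
    // afterwards; chrome://policy should then show the policy as applied.
    public static void autoSelectCertificate() throws IOException, InterruptedException {
        String policy = "{\"pattern\":\"https://your-app.example.com\","
                + "\"filter\":{\"ISSUER\":{\"CN\":\"Your Issuer CN\"}}}";
        new ProcessBuilder("defaults", "write", "com.google.Chrome",
                "AutoSelectCertificateForUrls", "-array-add", policy)
                .inheritIO().start().waitFor();
    }
}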


Thanks in advance!

Tuesday, March 22, 2022

Ways to implement a fake-SMS service / OTP validation in Functional Test Automation

I had asked a question in my earlier blog post about 

How do you automate OTP-related test scenarios? Do you use a fake SMS service? Does it have a REST API to query the SMS messages? Geography support?

A lot of people replied to it directly on the post, or via Twitter and LinkedIn. I thank them all for their responses and suggestions!
 
In addition, if anyone needs to test email flows, getnada.com is a great way to automate and validate emails. They have REST APIs for the same.
 
These solutions will allow you to automate seemingly difficult-to-automate functionality WITHOUT having to create "testing-hooks" in your product. Hence, you will be able to test the code that will be shipped to Production, which is very important!
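To illustrate how such a service typically gets used from test code, here is a hedged sketch that polls a fake-SMS provider's REST API until the OTP arrives - the endpoint, response shape, and API-key header are hypothetical and will differ per provider:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FakeSmsOtpReader {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Poll the fake-SMS inbox for the given number until an OTP arrives
    public static String waitForOtp(String phoneNumber) throws Exception {
        // Hypothetical endpoint - real providers each have their own API shape
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://fakesms.example.com/api/v1/inbox/" + phoneNumber))
                .header("X-Api-Key", System.getenv("FAKESMS_API_KEY"))
                .GET().build();

        Pattern otpPattern = Pattern.compile("\\b(\\d{4,6})\\b"); // 4-6 digit OTP
        for (int attempt = 0; attempt < 10; attempt++) {
            String body = CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
            Matcher matcher = otpPattern.matcher(body);
            if (matcher.find()) {
                return matcher.group(1);
            }
            Thread.sleep(3000); // give the SMS time to be delivered
        }
        throw new IllegalStateException("No OTP received for " + phoneNumber);
    }
}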

Hope this helps others who may be looking for similar things.

Monday, February 21, 2022

Why not to use PageFactory and FindsBy in Selenium WebDriver

Many users of Selenium WebDriver may be using the PageFactory created by Simon Stewart. However, it is not a good idea to use it.

You may be thinking: why should I not use it? It is so easy to use, and it's popular.

Well, here are 2 reasons why you should not use the PageFactory:

Reason #1. Simon Stewart (https://twitter.com/shs96c), the creator of WebDriver and of the PageFactory, himself says: do not use it. It is not recommended.

The `FindsBy` annotation isn't recommended, because the PageFactory class is really badly implemented and inflexible, but it's not going away in the java bindings.

The `FindsByX` interfaces are going away. Better to use a `By` locator and use that.


PageFactory is really badly implemented
 

https://twitter.com/shs96c/status/1196865907185868801


Reason #2: While Reason #1 should have been sufficient, many people implementing automation using Selenium WebDriver do not know, or did not pay heed to, what Simon said. So another WebDriver & Watir contributor, Titus Fortner (https://twitter.com/titusfortner), explained in detail why using PageFactory is not a good idea in his blog post - https://titusfortner.com/2021/02/03/page-factory-optimization.html
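For reference, here is a minimal sketch of the alternative both of them suggest: keep plain By locators in the page object and resolve elements at the moment of use, instead of @FindBy fields initialized via PageFactory (the locators and the HomePage class are illustrative):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {
    private final WebDriver driver;

    // Plain By locators - no @FindBy, no PageFactory.initElements(...)
    private final By username = By.id("username");
    private final By password = By.id("password");
    private final By loginButton = By.cssSelector("button[type='submit']");

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public HomePage loginAs(String user, String pass) {
        // Elements are looked up fresh on each call, avoiding stale proxies
        driver.findElement(username).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
        return new HomePage(driver);
    }
}

class HomePage {
    HomePage(WebDriver driver) { /* next page object in the flow */ }
}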

 

I sincerely hope these reasons are sufficient for you to move away from the PageFactory and use something more efficient. 

 

Wednesday, January 19, 2022

Using a fake-SMS service in Functional Test Automation

How do you automate OTP-related test scenarios? Do you use a fake SMS service? Does it have a REST API to query the SMS messages? Geography support?

To clarify - this needs to be done as part of my functional test automation, where,

  • the test could be running against a browser, where the browser does not have access to the phone, or,
  • the test could be running against a real mobile device (without SIM), so no way to receive the SMS, or,
  • the test could be running against an emulator (no SIM), so no way to receive the SMS
Scenarios include: login, payment, SMS content 

Hence, I am thinking about using a fake-SMS service which has API access to retrieve the SMS. This will help when running automation on browsers, or on devices / emulators without a SIM.

Note:
  • There is no access to DB or API to query the OTP. 
  • I don't mind using a paid service

 

Thanks in advance for your help!

Thursday, June 17, 2021

Business-Layer Page-Object Pattern

Business-Layer Page-Object Pattern for Functional / System / end-2-end Test Automation

  1. Tests should talk business language
  2. The test is deterministic, for a specific scenario. The test implementation is an orchestration of corresponding business operations
  3. Business layer is an abstraction layer between the test intent & page objects
  4. Implementation of a business-layer method is essentially an orchestration of other business operations, or, for a granular business operation, an orchestration of page objects
  5. The business layer method does the assertions of expectations
  6. There should be no assertions in page objects
  7. Each operation (in business or page object) being successful means there are a defined number of methods / operations the product can now do (as you are driving the product under test to do your bidding)
  8. Hence, per #7, each operation can have exactly 1 valid page / business object as its return type
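A minimal sketch of how these principles fit together (class and method names are illustrative, not teswiz's actual API; the page objects are stubbed for brevity):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class BusinessLayerSketch {
    // The test talks pure business language (principles #1-#3)
    @Test
    void userSeesOrderAsDelivered() {
        DeliveryBL delivery = new OrderBL().placeOrder("some-item");
        delivery.deliverOrder()
                .verifyOrderStatus("Delivered"); // assertion lives in the business layer (#5)
    }

    // Business layer: orchestrates page objects, owns the assertions (#4-#6),
    // and each operation returns exactly one valid business object (#7, #8)
    static class OrderBL {
        DeliveryBL placeOrder(String item) {
            new OrderPage().orderItem(item); // page objects: actions only, no assertions
            return new DeliveryBL();
        }

        OrderBL verifyOrderStatus(String expected) {
            assertEquals(expected, new OrderPage().getOrderStatus());
            return this;
        }
    }

    static class DeliveryBL {
        OrderBL deliverOrder() {
            new DeliveryPage().markDelivered();
            return new OrderBL();
        }
    }

    // Page objects would drive the UI via WebDriver; stubbed here for brevity
    static class OrderPage {
        void orderItem(String item) { /* driver interactions */ }
        String getOrderStatus() { /* would read from the UI */ return "Delivered"; }
    }

    static class DeliveryPage {
        void markDelivered() { /* driver interactions */ }
    }
}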


See the sample tests implemented in teswiz for an example of Business-Layer Page-Object pattern implementation.

Friday, January 22, 2021

Visual Assertions - not another buzzword

Visual Testing and Visual Assertions may seem like yet another buzzword in the Software industry.

Being curious, I did an experiment using Applitools Visual AI to see if this is something that can genuinely help, or if it is a buzzword. You can read about this experiment, refer to the code and see the resulting data from this post - "Visual Assertions - Hype or Reality?".

Wednesday, December 9, 2020

Getting started with implementing Automation

Getting started with implementing tests for automation (web or native apps) may seem daunting for those who are doing this for the first time. 

Assuming you are using open-source tooling like Selenium or Appium, there are multiple ways you can get started.

  1. DIY - Build your own framework by scripting based on the documentation

  2. Use Selenium-IDE for quick record and playback

  3. Use TestProject Recorder for quick record and playback

  4. Use TestProject SDK to build your own custom scripts for automating the tests

Each of the above approaches has its own pros and cons. Let's look at each in some detail:

Approach #1 - DIY - Build your own framework

Selenium: https://www.selenium.dev/documentation/en/

Appium: https://appium.io/docs/en/about-appium/intro/

Pros:
  • You can build all features and capabilities as per your design & requirements

Cons:
  • You need to learn a programming language*
  • You have to build everything on your own (though you can use supporting libraries)*

* Depending on the context of the team, these points can also be considered an advantage
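To give a feel for the DIY approach, here is a minimal Selenium example in Java (the URL and locator are placeholders) - everything beyond this, such as waits, page objects, reporting, and parallel runs, is yours to build:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class FirstTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://your-app.example.com"); // placeholder URL
            // A trivial check; a real framework would wrap this in proper assertions
            String heading = driver.findElement(By.tagName("h1")).getText();
            System.out.println("Page heading: " + heading);
        } finally {
            driver.quit();
        }
    }
}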


Approach #2 - Selenium-IDE

https://www.selenium.dev/selenium-ide

Pros:
  • Easy to set up
  • Works in Chrome & Firefox
  • Code can be exported in various formats
  • Recorded tests can be run from the command line
  • Tests can be run in your own CI
  • Will always be in sync with the underlying WebDriver

Cons:
  • Basic reports
  • Works only for automating Web applications




Approach #3 - TestProject Recorder

https://testproject.io/easy-test-automation/

Pros:
  • Advanced recorder (lots of actions, validations, self-healing, customisations possible, and a lot of community Addons)
  • Recorder works for Web applications as well as Native Apps (on real devices or emulators) for Android and iOS (even iOS on a Windows machine)
  • Generated code is very simple - good as a reference to see how the underlying implementation / interaction is done
  • TestProject agent automatically determines all available browsers and devices connected to the machine, and execution can be customised accordingly
  • Can schedule test runs as a one-time or repeated activity via the built-in scheduler, CI/CD tool integrations, or their RESTful API
  • Reports are comprehensive with meaningful data, including screenshots and an option to download in PDF format
  • Code can be generated from the recorded script
  • Can share tests easily using the "Share test" feature

Cons:
  • Recorder works only in Chrome (but tests can be executed on all browsers)
  • Each recorded test needs to be exported individually - no concept of reuse in this approach




Approach #4 - TestProject SDK

https://testproject.io/advanced-scripting-capabilities

Pros:
  • Probably the most powerful of these 4 approaches, as it uses WebDriver / Appium under the hood. Get the power of building your own framework, while reusing out-of-the-box features like driver management, automatic reporting, etc.
  • Driver management is TestProject's responsibility; the test implementer can focus on automating tests

Cons:
  • You need to learn a programming language




Sunday, December 6, 2020

Long time no see? Where have I been

It has been a few months since I published anything on my blog. That does not mean I have not been learning or experimenting with new ideas. In fact, in the past few months I have been privileged to have my articles published on the Applitools and TestProject blogs.

Below are the links to all those articles, for which I have received very kind reviews and comments on LinkedIn and Twitter.

Apart from this, I have also been contributing to open source - namely Selenium and AppiumTestDistribution - and building open-source kickstarter projects for API testing using Karate, and for end-2-end testing for Android, iOS, Windows, Mac & Web.

Lastly, I have also been speaking in virtual conferences, webinars and last week I also recorded a podcast, which will be available soon.

The end of Smoke, Sanity and Regression

https://applitools.com/blog/end-smoke-sanity-regression/

Do we need Smoke, Sanity, Regression suites?
  • Do not blindly start with classifying your tests in different categories. Challenge yourself to do better!
  • Have a Test Automation strategy and know your test automation framework objective & criteria (“Test Automation in the World of AI & ML” highlights various criteria to be considered to build a good Test Automation Framework)
  • Choose the toolset wisely
  • After all the correct (subjective) approaches are taken, if your test execution (in a single browser) still takes more than, say, 10 minutes, then run your tests in parallel, and subsequently split the test suite into smaller suites which can give you a progressive indication of quality
  • Applitools, with its AI-powered algorithms, can make your functional tests lean, simple, and robust, and includes UI / UX validation
  • Applitools Ultrafast Grid removes the need for cross-browser testing: with a single test execution run, it validates functionality & UI / visual rendering for all supported browsers & viewports

Design Patterns in Test Automation

Criteria for building a Test Automation Framework

Writing code is easy, but writing good code is not as easy. Here are the reasons why I say this:
  • “Good” is subjective.
  • “Good” depends on the context & overall objective.

Similarly, implementing automated test cases is easy (as seen from the getting started example shared earlier). However, scaling this up to be able to implement and run a huge number of tests quickly and efficiently, against an evolving product is not easy!

I refer to a few principles when building a Test Automation Framework. They are:
  • Based on the context & (current + upcoming) functionality of your product-under-test, define the overall objective of Automation Testing.
  • Based on the objective defined above, determine the criteria and requirements from your Test Automation Framework. Refer to my post on “Test Automation in the World of AI & ML” for details on various aspects you need to consider to build a robust Test Automation Framework. Also, you might find these articles interesting to learn how to select the best tool for your requirements:
    • Criteria for Selecting the Right Functional Testing Tools
    • How to Select the Best Tool – Research Process
    • How To Select The Right Test Automation Tool

Stop the Retries in Tests & Reruns of Failing Tests

Takeaways
  • Recognise reasons why tests could be flaky / intermittent
  • Critique band-aid approach to fixing flakiness in tests
  • Discuss techniques to identify reasons for test flakiness
  • Fix the root-cause, not the symptoms to make your tests stable, robust and scalable!

Measuring Code Coverage from API Workflow & Functional UI Tests

 
Why is the Functional Coverage important?
I choose an approach keeping the 80-20 rule in mind. The information the report provides should be sufficient to understand the current state, and take decisions on “what’s next”. For areas that need additional clarity, I can then talk with the team, explore the code to get to the next level of details. This makes it a very collaborative way of working, and joint-ownership of quality! 🚀

You can choose your own way to implement Functional Coverage – based on your context of team, skills, capability, tech-stack, etc.

Saturday, September 5, 2020

Questions at Taquelah - Does your functional automation really add value?

I spoke at Taquelah Lightning Talks on one of my favorite topics - 

Does your functional automation really add value?


 


You can find the slides here - https://www.slideshare.net/abagmar/does-your-functional-automation-really-add-value

Some references:

https://essenceoftesting.blogspot.com/2020/07/does-your-functional-automation-really.html

https://essenceoftesting.blogspot.com/2020/03/tracking-functional-coverage.html

Tuesday, August 4, 2020

Waiting for parallel features to complete in Karate / IntelliJ

Is anyone using KarateDSL (https://github.com/intuit/karate) getting the below-mentioned error?

[main] INFO com.intuit.karate.Runner - waiting for parallel features to complete ...

Restarting IntelliJ did not help.

Karate version: 0.9.5
JDK: Adopt Open JDK 11.0.8

Any tips / ideas on how to resolve it?

Tuesday, July 7, 2020

Does your functional automation really add value?


We all know that automation is one of the key enablers for those on the CI-CD journey.

Most teams are:

  • implementing automation
  • talking about its benefits
  • up-skilling themselves
  • talking about tooling
  • etc.

However, many times I feel we are blinded by the theoretical value test automation provides, or because everyone says it adds value, or by the shiny tools / tech-stacks we get to use, or ...

To try and understand more about this, can you answer the below questions?

In your experience, or in your current project:
  1. Does your functional automation really add value?
  2. What makes you say it does / or does not?
  3. How long does it take for tests to run and generate reports?
  4. In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
  5. How easy is it to debug and get to the root cause of failures?
  6. How long does it take to update an existing test?
  7. How long does it take to add a new test?
  8. Do your tests run automatically via CI on a new build, or do you need to “trigger” the same?
  9. What is the test passing percentage?
  10. Do you “rerun” the failing tests to see if this was an intermittent issue?
  11. Is there control on the level of parallel execution and switch to sequential execution based on context?
  12. How clean & DRY is the code?

In my experience, unfortunately most of the functional automation that is built:
  • is not optimal
  • is not fit-for-purpose
  • does not run fast enough
  • gives inconsistent feedback, and is hence unreliable

Hence, for the amount of effort invested in implementing automation,
  1. Are you really getting the value from this activity?
  2. How can automation truly provide value for teams?


Wednesday, March 11, 2020

Tracking functional coverage from your API / functional UI (e2e) tests

Tracking and having a high coverage of your product code via automated tests is an important way of building a quality product.

It is easy to measure code coverage when running unit tests. You will find a plethora of tools (free & commercial) with a variety of features for any programming language you may be using. You can integrate these tools as part of your pre-commit hooks (i.e. you will not be able to push your code to version control if code coverage falls below the minimum limit set), and also as part of your CI builds (i.e. fail the build if code coverage falls below the expected limit).

The reason capturing code coverage works easily for unit tests, and maybe integration tests, is that these tests run in isolation, directly on the code. You do not need to have the product deployed to any environment to run the tests to measure the coverage. I found these great resources you can check out to understand more about how code coverage works:
http://www.semdesigns.com/Company/Publications/TestCoverage.pdf
https://confluence.atlassian.com/clover/about-code-coverage-71599496.html

However, this question comes up very frequently - how can we measure code coverage of the API tests, or of the functional UI (e2e / end-2-end) tests? I remember this question coming up for the past 8-10 years at least. Every time, I have given the same answer, because I have not come across, nor seen, any better way of answering this question. It is time I wrote it down for easier access for others as well.

Solution #1

  • Deploy the product-under-test to an isolated environment, and start measuring code coverage.
  • Then trigger the API / UI tests.
  • That will tell you how much code coverage is achieved by these tests.

Precondition: a big criterion for the above strategy to work is to ensure the environment is isolated - i.e. NO ONE is using the environment (for navigating through the product, or testing of any kind other than the tests being triggered to measure coverage).
The above answer has some big gaps though:
  • What are you trying to understand from the code coverage of your API / UI tests? What value will it bring to the team? How will it make the product better?
  • Do you expect the code coverage of your API / UI tests to be similar / identical / better than that of the unit tests? If yes, we need to have a different conversation about the Test Pyramid


[Image: Test Automation Pyramid]

Solution #2

However, I believe there is a better approach to this. 

Based on the Test Automation Pyramid shown above, the API / Web Service tests and UI tests are business facing tests. In this case, it will add more value to measure what functional / component coverage the tests have. 

With this approach, the code coverage from the Technology Facing Tests (Unit / Integration / ...) will focus on technical aspects of coverage, and the Business Facing Tests (API / UI) will focus on functional aspects of coverage. 

When looked at together, this will give a better sense of understanding of the overall quality of the product.

Tracking Functional Coverage

So, how can we track functional coverage? 

Unfortunately, there does not seem to be an out-of-the-box way to do this. But below is how I have implemented this before:
  • I used cucumber-jvm in my test framework
  • For each (end-2-end Test) Scenario, in addition to the tags required for the test (as per the test framework design), I added the following types of tags to the test:
    • functional areas touched by the test
    • components / modules touched by the test
  • I used cucumber-reporting plugin to generate rich, html reports
    • The beauty of the reports generated by cucumber-reporting is that I can now see the different statistics of the tags for my test execution.
    • Below is an example from the cucumber-reporting GitHub page:
[Image: cucumber-reporting tag statistics report]
    • With this report, you can now get to know:
      • the overall tag execution state for the complete test run
      • the number of times the tag was run as part of how many test scenarios
      • drill down into tag-specific reports as well
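As a rough sketch, generating this report from cucumber-jvm's JSON output with the cucumber-reporting library looks roughly like this (the paths and project name are placeholders):

import java.io.File;
import java.util.Collections;
import java.util.List;
import net.masterthought.cucumber.Configuration;
import net.masterthought.cucumber.ReportBuilder;

public class FunctionalCoverageReport {
    public static void main(String[] args) {
        // cucumber-jvm must be run with the json plugin, e.g. "json:target/cucumber.json"
        List<String> jsonFiles = Collections.singletonList("target/cucumber.json");
        Configuration configuration =
                new Configuration(new File("target/coverage-report"), "My Product");

        // Generates the rich HTML reports, including the per-tag statistics
        // used here as a proxy for functional coverage
        new ReportBuilder(jsonFiles, configuration).generateReports();
    }
}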

Why is the above report important?

As mentioned above, I am adding custom tags to each scenario - based on module / functionality / etc. If there is any critical functionality of my product, I would want to have more concentration of tests covering that feature / module, compared to others. 

Another way to visualize the tag statistics is in the form of tag heat maps. You want your critical functionality to have a good-sized bubble in the heat map. Also, any small or missing bubbles would mean you do not have coverage for that feature / module.

The above example is one of the easiest ways to implement feature coverage for API / UI tests. You may not be using cucumber-jvm and the cucumber-reporting plugin, but if this approach makes sense to you, you can easily implement it in your own test framework, using the constructs and features of your programming language and tools.
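For instance, if you use plain JUnit 5 instead of cucumber-jvm, tags can carry the same functional-coverage information (the tag names here are illustrative):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

public class CheckoutTests {
    // The same idea without cucumber: tag each test with the functional
    // areas and modules it touches, then report on tag counts per run
    @Test
    @Tag("payment")
    @Tag("checkout-module")
    void userCanPayWithSavedCard() {
        // end-2-end test steps ...
    }
}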

Friday, October 11, 2019

Overcoming chromedriver version compatibility issues the right way

I encountered an interesting challenge recently when doing Native Android / iOS app automation - this was related to Chrome browser versions getting updated automatically and my tests failing because of errors like:


org.openqa.selenium.SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 74
23:04:25 (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Windows NT 6.3.9600 x86_64) (WARNING: The server did not provide any stacktrace information)


So I asked a question on LinkedIn


And I tweeted asking how to manage ChromeDriver version when running WebDriver / Appium tests.






The answer was common and obvious – use WebDriverManager. This is a beautiful, simple, and indeed the right solution to the problem.

However, that was a partial answer for me. 

Here is my context and problem statement in detail:

  • My Test Automation Framework is based on Java / Appium and I use AppiumTestDistribution (ATD) 
  • ATD is open-source, and takes away my pain and effort of managing Appium and the devices; it also takes care of running the tests in parallel or distributed mode, on Android as well as iOS
  • In my local lab setup, I have many different android devices connected - which run tests as directed by ATD
  • Since you cannot control how the Google Play Store / Apple App Store push out new versions of apps for different Android / iOS versions on devices, it is easily possible to end up with different versions of the Chrome browser in your device lab. When this happens, the tests start failing because of chromedriver incompatibility issues.

Once I was very kindly reminded by the community about WebDriverManager (which I had forgotten about), I knew what was to be done.

I looked at the ATD code, and realised that it was using the default chromedriver version set up when I had installed Appium. This chromedriver was being used when instantiating a new instance of the AndroidDriver.

So I submitted a PR for ATD - which essentially did the following:
  • Query the chrome browser versions on each connected device
  • For the **highest version of the browser, use WebDriverManager and get the appropriate chromedriver downloaded
  • Pass the path to the correct chromedriver when creating an instance of the AndroidDriver
**highest version - what does that mean? Well, I also got confused initially. But the answer was simple. On some devices, the Chrome browser is installed by default, as a system app. This cannot be removed. So as new versions of the browser get installed, the default Chrome system app is always there. So when you query for the versions of Chrome on the device, you will see 2 such versions. My code logic was to get all these versions, and pick the highest version from them.

Here is the code snippet of how I solved the problem:
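In essence, the logic looks like the simplified sketch below (assuming WebDriverManager 5.x's Java API; the version list would come from querying the device via adb, and the returned path is passed to Appium through the chromedriverExecutable capability):

import io.github.bonigarcia.wdm.WebDriverManager;
import java.util.Comparator;
import java.util.List;

public class ChromeDriverResolver {
    // Given the Chrome versions found on a device (a device can report two:
    // the non-removable system app plus the updated one), pick the highest,
    // download the matching chromedriver, and return its path so it can be
    // passed to Appium via the "chromedriverExecutable" capability.
    public static String resolveChromedriverFor(List<String> chromeVersionsOnDevice) {
        String highest = chromeVersionsOnDevice.stream()
                .max(Comparator.comparingInt(
                        (String v) -> Integer.parseInt(v.split("\\.")[0]))) // compare major versions
                .orElseThrow(() -> new IllegalStateException("No Chrome version found"));

        WebDriverManager wdm = WebDriverManager.chromedriver().browserVersion(highest);
        wdm.setup(); // downloads the chromedriver matching this Chrome version
        return wdm.getDownloadedDriverPath();
    }
}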
Special thanks to Sai Krishna for quickly approving and merging this PR.

Hope this provides more information about my problem statement, and how I used your suggestion for WebDriverManager to solve the problem.