30 Days of Automation in Testing Day 19: Share a resource on testability along with your thoughts about the topic


(Quang Le) #1

I would like to share my experience implementing smoke test automation for e-commerce shopping sites to improve our team's performance.
I worked on a support-ticket project that involved testing around 100 e-commerce sites. Tickets were created from each site's client requirements. After a developer completed a ticket, a QA would test it and then smoke test the site. The tickets were all different and done only once, with no regression testing, but a smoke test ran after every completed ticket. When I joined the project, I wanted to automate the smoke test to reduce the QAs' testing time. The difficulty was that the smoke test process differed from site to site, so I first had to analyse and define a common smoke test scope for e-commerce shopping sites, and then automate it.

I applied automation to the items below:

  1. Compare user information with the database or admin site
  2. Access a product by clicking through categories…
  3. Search for products and access the product details page via search
  4. Verify product sorting and paging against the configuration in the admin site
  5. Checkout process: verify the UI of every page in the flow (home page, product details page, basket page, checkout steps, carousel…) and the functions of the process (promotions, pricing, tax, shipping cost, shipping options…)
  6. Checkout as a guest, checkout while logged in, checkout with the same billing and shipping address, checkout with various kinds of promotions…
  7. Verify that orders are created in the database, admin site, or a third party after the client submits them, and compare the information in the online receipt with the database or third party
  8. Check that a confirmation email is sent immediately after the client submits an order (order code, products, tax, shipping, price, promotion… and all other information in the email)
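To give a flavour of the pricing/tax/shipping verification in item 5, here is a minimal sketch in plain Python. The helper names, discount rule, and numbers are invented for illustration and are not the project's actual code:

```python
from dataclasses import dataclass

@dataclass
class OrderLine:
    name: str
    unit_price: float
    qty: int

def receipt_total(lines, tax_rate, shipping, promo_discount=0.0):
    """Compute the expected order total: items + tax + shipping - promotion."""
    subtotal = sum(l.unit_price * l.qty for l in lines)
    return round(subtotal * (1 + tax_rate) + shipping - promo_discount, 2)

def smoke_check_checkout(lines, tax_rate, shipping, promo, displayed_total):
    """Compare the total shown on the receipt page with the expected value."""
    expected = receipt_total(lines, tax_rate, shipping, promo)
    assert abs(displayed_total - expected) < 0.01, (
        f"receipt shows {displayed_total}, expected {expected}")
    return expected

# Example: two items, 10% tax, $5.00 shipping, $2.00 promotion
lines = [OrderLine("mug", 8.00, 2), OrderLine("tee", 15.00, 1)]
total = smoke_check_checkout(lines, 0.10, 5.00, 2.00, displayed_total=37.10)
print(total)  # 37.1
```

In the real suite, `displayed_total` would be scraped from the receipt page or read from the order API, and the same expected value would be compared against the database or third-party record as in item 7.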

With the smoke test automated, the QAs just test the tickets and start the smoke test with a click, or test a ticket while the smoke test is running. The testing time per ticket was reduced significantly. In some cases, the smoke test also helps the QA get the order ID needed to test a ticket. The whole team's testing time dropped, each QA's performance increased, and the number of tickets tested went up. The client was happy with us.

This is my experience of how automation can increase a project's performance and exceed the client's expectations.

The above is the testability analysis of my project, based on James Bach's Heuristics of Software Testability.


(Heather) #2

From our friends on Twitter:


(Pablo) #3

Some time ago, a developer introduced me to the Single Responsibility Principle, which absolutely blew my mind. I wrote a Confluence page on how best to structure our code (internal, so I can't share it), and I've since updated my blog (today):

tl;dr
The Single Responsibility Principle and S.O.L.I.D. proved instrumental in structuring my test scripts into a clean, concise, legible test harness that was efficient and effective at articulating workflows, providing timely feedback to the developers, and finding bugs reliably.
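To illustrate the idea, here is a small sketch of the Single Responsibility Principle applied to test code: each class has one reason to change. The names and the fake "driver" are illustrative stand-ins, not taken from Pablo's harness:

```python
class FakeDriver:
    """Stand-in for a real browser driver so the sketch is runnable."""
    def __init__(self):
        self.page = "home"
    def click(self, target):
        self.page = target

class LoginPage:
    """Knows only how to drive the login page (navigation responsibility)."""
    def __init__(self, driver):
        self.driver = driver
    def open(self):
        self.driver.click("login")
        return self

class LoginAssertions:
    """Knows only how to verify login state (verification responsibility)."""
    @staticmethod
    def assert_on_login_page(driver):
        assert driver.page == "login", f"expected login page, got {driver.page}"

def test_navigate_to_login():
    """The test itself only composes the pieces (orchestration responsibility)."""
    driver = FakeDriver()
    LoginPage(driver).open()
    LoginAssertions.assert_on_login_page(driver)

test_navigate_to_login()
print("ok")
```

When the login page's layout changes, only `LoginPage` changes; when the pass/fail criteria change, only `LoginAssertions` does, which is what keeps the harness legible and the feedback to developers fast.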

Resources:


(Trung) #4

I read How Google Tests Software. In this book, James Whittaker provides a blueprint for Google's success in the rapidly evolving world of app testing. It's really useful for other testing projects that want to adopt the testing technology and organisational structure described in the book, especially projects working on the web.

Below is the ebook link:


(David) #5

I read three articles on testability and have summarized my findings and some opinions…

http://www.professionalqa.com/software-testability

Here are a few definitions:

The ISO defines testability as “attributes of software that bear on the effort needed to validate the software product.”

“Testability establishes the boundary to which the jeopardy of costly or hazardous bugs can be abridged to an acceptable level.”

Testability… can be defined as the property that measures the ease of testing a piece of code or functionality, or a provision added in software so that test plans and scripts can be executed systematically.

…the state of software artifact, which decides the difficulty level for carrying out testing activities on that artifact.

Each article had generally the same ideas on how to measure testability. They break down into the following, which I have summarized, perhaps poorly, in my own words.

Observability - How well we can “see” into the software, or perceive what is going on. One quote I liked on this is “correct expected output is not enough to ensure that the background processes are giving the correct results.”

Controllability - How much control do we, as testers, have over different parts of the software?

Availability - What is available to us, as testers, in order to carry out testing. This could be the software itself, its various components, the source code, etc. Not mentioned here is hardware, something we deal with where I test.

Simplicity - This is probably self explanatory, but more complex software requires more testing, and thus is more difficult to test.

Stability - Oh, don't get me started! :smiley: We sometimes encounter changes to our environment which make testing not only difficult, but can also lead to misleading results. Tests may pass or fail, or appear to, depending on environmental changes, which is NOT what we are supposed to be testing. The test environment should not change.
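The first two measures, controllability and observability, show up directly in how code is written. Here is a small sketch (the names and the retry rule are invented for illustration): the collaborator is injected, so a test can control it, and the internal retry counter is exposed, so a test can observe what actually happened behind a correct-looking result:

```python
class Uploader:
    def __init__(self, send):
        self.send = send      # injected collaborator: controllable in tests
        self.attempts = 0     # exposed internal counter: observable in tests

    def upload(self, data, retries=3):
        """Try to send the data, retrying up to `retries` times."""
        for _ in range(retries):
            self.attempts += 1
            if self.send(data):
                return True
        return False

# In a test, we control the collaborator and observe the internal behaviour:
calls = []
def flaky_send(data):
    calls.append(data)
    return len(calls) >= 2   # fails once, then succeeds

u = Uploader(flaky_send)
assert u.upload("report.csv") is True
assert u.attempts == 2       # the retry really happened, not just a lucky pass
print(u.attempts)  # 2
```

Without the exposed counter, the test would only see the correct final result, which echoes the quote above: correct expected output is not enough to ensure the background processes are giving the correct results.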

-Dave K


(Kumar) #6

Recently, I have been introduced to a new test approach - Specification By Example.

This approach is quite fascinating in that it can enable teams to deliver the right product quicker, with great quality, within a short test duration. It achieves the benefits of BDD, TDD, and early-testing principles. The best part is that the test scenarios (behaviours) form a critical part of development, and testing becomes more about assuring quality than finding defects.

Just to give a quick snippet: the requirements are gathered in the form of scenarios/behaviours with multiple examples, which the developers use as a base for development. Those scenarios are later linked from the feature files to the automated suite, making the requirements a living test document.
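To give a minimal flavour of the idea in plain Python (the discount rule and the numbers are invented for illustration, and a real team would typically keep the examples in Gherkin feature files wired to step definitions): the concrete examples are the specification, and the same table drives the automated checks.

```python
def loyalty_discount(years, order_total):
    """Rule under specification: 5% off after 2 years, 10% off after 5."""
    if years >= 5:
        return round(order_total * 0.10, 2)
    if years >= 2:
        return round(order_total * 0.05, 2)
    return 0.0

# The examples ARE the specification; each row is (years, total, discount).
examples = [
    (1, 100.00, 0.00),
    (2, 100.00, 5.00),
    (5, 100.00, 10.00),
]

for years, total, expected in examples:
    assert loyalty_discount(years, total) == expected
print("all examples pass")
```

Because the example table is both the requirement and the test data, a change to the rule forces the conversation with the team before the code changes, which is where the early-testing benefit comes from.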

You can read more about it here: https://qakumar.wordpress.com/2018/07/25/day-19-share-a-resource-on-testability-along-with-your-thoughts-about-the-topic/


(AMIT) #7

@qakumarnz, for me too, Specification By Example stands out as a new testing approach. I had the privilege of attending a one-day workshop on it conducted by Gojko Adzic himself. Early testing, by evaluating the scenarios with multiple examples and then using those examples for development and automated testing, bakes quality in early on.