Hi, I'm currently using a Cucumber implementation to test mobile apps on Android and iOS. I'm using a Git repo of Gherkin scenarios tagged to identify the relevant apps and platforms (e.g. app1, app2, ios, android), where all the procedural elements of the test are abstracted out into the step definitions (written in Kotlin and Swift). I'm doing it this way to try to prevent unwanted proliferation of Gherkin scenarios, where we might otherwise end up with multiple identical scenarios for different apps, e.g. log-in for App 1 and log-in for App 2.
So, to give a pseudocode example:
GENERIC GHERKIN SCENARIO
Feature: App log in
Scenario: User can log in with valid credentials
Given the user is on the log in screen
When the user attempts to log in with valid credentials
Then the user can log in
Note that there's no detail in here about hard-coded credentials, which fields the credentials should go in, or which buttons should get tapped, etc. It's a behavioural test and currently holds true for all our apps that have a log-in facility.
Step definitions for App1
step("the user is on the log in screen") {
    Launch app
    Assert user is on log in screen
}
step("the user attempts to log in with valid credentials") {
    Fill in these fields to log into App1
    Tap login button
}
step("the user can log in") {
    Assert user is now on correct post login landing page
}
Step definitions for App2
step("the user is on the log in screen") {
    Launch app
    Assert user is on log in screen
}
step("the user attempts to log in with valid credentials") {
    Fill a whole set of different fields to log into App2
    Tap login button
}
step("the user can log in") {
    Assert user is now on correct post login landing page
}
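In real Kotlin this maps fairly directly onto Cucumber-JVM's `cucumber-java8` lambda API. The sketch below is illustrative only: `App1Driver`, its methods, and `testUser` are hypothetical stand-ins for whatever driver/fixture layer you actually use (the per-app `steps` classes would live in each app's own source set).

```kotlin
import io.cucumber.java8.En

// Sketch only: App1Driver and testUser are hypothetical placeholders
// for the real automation driver and test-data fixtures.
class App1LoginSteps(private val driver: App1Driver) : En {
    init {
        Given("the user is on the log in screen") {
            driver.launchApp()
            check(driver.isOnLoginScreen()) { "Expected the log in screen" }
        }
        When("the user attempts to log in with valid credentials") {
            // App1-specific fields; App2's steps would fill its own set
            driver.enterUsername(testUser.name)
            driver.enterPassword(testUser.password)
            driver.tapLoginButton()
        }
        Then("the user can log in") {
            check(driver.isOnLandingPage()) { "Expected post-login landing page" }
        }
    }
}
```

The App2 class would register the same step texts against its own driver, which is what keeps the single generic scenario reusable across apps.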
My questions are:
"Does this look like a sensible approach to people?"
"Can people see any downsides to the way I'm doing it?" One immediate downside seems to be that, by having all the detail in the step definitions, things might get awkward if I ever wanted to use the Gherkin scenarios as a guide for manual testers.
How have others approached the same issue of preventing feature file proliferation?
We are not using Cucumber in our team, but maybe I can give you a few hints.
I think your Gherkin is fine, as it helps understand the user story and adds acceptance criteria.
For me, the definition of acceptance criteria through Gherkin is a key feature; otherwise it is hard to track test coverage and testability. Especially in agile teams, Gherkin and BDD are helpful, as user stories tend to leave too much room for interpretation.
At the moment you are only covering the happy path; I think best practice is to cover exception handling as well.
You could use Scenario Outlines to define the test environments and so handle duplicated scenarios.
That is a good question. I normally catch this while writing the scripts for the test.
In your example I would write one test for login with a parameterized target environment.
Data-driven parts (data tables) or Scenario Outlines are the only way to handle this in Gherkin itself, as far as I know.
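To make the Scenario Outline idea concrete, here is one possible shape (illustrative only); note that it trades the fully generic wording for an explicit `<app>` parameter that the step definitions then have to consume:

```gherkin
Feature: App log in

  Scenario Outline: User can log in to <app> with valid credentials
    Given the user is on the "<app>" log in screen
    When the user attempts to log in to "<app>" with valid credentials
    Then the user can log in

    Examples:
      | app  |
      | App1 |
      | App2 |
```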
Hi Constantin. Thanks for the feedback. We've been working with this approach for a few weeks now and it seems to be going OK (we are including negative scenarios too, but the example above was just intended to be as simple as possible for illustration). We ran into one issue which wasn't really Gherkin-related: as the Android and iOS teams delivered features at different rates, test suites would fail because there would be step definitions for a Gherkin scenario in, say, Kotlin but not Swift. We got round this by tagging scenarios in the repo so that they'd only be executed when each team was ready.
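For anyone reading along, that kind of gating can be expressed with ordinary Cucumber tags; the tag names below are just examples:

```gherkin
# iOS steps aren't written yet, so only @android is applied for now;
# the @ios tag gets added once the Swift step definitions land.
@app1 @android
Scenario: User can log in with valid credentials
  Given the user is on the log in screen
  When the user attempts to log in with valid credentials
  Then the user can log in
```

Each platform's run then filters on its own tag, e.g. on Cucumber-JVM via the `cucumber.filter.tags` property (`-Dcucumber.filter.tags="@android"`).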
As for manual-level testing: people either need to look at the step definitions or external docs as well as the Gherkin if they need actual "click here / enter this" detail. Not an issue for us, as we're primarily using this approach to support automated executable documentation. Cheers, Rob
We are trying to avoid duplicating feature files for different test types, e.g. manual test, API, mobile browser (Appium) and desktop browser (Selenium).
The plan is to create unified scenarios held in a central repository,
then write separate step definitions for API, desktop and mobile based on the scenarios in the central repository.
It's very early days, as we are just writing the feature files at the moment. But I'm aware this could be a very brittle process, because if somebody updates existing feature files, it has the potential to break the API, desktop and mobile automation.
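One possible shape for this (purely a sketch; we haven't proven it out): each consuming repo pins the central scenario repo to a specific commit, e.g. via a Git submodule, so an edit to a feature file only reaches a given test type when that team deliberately moves the pin:

```
central-scenarios/        # single source of truth for .feature files
    login.feature
api-tests/
    central-scenarios/    # submodule pinned to a known-good commit
    steps/                # API step definitions
desktop-tests/
    central-scenarios/
    steps/                # Selenium step definitions
mobile-tests/
    central-scenarios/
    steps/                # Appium step definitions
```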
If anybody has been there and done this and has the solution, I'm all ears.
The last thing I want to do is create separate feature files for each test type.
If either of you are interested, we could do a Zoom session at some point and I can show you what we've done. We're still a work in progress, but some feedback would be great. I'm sure I can show the tests without getting into anything commercially sensitive.
I'm also trying to set up Cucumber for iOS (Swift), but I have encountered many problems with Xcode targets / libraries / the Gherkin language not being recognised, and so on… (I tried Cucumberish / XCTest.)
Can you please provide more useful info about how to set up the project, or a GitHub link to a short example project?