I have worked almost exclusively in startups for my QA engineering career, except for the time I spent at Samsung research labs. In a startup, I think the best way to create a testing team is to go into the company with a strategic mindset. If you show the company a QA roadmap, a time to implement, and success criteria right off the bat, the C-level or management is more likely to listen to your proposal for headcount. For instance, say you initially define success metrics: this is how we qualify a successful release of a product, this is how much risk we are willing to tolerate for specific code deploys, and this is how much time that testing will take. It then becomes easy to benchmark when any of those factors drifts from your existing baselines, and as those numbers change it is easy to make a case for increased headcount.
I like to present a proposal to my higher-ups that shows the various levels of QA, the market cost of each position, and the amount of time it will take to find qualified candidates to fill the spot.
As for how I actually test at a startup: I create a roadmap, and it looks something like this:
Notes on building a practice of Quality for “company”
The first 30 days: The point of QA is not to disrupt the flow of development but to understand it while working towards expediting the process of delivering quality products to the end user. Towards this end, the first 30 days should be designed so maximal effort is expended understanding the product, user base, feature sets, and timelines for upcoming development cycles. This includes absorbing any existing processes of code review, unit testing, and design to develop feedback cycles in order to intelligently build out systems that expand upon the current practice.
- Create a workflow diagram of existing products
- Separate the individual products by feature set
- Map product states and components
- Understand where code stability exists
- Create a pipeline for regression documentation and testing
- Create a pipeline for automating stable feature sets
  - This helps identify where code-complete feature sets may be influenced by future development
- Create regression documentation and test cases to be used for each code release
  - The regression suite should be automated sometime in the first 6 months, depending on complexity. This is key, as it allows developers to run these tests locally to verify that minor code changes introduce no regressions, which tightens the feedback loop and gives an immediate read on the risk of a change.
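To make that concrete, here is a minimal sketch of what one of those developer-runnable regression tests might look like, written as plain pytest-style functions. Everything in it is hypothetical: `apply_discount` is just a stand-in for any stable, code-complete feature whose behavior you want to pin down.

```python
# Hypothetical regression tests a developer can run locally before pushing.
# pytest discovers test_* functions automatically; `pytest tests/` runs them.
# apply_discount is an illustrative stand-in for a stable feature.

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(max(price * (1 - percent / 100), 0.0), 2)

def test_discount_basic():
    assert apply_discount(100.0, 25) == 75.0

def test_discount_boundaries():
    # Pin down the edge cases so a future refactor can't silently change them.
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0

def test_discount_rejects_bad_input():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for percent > 100")
```

Because these are ordinary functions with plain asserts, developers can run them locally in seconds, and the same suite can be wired into the CI build later.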
- Become familiar with existing tools
  - What are the current workflows from design to production?
  - What tools are connected to the CI platform (CircleCI?)
  - What do these hooks do?
  - Work with, then optimize, issue tracking for instantaneous feedback on development issues (Clubhouse?)
  - Which database is used, and which microservices access it (Datomic, SQL, etc.?)
  - Is load testing done on the application's services to ensure that rapid scaling is possible?
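Even before adopting a dedicated tool like Locust or k6, a first load-testing smoke check can be a short script that fires concurrent requests and reports latency percentiles. This is a sketch under assumptions: a local `http.server` stands in for the real service, and in practice you would point `measure_latencies` at a staging URL.

```python
# Hypothetical load-testing smoke check: fire N concurrent requests at a
# service and look at latency percentiles. A throwaway local HTTP server
# stands in for the real service so the sketch is self-contained.
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # keep the console quiet
        pass

def measure_latencies(url: str, requests: int = 50, workers: int = 10):
    """Hit url `requests` times across `workers` threads; return sorted latencies."""
    def one_call(_):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sorted(pool.map(one_call, range(requests)))

# Stand-in service on an ephemeral port:
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

latencies = measure_latencies(f"http://127.0.0.1:{server.server_port}/")
p95 = latencies[int(len(latencies) * 0.95) - 1]
print(f"p95 latency: {p95 * 1000:.1f} ms")
server.shutdown()
```

Tracking a p95 number per release gives you an early baseline for the "can we scale rapidly?" question before investing in full load-testing infrastructure.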
- Understand development's current level of testing
  - What is the current code coverage in unit tests?
  - How is coverage decided?
  - Is it based on the individual developer, or is it initiative-based?
  - Are these unit tests integrated with CI tools on build?
  - Are linters used in the build tools so that code style does not affect compiled outcomes?
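Once coverage expectations are agreed on, they can be enforced mechanically in CI. The sketch below assumes a Cobertura-style `coverage.xml` (the format coverage.py can emit); the sample report string is made up for illustration.

```python
# Hypothetical CI coverage gate: parse a Cobertura-style coverage.xml and
# fail the build when line coverage drops below an agreed threshold.
import xml.etree.ElementTree as ET

def coverage_gate(xml_text: str, threshold: float = 0.80) -> bool:
    """Return True if the report's overall line-rate meets the threshold."""
    root = ET.fromstring(xml_text)
    line_rate = float(root.attrib["line-rate"])
    print(f"line coverage: {line_rate:.0%} (threshold {threshold:.0%})")
    return line_rate >= threshold

# Inline stand-in for a real coverage.xml artifact produced by the build:
sample_report = '<coverage line-rate="0.85" branch-rate="0.70"></coverage>'
assert coverage_gate(sample_report, threshold=0.80)
```

Wiring a check like this into the build step makes the coverage decision a team policy rather than something left to each individual developer.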
- Mine historical data from past releases
  - What are the most common blocker issues, as seen from the engineering/product/design teams' viewpoints?
  - Does the product have a feedback mechanism to deliver bugs found in the wild into the development pipeline?
    - How does this flow work?
    - What is the assumed priority of an externally reported bug?
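One way to make that "assumed priority" explicit is a small triage rule that turns an external report into a starting priority the team can then override. Both the severity scale and the user thresholds below are assumptions for illustration, not a standard.

```python
# Hypothetical triage rule: combine reported severity with blast radius
# (how many users are affected) to propose an initial priority bucket.

def triage_priority(severity: str, affected_users: int) -> str:
    """Map an externally reported bug onto a starting priority."""
    severity = severity.lower()
    if severity == "critical" or affected_users > 1000:
        return "P0"  # drop everything
    if severity == "major" or affected_users > 100:
        return "P1"  # fix this sprint
    if severity == "minor":
        return "P2"  # schedule during backlog grooming
    return "P3"      # cosmetic, track only
```

Writing the rule down, even this crudely, means every wild-caught bug enters the pipeline with a defensible default instead of an ad-hoc judgment.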
- Start creating a traceability matrix to understand how each service or CI job affects the entirety of the app
  - For example: in sprint 3.4 we released code for X, Y, and Z, and code base R was affected
- Draw conclusions from those interactions, then validate those feature sets through exploratory testing so that regressions are known and can be covered during future development
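A traceability matrix doesn't need heavyweight tooling to start; a mapping from services to the feature sets they touch already answers "what is at risk from this change?". The service and feature names below are invented for illustration.

```python
# Hypothetical traceability matrix: map each service (or CI job) to the
# feature sets it touches, so a code change can be translated into the
# regression surface it puts at risk.
from collections import defaultdict

TRACEABILITY = {
    "auth-service":    {"login", "signup", "password-reset"},
    "billing-service": {"checkout", "invoicing"},
    "search-service":  {"search", "recommendations"},
}

def features_at_risk(changed_services):
    """Return every feature set affected by the changed services."""
    risk = set()
    for service in changed_services:
        risk |= TRACEABILITY.get(service, set())
    return sorted(risk)

def regression_plan(changed_services):
    """Group at-risk features by owning service, as a starting test plan."""
    plan = defaultdict(list)
    for service in changed_services:
        for feature in sorted(TRACEABILITY.get(service, set())):
            plan[service].append(feature)
    return dict(plan)
```

With this in place, "in sprint 3.4 we touched auth-service" immediately yields the list of feature sets that need regression coverage, which is exactly the conclusion-drawing step above.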
Days 30-90: Now that the product is quantified and business needs are understood, the key is optimizing existing processes to help increase the speed of development. This can include replacing weak internal tooling with better alternatives and getting rid of any encumbering processes that block development and rapid quality engineering. This is also when we should start researching automation tools and testing those tools' efficacy in the engineering teams' current workflow. Tools may include visual regression testing, browser syncing, and automated test suites written in a paradigm that allows developers to create automation as well, if there is time between development cycles.
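The core idea behind visual regression testing can be sketched without any particular tool: compare a baseline render against a new one and fail if too many pixels differ. Real tools (BackstopJS, Percy, and the like) operate on actual screenshots; plain 2D lists of pixel values stand in here as an assumption to keep the sketch self-contained.

```python
# Conceptual sketch of visual regression testing: fail a build when the
# new render differs from the approved baseline by more than a tolerance.

def diff_ratio(baseline, candidate) -> float:
    """Fraction of pixels that changed between two same-sized frames."""
    total = changed = 0
    for row_a, row_b in zip(baseline, candidate):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            changed += px_a != px_b
    return changed / total

def visual_regression_passed(baseline, candidate, tolerance=0.01) -> bool:
    # A small tolerance absorbs anti-aliasing noise between renders.
    return diff_ratio(baseline, candidate) <= tolerance

base = [[0] * 100 for _ in range(100)]
new = [row[:] for row in base]
new[0][0] = 255  # one changed pixel out of 10,000
assert visual_regression_passed(base, new)
```

When evaluating real tools in this phase, the tolerance knob is the thing to test hardest: too tight and every font-rendering change blocks a release, too loose and genuine UI regressions slip through.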
Days 90-180: Now that the product is understood, regression models are formulated for new features and updated for old features where necessary, and automation frameworks have been validated and implemented. At this point the QA practice should start to suggest quality metrics for code releases. By this stage QA should also help define the key performance indicators for release cycles, so that success metrics are understood more thoroughly. This tends to be a stage of rapid growth, as QA aligns with engineering's goals and team build-outs are considered to cover diverse product offerings, though this depends on the depth of testing needed.
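Two release KPIs that a QA practice might propose at this stage are defect escape rate (the share of defects that reach production) and mean time to resolve. These particular metrics, and the sample numbers, are illustrative assumptions, not a prescribed set.

```python
# Hypothetical release KPIs for the 90-180 day phase.

def defect_escape_rate(found_in_qa: int, found_in_prod: int) -> float:
    """Share of all defects in a release that escaped to production."""
    total = found_in_qa + found_in_prod
    return found_in_prod / total if total else 0.0

def mean_time_to_resolve(resolution_hours) -> float:
    """Average hours from report to fix; assumes a non-empty list."""
    return sum(resolution_hours) / len(resolution_hours)

# Example release: 18 bugs caught pre-release, 2 escaped to production.
rate = defect_escape_rate(found_in_qa=18, found_in_prod=2)
print(f"defect escape rate: {rate:.0%}")  # 10%
```

Tracking these per release cycle turns "was this release successful?" into a number you can trend, which loops back to the baseline-and-benchmark argument for headcount at the top of this post.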