Advice For Testing In A Startup

I previously asked “How would you start a test team in a company?”.

I’ve been thinking about it from a different angle lately. That post focussed on setting up a testing/QA team within a company. Here I’m wondering about those of you who have worked in a startup: not actually starting a testing team, but choosing what and how to test.

What is your own experience of software testing (or quality assurance if you prefer :slight_smile: ) in a startup? What advice would you give to a startup looking to test well or test better? Do you focus more on automation? Security? Performance? Exploratory? Helping people to understand the risks across the board?

In my own experience, where you’re trying to get funding in multiple rounds, metrics were a huge part of the presentations to those investors. It wasn’t focussed solely on the testing efforts; they wanted to see customer uptake etc., but testing was part of that. Have you ever had to do that?


I have worked almost exclusively in startups for my QA engineering career, except for the time I spent at Samsung research labs. In a startup, I think the best way to create a testing team is to go into the company with a strategic mindset. If you show the company a QA roadmap, time to implement, and success criteria right off the bat, the C level or management is more likely to listen to your proposal for headcount. For instance, say you initially speak about success metrics: this is how we qualify a successful release of a product, how much risk we are willing to tolerate for specific code deploys, and how much time that testing will take. Once those are agreed, it is easy to benchmark when any of those factors impacts your existing baselines, and as the numbers change it is easy to make a case for increasing headcount.

I like to make a proposal to my higher-ups that shows the various levels of QA, the cost of the position as a marketplace reality, and the amount of time it will take to find qualified candidates to fill the spot.

As for how I test at a startup, I create a roadmap that looks something like this:

Notes on building a practice of Quality for “company”
The first 30 days: The point of QA is not to disrupt the flow of development but to understand it, while working towards expediting the process of delivering quality products to the end user. Towards this end, the first 30 days should be designed so that maximal effort is expended on understanding the product, user base, feature sets, and timelines for upcoming development cycles. This includes absorbing any existing processes of code review, unit testing, and design to develop feedback cycles, in order to intelligently build out systems that expand upon the current practice.

- Create a workflow diagram of existing products
- Separate the individual products by feature set
- Map product states and components
- Understand where code stability exists
- Create a pipeline for regression documentation and testing
- Create a pipeline for automating stable feature sets
  - This helps to understand where code-complete feature sets may be influenced by future development
- Create regression documentation and test cases to be used for each code release
  - The regression document should be automated sometime in the first 6 months, depending on complexity. This is key, as it allows developers to run these tests locally to verify that no regressions happen for minor code changes, and it immediately tightens the feedback loop and the understanding of the risk of code changes. (A minimal sketch of what such a locally runnable suite could look like follows this list.)
- Become familiar with existing tools
  - What are the current workflows from design to production?
  - What tools are connected to the CI platform (CircleCI?)
    - What are these hooks’ functions?
  - Work with, then optimize, issue tracking for instantaneous feedback on development issues (Clubhouse?)
  - Which database is used and which microservices access it (Datomic, SQL, etc.?)
  - Is load testing done on the application’s services to ensure that rapid scaling is possible?
- Understanding development’s current level of testing
  - What is the current code coverage in unit tests?
  - How is coverage decided?
    - Is it based on the individual developer, or initiative based?
  - Are these unit tests integrated with CI tools on build?
  - Are linters used in the build tools so that code style does not affect compiled outcomes?
- Mining historical data of releases
  - What are the most common blocker issues as seen from the engineering/product/design teams’ viewpoints?
  - Does the product have a feedback mechanism to deliver bugs found in the wild into the development pipeline?
    - How does this flow work?
    - What is the assumed priority of an externally reported bug?
- Start creating a traceability matrix to understand how each service or CI job affects the entirety of the app.
  - For example: in sprint 3.4 we released code for X, Y, Z and code base R was affected.
  - Draw conclusions from those interactions and validate them through exploratory testing of these feature sets, so that regressions can be known and covered for future development.
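
To make the “developers can run the regression suite locally” point concrete, here is a minimal sketch using pytest and requests (my own tool choice, not something the roadmap prescribes); the base URL, endpoints, and expected fields are hypothetical stand-ins for whatever your regression document actually captures.

```python
# regression_smoke.py - a minimal, locally runnable regression check.
# Hypothetical endpoints and fields for illustration only; replace them with
# the cases from your own regression document. Run with: pytest regression_smoke.py
import os

import pytest
import requests

# Developers point this at their local stack; CI points it at a staging build.
BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8000")


def test_health_endpoint_is_up():
    """The service should respond before any deeper checks run."""
    resp = requests.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200


@pytest.mark.parametrize("product_id", ["demo-1", "demo-2"])
def test_known_products_still_render(product_id):
    """Stable feature set: previously shipped products must still load."""
    resp = requests.get(f"{BASE_URL}/products/{product_id}", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Fields the regression document says must never disappear.
    assert "name" in body and "price" in body
```

The same command developers run locally (`pytest regression_smoke.py`) can then be attached to the CI build, so every commit gets the same feedback the developer saw on their own machine.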

Days 30-90: Now that the product is quantified and business needs are understood, optimizing existing processes to help increase the speed of development is key. This can include replacing some internal tooling with better tooling and getting rid of any encumbering processes that block development and rapid quality engineering. This is also where we start to research automation tools and test their efficacy in the current workflow of the engineering teams. Tools may include visual regression testing, browser syncing, and automated test suites written in a paradigm that allows developers to also create automation, if there is time between development cycles.
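
As one illustration of what that research could produce, here is a rough sketch of a visual regression check built with Playwright and Pillow; both libraries, the URL, and the baseline path are my own assumptions for the example, not tools the roadmap recommends.

```python
# visual_check.py - rough sketch of a visual regression check with Playwright.
# Assumes `pip install pytest playwright pillow` and `playwright install chromium`;
# the URL and baseline image path are placeholders for illustration.
import shutil
from pathlib import Path

from PIL import Image, ImageChops
from playwright.sync_api import sync_playwright

BASELINE = Path("baselines/home.png")


def capture_home_screenshot(path: Path) -> None:
    """Render the page in headless Chromium and save a full-page screenshot."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 720})
        page.goto("http://localhost:8000/")
        page.screenshot(path=str(path), full_page=True)
        browser.close()


def test_home_page_matches_baseline(tmp_path):
    current = tmp_path / "home.png"
    capture_home_screenshot(current)
    if not BASELINE.exists():
        # First run: store the baseline instead of failing.
        BASELINE.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy(current, BASELINE)
        return
    diff = ImageChops.difference(Image.open(BASELINE), Image.open(current))
    # Any non-empty bounding box means pixels changed since the baseline.
    assert diff.getbbox() is None
```

Because this is written as an ordinary pytest test, developers can add new page checks the same way they add unit tests, which is one way to get the “paradigm that allows developers to also create automation” mentioned above.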

Days 90-180: Now that the product is understood, regression models are formulated for new features and updated for old features where necessary, and automation frameworks have been validated and implemented, the QA practice should start to suggest quality metrics for code releases. By this stage QA should also help define the key performance indicators for release cycles, so that success metrics are understood more thoroughly. This tends to be a stage of rapid growth as QA aligns with engineering’s goals and team build-outs are considered to cover diverse product offerings, though this depends on the depth of testing needed.
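
As a purely illustrative example of what such release metrics might look like (the metric names and numbers below are invented, not part of the roadmap), the first KPIs are often simple ratios pulled from the issue tracker:

```python
# release_kpis.py - toy calculation of two common release-quality KPIs.
# The counts are made up; in practice they would come from the issue tracker.
from dataclasses import dataclass


@dataclass
class ReleaseStats:
    bugs_found_before_release: int
    bugs_found_in_production: int
    stories_shipped: int


def defect_escape_rate(stats: ReleaseStats) -> float:
    """Share of all known bugs that were only caught after release."""
    total = stats.bugs_found_before_release + stats.bugs_found_in_production
    return stats.bugs_found_in_production / total if total else 0.0


def defects_per_story(stats: ReleaseStats) -> float:
    """Rough proxy for how bug-prone a release cycle was."""
    return (
        (stats.bugs_found_before_release + stats.bugs_found_in_production)
        / stats.stories_shipped
    )


release = ReleaseStats(bugs_found_before_release=18,
                       bugs_found_in_production=2,
                       stories_shipped=25)
print(f"Escape rate: {defect_escape_rate(release):.0%}")      # 10%
print(f"Defects per story: {defects_per_story(release):.2f}")  # 0.80
```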


I think @phillipe has a great roadmap that I wish I had had when I was in my startups/small companies!

Things I would add:

  1. You often wear a lot of hats in a startup, and may end up taking an afternoon to do things you would not expect, such as fixing production issues, being customer support, hanging up framed customer praise (an actual thing that happened before an investor meeting in our office). This can be really disruptive, and put you well outside of your comfort zone.
  2. I did a lot of ad hoc testing, partially because I was just starting out, but also because we had a lot of pressure to release fast. That was also disruptive to longer-term efforts. It is definitely not an environment for someone who is risk averse or who wants a structured, rhythmic job (though that is something that should be worked toward).
  3. The newest things being worked on always got tested the hardest. Turnaround time is faster for features that developers are currently working on or have just finished, and that’s what is usually going to add value and draw in customers and new investors.
  4. I prioritized breadth over depth in most cases. A lot of startups are trying to keep moving fast with releases, and while edge cases are important, it’s too easy to get into the weeds with bugs that no one has time to fix and that may affect no one. Document as you go, but be prepared to let things go.