What do your Test Environments look like?

I currently have Docker running on my PC and I switch between Pull Requests (git) to test in local environments. Sometimes I'll test on an integrated 'staging' environment too.
At this point my thinking is that we could test at any given point in time. Whether it's local, on staging or in production doesn't really matter too much; it just means different checks at different times.
Constantly switching local environments is very painful. Is there any service/tool that generates a virtual environment per Pull Request on demand?
Or did you come up with different answers to this problem?

My main problem at the moment is testing the simpler things (some flows, text, design, different browsers, …) on different Pull Requests (PRs) at the same time. An example:

  1. A PR is ready; I set up the environment, test data and state correctly to test it locally.
  2. I find some issues, give feedback and move to the next PR.
  3. I do step 1 and 2 again, but for the next PR.
  4. By then the first PR has integrated changes based on the feedback I gave, so I switch back, sometimes having to do the setup again.

This is painful, especially when there are many PRs being 'juggled' at the same time.
I found that env0.com does something to virtualise local environments per PR. Does anyone know of alternatives, or has anyone worked with something similar?
I'm especially interested in less-regulated, startup-style contexts, where the risk of failure in production is lower.
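
To make 'a virtual environment per Pull Request' concrete, this is roughly what I'm imagining — just a sketch, assuming the app runs with docker compose; the branch names, project names and ports are made up:

```bash
# Two PRs checked out side by side, each with its own isolated stack.
# Assumes a docker compose setup; branch and project names are placeholders.
git worktree add ../app-pr-101 feature/some-branch
git worktree add ../app-pr-102 feature/another-branch

# Separate compose project names keep containers, networks and volumes apart.
# (Published host ports would still need to differ per project, e.g. via an env var.)
(cd ../app-pr-101 && docker compose -p pr-101 up -d)
(cd ../app-pr-102 && docker compose -p pr-102 up -d)

# When a PR is done (or gets new commits), only that stack is torn down/rebuilt.
(cd ../app-pr-101 && docker compose -p pr-101 down -v)
```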


I'm answering this assuming you're mainly testing things manually…

Is there a reason you can't juggle your containers a bit more effectively? Or maybe even make images with known states instead of containers? i.e. have one database image where the workflow is in known states for certain accounts, another with accounts in different states, and just switch which container you use to persist the environment? (Substitute whatever container/image you need for the persistence layer.)
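
Something along these lines is what I mean — only a sketch, assuming Postgres as the persistence layer; the image names and fixture paths are made up:

```bash
# One database image per known state; the official postgres image runs any
# *.sql placed in /docker-entrypoint-initdb.d when a container first starts.
# All names and paths here are examples.
cat > Dockerfile.accounts-in-review <<'EOF'
FROM postgres:16
COPY fixtures/accounts-in-review/ /docker-entrypoint-initdb.d/
EOF
docker build -f Dockerfile.accounts-in-review -t myapp-db:accounts-in-review .

# Start whichever known state the PR under test needs.
docker run -d --name pr-101-db -p 5433:5432 -e POSTGRES_PASSWORD=test \
  myapp-db:accounts-in-review
```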

Alternatively, it seems like you might be able to iterate on your persistence layer image and add fixture accounts/data/state to it, so that it can be leveraged more effectively? It might not get things 100% to the state you need, but even if it gets you halfway and you only need to nudge the workflow a few more steps along, that seems like it'd be useful? You could slowly add more accounts/states/etc. as necessary, and the persistence layer you use for testing gets more and more useful over time.
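
In other words, each time a new account/state proves useful you add another fixture and rebuild under a bumped tag, so the test-data image just accumulates (again, names are illustrative):

```bash
# Drop a new fixture into the baseline set and rebuild under a new tag,
# so older states remain available if another PR still needs them.
# (Dockerfile.baseline follows the same pattern as above, pointing at fixtures/baseline/.)
cp new-fixtures/half-completed-workflow.sql fixtures/baseline/
docker build -f Dockerfile.baseline -t myapp-db:baseline-v2 .
```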

Managing the persistence layer and having multiple sets of them is something we're working on too.

It's not a microservice context. (Yet?)
I'm new to the project and my knowledge of the application and persistence layer isn't there yet.
I'm finding my way in how everything is ordered and structured, sometimes manipulating data directly in the database to get to a state I need.
It's far from ideal, but for now it works. We gradually have to build in more testability.
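
For example, something as blunt as this, just to nudge a record into the state a PR needs (the container, database, table and column names are made up):

```bash
# Manually push a specific record into the state needed for the test at hand.
docker exec -i app-db psql -U postgres -d appdb \
  -c "UPDATE orders SET status = 'awaiting_review' WHERE id = 42;"
```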

At the moment, we're building something that will create an environment for me when I add a tag to a PR and will tear it down when I remove the tag.
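
Roughly the shape of it — a simplified sketch of the hook behind that tag, assuming the CI system passes in the PR number and whether the tag was added or removed (all names here are placeholders):

```bash
#!/usr/bin/env bash
# Invoked by CI whenever the environment tag on a PR changes.
# PR_NUMBER and LABEL_ACTION ('labeled' / 'unlabeled') are assumed to be
# supplied by the CI system; the compose setup is a stand-in.
set -euo pipefail

case "$LABEL_ACTION" in
  labeled)
    docker compose -p "pr-${PR_NUMBER}" up -d    # spin up an isolated stack for this PR
    ;;
  unlabeled)
    docker compose -p "pr-${PR_NUMBER}" down -v  # tear it down, volumes included
    ;;
esac
```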

Coupled with a more encompassing persistence layer, I think I'll be able to provide feedback a lot faster.

Curious, what does the env & test data setup entail? Does it vary widely between PRs/branches/envs, or is it mostly the same repetitive steps/info? What are the container(s) for? Do they encapsulate the service/application plus dependencies, test data, etc., or just some subset of those?

In my particular workflow, I have a local dev container image for the "runtime" needed to do dev/unit testing, etc. without needing local dependencies (outside of Docker). I map the path of the source code being tested, so I only need to rebuild the container when software/library dependencies change. Testing code across branches/PRs is simply a matter of swapping branches with git commands, and the container picks the changes up naturally via the mapped path. That may or may not relate to what you are doing.
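
Concretely, it looks something like this (the image name, paths and test command are just examples):

```bash
# The runtime image only changes when dependencies change; the source tree is
# bind-mounted, so the container always sees whatever branch is checked out.
docker run --rm -it \
  -v "$(pwd)":/workspace -w /workspace \
  my-dev-runtime:latest ./run-tests.sh

# Reviewing a different PR is just a branch swap on the host.
git fetch origin
git switch another-feature-branch
```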