Tonight's masterclass, our first of 2020, "Test Environments Management with Docker", is on a topic I started to dive into right before leaving my last testing job, so I was really interested to see how someone handled it better than I did.
As always, the recording of the masterclass will be available to MoT Pro members in the masterclass section.
If we didn't get to your questions tonight or you'd like to continue the conversation, why not ask them here?
So on The Club, we have a whole post for those interested in getting started with Docker
Links from the chat:
VSCode plugin is just "Docker" - publisher is Microsoft. VS Marketplace link: Docker - Visual Studio Marketplace
Questions from the chat:
Can two docker containers, one running Jenkins and the other the automated test cases, work together?
In a Dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don't have to hand edit the Dockerfile after each build
Questions we didnāt get to on the evening:
What limitations of Docker have you encountered in practice? In what situations would you consider a different approach?
Do you recommend docker-compose over kubernetes?
Any tips or tricks you can suggest for setting up test data in a docker container?
Can you (easily) inject secrets into a container? Putting passwords etc. in a (potentially public) git repo is not recommended.
Docker vs LXD?
Isn't there a dependency for an application based on the type of container it is being stacked on, e.g. Google docker, Azure docker? Reason being we do have native, platform-dependent applications. How is that handled in docker?
I've seen docker have issues with volumes acting slow when you map big folders. Have you come across this yourself?
How can docker help in test automation in a DevOps environment? How can it be integrated into a testing pipeline?
Do you have a wishlist of enhancements/changes to Docker? If so, what's on it?
In a Dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don't have to hand edit the Dockerfile after each build
Is it bad to have docker running in WSL on Windows?
I dropped out almost halfway through; I hope to be able to catch more of the webcast somehow. Please shout if I get these answers wrong, I last touched this about a year ago. Answers:
Q: "Can two docker containers work together?"
A: "Think of a dockerfile as a template, each copy of the template is a brand new machine, identical. Docker will inject identity into each machine to prevent having 20 computers on your lan all with the same computername because it generates a random name each machine. Itās possible to exercise some control over the computername. So itās a template, and you use it to spin up as many āinstancesā as you like. Once shut down an instance is gone forever. There is no way to get any files that the instance created, unless you copied them before stopping the container.
Q: In a dev environment, how do I get the Dockerfiles to be based on the latest code?
A: You don't, or rather: you check out the master branch of the repo at a specific point and bake that into your Dockerfile. This speeds things up when you create a container, but the container will only have that initial checkout, so you need to check out the delta as part of your test script - that's normally going to be pretty fast. A tactic might be to roll the Dockerfile forward by rebuilding it once in a while; this is extra work, since it's a new revision. If you have branches, this has to be balanced against how much lifting work it will entail - an initial pull with all branches might just bloat the image (the built Dockerfile).
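A rough sketch of that tactic, assuming a git-based project (the repo URL, branch, and script names are placeholders):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y git
# Bake a snapshot of the code into the image at build time
RUN git clone --branch master https://example.com/your/repo.git /src
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
```

```sh
#!/bin/bash
# entrypoint.sh: pull only the delta since the image was built, then run the suite
cd /src && git pull --ff-only
./run-tests.sh   # placeholder for whatever kicks off your tests
```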
My take on it:
Docker is really dead simple, and this is from someone who is not a devops guru.
You need to know the linux/unix system reasonably well, enough to write bash scripts to do all the things you want to set up in your environment.
You can even run docker under Windows, but don't! Run it under Ubuntu on VirtualBox; it performs better.
I used docker (based on an Ubuntu 16 distro release) to do builds, trigger test agents, and collect logs. The most pain I had was moving credentials around in Jenkins and into the containers.
It will take you about 1 day to get Docker working from absolute scratch, so go for it - there's a rough install sketch after this list.
Once you see what it can do, and decide it helps, build a proper docker server.
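If it helps anyone get started, a rough sketch of the Ubuntu route (package names are from Ubuntu's own repos and may differ between releases, so treat this as a starting point rather than gospel):

```sh
# On an Ubuntu VM (e.g. under VirtualBox)
sudo apt-get update
sudo apt-get install -y docker.io      # Ubuntu's packaged Docker engine
sudo usermod -aG docker $USER          # optional: run docker without sudo (log out/in after)
sudo docker run hello-world            # sanity check that the daemon is working
```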
It's headless, so testing GUI apps is a no-go. Sadly this is my situation currently :sadface:
That's not true, but it is restricted to Linux GUI apps (and web browsers on Linux) only: via Xvfb, via X11 forwarding from the docker container (as guest/remote) out to the (local) host which has a GUI to render, or via VNC.
I myself set up a docker image to launch JMeter, with preconfigured load test scripts available on the container and the option to load the GUI rather than command-line mode, via VNC.
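For anyone wondering what that looks like, here's a rough sketch of the Xvfb + VNC approach (package names are the Ubuntu ones; the JMeter path is a placeholder, not my actual setup):

```dockerfile
FROM ubuntu:18.04
# Xvfb provides a virtual display inside the container, x11vnc exposes it over VNC
RUN apt-get update && apt-get install -y xvfb x11vnc default-jre
COPY apache-jmeter /opt/jmeter          # placeholder: your JMeter install + test plans
ENV DISPLAY=:99
EXPOSE 5900
CMD Xvfb :99 -screen 0 1280x1024x24 & \
    sleep 1 && x11vnc -display :99 -forever -nopw & \
    sleep 2 && /opt/jmeter/bin/jmeter   # JMeter GUI renders onto the virtual display
```

Run it with something like `docker run -p 5900:5900 ...` and point a VNC client at port 5900 to see the GUI.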
If the binaries and packages have to be run or served by the docker container itself, then the Dockerfile or image needs to be rebuilt on each successful code commit/push. However, if you just need the repository files (e.g. scripts, test scripts, test data, other assets), or you can pull the branch's built assets from some other artifact repository (that CI builds save to), then you can reuse the same docker image/container: when it starts up, you point it at, or map a volume to, wherever the latest code/binaries are, so the container picks them up for execution. The files don't all have to be stored within the image. This scenario is useful if you just need a common runtime with base dependencies (e.g. python, ruby, node, java), where the actual code to run can be pulled externally and is runnable as long as the generic runtime is in the docker image.
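A small sketch of that pattern, assuming a Python suite lives in ./tests on the host (the paths, image tag, and requirements file are just examples):

```sh
# Generic runtime image, latest code mounted in at start-up; no image rebuild per commit
docker run --rm \
  -v "$(pwd)/tests:/tests" \
  python:3.8 \
  bash -c "pip install -r /tests/requirements.txt && pytest /tests"
```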
There are ways to do it. One common option is through environment variables; another could be through volume mapping, where the mapped volume contains the secrets (although it may need to be an encrypted volume that's decrypted on access inside the docker container, etc.).
For the environment variable method, you can set the variables on the docker host, then pass them through by name to the docker container at startup. Specifying environment variables isn't only done as key/value pairs: if you provide just the key name with no value, docker pulls that variable's value from the host environment and injects it, something like that.
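A quick sketch of that pass-through behaviour (the variable and image names are made up):

```sh
export API_TOKEN=s3cr3t                     # lives only on the docker host / CI agent

# -e NAME=value sets an explicit value;
# -e NAME with no value copies the variable from the host environment instead,
# so the secret never has to appear in the Dockerfile or the repo.
docker run --rm -e API_TOKEN my-test-image env | grep API_TOKEN
```

docker-compose supports the same pass-through when you list a bare variable name under a service's `environment:` key.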
Beyond docker, it might be worth looking into Vagrant and Terraform. They're a similar concept to docker but for provisioning bigger parts of an environment within a virtual machine or on cloud infrastructure like AWS, GCP, etc. Useful for testing.
But from personal experience at work, if the cloud environment has many components and scales big, Terraform can still be slow to deploy an environment from scratch, e.g. a minimum of 2 hours for the base environment setup.
Yes, an X11 graphics surface is not something I am familiar with. I excluded it as an option to test against because, although it's legitimately graphical, a good test environment, and you can VNC into it, 90% of my desktop users are on Windows. I would assume that X11 gurus will know enough to install the dummy driver plumbing needed, but it's totally beyond my depth. Thanks for reminding me that it's an option @daluu