Masterclass: Test Environments Management with Docker - Further Discussion

Tonight's masterclass, our first of 2020, "Test Environments Management with Docker", is on a topic I started to dive into right before leaving my last testing job, so I was really interested to see how someone handled it better than I did :sweat_smile:

As always, the recording of the masterclass will be available to MoT Pro members in the masterclass section.

If we didn't get to your questions tonight or you'd like to continue the conversation, why not ask them here?


So on The Club, we have a whole post for those interested in getting started with Docker

Links from the chat:
The VSCode plugin is just "Docker" (publisher: Microsoft). VS Marketplace link: Docker - Visual Studio Marketplace

Questions from the chat:

  1. Can two Docker containers (one running Jenkins, the other running automated test cases) work together?
  2. In a dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don't have to hand-edit the Dockerfile after each build?

Questions we didnā€™t get to on the evening:

  1. What limitations of Docker have you encountered in practice? In what situations would you consider a different approach?
  2. Do you recommend docker-compose over kubernetes?
  3. Any tips or tricks you can suggest for setting up test data in a Docker container?
  4. Can you (easily) inject secrets into a container? (Putting passwords etc. in a potentially public Git repo is not recommended.)
  5. Docker vs LXD?
  6. Isn't there a dependency for an application on the type of platform the containers are stacked on, e.g. Google Docker vs Azure Docker? The reason being that we do have native, platform-dependent applications. How is that handled in Docker?
  7. I've seen Docker have issues with volumes acting slow when you map big folders. Have you come across this yourself?
  8. How can Docker help with test automation in a DevOps environment? How can it be integrated into a testing pipeline?
  9. Do you have a wishlist of enhancements/changes to Docker? If so, what's on it?
  10. In a dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don't have to hand-edit the Dockerfile after each build?
  11. Is it bad to have docker running in WSL on Windows?
  12. Is docker used in production? How?
    12.1 Answer within the chat: Google: 'EVERYTHING at Google runs in a container' • The Register

That was a very useful webinar, thank you. I've tried my hand at Docker before; this motivated me to try again.


I missed the answer to this question because the video went down:

For a distributed environment, deployed across multiple app servers, databases, etc., how can Docker be used?

Can anyone summarise the key points of the answer please?

  1. I dropped out almost halfway through; I hope to be able to catch more of the webcast somehow. Please shout if I get these answers wrong, I last touched this about a year ago.
    Answers
  • Q: "Can two Docker containers (one Jenkins, one for automated tests) work together?"

  • A: Think of a Dockerfile as a template; each copy of the template is a brand-new, identical machine. Docker will inject identity into each machine to prevent having 20 computers on your LAN all with the same computer name, because it generates a random name for each machine. It's possible to exercise some control over the name. So it's a template, and you use it to spin up as many "instances" as you like. Once shut down, an instance is gone forever. There is no way to get any files that the instance created unless you copied them out before stopping the container. (A sketch of this workflow follows after these answers.)

  • Q: In a dev environment, how do I get Dockerfiles with the latest code on them?

  • A: You don't, or rather you check out the master branch of the repo at a specific point and bake that into your Dockerfile. This will speed things up when you create a container, but it will only have the initial checkout; you need to check out the delta as part of your test script. That's normally going to be pretty fast, though. A tactic might be to roll the Dockerfile forward by rebuilding it once in a while, but that is extra work, since it's a new revision. If you have branches, this will have to be balanced against how much lifting work it entails. An initial pull with all branches might just bloat the image (the built Dockerfile). (A sketch of this pattern also follows below.)
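A minimal sketch of the template/instance idea from the first answer, including copying results out before a container is removed (the image and container names here are illustrative, not from the talk):

```bash
# Build the "template" (image) once from a Dockerfile in the current directory
docker build -t test-env:latest .

# Spin up as many identical instances (containers) as you like
docker run -d --name test-run-1 test-env:latest
docker run -d --name test-run-2 test-env:latest

# Copy results out BEFORE removing a container, or they are gone for good
docker cp test-run-1:/app/results ./results-run-1

# Stop and remove the instances; the image (template) is untouched
docker stop test-run-1 test-run-2
docker rm test-run-1 test-run-2
```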
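And a hedged sketch of the second answer's pattern: bake an initial checkout into the image, then pull only the delta when the container starts. The repo URL, branch handling and run_tests.sh script are hypothetical stand-ins:

```bash
# Bake an initial checkout into the image (the repo URL below is hypothetical)
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y git
RUN git clone https://example.com/your/repo.git /src
WORKDIR /src
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/bin/bash", "/entrypoint.sh"]
EOF

# The entrypoint pulls only the delta at run time, so each container starts current
cat > entrypoint.sh <<'EOF'
#!/bin/bash
set -e
cd /src
git fetch origin
git checkout "${BRANCH:-master}"
git pull origin "${BRANCH:-master}"
exec "$@"   # then run whatever command was passed to docker run
EOF

docker build -t code-env .
docker run -e BRANCH=master code-env ./run_tests.sh   # run_tests.sh is illustrative
```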

My take on it:
2. Docker is really dead simple, and this is from someone who is not a devops guru.

  • You need to know Linux/Unix reasonably well, enough to write bash scripts to do all the things you want to set up in your environment.
  • You can even run Docker under Windows, but don't! Run it under Ubuntu on VirtualBox; it performs better.
  • I used Docker (based on an Ubuntu 16 release) to do builds, trigger test agents and collect logs. My biggest pain was just moving credentials around in Jenkins and into the containers.
  • It will take you about 1 day to get Docker working from absolute scratch (see the quick-start sketch after this list), go for it.
  • Once you see what it can do, and decide it helps, build a proper docker server.
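A minimal quick-start sketch for an Ubuntu box, using Docker's own convenience install script (fine for a test machine; for a proper Docker server you'd follow the full install docs):

```bash
# Install Docker via the convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Let your user run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker "$USER"

# Sanity check: pulls a tiny image and runs it
docker run hello-world
```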

Reasons not to use docker

  1. It's headless, so testing GUI apps is a no-go. Sadly this is my situation currently :sadface:
  2. You "can" run a Windows OS (headless versions like Server Core) under it, but it's probably only worthwhile if your app targets Server Core.

Our team still plans to use Docker, because it's a great way to scale up builds and unit testing. Sadly there's no way to run macOS either, as far as I know.


It's headless, so testing GUI apps is a no-go. Sadly this is my situation currently :sadface:

That's not true, but it is restricted to Linux GUIs (and web browsers on Linux) only: via xvfb, via X11 forwarding from the Docker container (as guest/remote) out to the local host which has a GUI to render, or via VNC.

I myself set up a Docker image to launch JMeter with preconfigured load test scripts available in the container, with the option to load the GUI rather than command-line mode, via VNC.
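A rough sketch of the two simplest approaches on a Linux host (the image and application names are illustrative, not from the talk):

```bash
# X11 forwarding: render the container's GUI app on the Linux host's display
xhost +local:                            # allow local connections to the X server
docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-gui-image some-gui-app

# Headless alternative: run the GUI app against a virtual framebuffer inside the container
docker run --rm some-gui-image xvfb-run some-gui-app
```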


With respect to binaries and packages that have to be run or served by the Docker container, yes, the Dockerfile or image needs to be rebuilt on each successful code commit/push. However, if one just needs the repository files (e.g. scripts, test scripts, test data, other files, assets), or is able to pull the branch's built assets off some other artifact repository (that CI builds save to), then you can reuse the same Docker image/container: when it starts up, you point it to, or map a volume to, wherever the latest code/binaries are, so that the container picks them up for execution. The files don't all have to be stored within the image. This type of scenario is useful if you just need a common runtime with base dependencies (e.g. Python, Ruby, Node, Java), where the actual code to run can be pulled externally and is runnable as long as you have the core generic runtime in the Docker image.
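A minimal sketch of that volume-mapping approach, reusing a stock runtime image (the host path and test command are illustrative):

```bash
# Reuse one generic runtime image; mount the freshly checked-out code at run time
docker run --rm \
  -v "$(pwd)/my-checked-out-repo":/workspace \
  -w /workspace \
  python:3 \
  python -m unittest discover -s tests
```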


There are ways to do it. One common option is through environment variables; another could be through volume mapping, where the mapped volume contains the secrets, although maybe it needs to be an encrypted volume that is decrypted on access inside the Docker container, etc.

For the environment-variables method, you can set environment variables on the Docker host, then pass the variables through by name to the Docker container at startup. Specifying environment variables isn't only by key/value pair: if you provide just the key name and no value, Docker pulls the variable's value from the host to inject, something like that.
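A small sketch of that pass-through behaviour (the variable and image names are illustrative); the secret lives on the host or in the CI credential store, never in the repo:

```bash
# Secret is defined only on the host (or injected by CI)
export API_TOKEN="s3cr3t-from-somewhere-safe"

# Passing only the key name (no =value) forwards the host's current value into the container
docker run --rm -e API_TOKEN my-test-image ./run_tests.sh

# Alternatively, load variables from a file that is kept out of version control
docker run --rm --env-file ./secrets.env my-test-image ./run_tests.sh
```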

Some examples on the web:

https://blog.bekt.net/p/docker-aws-credentials/


Beyond Docker, it might be worth looking into Vagrant and Terraform. They are a similar concept to Docker, but for provisioning bigger parts of an environment within a virtual machine or on cloud infrastructure like AWS, GCP, etc. Useful for testing.

But from personal experience at work, if the cloud environment has many components and scales big, Terraform can still be slow to deploy an environment from scratch, e.g. 2 hours minimum for the base environment setup.
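For anyone who hasn't seen it, the core Terraform workflow is just a handful of commands run against declarative config files (this is the generic CLI flow, not a specific setup from the discussion):

```bash
# Assumes .tf files describing the infrastructure already exist in this directory
terraform init      # download providers and set up state
terraform plan      # preview what would be created or changed
terraform apply     # create the environment (the step that can take hours at scale)
terraform destroy   # tear it back down when the test run is done
```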

Yes, an X11 graphics surface is not something I am familiar with. I excluded it as an option to test against because, although it's legitimately graphical and a good test environment, and you can VNC into it, 90% of my desktop users are on Windows. I'd assume X11 gurus will know enough to install the dummy driver plumbing needed, but it's totally beyond my depth. Thanks for the reminder that it's an option @daluu