Masterclass: Test Environments Management with Docker - Further Discussion

Tonight’s masterclass, our first of 2020, “Test Environments Management with Docker”, is on a topic I started to dive into right before leaving my last testing job, so I was really interested to see how someone handled it better than I did :sweat_smile:

As always, the recording of the masterclass will be available to MoT Pro members in the masterclass section.

If we didn’t get to your questions tonight or you’d like to continue the conversation, why not ask them here?


So on The Club, we have a whole post for those interested in getting started with Docker.

Links from the chat:
The VS Code plugin is just “Docker” (publisher: Microsoft). VS Marketplace link: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker

Questions from the chat:

  1. Can two Docker containers, one running Jenkins and the other running automated test cases, work together?
  2. In a dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don’t have to hand-edit the Dockerfile after each build.

Questions we didn’t get to on the evening:

  1. What limitations of Docker have you encountered in practice? In what situations would you consider a different approach?
  2. Do you recommend docker-compose over Kubernetes?
  3. Any tips or tricks for setting up test data in a Docker container?
  4. Can you (easily) inject secrets into a container? Putting passwords etc. in a (potentially public) git repo is not recommended.
  5. Docker vs LXD?
  6. Isn’t there a dependency for an application based on the platform its containers are stacked on, e.g. Docker on Google versus Docker on Azure? We do have platform-native applications, so how is that handled in Docker?
  7. I’ve seen Docker have issues with volumes acting slow when you map big folders. Have you come across this yourself?
  8. How can Docker help with test automation in a DevOps environment? How can it be integrated into a testing pipeline?
  9. Do you have a wishlist of enhancements/changes to Docker? If so, what’s on it?
  10. In a dev environment, how do you get these Dockerfiles to be based on the latest version of the code? How does it know what to base it on (repo/branch/version)? Surely you don’t have to hand-edit the Dockerfile after each build.
  11. Is it bad to have Docker running in WSL on Windows?
  12. Is Docker used in production? How?
    12.1 Answer within the chat: https://www.theregister.co.uk/2014/05/23/google_containerization_two_billion/

That was a very useful webinar, thank you. I’ve tried my hand at Docker before; this motivated me to try again.


I missed the answer to this question because the video went down:

For a distributed environment, deployed across multiple app servers, databases, etc., how can Docker be used?

Can anyone summarise the key points of the answer please?

  1. I dropped out almost halfway through; I hope to be able to catch more of the webcast somehow. Please shout if I get these answers wrong, as I last touched this about a year ago.
    Answers
  • Q: “Can two Docker containers (one Jenkins, one automated tests) work together?”

  • A: “Think of a Dockerfile as a template; each copy of the template is a brand-new, identical machine. Docker injects identity into each machine by generating a random name per container, so you don’t end up with 20 computers on your LAN all with the same computer name (it’s possible to exercise some control over the name). So it’s a template, and you use it to spin up as many instances as you like. Once a container is removed, its writable layer is gone for good; if you need files the instance created, copy them out first (`docker cp` also works on stopped containers) or write them to a mounted volume.”

  • Q: In a dev environment, how do I get Dockerfiles with the latest code on them?

  • A: You don’t; or rather, you check out the master branch at a specific point and bake that into your Dockerfile. This speeds things up when you create a container, but the image will only have that initial checkout, so you need to check out the delta as part of your test script. That’s normally going to be pretty fast, though. A tactic might be to roll the Dockerfile forward by rebuilding it once in a while; this is extra work, since it’s a new revision. If you have branches, this has to be balanced against how much lifting it entails: an initial pull with all branches might just bloat the image (the built Dockerfile).
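A rough sketch of that answer as a Dockerfile (the repo URL, branch, and `run-tests.sh` script are placeholders, not from the masterclass): the snapshot is baked in at build time, and the container pulls only the delta at startup. Rebuilding the image occasionally “rolls the snapshot forward” so the startup pull stays small.

```dockerfile
# Hypothetical sketch: bake a repo snapshot into the image,
# then fetch only the delta when the container starts.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y git ca-certificates

# Initial checkout baked into the image at build time
RUN git clone --branch master https://example.com/your/repo.git /opt/app
WORKDIR /opt/app

# At startup, fetch the (normally small) delta before running the tests
CMD ["sh", "-c", "git pull --ff-only && ./run-tests.sh"]
```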

My take on it:
2. Docker is really dead simple, and this is coming from someone who is not a DevOps guru.

  • You need to know the Linux/Unix system reasonably well: enough to write bash scripts to do all the things you want to set up in your environment.
  • You can even run Docker under Windows, but don’t! Run it under Ubuntu on VirtualBox; it performs better.
  • I used Docker (based on an Ubuntu 16 release) to do builds, trigger test agents, and collect logs. Most of the pain was moving credentials around in Jenkins and into the containers.
  • It will take you about a day to get Docker working from absolute scratch; go for it.
  • Once you see what it can do, and decide it helps, build a proper Docker server.
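For anyone starting from absolute scratch, that first day really can be as small as the following (these commands assume Docker is already installed and the daemon is running; `my-test-env` is a made-up image tag):

```shell
docker run hello-world                 # sanity-check the installation
docker build -t my-test-env .          # build an image from ./Dockerfile
docker run --rm -it my-test-env bash   # throwaway interactive container
docker ps -a                           # list containers, running or stopped
```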

Reasons not to use Docker

  1. It’s headless; testing GUI apps is a no-go. Sadly this is my situation currently :sadface:
  2. You “can” run a Windows OS (headless versions like Server Core) under it, but it’s probably only worthwhile if your app targets Server Core.

Our team still plans to use Docker, because it’s a great way to scale up builds and unit testing. Sadly there’s no way to run macOS either, as far as I know.


It’s headless; testing GUI apps is a no-go. Sadly this is my situation currently :sadface:

That’s not true, but it is restrictive: it’s limited to Linux GUI apps (and web browsers on Linux), via xvfb, via X11 forwarding from the Docker container (as guest/remote) out to the local host which has a GUI to render, or via VNC.

I myself set up a Docker image to launch JMeter with preconfigured load test scripts available in the container, with the option to load the GUI rather than command-line mode, via VNC.
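A rough sketch of that kind of setup (package names, the screen geometry, and the JMeter path are assumptions; the JMeter install step is indicative only): a virtual framebuffer hosts the GUI, and a VNC server exposes it.

```dockerfile
# Hypothetical sketch: run a Linux GUI app under a virtual framebuffer
# (Xvfb) and expose it over VNC so a tester can watch or drive it.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y xvfb x11vnc default-jre
# ...install JMeter here, e.g. unpack a release tarball to /opt/jmeter...
ENV DISPLAY=:99
EXPOSE 5900
# Virtual X server, then a VNC server on it, then the GUI app itself
CMD Xvfb :99 -screen 0 1280x800x24 & \
    x11vnc -display :99 -forever -nopass & \
    sleep 2 && /opt/jmeter/bin/jmeter
```

Run it with `docker run -p 5900:5900 <image>` and point any VNC viewer at `localhost:5900`.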


If it’s the binaries and packages run or served by the Docker container that change, then the Dockerfile or image needs to be rebuilt on each successful code commit/push. However, if you just need the repository files (e.g. scripts, test scripts, test data, other assets), or you can pull branch-built assets from some artifact store that CI builds save to, then you can reuse the same Docker image/container: when it starts up, you point it to, or map a volume to, wherever the latest code/binaries are, so that the container picks them up for execution. The files don’t all have to be stored within the image.

This scenario is useful if you just need a common runtime with base dependencies (e.g. Python, Ruby, Node, Java), where the actual code to run can be pulled externally and is runnable as long as the core generic runtime is in the Docker image.
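A minimal sketch of that pattern (the image tag, paths, and pytest invocation are assumptions): mount the fresh CI checkout into an off-the-shelf runtime image instead of rebuilding an image per commit.

```shell
# Reuse one generic runtime image; mount the latest checkout at run time.
docker run --rm \
  -v "$PWD/checkout:/workspace" \
  -w /workspace \
  python:3.8 \
  python -m pytest tests/
```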


There are ways to do it. One common option is environment variables; another is volume mapping, where the mapped volume contains the secrets (although perhaps it needs to be an encrypted volume, decrypted on access inside the Docker container, etc.).

For the environment variable method, you can set environment variables on the Docker host, then pass them through by name to the Docker container at startup. Specifying environment variables isn’t only by key/value pair: if you provide just the key name and no value, Docker pulls the variable’s value from the host environment to inject, something like that.
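Sketched with the Docker CLI (the image name, variable, and file name are made up): values set on the host never have to appear in the Dockerfile or the repo.

```shell
# On the Docker host (not committed to git):
export DB_PASSWORD='s3cret'

# Name-only -e: Docker pulls the value from the host environment
docker run --rm -e DB_PASSWORD my-app

# Or keep several secrets in a git-ignored file, one KEY=value per line
docker run --rm --env-file ./secrets.env my-app
```

docker-compose supports the same name-only pass-through under a service’s `environment:` key.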

Some examples on the web:

https://blog.bekt.net/p/docker-aws-credentials/


Beyond Docker, it might be worth looking into Vagrant and Terraform. They’re a similar concept to Docker, but for provisioning bigger parts of an environment within a virtual machine or on cloud infrastructure like AWS, GCP, etc. Useful for testing.

But from personal experience at work, if the cloud environment has many components and scales big, Terraform can still be slow to deploy an environment from scratch, e.g. two hours minimum for the base environment setup.

Yes, an X11 graphics surface is not something I’m familiar with. I excluded it as an option to test against because, although it’s a legitimate graphical environment, a good test environment, and you can VNC into it, 90% of my desktop users are on Windows. I’d assume X11 gurus will know enough to install the dummy-driver plumbing needed, but it’s totally beyond my depth. Thanks for the reminder that it’s an option @daluu