Recording video upon UI test failure (for a total noob)

Hi all! So I am somewhat of a noob who's been left a massive task: making our UI automation less brittle and easier to debug. Currently it takes a screenshot upon failure, but that's of very limited use unless it's a hard error.

The devs at my company are really busy and it seems impossible to get time with them to sit and go through this. I guess if I can get it working, it will be good for the CV anyway :slight_smile:

I’ve been doing a couple of C# courses so I’m OK at coding and have written a few stable UI tests, but this subject feels a little over my head, so I need someone to explain it like I’m five. On the Slack channel #automation I was advised to use FFmpeg, but the Google searches I’ve done haven’t been that helpful in explaining how to actually go about implementing it.

Tech stack:
We use POM & Page Factory, if that makes a difference
TeamCity (but in the future I want to move this to our Azure DevOps pipeline to take advantage of the MS DevOps test suite)

ANY help would really be appreciated, even if it’s just sending me the articles I need to know.

Sorry for being such a noob.

Kind regards


Don’t apologise for asking questions :heart: We learn by asking questions and sharing with each other and you’re on a learning journey :grin:


Okay, I’m digging myself a hole here, but no, you are not a noob; that’s my job. I’m a bit new to web-based testing and still getting into it, but Selenium makes it dead easy to take screenshots. I also wrote a blog about page objects, although in Python, not SpecFlow… and one thing I did while writing it was prototype screenshotting and converting the shots into a movie.

  1. I write in Python so it was reasonably easy (well, threads in Python are hard), but it goes like this: write a function that increments a global counter and takes a screenshot. Each screenshot gets a number; format the filename as “00-screenshot-someactivity.png”, replacing the 00 with the counter value, zero-padded. Because threads in Python are tricky, I spawn a thread that loops every second and takes a screenshot, then kill that thread off at the end of the test (slightly tricky to do in reality).
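A minimal sketch of that background screenshot thread in Python. The driver object, one-second interval, and filename pattern are my assumptions; using a `threading.Event` makes the “kill the thread at the end” part much less tricky:

```python
# Sketch only: any object with a save_screenshot(path) method works as `driver`.
import threading
import itertools

class ScreenshotRecorder:
    """Takes a numbered screenshot every `interval` seconds on a background thread."""

    def __init__(self, driver, interval=1.0, prefix="screenshot"):
        self.driver = driver
        self.interval = interval
        self.prefix = prefix
        self.counter = itertools.count()   # shared counter, safer than a bare global
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def snap(self, activity="tick"):
        # number the file with leading zeroes so it sorts (and ffmpeg reads) correctly
        n = next(self.counter)
        filename = f"{n:02d}-{self.prefix}-{activity}.png"
        self.driver.save_screenshot(filename)
        return filename

    def _loop(self):
        # Event.wait doubles as an interruptible sleep: returns True once stop() is called
        while not self._stop.wait(self.interval):
            self.snap()

    def start(self):
        self._thread.start()

    def stop(self):
        # the "slightly tricky" teardown: signal the thread, then wait for it to finish
        self._stop.set()
        self._thread.join()
```

You would call `start()` in your test setup and `stop()` in teardown, so the recorder can never outlive the test.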

  2. Then in your page object, hijack the click() function so you can highlight the element you are about to click on:

     driver.execute_script(
         "arguments[0].style.border='3px ridge #ff33ff'", element)

     then add a screenshot at that point:

     filename = f"{global_count:02d}-screenshot-{pagename}.png"
     global_count = global_count + 1

     This time change the filename slightly and increment the same global counter, then click the button/element as normal.
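Putting those pieces together, here is a hedged sketch of the hijacked click. Names like `pagename` and the module-level `global_count` mirror the snippet above; in a real framework you would fold this into your page-object base class:

```python
# Sketch: highlight the element, snapshot the page, then click as normal.
global_count = 0

def click_with_highlight(driver, element, pagename):
    """Wrap element.click() so each click leaves a numbered screenshot behind."""
    global global_count
    # draw a loud border so the clicked element stands out in the recording
    driver.execute_script(
        "arguments[0].style.border='3px ridge #ff33ff'", element)
    filename = f"{global_count:02d}-screenshot-{pagename}.png"
    driver.save_screenshot(filename)
    global_count = global_count + 1
    element.click()
    return filename
```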

  3. You now end up with files that are all sequentially ordered, interspersed with screenshots at one-second intervals in all the places where your page was waiting to load, which might feel a bit janky; you can adjust the background timer a bit. You can manually drop these files into a movie app, or… magically use ffmpeg. (IrfanView will do the same thing, but I prefer ffmpeg because it’s portable and well supported.)

  4. Install ffmpeg and google around until you basically understand how it works. Take note that some older command-line parameters no longer work, so look for recent examples only. This script can be ported to Mac/Linux, but as-is it works on Windows:

REM create a color palette file first (the -i pattern must match your screenshot filenames)
c:\tools\ffmpeg\bin\ffmpeg.exe -f image2 -i %%02d.png -v 0 -vf palettegen -y palette.png
REM use the palette to speed up conversion; -loop 0 after the inputs makes the gif repeat forever
c:\tools\ffmpeg\bin\ffmpeg.exe -framerate 2 -i %%02d.png -i palette.png -v 0 -lavfi paletteuse -loop 0 -y out.gif
dir > done.txt

This turns the PNGs into an animated GIF. At this point the sky is the limit: you can opt to convert to an .AVI, a .MOV or an .MP4 instead. You can use an image-editing library to “watermark” each PNG with the current clock time, like a timecode, or with the name of the test case. If you did some ffmpeg research, you can now start to tweak the framerate slightly too, or turn off the autorepeat of the out.gif file.
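If it helps, the same two-pass palette trick can be driven from the test code itself instead of a batch file. This is just a sketch, assuming ffmpeg is on the PATH and the screenshots match the `%02d.png` sequence pattern; the function names are mine, not part of any library:

```python
# Sketch: build the two ffmpeg invocations (palette, then conversion) as
# argument lists, so they can be inspected in tests and run via subprocess.
import subprocess

def build_ffmpeg_gif_commands(pattern="%02d.png", framerate=2, out="out.gif"):
    """Return the palette-generation and gif-conversion commands."""
    palette = ["ffmpeg", "-f", "image2", "-i", pattern,
               "-v", "0", "-vf", "palettegen", "-y", "palette.png"]
    convert = ["ffmpeg", "-framerate", str(framerate), "-i", pattern,
               "-i", "palette.png", "-v", "0", "-lavfi", "paletteuse",
               "-loop", "0", "-y", out]
    return palette, convert

def make_gif(**kwargs):
    # run both passes; check=True raises if ffmpeg reports an error
    for cmd in build_ffmpeg_gif_commands(**kwargs):
        subprocess.run(cmd, check=True)
```

Keeping the command construction separate from the `subprocess.run` call also makes it easy to log exactly what was executed on your build agent.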

That last step obviously needs to run as fast as possible at the end of the test, and deal with remote nodes and so on as necessary in your framework. Obviously you only want to do the video step if your test case ended up failed. You could try speeding it up, for example by scaling the video dimensions down a bit, which also cuts file size and disk I/O. Just copy the animated gif to the test results folder and you have a cool Jenkins/TeamCity artifact for your failed test.
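One simple shape for “video only on failure”, as a sketch: run the test body, and only pay for the slow ffmpeg step in the except path. `run_with_video_on_failure`, `make_video`, and the artifact folder name are all hypothetical, not from any real framework:

```python
# Sketch: wrap a test body so the gif/video is built and kept only when it fails.
import os
import shutil

def run_with_video_on_failure(test_fn, make_video, artifact_dir="results"):
    """Run test_fn(); on failure, build the video and copy it to the artifact folder."""
    try:
        test_fn()
    except Exception:
        # test failed: now it is worth paying for the (time-consuming) video step
        gif_path = make_video()
        os.makedirs(artifact_dir, exist_ok=True)
        shutil.copy(gif_path, artifact_dir)
        raise  # re-raise so the runner still reports the failure
```

In practice you would hang this off your framework's teardown hook rather than wrapping every test by hand, but the control flow is the same.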

Disclaimer: I only have this as prototype code. I decided the multithreading would cause too much pain in our system, and because the video-making step is quite time-consuming, only some of this made it into my production test code in the end.


Thank you so much for taking the time to write all that up. I really appreciate the help from the bottom of my heart.


I always feel so dumb when I can’t figure things out by myself. Thanks for the encouragement!


My biggest problem with taking a screenshot on failure is that the screenshot itself is often taken 5-10 seconds after the fault occurs. A Selenium test that failed because a webpage took too long to fully load, or a script took too long to run, will often not capture the point in time the problem actually occurred; for many valid failure modes, the eyeballs that a screenshot gives you only add confusion.

Basically, I guess I’m adding that test automation is a long journey. Testing is not unique in being a domain that is largely uncharted, but it is unique in being tied to the beast that is technological advance: the products we build often move as fast as is possible.