Hello guys,
I am new here, and I work as a software tester in the AR and VR domains. Right now I am a manual tester, but I want to migrate my manual skills to the automation side. Is it possible to write automation scripts for AR and VR applications? I have searched a lot on Google but have not found any relevant answers.
I don’t really know much about this, but from my point of view AR and VR are very new domains, so dedicated automation tools and scripts may not exist yet.
Most likely, automation support for testing these features would have to be built in-house at the workplace with the developer team’s support, unless the QA team builds it from scratch on its own.
It would most likely involve backend APIs to help configure things and test. Validation would happen through APIs/code, or through some image recognition technology that compares rendered output (screenshots, or recorded motion frames captured as video) against a known snapshot of what to expect at a given area or point in time in the AR/VR gameplay.
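To make the snapshot-comparison idea concrete, here is a minimal sketch of comparing a rendered frame against a known-good "golden" image, assuming frames arrive as RGB numpy arrays (e.g. extracted from a screen capture). The function name `frames_match` and the tolerance values are illustrative assumptions, not part of any real AR/VR SDK:

```python
import numpy as np

def frames_match(rendered: np.ndarray, golden: np.ndarray,
                 per_pixel_tol: int = 10,
                 max_bad_fraction: float = 0.01) -> bool:
    """Return True if the rendered frame is 'close enough' to the golden one.

    Exact pixel equality is too strict for 3D rendering (anti-aliasing,
    GPU/driver differences), so allow a small per-channel tolerance and a
    small fraction of outlier pixels.
    """
    if rendered.shape != golden.shape:
        return False
    diff = np.abs(rendered.astype(np.int16) - golden.astype(np.int16))
    bad_pixels = (diff > per_pixel_tol).any(axis=-1)  # any channel too far off
    return bool(bad_pixels.mean() <= max_bad_fraction)

# Tiny demo with synthetic 4x4 frames:
golden = np.zeros((4, 4, 3), dtype=np.uint8)
noisy = golden.copy()
noisy[0, 0] = [5, 5, 5]              # small rendering noise, within tolerance
print(frames_match(noisy, golden))   # True
```

In a real pipeline the tolerances would need tuning per scene, and you might swap the raw pixel diff for a perceptual metric, but the shape of the check stays the same.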
Audio validation would be similar: capture the rendered audio and compare it to a known snapshot. Comparison of audio and video would use tools available on the market, if not custom-built ones: tools for image recognition, pixel comparison/detection, audio frequency and tone comparison/detection, etc.
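As a rough sketch of the audio side, one simple check is comparing the dominant frequency of a captured cue against the expected one, assuming the capture is a mono PCM buffer. The sample rate, helper names, and tolerance here are all assumptions for illustration; a real setup would capture from the device instead of synthesizing tones:

```python
import numpy as np

SAMPLE_RATE = 16_000  # Hz, assumed capture rate

def dominant_hz(samples: np.ndarray, rate: int = SAMPLE_RATE) -> float:
    """Return the strongest frequency component of a mono signal."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return float(freqs[np.argmax(spectrum)])

def audio_matches(captured: np.ndarray, golden: np.ndarray,
                  tol_hz: float = 5.0) -> bool:
    """Pass if the captured cue's pitch is within tolerance of the golden one."""
    return abs(dominant_hz(captured) - dominant_hz(golden)) <= tol_hz

t = np.arange(SAMPLE_RATE) / SAMPLE_RATE            # one second of samples
golden_tone = np.sin(2 * np.pi * 440 * t)           # expected 440 Hz cue
captured_tone = np.sin(2 * np.pi * 441 * t)         # slightly detuned capture
print(audio_matches(captured_tone, golden_tone))    # True
```

A single dominant frequency is obviously crude; real game audio would need spectrogram or fingerprint comparison, but the capture-then-compare structure is the same.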
The APIs would probably be used either to validate certain state in the game, or to configure the game’s actions/state.
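The set-state-then-assert pattern that such an API enables might look roughly like this. `GameTestApi` is a hypothetical in-memory stand-in for whatever debug/test endpoint the dev team exposes; none of these method names come from a real engine or SDK:

```python
class GameTestApi:
    """Fake stand-in for a game's debug/test API (illustrative only)."""
    def __init__(self):
        self._state = {"scene": "lobby", "player_hp": 100}

    def set_state(self, **kwargs):
        """Configure game state, e.g. teleport to a scene, set health."""
        self._state.update(kwargs)

    def get_state(self, key):
        """Query game state for validation."""
        return self._state[key]

def test_damage_is_applied():
    api = GameTestApi()
    api.set_state(scene="arena", player_hp=100)
    # ...here the real test would trigger the in-game event under test...
    api.set_state(player_hp=80)          # simulate the event's effect
    assert api.get_state("player_hp") == 80
    assert api.get_state("scene") == "arena"

test_damage_is_applied()
print("state validation passed")
```

The point is the separation of concerns: the API does setup and state queries, while rendering/audio checks stay on the comparison tools described above.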
This new area of testing would most likely be quite complicated. It would be interesting to see what comes up in this field, on the open source side as well as the commercial one.
Hi @jay_070. Welcome to the MOT community.
As Georgios and David point out, yes, you are in uncharted territory. I’m no longer in contact with the team I worked alongside that had an AR product a while back, but they did some interesting things to crack this nut. You have to step back a lot, stop thinking about how you measure the quality of experience for a product, and strip it back down to just the functional parts on their own. When you test a brand new car design, you don’t just put it onto the road: you check the tyre pressure first, you rock it a bit, kick it, then check the indicator lamps, the lights, the starter, the brakes, and the same applies here.
When you look at the parts, and try to test not just the interactions between the parts that make up the software and hardware stack, but also the value each part brings, you can break down the mountain a little bit. So yes, there are no tools, but that’s only because device constraints mean you cannot embed tools in the same way anymore. Like David says, I would suggest you start building in-house. Talk to the developers; they will have loads of unit-test tools. Those tools are fragile, but if you can master and use them, they will tell you a lot if you take those same clunky, internal, dev-maintained tools and put them on a road trip in a few test experiments. So I would attend as many developer design meetings and lunchtime chats as possible, to learn about these tools and also to sponsor and promote the tools they don’t necessarily want to share. But all the while, dream big.

Also, step back and distance yourself from trying to test by putting the KIT onto your FACE; that’s not a scalable way to test it at all. Some of the things the AR team did were weird. They had a special closed, lighting-controlled room for real-world testing, where you could make it really dark, or really super-bright too. But they also used simple things, like placing a headset on a record turntable and just hitting play at '44, to see when the software might baulk at being spun continuously, all the while constantly measuring pixel throughput to monitor for stutter changes (remember those unit-testing type tools I mentioned). You will have loads of metrics, and just building graphical reporting tools for those metrics will take up a load of time too. Get temperature sensors and measure how hot the entire kit gets (battery, displays and so on) during a test run, for example.
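As a taste of the metrics work described above, here is a minimal sketch of flagging stutter from per-frame timings, the kind of numbers that dev-maintained instrumentation might log during a spin test. The 90 fps budget and the function name are assumptions for illustration, not from any real toolchain:

```python
def stutter_frames(frame_times_ms, budget_ms=11.1):
    """Return indices of frames that blew the ~11.1 ms (90 fps) frame budget."""
    return [i for i, t in enumerate(frame_times_ms) if t > budget_ms]

# Example: a mostly-smooth trace with two hitches while the headset spins.
trace = [10.9, 11.0, 10.8, 25.3, 11.0, 10.7, 18.2, 11.0]
print(stutter_frames(trace))   # [3, 6]
```

From something this simple you can grow the graphical reporting mentioned above: plot the trace, correlate hitches with temperature readings, and so on.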
It’s scary, but that’s where great software testers have the most fun, to be honest. My key advice would be to start small and work incrementally, but have some kind of feedback loop where you review your progress and how much it is helping, with a product owner sponsoring you and even feeding you ideas.