Welcome to the Ministry of Testing community @antji.
I’ve been mulling over your question for a few days. I can see how some “regulation” or protocol might be having an impact here, and I assume part of the job is to understand why the customer wants white-box testing at all, unless you are delivering source code to them. At that point, the obvious problem of relying too heavily on unit-test metrics creates a friction point, and may reduce the team’s willingness to do any refactoring if those metrics drop at all. But I’m guessing you really want to run with the requirement the customer has given you, which is good, so long as it’s not your only goal.
I’m also assuming you have a code coverage tool and a good idea of what coverage you currently have, because without being able to run at least some of your unit tests with coverage instrumentation enabled, you may still be flying in the dark. I’ve not used googletest, and I haven’t been intimately involved in analyzing code-coverage gaps myself, but my experience has been that helping to set up and run coverage in CI (Continuous Integration) is just as useful as driving up the number of unit tests running in CI. Both of these approaches will still leave gaps, as you say, and neither directly points to where those coverage gaps are. I hope someone can chime in with tactics that might point to those gaps “confidently” if possible.