Gtest versus whatever else for C++

Google Test. Is it something people here have experience with? I’m in a place where I have to test a wadge of C++ code via its API. I also have to test C# bindings later, and do a bunch of system tests. I have Linux and Windows targets, plus embedded devices, in the mix. So I actually chose Python for system tests, using pytest. I have a basic CI/CD job with decent coverage of just one area of my product line using Python, but the C++ and C# coverage is still missing. Python lets me easily control laboratory bench power supplies, for example, and makes it easy to set up sandboxes and install and uninstall apps in the OS. That has been great, because I also had to test a sockets interface, and sockets are pretty much a one-liner in Python (something like the sketch below). If anyone has pytest questions, please fire away at me; I’ve rolled my own framework once and used pytest with nose for a few years now. (Some advanced pytest tips here: https://pytest-with-eric.com/) BUT GoogleTest is my next tool.
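
To show what I mean, here is a minimal sketch of the kind of socket check pytest makes easy; the address, port and the *IDN? banner reply are made-up placeholders for whatever the device under test actually speaks.

import socket


def test_device_socket_banner():
    # create_connection does the whole connect dance in one call
    with socket.create_connection(("192.168.1.50", 5025), timeout=2.0) as sock:
        sock.sendall(b"*IDN?\n")
        reply = sock.recv(1024)
    # The expected vendor string is purely illustrative
    assert reply.startswith(b"ACME")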

gtest is arguably more fully featured than pytest, which requires you to write some plumbing of your own for things like setup and teardown fixtures. Pytest also suffers from not being thread-safe, although that has not been a pain point yet. I have a lot of C++ experience, but have only ever used CxxTest, which is damn near bare-bones as frameworks go. Building a computer out of stone knives and bearskins, as Spock would have put it. Hence looking at gtest instead.

So, I’m wondering: has anyone hit any specific pain points using gtest? My use case is system end-to-end tests as well as API tests; I’m not doing pure “unit testing” at all.


I don’t have a huge amount of exposure to C++ test frameworks, and I was going to say something about google test being unit test focused, but I guess that’s as true as saying pytest is unit test focused - it might have started that way, but it’s evolved.

When you say pytest requires you to write your own plumbing for fixtures, that has me wondering… I thought pytest’s fixtures were one of its strongest features? Isn’t a fixture just a function with some setup, a yield, and teardown? (Unless you are using the addfinalizer syntax, which adds some boilerplate.)

The thread safety thing has me wondering too… from what I understand, out of the box both pytest and Google Test run tests sequentially. Pytest has plugins (pytest-xdist, for example) to run tests in parallel. If running with multiple threads, you’d still have to worry about test isolation. Perhaps I missed your point…

You mention C# bindings, but not what language you will test them from. If Python, I’d seriously consider pytest for e2e (as well as writing Python bindings for the C++ stuff), as you mentioned you have the orchestration already, and driving all of that from a C++ unit test framework sounds like a lot of work (that would be the main pain point from my perspective).

However if your main reason for looking at google test was to learn it, then have at it :grin:


Cheers; the eyes are smarting a bit today, mainly because I did word my post a bit loosely. Yes, test setup and teardown fixtures in pytest do exist; you just have to plumb your own and yield:
@pytest.fixture(autouse=True)
def testcase_setup_teardown(self, request):
    self.setUpTest(request)
    yield
    self.tearDownTest(request)

and then implement setUpTest() and tearDownTest() yourself.
I had expected these two to be ‘batteries included’ in pytest, but pytest’s design forces you to do this per class or suite as a conscious thing.

I often find it rather puzzling that more automation testers do not feel happy coding in languages like C or C#, mainly because I find it lets you get much closer to the interface you are validating when you are not going through wrappers. But that’s my bias. I started out as a programmer, but I missed out on the stage of my coding journey where everyone wrote unit tests galore. C test frameworks back in 2000 were pretty bare-bones and had no XML reporting support, for example. I’m just not finding a lot of GoogleTest blogs that are of the quality of the Python test blogs and tutorials.

As usual I’m in a hurry to master things, and need to slow down a lot. One of my slowing-down techniques is to start blogging and brain-dumping on Fridays. My core context is embedded devices, which is not a popular niche, but as IoT expands it’s bound to get more visibility. So I’ll have to blog about GoogleTest once I have anything worthwhile to share.

Because I’m mostly system testing, I’m often going to spawn threads to speed up tasks within a test, or will need to make async calls to validate that the product under test is thread-safe. You would be surprised how many apps in the wild use one big mutex, or apartment-thread-ify their APIs, or worse, have no thread protection at all. And users then tend to do their own thread throttling rather than dig into why a third-party component leaks state randomly, but not often enough to look properly broken. Something like the sketch below is roughly what I have in mind.
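
As a rough pytest-flavoured sketch of that idea (device_api is a hypothetical fixture wrapping whatever API is under test), the trick is to let the workers do the calls and keep the assertions on the main test thread:

from concurrent.futures import ThreadPoolExecutor


def test_status_survives_concurrent_reads(device_api):
    def worker(_):
        # Exceptions raised here are re-raised in the main thread when
        # the results are collected below, so failures still get reported.
        return device_api.read_status()

    with ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(worker, range(100)))

    # Assert in the main thread, not inside the workers
    assert all(status == "OK" for status in results)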

On top of that, yes, I’m going to use the flexibility of Python scripting to do things like inspect objects easily and patch a test easily, and blend that with the compile-time pain of C++. But that’s probably for 2026, when I have enough gtest experience to dive into gluing two languages together. That’s deffo worth a blog post! I know I can do it, because I once integrated Lua script into a C++ app. Lua is very sweet because it’s very simple as languages go, so Python will be a bit more work, but not much more I hope.
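
As a very first sketch of that glue, loading a C++ shared library from Python via ctypes is one option, assuming the C++ side exposes an extern "C" wrapper; the library name and the add_widget symbol here are made up purely for illustration:

import ctypes

# Hypothetical shared library built from the C++ code under test
lib = ctypes.CDLL("./libwidget.so")
lib.add_widget.argtypes = [ctypes.c_int, ctypes.c_int]
lib.add_widget.restype = ctypes.c_int


def test_add_widget_via_ctypes():
    assert lib.add_widget(2, 3) == 5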

Interesting stuff… I would say you don’t have to do anything extra to get setup and teardown in a pytest fixture. It’s implied rather than declared. I guess I’m saying - there is no spoon… I mean plumbing :grin:

From the pytest docs, this is a fixture (much like yours) that does some setup, yields to the test, and then does the teardown once the test finishes.

@pytest.fixture
def receiving_user(mail_admin):
    # Setup
    user = mail_admin.create_user()

    # Give the fixture data to the test that calls the fixture
    yield user

    # Teardown
    mail_admin.delete_user(user)

def test_inbox(receiving_user):
    assert receiving_user.has_inbox()

As I’m not familiar with google test, I asked an LLM to show me the google test equivalent which is explicit about what is setup and teardown:

class ReceivingUserTest : public ::testing::Test {
protected:
    void SetUp() override {
        // --- Setup ---
        user = mail_admin.create_user();
    }
    
    void TearDown() override {
        // --- Teardown ---
        mail_admin.delete_user(user);
    }

    MailAdmin mail_admin;
    User user;

};

TEST_F(ReceivingUserTest, HasInbox) {
    ASSERT_TRUE(user.has_inbox());
}

As for why more testers don’t get involved in C/C++/C# unit tests, I’d say it’s because most don’t get the opportunity, the effort required to write and run tests in those languages is higher, and we’re often playing catch-up, so speed of implementation is key.

Also, most tests written in low-level languages are unit tests, and I’m not sure that will mean more testers writing unit tests, as it’s generally seen as the developer’s responsibility. And from my experience it’s rare for high-level tests to be written in compiled languages; I’ve seen higher-level tests written with C# more than with C/C++. Perhaps the ecosystem is evolving and that will become more common?

It’s also very typical (again, only based on my experience - everyone’s experiences will be different) with embedded systems for testers to test those systems as black boxes, possibly with some test harness that provides insight into the internals/logging.

As for thread safety - I’m surprised/not surprised in equal measure when issues are uncovered, because that stuff is hard and I have to assume/hope anyone using it will get it right :grin: But I don’t think you’d see much difference between pytest and Google Test if you spin up threads inside your test (assuming you are not running pytest with a parallel plugin). If you call assertions from the threads in either framework, things are going to get messy!

I’d never thought of my fixtures as being an actual “context manager”; this is what happens when you do not have your head in the right space. Most of the fixtures I wrote yield or return an object but do not do anything after the yield as teardown. I almost need to rename all of my fixtures so that the ones that are context managers are more obvious. I’m doing a lot solo and it melts my brain. I’m going to Jira up a cleanup task for now. OK, with that done: yes, brain melt (caused by having too many languages and a big backlog).

A lot more system and component level tests benefit from loosely typed languages and the ability to mock easily, which explains, to me at least, why C# beats C/C++ there. But to be fair, compile times and the benefits of type safety are not to be sneezed at either. I lint my Python scripts nowadays, and you have to, you really do.

And yes, it is far faster to write tests in languages that have a REPL. I can write a test one statement at a time, learning which behaviours I want to check as I go, all inside a terminal session in a shell, which is what the Python REPL really is. Being able to copy verbatim all the commands I just typed, delete the ones I don’t want, and save the rest as a script is a magical way to write tests at very high speed. C++ will probably never have that ability, but C# one day might; I think the reliance on scopes is the problem. I just don’t like being left unable to help developers with unit test workloads just because management want that to remain the domain of the developers. The tester brain, or hat, is very useful for developers to have, for many reasons.

I suspect you are right, embedded testing is probably done very black-box. My hobby is electronics, so I never see a device as a box. Curiosity is a valuable tester skill. I really hope testers can stay at the leading edge whatever it is they choose as a strategy.

I can relate to the brain melting aspect! :grin: I’ve found LLMs are great sounding boards for ideas/planning when you are in this position (even if they are a bit too keen to praise everything - I’m looking forward to the work on adding “personalities” to LLMs so they are a bit less upbeat :grin:).

I’ll caveat the fixture teardown after yield by saying that the code after the yield does not run if the setup part of the fixture itself throws before reaching the yield (a failing test, on the other hand, does still trigger the teardown, because pytest drives the generator for you). If you want cleanup registered step by step, so it runs no matter how far setup got, you have to use the older addfinalizer syntax, which feels a bit more clunky.
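
For reference, a minimal sketch of that older style, reusing the mail_admin fixture from the example above; the finalizer is registered as soon as the user exists, so the cleanup is guaranteed from that point on even if later setup steps fail:

import pytest


@pytest.fixture
def receiving_user(mail_admin, request):
    user = mail_admin.create_user()
    # Registered immediately, so delete_user runs at teardown regardless
    # of what happens in any later setup steps or in the test itself
    request.addfinalizer(lambda: mail_admin.delete_user(user))
    return user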

As for types, you can use them with Python (I’m still not sure how I feel about the syntax), and I’ve been using TypeScript extensively for the past few years and it’s a love/hate relationship (I love that I don’t have to guess what something is or go down rabbit holes to find out, but then hate that sometimes what you have doesn’t quite fit what you need), but in the main I think it’s a good thing for code quality.

And yeah I’m with you on the linter! Formatters/linters ftw!

I guess the argument for devs retaining control of unit tests isn’t purely a management decision; it’s practical, in that unit tests exercise very small bits of implementation that only the dev really cares about, and involving testers has a time cost. Say a dev has a task to complete: it should state the intent and expected behaviours, but not exactly how to achieve them. The dev writes some internal functions and ideally writes some unit tests to exercise those functions (preferably beyond the intended/expected design limits). Getting testers involved is likely going to slow down the process - maybe pair programming could work to bring in the tester’s mindset (though that is still a cost), but if it were a case of passing that task to a tester or back to the sprint backlog, it could be a while before someone was able to pick it up.

That said, shift left is good at bringing the tester’s viewpoint earlier into the discussion, and hopefully that will filter through to better unit tests. I’ve worked with devs who wrote really comprehensive unit tests for critical functions and it was nice to see. So maybe we should work to get more exposure for unit test results? (Given there are usually lots and lots of them, and their individual value is low, is it worth the effort to expose more than an overall pass/fail outside of CI/CD?)

Electronics is also a hobby of mine and I’ve also worked in the industry as a tester/repair technician, so I think being able to view any system as both a black box and white box is a really useful thing - you can view a system in the way a user would, but then also consider the internals which can guide good testing decisions. But as with anything, knowing a lot about a system can lead to blind spots…


Agree, Mark: testers can learn a lot from devs, not just devs learning to test better from testers. I was reminded last week of a developer who once wrote a brilliant graphical extension in Java to our test framework. It made it possible to easily test very complicated state transitions by graphically depicting system state as coloured blocks. It’s amazing how good the human eye is at spotting anomalies in colourful data visualizations - something my grasp of Java and of data science was never going to be able to produce.

I’m looking at my pytest results and noticed they failed silently: TeamCity is not getting the exit code out of my test wrapper, and is also not able to parse my testresults.xml file. Joy. I made more than one change to TeamCity last week, so now it’s time to work out what my configuration used to look like; whoever thought a test framework could generate a malformed results.xml?
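
For reference, a minimal wrapper sketch that would at least propagate pytest’s exit code and let pytest itself write the JUnit-style XML that TeamCity parses (the tests/ path is just a placeholder):

import sys

import pytest

# --junitxml makes pytest write a JUnit-style report for the CI server
exit_code = pytest.main(["--junitxml=testresults.xml", "tests/"])
sys.exit(int(exit_code))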

Inspiring chat; I do hope you continue to contribute to the MOT community. At any rate, you have given me some good homework to do. Cheers.