With the rise of AI and people using Agentic AI to get to know a code base, Josh Grant offers an alternative: run automated tests to get to know it.
Let me share a secret to getting up and running on a codebase quickly: run automated tests.
Why run tests, and why make it the first thing you do? Because running the tests provides quite a bit of information with minimal effort. If the tests run and all pass, you can read through them to see what they do and how various parts of the application work. In some cases you can even get a good sense of the functionality from an end user’s perspective.
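As a small illustrative sketch (the function and its tests here are hypothetical, not from any particular codebase), even a tiny unit test can document behaviour for a newcomer: the assertions tell you the expected contract without reading the implementation.

```python
import unittest

# Hypothetical function under test. A newcomer could infer its
# contract from the test below without reading this body at all.
def slugify(title: str) -> str:
    """Turn a post title into a lowercase, hyphen-separated URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # The test name and assertions double as living documentation.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Running   Tests  "), "running-tests")

if __name__ == "__main__":
    unittest.main()
```

Running this file and seeing green output immediately tells you both that the environment is set up correctly and what the code is supposed to do.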
If the tests run and some pass while others fail, you know there’s work to do to fix them up. You can start repairing the tests, or at least evaluate the failures to learn about the codebase. Bug fixing is usually a good way into a codebase, and failing tests are a good step toward that.
If the tests don’t run at all, or crash, that tells you the automated tests have been neglected. It’s a signal that tests aren’t run often, or at all, which can be a sign of things to come.
Of course you can analyze the test code further, looking at what kinds of tests are automated (browser-based, GUI, unit-level, integrated, API, performance, security, and so on) and using models like the test automation pyramid to get an even deeper sense of what’s going on. Or you can simply run the tests and see what happens.
Tests are a feature of a good codebase, and are essential for a professional one.
This also reminds me of how Nigel spoke about test artefacts being what lasts the longest in a software project, and what we can ultimately trust the most.
Curious to hear other people’s thoughts and experiences around this.
What else would you add to this? How else can (automated) tests help build, teach and maintain knowledge of a product?
Thanks to @joshin4colours for inspiring the conversation!