What are you doing to prevent AI technical debt from a quality and testing perspective?
When tests can be churned out so quickly, how have you seen this actually cause problems?
@christinepinto wrote about this recently:
AI-generated tests are creating a new kind of technical debt. And we're about to drown in it.
LLMs can now generate 1,000 tests in minutes. Sounds great, right?
Wrong. They're making the same mistakes every time:
→ Writing trivial tests that only fail when you intentionally change things
→ Setting up 20 lines of mocks, then skipping the one line that actually matters
→ Testing how code works instead of what it does
The result? A maintenance nightmare disguised as safety.
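The first and third patterns can be sketched in a few lines of Python. This is an illustrative example with a made-up `apply_discount` function, not code from the post:

```python
def apply_discount(price: float, pct: float) -> float:
    """Return price reduced by pct percent, never below zero."""
    return max(price * (1 - pct / 100), 0.0)

# Low-value test: it restates the implementation, so it can only
# fail when someone deliberately changes the formula.
def test_mirrors_implementation():
    assert apply_discount(100, 10) == 100 * (1 - 10 / 100)

# Higher-value test: it pins down a behavior the business cares
# about, and fails only when a real bug slips in.
def test_discount_never_goes_negative():
    assert apply_discount(10, 150) == 0.0
```

The second test survives a refactor of the formula; the first one breaks on any change, bug or not.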
After 18 years in QA, I've watched teams struggle with too little test coverage. But we're about to flip to the opposite problem: drowning in low-value tests that make every refactor expensive.
Douwe Osinga on the Block Engineering blog nailed it: "Writing tests has become free. Maintaining them hasn't."
This is the hidden cost everyone's missing.
It's the same thinking that values test coverage percentages over bug prevention. The same thinking that makes teams afraid to change working code because the test suite will explode.
Not all tests have positive value. Some actively make your codebase harder to evolve.
The fix? Prompt LLMs with reasoning about test value, not just test coverage. Make them ask: "If this test fails, what did we just prevent?"
Because the goal isn't having tests. It's catching real bugs.
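One way to bake that question into a generation workflow is to put it in the prompt itself. A minimal sketch, with an illustrative template (the wording and `build_test_prompt` helper are assumptions, not from the post):

```python
# Hypothetical prompt template that asks the model to justify
# each test's value before writing it.
PROMPT = """\
Write tests for the function below.
Before each test, answer: if this test fails,
what real bug did we just prevent?
Skip any test whose answer is "none" or
"the implementation changed on purpose".

Function:
{source}
"""

def build_test_prompt(source: str) -> str:
    """Fill the template with the code under test."""
    return PROMPT.format(source=source)
```

The filter question forces the model to discard tests that merely mirror the implementation, instead of rewarding raw test count.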