This is very true, but with caveats.
I’ve seen so-called security testing/penetration testing experts simply run a scan with Burp Suite (or a similar tool), produce a report from the output, and take it no further.
These tools are great at highlighting potential problems, and will even help discover injection flaws and other vulnerabilities. However, many of them, OWASP ZAP included, also report a lot of false positives: alerts that are not actually bugs, just some other aspect of the application. I’ve had both ZAP and Burp mistake a long string of digits in a URL (actually a GUID) for an exposed credit card number, just because the GUID started with a particular sequence of digits.
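To see why that kind of false positive happens: scanners typically flag card numbers with pattern matching plus a Luhn checksum (an assumption about their internals, not something the tools document in detail). Roughly one in ten arbitrary 16-digit runs passes the Luhn check, so a digit run lifted from a GUID can look like a valid card number. A minimal sketch of the check:

```python
def luhn_valid(digits: str) -> bool:
    """Return True if the digit string passes the Luhn checksum
    commonly used to validate credit card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9  # same as summing the two digits of the product
        total += d
    return total % 10 == 0

# An arbitrary-looking 16-digit run can still pass the checksum,
# which is enough to trigger a scanner's card-number alert.
print(luhn_valid("4539148803436467"))  # True
print(luhn_valid("4539148803436468"))  # False (last digit changed)
```

Verifying a flagged digit run against a check like this by hand is a quick way to decide whether an alert is worth chasing.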
One useful feature many of these tools and scanners offer is letting a tester (of any type) take a potential flaw that a scan or spider has uncovered and either replay the HTTP request or modify its data to uncover more about the problem. They often have built-in intercepting proxies, so you can set breakpoints and identify exactly which request is at risk of exploitation.
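The replay-and-modify idea is easy to reproduce outside the proxy UI. Here is a minimal stdlib sketch (the URL and parameter names are hypothetical, not from any real application) that takes a captured request URL, swaps one query parameter for a test payload, and produces the mutated URL ready to replay:

```python
from urllib.parse import urlencode, urlsplit, urlunsplit, parse_qsl

def replay_url(url: str, field: str, payload: str) -> str:
    """Return a copy of a captured request URL with one query
    parameter swapped for a test payload -- the same idea as editing
    a request in an intercepting proxy before replaying it."""
    parts = urlsplit(url)
    params = dict(parse_qsl(parts.query))
    params[field] = payload          # overwrite the target parameter
    return urlunsplit(parts._replace(query=urlencode(params)))

# Hypothetical captured request from a scan or spider run.
captured = "https://example.test/search?q=laptops&page=1"
mutated = replay_url(captured, "q", "' OR '1'='1")
print(mutated)
# Replay with any HTTP client, e.g. requests.get(mutated)
```

The proxies automate this, but scripting it yourself helps when you want to replay dozens of variations of the same suspect request.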
You can also tune the tools to explore software at different levels of depth and accuracy, or apply a fuzzing library to an endpoint, data input, or some other element of the application.
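A fuzzing pass boils down to throwing a list of hostile or malformed inputs at a field and watching how the application reacts. The built-in fuzzers (Burp Intruder, ZAP's fuzzer) ship large categorised payload lists; this sketch just shows the shape of such a list with a few classic probes (all payloads here are illustrative):

```python
import random

def fuzz_payloads(seed: int = 0):
    """Yield a small set of classic fuzz inputs for a text field:
    boundary sizes, oversized input, and injection-style probes."""
    rng = random.Random(seed)         # seeded so runs are repeatable
    yield ""                          # empty input
    yield "A" * 10_000                # oversized input
    yield "%s%s%s%n"                  # format-string probe
    yield "' OR '1'='1"               # SQL injection probe
    yield "<script>alert(1)</script>" # XSS probe
    # random junk bytes and structural characters
    yield "".join(rng.choice("\x00\xff{}[]\"'\\") for _ in range(32))

payloads = list(fuzz_payloads())
print(len(payloads))  # 6
```

You would then send each payload through something like the replayed request above and flag responses with unusual status codes, sizes, or error text for manual follow-up.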
These tools are of course not without their dangers, so they must be used with caution, and never on a shared system.
But they do enable testers to build and grow security knowledge for the benefit of the whole team. It just takes some time to learn how to interpret the feedback you get from the tool, and how to tune it to suit your needs.