As a first step it's worth creating some sort of matrix that lists your testing activities and the value they provide, then classifying whether each leans towards mechanical or human strengths.
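For illustration, a minimal sketch of such a matrix expressed as data; the activities and their classifications here are my own assumptions, not a definitive list:

```typescript
// A rough classification of testing activities by where they lean.
// Entries are illustrative; your matrix will differ by context.
type Lean = "mechanical" | "human";

const activityMatrix: Record<string, Lean> = {
  "regression test execution": "mechanical",
  "scripted test case runs": "mechanical",
  "test data generation": "mechanical",
  "learning the product": "human",
  "exploratory testing": "human",
  "risk investigation": "human",
};
```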
A lot of teams have done this before, but when you look across the market, many others have effectively had their humans doing quite a few activities that lean towards mechanical strengths.
What does your manual testing look like?
If it leans towards test cases, scripted testing and testing to verify, it's likely AI can both replace and accelerate a lot of this.
This is not a model I follow, which means my bias often places things like test cases at lower value, so when AI generates them my reaction is often: yep, good enough. Similarly, if I use something like Playwright agents to automate them, the result fits in as a bit of extra coverage that, for me, is good enough. Hardcore test case writers and deep automators, though, will challenge AI here, as they are often doing very domain-specific coverage, and that complexity increases risk for AI.
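As a rough illustration of the kind of "good enough" coverage I mean, here is a minimal Playwright test of the sort an agent might generate; the URL, selectors and expected message are hypothetical stand-ins:

```typescript
import { test, expect } from '@playwright/test';

// Illustrative only: a shallow, scripted verification of the kind an
// agent tends to produce. Useful as extra coverage, not deep testing.
test('login form rejects empty credentials', async ({ page }) => {
  await page.goto('https://example.com/login');
  await page.getByRole('button', { name: 'Sign in' }).click();
  await expect(page.getByText('Email is required')).toBeVisible();
});
```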
If it leans more towards learning the product, discovery, investigation and exploration of risk, then for now this still very much sits in human hands, with AI potentially empowering deeper and broader risk investigation.
There are now tools offering exploratory testing with multiple agents running. My take: useful for large teams on enterprise-level products, but the capability needs to be clearer. For now they remain, in my view, closer to crawlers and an expansion of known-risk coverage, not the human-strength work testers do.
There are a few interesting areas. Take two in which your testers may have limited experience: security and accessibility, for example. These AI tools will often outperform an average tester working alone, but again become powerful tools in the hands of someone advanced in those risks.
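Accessibility is a concrete case. A sketch of a baseline scan using Playwright with the @axe-core/playwright package (the URL here is a hypothetical placeholder): the scan itself is mechanical, but triaging and prioritising what it finds still benefits from expertise.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// A baseline automated accessibility scan. It will surface more than
// an inexperienced tester would, but deciding which violations matter
// and why is still human work.
test('home page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com');
  const results = await new AxeBuilder({ page }).analyze();
  expect(results.violations).toEqual([]);
});
```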
If human testing drops on human-centric products, there is a high risk that this amounts to accepting lower-quality, less innovative products, with that lower bar traded for faster, more autonomous coverage.