First premise: let’s distinguish between a strategy and a strategy document. The specific nature of that difference seems to have been missing from all of this discussion so far. A test strategy (for anything, including for an API) is the set of ideas that guide your choice of tests. (A test strategy document is some document that represents these ideas to some degree. A strategy is not a document.)
Second premise: you have choices in testing; everything that you do in testing represents a choice to do some things, and a choice not to do other things. By “choice of tests” here, we don’t mean “selecting test cases”; we mean the choices you make about risks you want to investigate, quality criteria you want to consider, techniques you can apply, aspects of the product that you want to cover, data that you might select…
Third premise: to guide is to influence, but not to determine or to dictate. Elements of your context will enable or constrain your choices; so will your skills and your mindset.
Fourth premise: as a heuristic, for any given X, “X testing” is “testing focused on risk related to X”.
Now: if you are even considering “API performance testing”, you already have a strategy; that is, you already have a set of ideas that might guide your choice of tests. That set of ideas may be vague or confused, or it may be sharp and rich and coherent, but you’ve got at least two notions in your head: “API” and “performance”.
The API is an interface. It’s a means of getting at functions in a product. So in one sense, you’re not testing the API; you’re testing the product, using the API. To test the product, you need models of the product. On the other hand, you can also think of the API as a product in itself (someone produced it). To test that product, you need models of that too.
“Performance” is a word that represents a category of ideas that people might value in a product; it’s a quality criterion. Performance might include notions of speed; responsiveness; capacity; reliability and robustness under load, or under stress, or under some kind of constraint.
The idea of interface as an element of the product and performance as a quality criterion can be found in the Heuristic Test Strategy Model (https://www.satisfice.com/download/heuristic-test-strategy-model), which provides us with a set of ideas about how to think of strategy generally.
That leads us to key questions that guide testing related to performance, via the API:
Do we know what the customer wants? How big a deal is performance as part of that? (Maybe it’s not a big deal, and we don’t have to do a ton of performance testing.) Do customers use the API? Which customers? Or is the API simply a means that developers use (and that I can use) for easy access to aspects of the product for which performance is critical? Have we included such things in the design of the product and of its API?
As the developers are building the product, are they building it with performance in mind? As they’re building it, is the performance of the product what they think it is? What testing are the programmers doing? Are they reviewing the design and the code for performance-related issues? Do the low-level checks that they’re doing include some focus on timing, or on stress? Are there specific functions, accessible through the API, on which they are doing performance analysis? Are they well-documented?
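To make “low-level checks with some focus on timing” concrete, here’s a minimal sketch of what such a check might look like. Everything here is hypothetical: `check_under_budget`, the budget value, and the stand-in workload are illustrations, not anyone’s real check suite.

```python
import time

def check_under_budget(call, budget_ms):
    """A minimal low-level timing check: fail if `call` takes longer
    than `budget_ms` milliseconds. `call` stands in for whatever
    function the programmers want to keep an eye on."""
    start = time.perf_counter()
    call()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    assert elapsed_ms <= budget_ms, f"{elapsed_ms:.1f} ms exceeds {budget_ms} ms budget"
    return elapsed_ms

# A stand-in workload in place of a real product function.
elapsed = check_under_budget(lambda: sum(range(1_000)), budget_ms=500)
```

A check like this can only tell you that one call, on one machine, on one day, came in under an arbitrary budget; it doesn’t replace the questions above, but it can make a conversation about them more specific.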
What do we need to do to prepare to test for performance-related problems? What tools do we have? What tools might we need to develop? Is there logging built into the product? Do we need to write code and use parts of the API for deeper analysis? Have we prepared representative environments and representative data to model real-world performance?
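One kind of tool you might need to develop is a small timing harness that summarizes the latency of repeated calls. This is a sketch under assumptions: `time_calls` and its sample count are invented names, and the `call` argument is a placeholder for a real API invocation (an HTTP request, an SDK method, whatever your product exposes).

```python
import statistics
import time

def time_calls(call, samples=50):
    """Invoke `call` repeatedly and summarize its latency in milliseconds.
    A rough first tool for noticing performance-related surprises."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        call()
        durations.append((time.perf_counter() - start) * 1000.0)
    durations.sort()
    return {
        "min_ms": durations[0],
        "median_ms": statistics.median(durations),
        "p95_ms": durations[int(len(durations) * 0.95) - 1],
        "max_ms": durations[-1],
    }

# Again, a stand-in workload in place of a real API call.
stats = time_calls(lambda: sum(range(10_000)), samples=20)
```

Note that summary numbers like these are prompts for investigation, not verdicts; a suspicious p95 is a reason to look, not a pass/fail result.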
How might we explore or experiment with the product to examine performance-related risk? How might we stress the product to extremes? How might we starve the product of things that it needs?
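One way to start stressing the product is to ramp up concurrent calls and watch where behavior changes. The sketch below assumes nothing about a real product: `stress` is an invented helper, and the workload is a stand-in for a genuine API call that might time out, error, or slow down under load.

```python
import concurrent.futures
import time

def stress(call, workers):
    """Fire `call` from `workers` threads at once; count failures
    and measure how long the whole burst takes."""
    errors = 0
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(call) for _ in range(workers)]
        for f in concurrent.futures.as_completed(futures):
            if f.exception() is not None:
                errors += 1
    return {"workers": workers, "errors": errors,
            "elapsed_s": time.perf_counter() - start}

# Ramp up step by step; the interesting data is where things change.
for n in (1, 4, 16):
    result = stress(lambda: sum(range(50_000)), n)
```

Starving the product is the mirror image of this: instead of adding load, you take away something the product needs (memory, disk, network, a dependency) and observe what happens.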
It seems to me that examining and exploring these questions will help you develop a set of ideas, a strategy, to guide your testing of performance.
This might help too: https://www.developsense.com/blog/2018/07/exploratory-testing-on-an-api-part-1/