Test cases are used to communicate the execution of a test strategy to testers. I'm not a fan of them because they're expensive to write and maintain, and they have serious limitations.
Before you do anything, read @ipstefan 's answer above, and consider the context you're working in, because it defines the value of whatever you do next. How your company works, who you work with, and how you work with them all matter, among myriad other things. Take some time to think about what you're doing, what your company is, what your industry is and who your customers are. It'll pay off when you're making decisions about risk later.
As the only tester, you're developing both the strategy (all the ideas that guide your test design) and its execution (the test design itself). So instead of writing yourself test cases, consider another approach to developing a strategy.
One way to do this is to map out the product, consider the risks you might want to test for, and make a list of tasks you'd want to do to test it. I split my testing into test sessions^, which are timed explorations of the product that try to fulfil a charter^. A charter is a short statement of what the session is for, like "test the upload feature" or "test the upload feature with different file formats. JPEG and PNG are required, but also try invalid file types, even ones with valid file extensions".
So you could do a recon^ session (or many) for a charter like “Explore to discover functions and assess risks”. You go through the product and make notes on what there is to test, any test ideas that come to mind, risks you want to mitigate, questions you have, test data you might need and so on. You’re not looking to find problems (although you might), you’re looking to understand what this product is and what it can do.
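As an illustration, recon notes for that charter might look something like this (my own hypothetical format; structure yours however works for you):

```
CHARTER: Explore to discover functions and assess risks
AREAS SEEN: upload form, file list, settings page
TEST IDEAS: upload while offline; very long filenames
RISKS: no visible size limit on uploads
QUESTIONS: which file types are officially supported?
TEST DATA NEEDED: sample files in many formats, valid and invalid
```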
Then you might do some capability sessions, where you try to see whether the product can do what it claims to be capable of: can it actually perform?
Then you might do some reliability sessions, where you’re trying to see if the product can handle difficult inputs, harsh conditions, weird configurations and long-term use.
You will think of many things while you test, and you need to decide what is worth your time. Think about risk, and the impact of a potential problem. You might get distracted and go off-charter, which is okay. You may also think of checks that don't change much, are shallow fact checks, and need to be repeated often; these could go into an automatic checking tool ("automation").
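As a sketch of the kind of shallow, repeatable check that suits automation, here's a small example building on the upload charter above. Everything in it is hypothetical (the signatures table, the function name): it checks that a file's content actually matches its claimed extension, a fact check worth repeating on every build but not worth a human's session time.

```python
# Magic bytes for the file types the (hypothetical) upload feature allows.
ALLOWED_SIGNATURES = {
    ".jpeg": b"\xff\xd8\xff",
    ".jpg": b"\xff\xd8\xff",
    ".png": b"\x89PNG\r\n\x1a\n",
}

def is_valid_upload(filename: str, data: bytes) -> bool:
    """Return True if the extension is allowed and the content matches it."""
    for ext, magic in ALLOWED_SIGNATURES.items():
        if filename.lower().endswith(ext):
            return data.startswith(magic)
    return False  # extension not in the allowed list

# A real PNG header passes; a renamed file with a valid extension does not.
print(is_valid_upload("photo.png", b"\x89PNG\r\n\x1a\n" + b"pixels"))  # True
print(is_valid_upload("fake.png", b"not a png at all"))                # False
```

Checks like this free your sessions for the deeper, judgement-heavy testing a script can't do.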
You can store the notes from these sessions as evidence of your activities, or for reports if you need them. You can always include screenshots, supporting documents, video, recorded GIFs, notes from other sessions, whatever you need to remind yourself of what you did or to tell others.
At the end of a session, get your questions answered, investigate and raise any bugs you've found, and communicate any project issues you think need communicating.
In this way you're building a common-sense, risk-based, diverse strategy, making the best use of limited resources, and you have documented charters to refer to later. You build different types of coverage and let yourself be guided by risk. It's easy to try, approachable, dynamic, flexible and cheaper than writing test cases.
Also, communicate with the people you work with directly. Find out how they like to work, what they want from a bug report, and so on. Build credibility as a capable and supportive person, and that will pay for itself many times over.
Best of luck!
^ These are the terms I tend to use. Use these or your own, whatever works best for you.