First, I think the advice is sound, but I’ve never liked the 5 Ws + H as a heuristic because it’s so flimsy. The power here lies in the specific questions being asked; the 5 Ws + H framing is not doing the heavy lifting. I could just as well ask “Who wrote the software, what is the software made of, when did it come out, where is it stored, why is it green, and how do I get through one night without you?” That’s why I don’t like that tool.
On to the actual list of questions, then. I think this is a great way to keep things simple and powerful; I like the low load-to-value ratio. The HTSM has MIDTESTD for project environments (Mission, Information, Developer relations, Test team, Equipment/tools, Schedule, Test items, Deliverables), and it covers some elements of the article’s list under other headings… but those serve a different purpose. To say that the article’s list is extensible is obvious; to suggest that it should be extended is another matter. I’d say this list is a great way to establish a mindset of context for the rest of the learning there is to be done.
We could ask more questions under each heading, of course. Under “Who (uses the software)” we could ask whether there are multiple types of users, whether they differ in permissions or access, whether the system can be used via an API, how the users are administered and by whom, what data we collect about them, and so on. I think that should be left as an exercise for the reader, however.
So: I don’t use it, I have my own, and both this list and mine have limitations, but I don’t think it should be clouded with many more details. Other reference works exist for the details; this is a fine overview. I think that risk catalogues and checklists come about later, out of the particular risks of particular software.