I read the title and couldn't understand what is meant by "testing code highlighting".
Skimming through the article, it seems it's about using a highlighting tool for code analysis.
I have seen devs use code linters (eslint/tslint) where they customize the rules.
Another point to consider: does anyone care? Who's doing that review, and why?
I am missing some background here. What is the purpose, what do the plugins actually do, why would you need to write IDE plugins at all, and why do you need to carry out these checks? This is not really about testing code highlighting features in IDEs, but more about testing code highlighting features in your own IDE extensions. I am not sure I get the point.
I agree that the title could have been more specific, but since different IDEs use different nomenclature (i.e. diagnostics, inspections, annotations, …), I wanted to use a more generic umbrella term: highlighting.
In the article, I talk about testing a specific case of an IDE plugin's feature. An aspect I may have left out is that built-in IDE features use the same testing techniques and libraries that IDE plugin developers use when testing their plugins (maybe with some exceptions). So whether an IDE's built-in features are implemented (and tested) as something truly built-in or inside a plugin (bundled with the IDE or not) doesn't really make a difference.
As for what the plugins or built-in features do: they may be language-specific features, like reporting that a class doesn't implement the methods of an interface, or library integrations, like reporting an attempt to mock a final class with a mocking framework that cannot do that.
As for why one needs to carry out these checks: I think I covered it in the last two paragraphs of the introduction, but to give a specific example:
Let's say someone uses Cucumber, and some Gherkin steps are incorrectly marked as having no corresponding step definition methods when in fact they do. Whoever develops this check (which marks steps without a step definition) has to implement tests for it, to make sure that end-users don't get false or misleading reports, and that only those steps are marked that indeed don't have step definitions.
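To sketch what such a test validates, here is a minimal stand-in (this is not Cucumber's real API; the step-matching logic, names, and snippets are all hypothetical): step definitions are represented as regex patterns, and the check flags only steps that match none of them.

```javascript
// Hypothetical check: flag Gherkin steps that have no matching step
// definition, where definitions are regex patterns (as in Cucumber).
function findUndefinedSteps(steps, stepDefinitions) {
  return steps.filter(
    (step) => !stepDefinitions.some((pattern) => pattern.test(step))
  );
}

// The test covers both directions: steps with a definition are NOT
// flagged (no false positives), and steps without one ARE flagged.
const stepDefinitions = [
  /^I log in as "([^"]+)"$/,
  /^I should see the dashboard$/,
];
const steps = [
  'I log in as "admin"',
  'I should see the dashboard',
  'I click the export button', // no matching definition
];

console.log(findUndefinedSteps(steps, stepDefinitions));
// -> [ 'I click the export button' ]
```

A real plugin test would do the same thing against a parsed feature file and actual step definition methods, but the assertion is identical: only genuinely undefined steps get marked.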
I thought this was about testing highlighting features of IDE extensions as the above comment mentioned.
But it seems it might be about static analysis of the code of an automation product (the one meant to check the business product), done through IDE extensions that are meant to patch the wrong warnings of the libraries used in that code.
Did I get it right?
The last tool I built was with VSCode using Python and pandas.
It was for one-time use, where I wrote about 2k lines of code for test data generation. No need for any analysis.
The last reusable automation product (for GUI/API regressions) I built with JS/TS/NX/Playwright, in the same repository as the main/business product. I used the linter the business product already had. As for extensions, I found Playwright for VSCode very annoying.
Unfortunately, I haven't found anything useful in this article.
I'm not sure I fully understand what you mean, but I'll try to rephrase the intention. Sorry if I overdid it.
So, static code analysis checks are provided by IDE extensions to validate the proper usage of test automation (and other types of) code. Those analysis checks are not meant to patch the wrong warnings of the libraries, but to enhance the dev experience of using those libraries.
You mentioned that you used a linter for your JS/TS automation product, I guess ESLint or something similar. Engineers expect that, e.g., ESLint will provide valid and relevant issue reports within an IDE editor, on the CLI, or elsewhere. But in order to make sure that ESLint rules behave properly, those rules must have corresponding tests in the ESLint project itself that validate their behaviour on relevant, real-life-like code snippets. In this sense you can "substitute" a linter with an IDE extension: that extension validates in its own tests whether its static code analysis rules behave properly, and the validation techniques for those tests are what the article explains.
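To make the principle concrete: ESLint rules are typically verified by running them against both valid and invalid code snippets and asserting on the reported problems (ESLint ships a `RuleTester` utility for this). The same idea can be sketched without ESLint itself; the rule below is a simplified stand-in that just scans lines, not a real ESLint rule, and it mirrors the "mocking a final class" example from earlier in the thread:

```javascript
// Simplified stand-in for rule testing in the style of ESLint's
// RuleTester: run a rule over code snippets and assert on its reports.
// (A real rule would inspect an AST; this one just scans lines.)

// Hypothetical rule: report calls that mock a known-final class.
function mockFinalClassRule(source, finalClasses) {
  const reports = [];
  source.split('\n').forEach((text, index) => {
    for (const cls of finalClasses) {
      if (text.includes(`mock(${cls})`)) {
        reports.push({
          line: index + 1,
          message: `${cls} is final and cannot be mocked`,
        });
      }
    }
  });
  return reports;
}

const finalClasses = ['SealedRepository'];

// "Valid" snippet: mocking a non-final class must yield no reports.
const valid = 'const svc = mock(UserService);';
// "Invalid" snippet: mocking the final class must yield one report.
const invalid = 'const repo = mock(SealedRepository);';

console.log(mockFinalClassRule(valid, finalClasses).length);   // 0
console.log(mockFinalClassRule(invalid, finalClasses).length); // 1
```

Whether the check lives in a linter rule or an IDE extension's inspection, the test shape is the same: snippets that should pass, snippets that should be flagged, and assertions on exactly what gets reported.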