What techniques do you use for testing code highlighting in IDEs?

Check out this week’s article, “Tuning the tools you create: testing code highlighting in IDEs” by @picimako to explore key methods for testing and validating code highlighting features in various IDEs.

What You’ll Learn:

:wrench: How to test code highlighting in various IDEs, from JetBrains to Visual Studio and VS Code.

:hammer: The role of static code analysis in validating the tools you create and ensuring they function as intended.

:carpentry_saw: Why integration-level testing can be more effective than unit testing for IDE features.

:gear: Practical examples and approaches for testing code highlighting across different platforms.

After reading, we’d love to hear from you:

  • What techniques have you found most effective when working with IDE plugins?
  • Will you try out any strategies shared in the article?

I read the title and couldn’t understand what is meant by ‘testing code highlighting’.
Skimming through the article, it seems it’s about using a highlighting tool for code analysis.
I have seen devs use code linters (eslint/tslint) where they customize the rules.
Another point to consider is: does anyone care? Who’s doing that review, and why?


I am missing more background for this. What is the purpose, what do the plugins actually do, why would you need to write IDE plugins at all, why do you need to carry out these checks? This is not really about testing code highlighting features in IDEs but more about testing code highlighting features in your own IDE extensions. I am not sure I get the point.


I agree that the title could have been more specific, but since different IDEs use different nomenclature (e.g. diagnostics, inspections, annotations, …), I wanted to use a more generic umbrella term (highlighting).

In the article, I talk about testing one specific kind of IDE plugin feature. A missing aspect, though, may be that the built-in features of IDEs use the same testing techniques and libraries that plugin developers use when testing their plugins (maybe there are exceptions to this). So whether an IDE feature is implemented (and tested) as something truly built-in or inside a plugin (whether bundled with the IDE or not) doesn’t really make a difference.

As for what the plugins or built-in features do: they may be language-specific features, like reporting when a class doesn’t implement the methods of an interface, or library integrations, like reporting mocking of a final class when using a mocking framework that cannot do that.
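To make the “mocking a final class” example concrete, here is a minimal, illustrative sketch of what such a check could boil down to. It is not code from the article: the `mock` call shape, the `PaymentGateway` class name, and the hard-coded set of “final” classes are all assumptions standing in for the type resolution a real IDE would do through its own program model (e.g. PSI in JetBrains IDEs).

```python
import ast

# Hypothetical set of classes known to be final (a real inspection would
# resolve this from the analyzed project's type information).
FINAL_CLASSES = {"PaymentGateway"}

def find_final_class_mocks(source: str) -> list[int]:
    """Return the line numbers where mock(...) is called on a final class."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "mock"
                and node.args
                and isinstance(node.args[0], ast.Name)
                and node.args[0].id in FINAL_CLASSES):
            hits.append(node.lineno)
    return hits

snippet = "gateway = mock(PaymentGateway)\nhelper = mock(Helper)\n"
print(find_final_class_mocks(snippet))  # -> [1]
```

The point is only the shape of the check: walk the syntax tree, match the call pattern, and report a location. Everything around that (severity, quick fixes, message text) is IDE-specific.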

As for why one needs to carry out these checks: I think I covered it in the last two paragraphs of the introduction, but to give a specific example:
Let’s say someone uses Cucumber, and some Gherkin steps are incorrectly marked as having no corresponding step definition methods, when in fact they do. Whoever develops this check (the one that marks steps without a step definition) has to implement tests for it, to make sure end users don’t get false or misleading reports, and that only the steps that indeed lack step definitions get marked.
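One common way to write such tests is to embed the expected highlighting directly in the test fixture (JetBrains test fixtures, for instance, use inline tags like `<warning>`). The sketch below shows the idea in a self-contained form; the `toy_checker` rule and the fixture text are invented for illustration, not taken from the article.

```python
import re

MARKUP = re.compile(r"<warning>(.*?)</warning>")

def parse_expected(fixture: str):
    """Strip <warning> tags; return (clean_text, [(start, end), ...])."""
    clean, expected, last = [], [], 0
    for m in MARKUP.finditer(fixture):
        clean.append(fixture[last:m.start()])
        start = sum(len(c) for c in clean)      # offset in the clean text
        clean.append(m.group(1))
        expected.append((start, start + len(m.group(1))))
        last = m.end()
    clean.append(fixture[last:])
    return "".join(clean), expected

def toy_checker(text: str):
    """Hypothetical check: flag every 'undefined <word>' occurrence."""
    return [(m.start(), m.end()) for m in re.finditer(r"undefined \w+", text)]

# The fixture declares, inline, exactly which range must be highlighted.
fixture = "Given a known step\nWhen <warning>undefined step</warning>\n"
clean, expected = parse_expected(fixture)
assert toy_checker(clean) == expected  # checker marks exactly the expected range
```

The value of this style is that a test fails both on false positives (the checker marks something the fixture doesn’t) and on false negatives (the fixture expects a mark the checker misses), which is precisely the Cucumber/Gherkin concern above.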


I thought this was about testing the highlighting features of IDE extensions, as the comment above mentioned.
But it seems it might be about static analysis of the code of an automation product (the one meant to check the business product), done through IDE extensions that are meant to patch the wrong warnings of the libraries used to write that code.
Did I get it right?

The last tool I built was in VS Code, using Python and pandas.
It was a one-time tool, about 2k lines of code for test data generation. No need for any analysis there.

The last reusable automation product (for GUI/API regression) I built with JS/TS/NX/Playwright, in the same project repository as the main/business product. I used the linter the business product already had. As for extensions, I found Playwright for VS Code to be very annoying.
Unfortunately, I haven’t found anything useful for me in this article.

I do test some forms of plugins myself and thought this might help me, but it is about different stuff.

The target for this is probably the IDE developers/testers.

However, I would love to catch up with @picimako to discuss general ideas and strategies for testing plugins.

@ipstefan

I’m not sure I fully understand what you mean, but I’ll try to rephrase the intention. Sorry if I overdid it. :slight_smile:

So, static code analysis checks are provided by IDE extensions to validate the proper usage of test automation (and other types of) code. Those analysis checks are not meant to patch the wrong warnings of the libraries, but to enhance the dev experience of using those libraries.

You mentioned that you used a linter for your JS/TS automation product, I guess ESLint or something similar. Engineers expect that, e.g., ESLint will provide valid and relevant issue reports in an IDE editor, on the CLI, or elsewhere. But to make sure ESLint rules behave properly, those rules must have corresponding tests in the ESLint project itself that validate their behaviour on relevant, more or less real-life code snippets. In this sense you can “substitute” the linter with an IDE extension: that extension validates its own static code analysis rules in the same way, and the validation techniques for those tests are what the article explains.

In any case, I appreciate your feedback. :slight_smile:
