How to avoid end-to-end testing after introducing new functionality to Salesforce?

I am working on a brand-new implementation of Salesforce for the company.
We use the latest SF version and Lightning.
We develop using Dev Sandboxes.

Each pair (we work in pairs on features) works on a user story in a separate Sandbox.
Then we commit and merge the developed work into production.

Problem: How do we release functionality developed in Salesforce into production without incurring a full end-to-end test cycle?

Do you have any ideas or ready-made solutions that I could incorporate?


You can test your features and components in isolation.

YouTube presentation explaining it:

More details in this Dojo:

Disclaimer: I have not personally implemented it in Salesforce, but I have done it in many other systems and it worked very well. Also, I might have misunderstood your question, so please reply here and let me know if this won't solve your problem.


Thank you for the reply Wojtek,

Unfortunately, given the nature of Salesforce and the way features and functionality are developed, your solution will not apply.

It can be used in cases where SF communicates with other services, and that is part of my situation too.

But that is only about 10-15% of the functionality we develop.
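For that integration slice, Apex's built-in callout mocking does let you test in isolation without the real external system. A minimal sketch, where `AccountSyncService` and its `sync()` method are hypothetical placeholders for whatever class actually makes the HTTP callout:

```apex
// Hypothetical test class; AccountSyncService is a placeholder for the
// class that performs the external callout in your own org.
@IsTest
private class AccountSyncServiceTest {

    // Stand-in for the external service: Apex tests never make real HTTP
    // calls, so we supply the response ourselves.
    private class FakeServiceResponse implements HttpCalloutMock {
        public HttpResponse respond(HttpRequest req) {
            HttpResponse res = new HttpResponse();
            res.setStatusCode(200);
            res.setBody('{"status":"synced"}');
            return res;
        }
    }

    @IsTest
    static void syncSucceedsAgainstMockedService() {
        // Route all callouts in this test to the fake response.
        Test.setMock(HttpCalloutMock.class, new FakeServiceResponse());

        Test.startTest();
        String status = AccountSyncService.sync('001xx0000000001');
        Test.stopTest();

        System.assertEquals('synced', status);
    }
}
```

`Test.setMock`, `HttpCalloutMock`, and `Test.startTest`/`Test.stopTest` are standard Apex testing APIs; the service class name, method signature, and JSON shape above are assumptions to illustrate the pattern.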

Hi - when code is committed, is it going through any automated testing process?

Well, that's one of the questions: how do we test it, and when? And if we automate, what should we use to automate?


I’m not familiar with Salesforce, but my first question is: why are you trying to avoid end-to-end testing?

Do you mean that you are trying to avoid E2E testing after every new feature is deployed? Or trying to eliminate it altogether?

I’m also not familiar with Salesforce (we are starting to use it but mostly as software developed & managed by 3rd parties - interested in the tech though!).

As far as I am aware, Selenium can be used with the platform - and I think there are some automated tools on the Salesforce app exchange/store too. I'll see if anyone here is aware of anything!

We try to avoid too much testing, and to avoid E2E each time a new feature is deployed.
We are looking for a way to get more confidence in the code we are shipping without extensive testing.

We are still working on our processes.


Selenium is a no-no with Lightning; it can only work “reliably” with the SF Classic interface.


That is a shame. OK, well, I will be catching up on our Salesforce project at some point soon, so I will see if I can get any info on tools our 3rd-party partners use in case it helps.

Anyway, I only mentioned automation as at least that would give more confidence when code is committed. When you say you want to avoid end-to-end testing, I presume you mean you want to avoid it until you get to a complete build/product to test - and not hold up CI by having to test everything every time small bits of functionality are added?
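For what it's worth, a commit-time check in Salesforce-land often means running the org's Apex tests in a fresh scratch org. A sketch of such a CI step follows; it is a config fragment, not a definitive pipeline, and assumes the `sf` CLI is installed with an authenticated Dev Hub (the aliases `devhub` and `ci-org` and the definition-file path are placeholders):

```shell
# Sketch of a CI job step: spin up a scratch org, push the branch's
# metadata, and run all local Apex tests before allowing a merge.
sf org create scratch \
  --definition-file config/project-scratch-def.json \
  --target-dev-hub devhub --alias ci-org --duration-days 1

# Deploy the source from the repository to the scratch org.
sf project deploy start --target-org ci-org

# Run all tests in the org's own namespace and wait for the result;
# a non-zero exit code fails the CI job.
sf apex run test --target-org ci-org \
  --test-level RunLocalTests --result-format human --wait 10

# Tear down the scratch org when done.
sf org delete scratch --target-org ci-org --no-prompt
```

This only gives unit/integration-level confidence, but it catches deployment and regression breakage on every commit rather than deferring everything to a manual end-to-end cycle.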

It seems to me that your problem is probably only partly to do with testing, and partly to do with organisation.

Our situation is rather different to yours, as we are a specialist software house with a product that is in widespread use worldwide. Our user base is highly skilled and we actively seek feedback and suggestions for enhancements beyond the usual run of bugs being reported. This does mean that we have a continuous process of building new features into the application and these are developed and released on roughly a monthly basis. (Some features take longer to build and test than others; we have to take a business decision over the timing of new feature releases.)

So our individual new features are treated as stand-alone projects which are built, tested and deployed in isolation. Obviously, each feature has its own workflow which is tested as extensively as possible within each monthly release cycle. But unless an issue is identified during that process that might impact another part of the overall application, we do not go back and perform a full regression test of the whole product.

The way our production and interim (test) builds are structured means that new features are always tested on a build which is based on the last version to be put into production. Once all new features have been tested on a build, that build is committed to production. That way, a certain amount of regression testing is built into the process; any major breakage caused by deployment of new code would be spotted early on.

However, once a year, we do schedule a full regression test (which I suspect is equivalent to your “end-to-end” test). That is built into our release timetable and is there to ensure that each month’s incremental releases haven’t impacted something already in deployment. (Our product is quite complex and there’s a lot of stuff in it that users might only use once a year; so some regression faults might take some time to find and troubleshoot otherwise.)

I don’t know how helpful that is to you, but this seems to work for us.


@robertday Is it possible to communicate here by sending private messages? I would like to talk to you, @robertday, if you don't mind, ideally over the phone.


I’m actually at work right now and a telephone conversation wouldn’t really be appropriate. However, there is a messaging function on this board - left-click your profile picture/avatar at the top right of the screen and a list of messages drops down with a menu at the top that includes private messaging.
