For complex products like the one you describe, I recommend isolating behaviors to improve coverage and testability. At a high level, that means separating the ability to configure the product from its presentation and behavior on screen.
Explore the codebase to learn how configuration is implemented. While there may be a UI for selecting configurations, those configurations must be persisted somewhere.
One common approach is to store the configurations in a database and expose the ability to change them through an API. Against that API, you can verify that configurations can be stored, changed, and deleted. You can also evaluate multiple combinations of configurations and check their validity.
I recommend investing the majority of your automation here.
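As a rough sketch of what those API-level checks look like, here is a create/update/delete cycle. The `ConfigStore` class and its method names are invented stand-ins for whatever API your product actually exposes; the point is the shape of the assertions, not a real client.

```python
class ConfigStore:
    """Hypothetical in-memory stand-in for the product's configuration API."""

    def __init__(self):
        self._configs = {}

    def create(self, name, settings):
        if name in self._configs:
            raise ValueError(f"config {name!r} already exists")
        self._configs[name] = dict(settings)

    def update(self, name, settings):
        if name not in self._configs:
            raise KeyError(name)
        self._configs[name].update(settings)

    def delete(self, name):
        del self._configs[name]

    def get(self, name):
        return self._configs[name]


# These checks mirror what you would assert through the real API.
store = ConfigStore()
store.create("checkout", {"currency": "USD", "max_items": 10})
assert store.get("checkout")["currency"] == "USD"   # stored

store.update("checkout", {"currency": "EUR"})
assert store.get("checkout")["currency"] == "EUR"   # changed

store.delete("checkout")
try:
    store.get("checkout")
    deleted = False
except KeyError:
    deleted = True
assert deleted                                      # deleted
```

In practice the same three assertions would run against the real endpoint, with combinations of settings swapped in to probe validity rules.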
I also recommend learning the business purpose of the configurations and how they influence the UI content and workflow; that understanding can be the basis of what you evaluate. Most important, in my opinion, is demonstrating that the content and workflow are driven by the configurations. Start with something simple that shows this connection, then move toward something more complex. You may also need to demonstrate that the UI can be used to change configurations.
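A minimal sketch of "content and workflow are driven by the configurations": model the workflow the UI should present as a function of the configuration, and assert that flipping one flag changes the steps accordingly. The step names and config keys here are invented for illustration; your product will have its own.

```python
def workflow_steps(config):
    """Return the ordered checkout steps implied by a configuration.

    Hypothetical rules: a shipping step appears only when the config
    requires shipping, and a review step only when it is enabled.
    """
    steps = ["cart", "payment"]
    if config.get("require_shipping"):
        steps.insert(1, "shipping")
    if config.get("show_review"):
        steps.append("review")
    return steps


# Start simple: one flag at a time, confirming the workflow follows the config.
assert workflow_steps({"require_shipping": True}) == ["cart", "shipping", "payment"]
assert workflow_steps({"show_review": True}) == ["cart", "payment", "review"]
assert workflow_steps({}) == ["cart", "payment"]
```

In a real UI test, the assertion would be against the rendered screens rather than a list of strings, but the structure is the same: set a configuration, then verify the UI reflects it.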
Once that is demonstrated, you might add a few more test cases to explore diversity of configurations.
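One way to choose those additional cases, whether you run them by hand or in automation, is to enumerate combinations of a few configuration flags and check an invariant that should hold across all of them. The flags and the validity rule below are invented for illustration.

```python
from itertools import product


def is_valid(config):
    """Hypothetical validity rule: guest checkout must still capture email."""
    if config["guest_checkout"] and not config["capture_email"]:
        return False
    return True


flags = {
    "guest_checkout": [True, False],
    "capture_email": [True, False],
    "show_review": [True, False],
}

# Every combination of the three flags: 2 * 2 * 2 = 8 configurations.
combos = [dict(zip(flags, values)) for values in product(*flags.values())]
assert len(combos) == 8

# Exactly the guest-without-email combinations should be rejected.
invalid = [c for c in combos if not is_valid(c)]
assert all(c["guest_checkout"] and not c["capture_email"] for c in invalid)
assert len(invalid) == 2
```

Even a small enumeration like this makes it obvious which combinations you have and have not covered.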
This phase of the testing may not benefit from automation; that will depend on the risk involved and on the product's pace of change.