I am trying to understand the scope of the auto analysis provided by Report Portal and was looking for feedback from other people’s experience.
As I understand it, the data source is the log for each test execution. However, this could be quite limited, especially as it reflects the view of the test rather than that of the AUT.
In a UI test, I can imagine a single symptom that could be the manifestation of many different underlying issues.
e.g.
Symptom: the test fails because it cannot find a control in the UI
Potential causes: slow response from the server, incorrect data returned from the server, slow screen rendering, etc.
Assuming the test log looks identical for all of these causes, how would the AI help me in this case?
If I were to attach richer data, such as a HAR file, the HTML at the point of failure, WebSocket traffic, etc., would RP analyse these too?
I would love it to be able to correlate, say, an HTTP error in the HAR file with a missing control in the UI.
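In case the analyzer only ever sees the textual log, one workaround I'm considering is pre-processing attachments into plain log lines myself before reporting. Here is a minimal sketch of that idea; the HAR structure is the standard format, but `extract_http_errors` and the sample data are my own hypothetical illustration, not anything RP provides:

```python
import json

def extract_http_errors(har_text):
    """Pull failed HTTP requests (status >= 400) out of a HAR file so they
    can be written into the test log as searchable plain text."""
    har = json.loads(har_text)
    errors = []
    for entry in har.get("log", {}).get("entries", []):
        status = entry.get("response", {}).get("status", 0)
        if status >= 400:
            req = entry.get("request", {})
            errors.append(f"{status} {req.get('method')} {req.get('url')}")
    return errors

# Hypothetical HAR fragment for illustration only
sample = json.dumps({
    "log": {"entries": [
        {"request": {"method": "GET", "url": "https://example.test/api/menu"},
         "response": {"status": 500}},
        {"request": {"method": "GET", "url": "https://example.test/app.css"},
         "response": {"status": 200}},
    ]}
})

for line in extract_http_errors(sample):
    print(line)  # prints: 500 GET https://example.test/api/menu
```

The thinking is that if the HTTP error appears as a log line next to the "control not found" failure, the log-based analysis at least has a chance of grouping the two, even if it never opens the HAR attachment itself.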
Is this too advanced for RP?