Preventing e2e tests from becoming tautologies

The company I work for primarily produces RESTful APIs, written in Java. The current e2e tests are written in Python. There’s some momentum building to shift the e2e to Java, which I’m okay with, but I wonder how people avoid the tests becoming tautologies?

Developer fluency and no need to context switch from Java to Python are the main reasons to switch to Java. One of the other reasons folks have mentioned is that they can leverage the existing objects in their test code.

I’m not a huge fan of this, especially for the request and response objects: if a test is written using the objects from source, happy-path tests can keep passing even when someone breaks the contract (e.g. changing a property from optional to required). It also seems like you’d need custom objects to test invalid requests, since the constructors for the request objects won’t let you use invalid data types, empty fields, etc.

The ideas I’ve come up with so far:

  • Be diligent in code reviews and make sure such test cases are explicitly enumerated.
  • Don’t use the POJOs from source, but instead build the request and response payloads as raw strings (instead of serializing JSON through the POJOs). I lean towards this, but it makes it a little harder to set/read object properties, and could become a string-parsing/regex nightmare.
  • Have custom POJOs for tests. These would have to be fully independent, not extending the existing objects. The biggest challenge would be enforcing that code doesn’t get copy/pasted from the src objects into the test objects, and keeping the test objects essentially clean-room implementations.
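
For the raw-string option, a minimal sketch might look like the following (all class names and payload shapes here are made up for illustration):

```java
// Hypothetical helpers for the raw-string option: build payloads by hand
// so invalid requests (wrong types, missing fields) stay expressible even
// when the production POJOs' constructors forbid them.
public class RawJsonRequests {

    // Well-formed request, written out by hand instead of serialized from a POJO.
    public static String validUserRequest(int id) {
        return "{\"id\": " + id + "}";
    }

    // Deliberately invalid: id is a string where the contract says number.
    public static String wrongTypeRequest() {
        return "{\"id\": \"not-a-number\"}";
    }

    // Deliberately invalid: required field missing entirely.
    public static String missingFieldRequest() {
        return "{}";
    }
}
```

The trade-off mentioned above shows up quickly: once payloads are nested, hand-building and hand-asserting on strings gets unwieldy.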

What options/strategies am I missing?

2 Likes

Could you give an example of such tautology?
I don’t see how the suggested approach is different than any unit test:

Target => A function in a controller;
Input => The object(s) this function requires;
Output => The object(s) this function returns;
Assertion scope => The return object(s) and the controller behavior;

2 Likes

How important is it for all the tests to be in the same language?

3 Likes

Sure, let’s say we had a JSON request that takes an ID, e.g.

{
  "id": 1234
}

A developer decides to change the id from a number to a UUIDv4, and they update the POJO and its constructor. If the e2e tests use that same code to generate the request object, you can no longer generate a request POJO with an integer ID.
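
As a sketch of that situation (the class name is hypothetical), after the change the DTO only compiles with a UUID:

```java
import java.util.UUID;

// Before the change this was roughly: UserRequest(int id).
// After the change, only a UUID compiles, so any e2e test built on this
// class can no longer express the old integer contract, or a malformed id.
class UserRequest {
    private final UUID id;

    UserRequest(UUID id) {
        this.id = id;
    }

    UUID getId() {
        return id;
    }
}
```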

I’d personally like to keep them in Python, but most of our dev team struggle with it, hence the desire to move to Java. They’ve also flirted with Groovy and Kotlin as a middle ground to work around the strongly enforced typing that’s problematic here.

I still don’t get how this is a tautology.

Regardless, this flow is (almost) right. Tests should exercise a program’s UI (in this case an API) in the same way other consumers will.

What could be improved in the flow you’ve described is to first alter the test to pass a valid UUID. This should make the test fail, because a UUID can’t be parsed into an integer. Then the developer changes the production code until the test goes green again. Thus, both test and production code will reflect the new contract of using UUIDs.

1 Like

It’s tautological in that you can’t construct invalid request objects using the Java code/objects; you’d have to use one of the workarounds I mentioned (an extended object with overridden properties/constructors to allow invalid values, passing in a raw JSON string instead of using the Java abstractions, etc.).

1 Like

Type safety is not guaranteed outside a program.

Therefore, since you are trying to simulate external interactions with your program, it’s not a workaround to create types that don’t exist in your program.
Your tests will therefore validate how your program (more specifically its periphery) will handle these alien types.

If you were validating use cases, then these alien types would be something artificial - impossible to happen.

1 Like

Sure, it seems like you’re mainly bumping against my use of the word “tautology” more than anything else?

Your tests will therefore validate how your program (more specifically its periphery) will handle these alien types.

Request validation is a part of the service, not its periphery (i.e. our services are generally HTTP servers with a REST-ful API). Even if we had split this into micro services, for an e2e/system test, ensuring that poorly formed requests return expected responses would be well in scope for the test boundaries.

Your responses thus far seem to suggest using essentially clean-room implementations and not reusing any of the existing application code, which seems like a reasonable approach; we’d just need to build the culture to enforce that.

1 Like

it seems like you’re mainly bumping against my use of the word “tautology” more than anything else

Not at all, I just didn’t see what exactly you meant by “tautology” in the example.
But regardless of the name, I think I’ve got what your worry was.

Request validation is a part of the service, not its periphery

I use the term Periphery in the context of something like Clean Architecture.

Request validation and parsing are part of the Controllers, which then dispatch valid Application-specific Business Objects to the Use Cases.

My point is, since your tests are external to the Periphery (they are calling the Periphery), you can (and should) create types that are alien to your program, breaking the Liskov Substitution Principle. Alternatively, as you said, you could create objects that are not derived from the application’s DTOs, but those would break the Single Responsibility Principle, because they would have to implement things that are unrelated to the “error behavior” you want to exercise.

Example:

Moment 0:

  • You have a controller that takes an object with the field id of the type String and converts it to an Integer before sending to the Use Case objects.
  • You have a test that reflects it, creating an object like UserDTO().setId(Integer(1).toString()) to be passed to the controller after serialization.

Moment 1:

  • You change the test, passing UserDTO().setId("MY-UUID-EXAMPLE")

Your test will fail, because "MY-UUID-EXAMPLE" cannot be converted to an Integer.

Moment 2:

  • You change the controller code to convert to a UUID rather than an Integer.

The test passes.

Now, if you want to validate error handling, you can create a test with UserDTO().setId("blablabla") and assert that your controller returns a 400 or 500 error.
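
A toy stand-in for that controller behavior (purely illustrative; real request handling and status codes depend on your framework) might be:

```java
import java.util.UUID;

// Minimal sketch: parse the id field and map the outcome to an
// HTTP-like status code, as the error-handling test above would assert.
public class UserIdController {
    public static int handle(String rawId) {
        try {
            UUID.fromString(rawId);
            return 200; // well-formed id
        } catch (IllegalArgumentException e) {
            return 400; // malformed id -> client error
        }
    }
}
```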

If the types are not as generic as String, you could derive from UserDTO and change only what is necessary to make the serialization process spew "blablabla" into the id field.
For instance, let’s say the id of UserDTO is of type UserID. You can override UserDTO::getId to return a derivation of UserID, controlled by your test, which allows the "blablabla" value rather than only integers or UUIDs.
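
That derivation trick could look roughly like this (all types here are hypothetical stand-ins for the application’s actual DTOs):

```java
// Production-side types (sketched): UserID validates its value,
// and UserDTO exposes it through getId().
class UserID {
    private final String value;

    UserID(String value) {
        if (!value.matches("\\d+")) {
            throw new IllegalArgumentException("not a numeric id: " + value);
        }
        this.value = value;
    }

    public String serialize() {
        return value;
    }
}

// Test-only subclass that bypasses validation, so serialization
// spews an arbitrary bad value into the id field.
class AlienUserID extends UserID {
    private final String raw;

    AlienUserID(String raw) {
        super("0"); // satisfy the parent constructor with a legal value
        this.raw = raw;
    }

    @Override
    public String serialize() {
        return raw;
    }
}

class UserDTO {
    private final UserID id;

    UserDTO(UserID id) {
        this.id = id;
    }

    public UserID getId() {
        return id;
    }
}

// Test-only DTO overriding getId() to return the alien id.
class AlienUserDTO extends UserDTO {
    private final UserID alien;

    AlienUserDTO(UserID alien) {
        super(new UserID("0"));
        this.alien = alien;
    }

    @Override
    public UserID getId() {
        return alien;
    }
}
```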

1 Like

I’m afraid I don’t know Java well enough to give definite details, but would it be possible to have some kind of automated check as part of the build process for the e2e tests? The kind of thing I was imagining would stop the test code from including source or executable code from the code under test. I.e. some kind of linter.
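
One stdlib-only sketch of that idea: scan the e2e test sources for imports of the production DTO package (the package name com.example.api.dto is an assumption; adapt it to your layout). In practice a dependency-rule library such as ArchUnit can enforce the same constraint more robustly.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Naive "linter": flags any line in a test source file that imports
// classes from the (assumed) production DTO package.
public class NoProductionDtoLinter {

    private static final Pattern FORBIDDEN =
            Pattern.compile("^\\s*import\\s+com\\.example\\.api\\.dto\\.");

    // Returns the 1-based line numbers of offending imports, for reporting;
    // a non-empty result would fail the build.
    public static List<Integer> violations(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            if (FORBIDDEN.matcher(sourceLines.get(i)).find()) {
                hits.add(i + 1);
            }
        }
        return hits;
    }
}
```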

1 Like