Does anyone test mobile SDKs? Not mobile apps but SDKs

Just curious. If yes, how do you go about doing the testing? Or if you were to test (and you’ve never done before), what’s your plan of attack?

Then, something to think about: automating said testing.

Who is the customer for this SDK?

As with most SDKs, a business customer/partner (company), or some (lone or not) developer. Not typically a retail consumer.

Good morning David, that was a very sideways stab to get more information about your context.

Only once published an SDK myself, on desktop. Most of the work was in documenting to the correct level: too much creates more keyboard work, but lets you understand the optimal customer journey; the less you document, the more focused that customer journey becomes. That focused, single customer journey through the API is the only part you need to write test cases for. I have worked on open source projects too, and there again I would read the interface code, create basic smoke tests, but then go check that the code comments and documentation are accurate. The wordier the docs are, the more likely there are bug hunts waiting to be mounted in that piece of code.

My only takeaway was this: use the same SDK in your own product. In fact, use the same SDK in your product in the most audacious fashion and in as many ways as possible; then you don’t have to do any testing.

In my personal experience with a desktop SDK, I wrote an appendix on how to test your application using some of the tools we provide, but mostly added pointers on how customers will need to create their own test jigs. That tiny section helped me think about my in-house testing discipline more clearly too.

Thanks for sharing your experience Conrad. Good info.

The follow-up to the discussion, if you have any additional insights, is how folks in the industry might be using test automation to test the SDK from an end-to-end perspective (not just the unit tests).

Doing some research, it seems this is a niche area, and there’s no industry best practice, nor has anyone in the industry shared their techniques for automated testing of SDKs. I have some ideas and shared them in a different post that hasn’t garnered any responses thus far.

If my boss was saying “test this SDK”, I would be like, “can the devs generate Java or Python bindings as part of the build process?”
Then grab these and run basic checks against them, script up a smoke test, and use that to learn how to deploy the system - deployment is a large piece of the testing pie. I might have to write a dummy application that builds as part of the test job, for instance. Set the smoke test to run against the head build on a nightly trigger, and then use that as a basis. By doing this, I will learn about the environment, and can then slowly explore for more specific test cases, while still having a known-good smoke test that tells me, at the very minimum, whether the SDK is even callable for a customer on their device or whatever.
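
For illustration, here is a minimal sketch of what that nightly smoke test might look like against generated Python bindings; the vendor_sdk module, its Client class, and the ping() call are all invented names standing in for whatever the build actually produces.

```python
# Hypothetical smoke test against generated Python bindings.
# "vendor_sdk", Client and ping() are made-up names; substitute whatever
# artefacts the SDK build actually emits.
import pytest

vendor_sdk = pytest.importorskip("vendor_sdk")  # fail fast if the binding didn't build


def test_sdk_is_importable_and_reports_a_version():
    # Cheapest possible check: the binding loads and identifies itself.
    assert vendor_sdk.__version__


def test_sdk_initialises_and_shuts_down_cleanly():
    # Known-good happy path: init with defaults, make one trivial call, tear down.
    client = vendor_sdk.Client()      # assumed entry point
    try:
        assert client.ping() is True  # assumed no-op call proving the SDK is callable
    finally:
        client.close()
```

Run against the head build on a nightly trigger, even something this small tells you whether the SDK can still be loaded and called at all.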

Especially without a UI component, how do you see testing an SDK as different from testing an API?

Indeed, it is like testing an API, except SDKs have to be loaded/embedded before they can be tested or called, and may be more hardware-centric than APIs. APIs you can generally call independently of anything else. So testing SDKs is a bit more involved.

Sure, though I think what @conrad.braam was referring to with testing the language bindings makes sense and parallels my experience with testing APIs/SDKs, both web and non-web, i.e. the boundary between the end-user and the product for a web API is at the HTTP request layer, while for a non-web API/SDK, it’d be at the language binding/call. This makes sense as an SDK is essentially an implementation built on an API, so you want to test that implementation by exercising it.
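
A toy way to picture that boundary difference (both the endpoint URL and the payments_sdk names below are invented purely for contrast):

```python
# Web API: the test boundary is the HTTP request itself.
import requests

resp = requests.post("https://api.example.test/v1/charge", json={"amount": 100})
assert resp.status_code == 200

# Non-web SDK: the boundary is the language binding call.
# "payments_sdk" and charge() are hypothetical stand-ins for the real binding.
import payments_sdk

result = payments_sdk.charge(amount=100)
assert result.succeeded
```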

I’m not sure I understand your hardware-centric bit, unless you’re trying to point out that mocking the underlying system can be more challenging? i.e. for a web service with dependent services, you might use containerized WireMock or whatnot, but for hardware, you may or may not have mockable hardware? That still feels similar to API testing in that there can be challenges mocking things at certain levels, and you have to get creative to exercise certain paths (and/or invest the resources to build a good mock that can be used).


With respect to the hardware aspects, that’s things like the accelerometer, gyroscope, compass, Bluetooth BLE beacon detection, motion detection, and GPS on a mobile device. While one can attempt to mock those types of features based on observed data, or use simulation algorithms, you can’t quite mock it all well; you don’t know how the device will behave at times. You can also attempt to do “record & replay” with some variation on the hardware data being mocked, but that too doesn’t cover what real-world testing does.
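
To make the record-and-replay idea a bit more concrete, here is a rough sketch of replaying captured accelerometer samples with some variation injected. The JSON recording format and the simulator's feed_accelerometer() hook are assumptions; real emulators and simulators expose their own injection APIs.

```python
# Rough sketch of "record & replay with variation" for mocked sensor data.
# The recording format and feed_accelerometer() are assumed, not a real API.
import json
import random


def replay_with_jitter(recording_path, simulator, jitter=0.02):
    """Replay recorded accelerometer samples, perturbing each axis slightly
    so repeated runs don't exercise exactly the same values."""
    with open(recording_path) as f:
        samples = json.load(f)  # e.g. [{"x": 0.01, "y": 9.79, "z": 0.12}, ...]

    for sample in samples:
        noisy = {axis: value + random.uniform(-jitter, jitter)
                 for axis, value in sample.items()}
        simulator.feed_accelerometer(**noisy)
```

Even with the variation, this only ever replays behaviour you have already seen, which is exactly the limitation being described above.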

SDKs are more trivial to test when they’re more like most modern mobile apps that focus on web communications or just GPS functionality, which is limited in hardware scope. Then it is like what you stated: an implementation built against an (often web-based) API.


I wanted to be a bit more specific on the kind of SDK testing I am recalling here.

I was working at an industrial control company that made a product supporting many kinds of customizations and plugins. My plugin area came out of writing an interface implementation that could acquire data from any physical device. A cookie-cutter type job (we had a cookie cutter), and it was possible to acquire data over Ethernet, serial, radio, CANbus, Profibus and many more, all using just one API. Which we published as an SDK internally, and then later on externally.

At the point we decided to publish externally, there was even an included tool people could use to test their “plugin” or driver. The test tool mocked our server end, making it possible to remove even knowledge of the product from the programmer’s job if you wanted to use our SDK. Anyone could download the SDK, find a programmer who knew nothing about the product’s actual purpose, and write a driver. And if all they did was use the beautiful menu-driven tool, the programmer would be almost 90% done coding without even installing our server product.

So we have this picture:
[Server] => [loads a driver] => [Driver talks to hardware over any transport] => [Hardware]
We have 3 components, and you should be able to swap any one of them out:
[Test-Tool] => [loads a driver] => [Driver talks to hardware over any transport] => [Hardware]

But the most expensive part in this entire picture, and the place a developer will spend most of their time, is the [Hardware] and the [transport] as a constraint.
When you look at my primitive drawing above, you can see at least two places where there is an API: at the intersection of the Server and the Driver, and then usually another API that I did not care about, which translated into something really external, like a TCP/IP library or a DMA transfer to a shared memory buffer or hardware port driver. Getting real hardware so that you can characterize its behavior becomes hard to do, and in at least one scenario I simulated hardware too. It’s at this point that I wrote a small chapter on how to design a testing plan. This was long before I even knew what a test plan was.
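
As a rough sketch of that "swap any component out" shape (all class and method names here are invented to illustrate the idea, not the actual industrial-control SDK being described):

```python
# Sketch of the swappable-components idea from the diagram above.
# Driver, SimulatedTransport, read_value() etc. are all invented names.
class Driver:
    """Sits between a host (real server or test tool) and a transport to hardware."""

    def __init__(self, transport):
        self.transport = transport

    def read_value(self, tag):
        self.transport.send(f"READ {tag}")
        return self.transport.receive()


class SimulatedTransport:
    """Stands in for Ethernet/serial/CAN/etc. when real hardware isn't available."""

    def send(self, message):
        self.last_sent = message

    def receive(self):
        return 42.0  # canned reading; a richer simulator would model the device


# The driver doesn't care whether the caller is the production server or a test tool,
# nor whether the transport ends at real hardware or a simulation.
driver = Driver(SimulatedTransport())
assert driver.read_value("boiler_temp") == 42.0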

So back on topic: how you test an API depends on the expectations. My expectations here were mission-critical data. The critical test problem for us was recovering lost communication sessions as fast as possible, as reliably as possible, and all without memory creep. For most web APIs today, those problems get solved in the technology stack anyway, so your focus needs to move to wherever the highest code churn is in your offering.
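
A hedged sketch of how one might soak-test that "recover sessions without memory creep" expectation; make_session, drop_connection and reconnect() are placeholder hooks rather than a real API, and tracemalloc only sees Python-level allocations, so for a native SDK you would watch the process RSS instead.

```python
# Crude soak test for session recovery without memory creep.
# make_session / drop_connection / reconnect() are placeholders, not a real API.
import tracemalloc


def soak_reconnect(make_session, drop_connection, cycles=1000, allowed_growth=1_000_000):
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()

    session = make_session()
    for _ in range(cycles):
        drop_connection(session)      # e.g. kill the socket or toggle the transport
        session.reconnect()           # behaviour under test: fast, reliable recovery
        assert session.is_connected()

    current, _ = tracemalloc.get_traced_memory()
    assert current - baseline < allowed_growth, "possible memory creep across reconnects"
```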

Sure, I used to test a desktop application that did hardware diagnostics for PCs and mobile devices, so there was a lot of variability, and we had a large in-house lab with a lot of physical hardware. In hindsight, I think the wide range of hardware and the lab were likely mostly a waste of resources.

Not sure what the industry/customer base is here, but I’d advise against getting too hung up on the edge/corner cases - the amount of time to handle the one-off case of a customer running a dated version of Android on a phone with an unusual hardware implementation is sort of like trying to verify every browser version with varying combinations of addons. You’ve got to pick and choose where you get the most bang for the buck with your testing efforts.

Those unexpected behaviors that “you don’t know how the device will behave at times” are likely needles in a haystack, and finding/fixing them is pretty costly with limited returns. Convincing the PM that some edge-cases are going to be customer reported and will need to be triaged seems like a better expenditure of effort.


Came across a post about this testing topic today, though not necessarily expanding that much from what we’ve discussed here. Frankly, it would be nice if we got more posts like this one from the QA/testing community/industry: