An Ethics Charter in Product Design?

A discussion on Slack recently about accessibility (or the lack thereof) in many people’s companies sparked an idea for me when someone contributed:

It’s almost as if there’s a need for some kind of ethics charter in product design

This follows on from discussions in other channels about the ethics of tracing apps and so the ideas started to spiral.

If you were to create your own charter for ethical product design, what would you include in it?

Privacy - defaulting to the highest level of privacy possible (which would depend on the nature of the app), with users able to opt in to letting the app access more of their data if they want. There would be an absolute block on selling user data to anyone else.

Non-interference - that is, if the app is anything like, say, Facebook, the user gets absolute ownership of anything they put there and doesn’t get banned or blocked if what they put on the app is offensive or illegal. That said, the user would also be the only one with any responsibility for their content, so if someone puts something illegal on the app, they’re the only ones at fault. (Naturally, legal take-down notices would need to be honored, but other than that, no bans just because someone complains. It’s too easy for someone to set off a complaint mob and get someone banned over trivia.) This only applies to software that allows users to add their own content, naturally.

My basic view is that, whatever a product is, the design should make what the product does as easy and as safe (in terms of information security) as possible for all potential users, without imposing the opinions of the designer, the owner, or anyone else on those users.

That I have no idea how that could be made possible is another issue altogether.

Thanks for posting this, I’m quite curious to hear opinions, and I’m planning on presenting something along these lines to my community of practice at work at some point.

For my own views on this, I think I’d couch any thoughts in a caveat that your first duty is to comply with any relevant laws in your country (obviously if those laws prevent you from being as ethical as you wish to be, maybe lobby to change them).

Also, I may come back and change my mind later, because Discourse is a lot better at letting you do that!

I think the main threads I have in mind are:

Accessibility - (this triggered the discussion that led to this post in Slack) you should commit to ensuring, where practical, that your website serves those with physical and cognitive disabilities. In the UK, this is enshrined in the Equality Act 2010, and Gov.uk provides a really good resource on what that means for online services. Twitter could certainly learn from reading it *cough*. Their guidance suggests you should reach WCAG 2.1 AA grade for accessibility, but the basic principles are: allow for keyboard navigation and make your website accessible to screen readers; maintain sufficient contrast between text and other elements for readability; don’t use colour as the defining characteristic of an element (or allow it to be substituted using a toggle); provide text alternatives for visual and audio content (Twitter failed this one); and consider the complexity of your text for those with reading or learning difficulties. Blimey, that was a long section.
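To make the contrast point a bit more concrete, here’s a minimal sketch of the WCAG relative-luminance and contrast-ratio calculation (4.5:1 and 3:1 are the AA thresholds for normal and large text respectively); the colour values at the end are just made-up examples:

```typescript
// Minimal sketch: WCAG relative luminance and contrast ratio.
// AA requires a ratio of at least 4.5:1 for normal text, 3:1 for large text.

type RGB = { r: number; g: number; b: number }; // 0-255 channels

function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance({ r, g, b }: RGB): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(a: RGB, b: RGB): number {
  const [lighter, darker] = [relativeLuminance(a), relativeLuminance(b)].sort(
    (x, y) => y - x
  );
  return (lighter + 0.05) / (darker + 0.05);
}

// Example: mid-grey text on a white background fails the normal-text threshold.
const ratio = contrastRatio({ r: 150, g: 150, b: 150 }, { r: 255, g: 255, b: 255 });
console.log(ratio.toFixed(2), ratio >= 4.5 ? "passes AA (normal text)" : "fails AA (normal text)");
```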

Diversity/Inclusion: Services should aim to be as inclusive as possible. This is an ongoing concern, of course, for any service that relies heavily on user-generated content, as you fall into the tolerance paradox, and to avoid raging arguments I’m going to ignore that particular dumpster fire.
What I’m more interested in highlighting here is a particular example that gets overlooked because people go sparkly-eyed over the new technology paradigm: machine learning models. If you’re using ML to make decisions based on characteristics of your users, you want to make damn sure your model has a training set that’s as broad and diverse as your user base. A specific (and somewhat upsetting) example is any facial recognition algorithm built on an ML model.
If your model is developed by white male engineers (probably) and you don’t consciously account for it, there’s a high chance your training set will consist largely of white faces. If you then go and use that model against non-white faces, your results will range from embarrassingly inaccurate (a previous employer) to downright offensive (Google Photos tagging black people as gorillas).
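As a rough illustration of the kind of check that helps here (a sketch only - the field name and the tolerance are made-up assumptions, and real fairness auditing goes much deeper than counting), you can at least compare the make-up of your training set against your expected user base:

```typescript
// Minimal sketch: compare training-set representation against the expected
// user base. The "skinToneGroup" field and the 50% tolerance are illustrative
// assumptions, not a real fairness methodology.

type Sample = { skinToneGroup: string /* ...plus image data, labels, etc. */ };

function representation(samples: Sample[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const s of samples) {
    counts.set(s.skinToneGroup, (counts.get(s.skinToneGroup) ?? 0) + 1);
  }
  // Convert raw counts to proportions of the whole training set.
  const total = samples.length;
  return new Map([...counts].map(([group, n]) => [group, n / total]));
}

// Flag any group that is badly under-represented relative to the user base.
function underRepresented(
  training: Map<string, number>,
  userBase: Map<string, number>,
  tolerance = 0.5 // flag if the training share is below 50% of the user-base share
): string[] {
  return [...userBase]
    .filter(([group, share]) => (training.get(group) ?? 0) < share * tolerance)
    .map(([group]) => group);
}

// e.g. underRepresented(representation(trainingSamples), expectedUserBaseShares)
```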
Additionally, aside from AI awkwardness, there are other ways to limit discrimination. Don’t ask for gender if you don’t specifically need it to provide your service (you probably don’t, and if you do, you should clarify why). Don’t insist on a “real name” (looking at you here, Facebook) unless there is a legal requirement for it, and if you need a string identifier for a user (for salutations or other reasons), provide a (sanitised, obviously) free-text field, because your assumptions about names are wrong.
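For the names point, here’s a sketch of what “a sanitised free-text field” might look like - the length cap is an arbitrary assumption, and the important part is that it doesn’t try to validate what a “real” name looks like:

```typescript
// Minimal sketch: accept a display name as free text. Trim, strip control
// characters, and cap the length, but make no assumptions about structure
// (no first/last split, no "letters only" rule, no minimum word count).
// The 100-character cap is an arbitrary choice for the example.

function sanitiseDisplayName(raw: string): string | null {
  const cleaned = raw
    .replace(/[\u0000-\u001F\u007F]/g, "") // drop control characters
    .trim();
  if (cleaned.length === 0 || cleaned.length > 100) {
    return null; // reject empty or absurdly long input, nothing else
  }
  return cleaned;
}
```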
Again, in the UK the Equality Act 2010 provides a minimum baseline, but you might want to look at where you can push your company to do even better.

Safety/Privacy: This largely applies where users supply their own content, so I guess it’s time to put the hard hat on and wade into more contentious waters. On the privacy front, similar to the previous point, you should minimise the data you collect from users. If you don’t need it, don’t require it, and unless there’s a really good reason for it to be on your platform, don’t even ask for it. Obviously anyone based in the EU should (hopefully) be relatively familiar with GDPR by now and what it requires, and similar laws are cropping up in the US now as well.
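One very mechanical way to back up “if you don’t need it, don’t ask for it” is to be explicit about which fields you’re allowed to persist and drop everything else. A sketch, with made-up field names:

```typescript
// Minimal sketch of data minimisation at the persistence boundary: only an
// explicit allow-list of fields ever reaches storage, so a form or client
// sending extra data doesn't quietly grow what you hold about a user.
// Field names are made up for the example.

const ALLOWED_PROFILE_FIELDS = ["displayName", "email"] as const;
type AllowedField = (typeof ALLOWED_PROFILE_FIELDS)[number];

function minimiseProfile(
  input: Record<string, unknown>
): Partial<Record<AllowedField, unknown>> {
  const out: Partial<Record<AllowedField, unknown>> = {};
  for (const field of ALLOWED_PROFILE_FIELDS) {
    if (field in input) {
      out[field] = input[field];
    }
  }
  return out; // anything not on the allow-list (gender, DOB, ...) is dropped
}
```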
So that’s how you make your users feel that their personal data is secure on your platform, but how do you deal with personal safety? Well, it ties into diversity and inclusion, because one of the main ways you can fail at making your service inclusive is by creating an atmosphere where certain groups feel less safe and less able to participate. If you accept UGC, you should consider how you moderate that content to ensure an acceptable level of safety for all of your users. Many major services like Twitter, Facebook, Discord, and others have fairly comprehensive terms of service that detail the behaviour they consider acceptable on their platforms, but struggle to enforce it consistently. It’s not always possible to solve these issues through product design, but you can at least try not to engineer features that make it harder to stop bad behaviour (that’s the difficult-to-moderate Voice Tweet feature again - seriously, Twitter).

Accountability: If you want to claim you’ve got an ethics charter, make sure there is some way for your stakeholders (internal as well as external - your users are a big part of this) to know where you’ve made improvements and what you know you need to work on, and give them a place where they can express what still needs to be improved. There’s no point doing any of this if it’s just to pat yourselves on the back and tell yourself you’ve done a good job.
Obviously, any accountability mechanism needs to be accessible, inclusive, and private/safe as well! Ideally, your product department should have an accessibility advocate, privacy advocate and diversity advocate (where these are applicable), who should coach teams on these principles much like QAs (or Quality coaches, to borrow a term used by Alan Page :slight_smile: ) should coach teams on improving quality in their product.

I guess that’s more of a braindump on ethical principles I’d like to see covered in some kind of ethics charter for product design, but I still don’t really know how to translate it into an actual, realistically achievable charter :smiley:


I recently became aware of Doteveryone’s Consequence Scanning - a tool not unlike RiskStorming but focussed on ethical product design. In addition to topperfalkon’s points above, they suggest thinking about areas like wellbeing/relationships, what would happen if everyone in the world used the product, and the impact on personal/professional life as ethical considerations. Take the Ring doorbell, for example - if everyone had one, it could have a positive effect on the community in terms of crime. Providing that data to the police could increase the efficiency of identifying, catching, and prosecuting criminals, taking time and financial burdens off the police and legal services. On the other hand, how could that data be abused? What potential negative consequences could there be to being able to track someone’s movements through an amateur surveillance network? What impact could the idea of always being on camera the moment you step out of the house have on someone with extreme anxiety, for example?

These may not ultimately be concerns that you act upon, but in the (admittedly very few) times I’ve used it, I’ve found that it brings some very different and interesting conversations to the table.


I saw this on another thread and thought it fit well here too


On this point, I’d originally meant to include a bit on psychological safety and then forgot, but then someone on Twitter reminded me about Facebook Memories and now I’ve remembered, but only partially. So this is probably going to be word salad I need to fix up later, but I don’t want to forget again (and I want it to stop rattling around my brain for a bit).

When you’re making a feature, you should try to consider how your users might react to it. Facebook Memories is a very specific example with a fairly clearly reproducible outcome, but this also applies fairly strongly (for slightly different reasons) to things like recommendation algorithms. The problem with Facebook is that a lot of people use it for… well, the intended purpose, really. They connect with friends, relatives, and peers and use it to keep them apprised of how their life is going. This usually means very key life moments are recorded in Facebook post form for posterity. However, not all life events are created equal, and some are extremely painful or uncomfortable. So the problem with Facebook Memories as a feature is that it intentionally reminds users of posts it thinks are important on their anniversary, without really knowing whether it was a good or bad memory. It might remind someone of the time they entered an abusive relationship, had a bad breakup, lived under a different name (actually, special mention for Facebook’s “real name” policy here - that’s a howler of a bad decision on Facebook’s part), etc.
Basically, if you’re building something that intentionally reminds users of bad things that happened to them, it’s not great for their wellbeing, and it probably won’t make them appreciate your service.
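One hypothetical mitigation (a sketch only - this is not how Facebook actually implements anything, and the fields and mute types are made up): give users mute controls for people, keywords, and date ranges, and run every candidate “memory” through them before resurfacing it.

```typescript
// Minimal sketch: only resurface a post as a "memory" if it doesn't match
// any of the user's mutes. The mute types (people, keywords, date ranges)
// are an illustrative design, not a description of any real product.

type Post = { authorId: string; text: string; postedAt: Date };

type Mutes = {
  people: Set<string>; // never resurface posts from these accounts
  keywords: string[]; // never resurface posts containing these words
  dateRanges: { from: Date; to: Date }[]; // e.g. a painful period of someone's life
};

function okToResurface(post: Post, mutes: Mutes): boolean {
  if (mutes.people.has(post.authorId)) return false;
  const text = post.text.toLowerCase();
  if (mutes.keywords.some((k) => text.includes(k.toLowerCase()))) return false;
  const t = post.postedAt.getTime();
  if (mutes.dateRanges.some((r) => t >= r.from.getTime() && t <= r.to.getTime())) {
    return false;
  }
  return true;
}
```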
Jumping briefly back to take aim at recommendation algorithms: these are generally intended to aid site engagement by serving people more of the content they’ve already demonstrated an interest in, but they also help propagate negative content (such as racist or generally inflammatory content), because people can create click-baity meme content, associate it with far more negative content, and lure people into an ideological pit with the assistance of “recommended content”. This particularly affected YouTube, but it also affects services like Twitter and Facebook, creating the infamous filter bubbles far more easily than if users were forced to curate content by hand.
