Thanks for posting this, I’m quite curious to hear opinions, and I’m planning on presenting something along these lines to my community of practice at work at some point.
For my own views on this, I think I'd couch any thoughts in the caveat that your first duty is to comply with any relevant laws in your country (obviously, if those laws prevent you from being as ethical as you wish to be, maybe lobby to change them).
Also, I may come back and change my mind later, because Discourse is a lot better at letting you do that!
I think the main threads I have in mind are:
Accessibility: (this triggered the discussion that led to this post in Slack) you should commit, where practical, to ensuring that your website serves those with physical and cognitive disabilities. In the UK, this is enshrined in the Equality Act 2010, and Gov.uk provides a really good resource on what that means for online services. Twitter could certainly learn from reading it *cough*. Their guidance suggests that you should reach WCAG 2.1 AA grade for accessibility, but the basic principles are:

- allow for keyboard navigation and make your website accessible to screen readers;
- maintain sufficient contrast between text and other elements for readability;
- don't use colour as the sole defining characteristic of an element (or at least allow it to be substituted via a toggle);
- provide text alternatives for visual and audio content (Twitter failed this one);
- consider the complexity of your text for those with reading or learning difficulties.

Blimey, that was a long section.
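On the contrast point: the WCAG 2.1 contrast ratio is a precisely defined formula, so it's easy to check programmatically rather than eyeballing it. A minimal sketch in Python (function names are my own; the formulas are straight from the WCAG 2.1 definitions of relative luminance and contrast ratio):

```python
def relative_luminance(rgb):
    """Relative luminance per WCAG 2.1, for sRGB channel values 0-255."""
    def linearise(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = rgb
    return 0.2126 * linearise(r) + 0.7152 * linearise(g) + 0.0722 * linearise(b)

def contrast_ratio(fg, bg):
    """Contrast ratio between two colours; WCAG AA wants >= 4.5:1 for body text."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# Light grey on white fails the AA threshold for body text
print(contrast_ratio((170, 170, 170), (255, 255, 255)) >= 4.5)  # False
```

Something like this is handy as a design-system lint rule, so a failing colour pair gets caught in CI rather than in a user complaint.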
Diversity/Inclusion: Services should aim to be as inclusive as possible. This is an ongoing concern, of course, for any service that relies heavily on user-generated content, as you run into the tolerance paradox, and to avoid raging arguments I'm going to ignore that particular dumpster fire.
What I'm more interested in highlighting here is a particular example that gets overlooked because people get sparkly-eyed over the new technology paradigm: machine learning models. If you're using ML to make decisions based on characteristics of users, you want to make damn sure that your model has a training set that's as broad and diverse as your user base. A specific (and somewhat upsetting) example is any facial recognition system built on an ML model.
If your model is developed by white male engineers (probably) and you don't consciously account for it, there's a high chance your training set will consist largely of white faces. If you then use that model on non-white faces, your results will range from embarrassingly inaccurate (a previous employer) to downright offensive (Google Photos tagging black people as gorillas).
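A cheap first step towards catching this is simply comparing group proportions in your training set against your actual user population. A rough sketch (the function and `tolerance` threshold are hypothetical, and this is a sanity check, not a substitute for a proper fairness audit):

```python
from collections import Counter

def representation_gaps(train_labels, population_labels, tolerance=0.05):
    """Flag groups whose share of the training set falls more than
    `tolerance` below their share of the user population.

    Returns {group: (training_share, population_share)} for each
    under-represented group.
    """
    train_total = len(train_labels)
    pop_total = len(population_labels)
    train_counts = Counter(train_labels)
    gaps = {}
    for group, pop_count in Counter(population_labels).items():
        expected = pop_count / pop_total
        actual = train_counts.get(group, 0) / train_total
        if expected - actual > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# Training set is 90% group "a", but the user base is only 60% "a":
train = ["a"] * 90 + ["b"] * 10
population = ["a"] * 60 + ["b"] * 40
print(representation_gaps(train, population))  # {'b': (0.1, 0.4)}
```

Even this crude check would have flagged a face dataset that's overwhelmingly white before the model ever shipped.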
Additionally, aside from AI awkwardness, there are other ways to limit discrimination. Don't ask for gender if you don't specifically need it to provide your service (you probably don't, and if you do, you should clarify why). Don't insist on a "real name" (looking at you here, Facebook) unless there is a legal requirement for it, and if you need a string identifier for a user (for salutations or other reasons), provide a (sanitised, obviously) free text field, because your assumptions about names are wrong.
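To make the "free text field" point concrete: the sanitisation should strip genuinely dangerous input without encoding assumptions about what a name looks like. A minimal sketch (function name and length cap are my own; note it deliberately does *not* enforce letters-only, a Latin alphabet, or a first-name/last-name split):

```python
import unicodedata

def sanitise_display_name(raw, max_len=100):
    """Accept any printable Unicode name; strip only control characters
    (category Cc) and surrounding whitespace. Returns None if nothing
    usable remains."""
    cleaned = "".join(
        ch for ch in raw
        if unicodedata.category(ch) != "Cc"  # drop control chars only
    ).strip()
    return cleaned[:max_len] or None

print(sanitise_display_name("  Zoë\u0000 Wang  "))  # Zoë Wang
print(sanitise_display_name("X Æ A-12"))            # X Æ A-12
```

The design choice here is permissiveness by default: every extra validation rule is a bet that you've enumerated all real names, and that bet always loses eventually.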
Again, in the UK the Equality Act 2010 provides a minimum baseline, but you might want to look at where you can push your company to do even better.
Safety/Privacy: This largely applies where users supply their own content, so I guess it's time to put the hard hat on and wade into more contentious waters. On the privacy front, similar to the previous point, you should minimise the data you collect from users. If you don't need it, don't require it, and unless there's a really good reason for it to be on your platform, don't even ask for it. Obviously anyone based in the EU should (hopefully) be relatively familiar with GDPR by now and what it requires, and it seems like similar laws are cropping up in the US now as well.
So that's how you make your users feel that their personal data is secure on your platform, but how do you deal with personal safety? Well, it ties into diversity and inclusion, because one of the main ways you can fail at making your service inclusive is by creating an atmosphere where certain groups feel less safe and less able to participate. If you accept UGC, you should consider how you moderate that content to ensure an acceptable level of safety for all of your users. Many major services like Twitter, Facebook, Discord, and others have fairly comprehensive terms of service that detail the behaviour they consider acceptable on their platforms, but struggle when it comes to enforcing them consistently. It's not always possible to solve these issues through product design, but you can at least try not to engineer features that make it harder to stop bad behaviour (that's the difficult-to-moderate Voice Tweet feature again; seriously, Twitter).
Accountability: If you want to claim you've got an ethics charter, make sure there is some way your stakeholders (internal as well as external; your users are a big part of this) can see where you've made improvements and what you know you need to work on, and give them a place where they can express what still needs to be improved. There's no point doing any of this if it's just to pat yourselves on the back and tell yourself you've done a good job.
Obviously, any accountability mechanism needs to be accessible, inclusive, and private/safe as well! Ideally, your product department should have an accessibility advocate, a privacy advocate, and a diversity advocate (where these are applicable), who should coach teams on these principles much like QAs (or Quality Coaches, to borrow a term used by Alan Page) should coach teams on improving quality in their product.
I guess that's more of a braindump of ethical principles I'd like to see covered in some kind of ethics charter for product design, but I still don't really know how to translate it into an actual, realistically achievable charter.