Ethics in Machine Learning

Having taken part in the Masterclass last night I have been thinking a bit about the potential ethical issues in AI and Machine Learning.

How do you deal with ethics in machine learning?
Do you speak up as a tester early on? What if you see the AI evolve into something undesired?
Have you ever found yourself needing to raise ethical concerns in a machine learning environment?


Ethics are generally a very human thing and in many cases very individual.

It’s an interesting question: if each AI developed its own individual ethics, would it knowingly do unethical things?

I guess an example could be a medical AI tasked with saving lives: if it started taking lives in order to save lives, where would the line be drawn?

Should we really expect AIs to have any ethics at all?


I too have been having that very discussion with my partner, as it concerns me who is creating the training data or training set for correlation to take place, and what measure of success is used as feedback to modify the hidden layers. I am not so worried about the machine; I am more concerned about the puppet master (engineer or developer). Who is the puppet master, are they regulated, and who would have oversight of applying that regulation?

One concern about usage is within recruitment: agencies are currently using bots to screen all applicants’ correspondence, and as they move to AI, the software learns from whatever data set it is given. A recruiter may get a placement fee of 10% of the yearly salary (generalisation follows). If a pink-haired, 28-34-year-old rabbit is the type that traditionally gets the higher salary in that field, the measure of success could well become the recruiter placing a pink-haired 28-year-old rabbit in that profession. That measure of success would then influence the data set the machine is learning from.

My question would be: what happens to all the orange-haired, 28-34-year-old zebras who apply for the same position with the same qualifications? I am guessing their application goes into the black hole, never to be seen again, as their data would not correlate with the measure of success.

Second question: How would this affect diversity within the workplace? Dramatically I feel :slightly_frowning_face:
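That feedback loop can be sketched in a few lines of Python. Everything here is invented for illustration (the attributes, the "placement history", the helper names); the point is just that when "success" is measured on biased historical outcomes, the learned screening rule quietly encodes the bias:

```python
# Toy sketch of the feedback loop described above: the "measure of success"
# (past placements) bakes a spurious attribute into the screening rule.
# All names and data are hypothetical.
from collections import Counter

# Historical "successful placements" - skewed toward one hair colour.
placements = [
    {"hair": "pink", "qualified": True},
    {"hair": "pink", "qualified": True},
    {"hair": "pink", "qualified": False},  # placed despite a weaker CV
    {"hair": "orange", "qualified": True},
]

def train_screen(history):
    """'Learn' which attribute values correlate with placement success."""
    hair_counts = Counter(p["hair"] for p in history)
    favoured = hair_counts.most_common(1)[0][0]
    def screen(candidate):
        # The learned rule silently weights hair colour, not just qualifications.
        return candidate["qualified"] and candidate["hair"] == favoured
    return screen

screen = train_screen(placements)

print(screen({"hair": "pink", "qualified": True}))    # True  - passes the bot
print(screen({"hair": "orange", "qualified": True}))  # False - same CV, rejected
```

The orange-haired zebra with identical qualifications never reaches a human, and every placement made this way feeds the skew back into the next training set.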


Great discussion points there! I’ve actually been at the mercy of a recruitment bot: my CV was rejected and they never even saw it.

The test manager actually asked me to apply two weeks later; I said I had applied, but had accepted a different job as I didn’t hear anything. The reason I was rejected was that I didn’t have experience with one of the tech stacks they use, but the role was mostly about soft skills and leadership. They said they would need to review their bot, as good people are clearly being missed.

I think this massively affects diversity and you end up only hiring people like you.

I see the appeal of using bots in recruitment, as it takes so long to hire someone, but I feel it’s an area where not putting in human hours can backfire.


Given the number of times I had CVs rejected because of a mismatch with a particular requirement by traditional HR teams blindly applying criteria from managers who don’t really know what they’re looking for, the only advantage of using AI in this function is that it sifts to the same criteria more quickly and economically!

If we’re going to look at the ethics of using AI, we should be certain that our existing, human systems are equally ethically based for a starter.


Completely agree with you, Robert.

This question has been exercising the minds of science fiction writers for nearly seventy years!

Most people will point to Isaac Asimov and his Three Laws of Robotics. (Asimov himself claimed that there were enough loopholes in the Three Laws to keep him profitably selling stories for the next forty years…) The Will Smith film based on Asimov’s robot stories, I, Robot, posed a perfect example of the sort of conflict you have in mind, of something unethical happening for ethical reasons. The Will Smith character had a major hangup about robots because he had been involved in an accident where two cars ended up sinking in a river. A robot went into the river to save the humans in the cars, based on the First Law. But the robot weighed up the likelihood of saving both humans and decided that was not possible. Instead, it prioritised saving one human based on best chance of survival and utility of the saved human to society. It saved the Will Smith character, a policeman, before attempting to save the other human, a child, despite Smith ordering the robot to save the child (Second Law trumped by First Law).

Interestingly, the film developed robot motivations to the point where they were prepared to restrict human freedoms because humans do things to themselves that are harmful. This reflected the work of another classic science fiction writer, Jack Williamson, whose “Humanoids” had only one directive: “To serve and obey, and to keep men from harm”. Taking this to its logical conclusion, the “Humanoids” ended up keeping the entire population under chemical lockdown for their own good.

You ask, “Should we really expect AIs to have any ethics at all?”. Given that for thirty or forty years we have been told that the role of business is wealth creation, and that ethics has no - or at best a secondary - role to play in maximising shareholder value, the answer is sadly clear.


This eloquently expands on my question and fears around AI not having ethics because of the sort of people that are asking them to be built in the first place and for what reason.

I love all your examples of science fiction and I did get goosebumps reading them again having watched the films and read some of these stories.

I wonder how to even test for ethics.
Training data would have to be so well rounded, but the people choosing the training data will also be influenced by their own biases, opinions and views of the world. I have a feeling I may not sleep well for a while thinking about these things. Maybe I need to join the club of the people in Silicon Valley who have built houses in remote areas and stocked them full of food and weapons for the robo-apocalypse.


Kim said “I wonder how to even test for ethics.”

Fortunately, we aren’t in Asimov’s scenario, where his robots were multi-purpose machines that were expected to learn how to deal with new situations almost without limit. The systems we are likely to be testing, in the near term at least, will at least be designed to do a specific task, so that will help define the range of ethics that designers will have to build into systems and that we will have to test for.

What it will require is for designers, analysts and testers to look to a different set of expert bodies as sources to build an understanding of the ethical issues that a system might have to incorporate. So an HR system would need input on equality issues as well as employment law; for these, I would first of all look (in the UK, at least) to the Equality & Human Rights Commission (EHRC) and/or to some of the trades unions, especially those active in the public sector, which have addressed such things in trying to keep in line employers who are supposed to take ethical issues into account. For finance and accountancy issues, I’d be looking to take advice from banks and other investment bodies that have identified an ethical dimension to their work, such as (again in the UK) the Co-operative Bank.

I think this is an evolving area and a possible whole new field of expertise which will combine traditional IT skills and a range of soft skills that the IT profession hasn’t necessarily been noted for in the past. Otherwise, we could find ourselves in an “I for one welcome our new robot overlords” situation before we know it!

And then I saw this article online about this very issue:

The bit on AI porn is clearly unethical, and it made me think: what’s stopping people from using this to alter CCTV or dashcam footage used to identify criminals?

I think the view of AI as a complete autonomous persona (think HAL in 2001: A Space Odyssey, Asimov’s robots, etc.) can confuse the discussion. It makes people talk about AI as some separate entity with free will that can be blamed, when it is simply (well, not simply) a complex algorithm that exhibits behavior based on input from its puppet master (as Kim puts it :-)).

In very simple terms (as far as I understand it), in reinforcement learning the system “learns” by using the “reward” returned when an “action” is performed in an environment as information to calculate the optimum action(s) to take to get the most reward overall. This reward is given by the environment, which is controlled/decided by us, the humans.
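The loop above can be sketched as a tiny tabular learner. This is a minimal illustration, not any particular library’s API: the actions, the reward function and the constants are all invented, and the key point is that the humans writing `reward()` decide what the agent ends up doing.

```python
# Minimal sketch of the reward loop described above.
# The reward function is entirely human-defined; the agent simply
# drifts toward whatever that signal encourages.
import random

random.seed(0)  # make the toy run repeatable

ACTIONS = ["sit", "bark"]

def reward(action):
    # The human "puppet master" decides what gets rewarded.
    return 1.0 if action == "sit" else 0.0

q = {a: 0.0 for a in ACTIONS}  # running value estimate per action
alpha = 0.5                    # learning rate

for step in range(100):
    # epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # nudge the estimate toward the observed reward
    q[action] += alpha * (reward(action) - q[action])

print(max(q, key=q.get))  # the agent ends up preferring "sit"
```

Swap the `1.0` and `0.0` in `reward()` and the same code happily learns to bark instead; nothing in the algorithm itself knows or cares which behaviour is desirable.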

When Watson, my dog, was a puppy I rewarded him with cheese if he sat when I made a specific hand movement. Years on he no longer needs the cheese, as he has been programmed to behave that way. Now, if I were unethical and trained him to fetch handbags for me (that I didn’t own), could you blame him for stealing? Would you say he was unethical? Should he be blamed? No; I, as the programmer, should be held accountable.

As with the “AI porn”, what people use software for doesn’t make AI itself unethical, does it? Most browsers come with in-private browsing, and we know the main use of that. Does that make it an unethical feature?

Excellent way of describing AI.

Question > Can a loaded gun have ethics? Or is it the hand that holds it?

Ethics is something you cannot apply to a machine, but definitely to the human operator. The machine’s behaviour is trained by the operator and can be affected by subsequent experiences.

You probably know this guy but I just learnt about him while researching more about this discussion. If you don’t know about him then it’s an interesting read

Jack and Kim’s last posts are getting towards the key issue. They describe “programmed” responses; but what we should really be thinking about are complex situations which go beyond simple IF > THEN (action) decisions.

It will always be the programmer who is responsible for how ethical we make our systems. I think the debate has to be about the failsafes that we build into our systems to enable AIs to spot unethical - or ethically ambiguous - situations and either apply ethical subroutines to decide the correct course of action or to stop and flag the situation to a human (who may or may not take an ethical decision, of course :thinking:). And this does mean that as systems get more complex, the more they will have to be designed to spot ambiguous situations or ones where there are unforeseen circumstances.

To bring the discussion back to testing and testers, there’s the role for testers in the requirements gathering and system definition stage - trying to foresee the unforeseeable and design ethical safeguards into system behaviours and test them once code is written. And that’s going to need a different sort of skills set to the ones that are currently fashionable.


Watch “Ex Machina”. As I understand it, the film is all about performing a Turing test.

You might be interested in reading my review of Ex Machina, though I concentrated more on the artistic and societal issues in the film rather than specific matters of AIs:

Well written, @robertday.

I guess my point was that I think the term “ethics”, when used with machine learning, causes confusion. It humanizes the AI when it is simply a complex algorithm. All decisions an AI system makes are still programmed, still IF > THEN, no matter how complex the input, algorithm or action. The problem is that we cannot explain why our algorithm made that decision. Which likely makes us uncomfortable, especially as testers, and probably reinforces this anthropomorphization of the algorithm.

I think what Robert is leading to is the role of ethics in testing as a whole. Or maybe I should say software development rather than focus on testing.
If you drop the term AI from the statements made, everything still rings true for software development. And I agree it is a skill set not fully appreciated or talked about…

Just to give another example: I have a system that takes stocks and shares data as input, does some processing, and spits out the stocks and shares I should buy or sell today to make a profit. Let’s say that in my role as a tester I notice that the system tends to favor stocks in arms development, tobacco and alcohol. Is this unethical?

Maybe my black box uses reinforcement learning or some other highly complex algorithm to decide on its output, that isn’t really the issue. As Robert says we just need to “design ethical safeguards into system behaviours and test them once code is written”.
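One such safeguard can be tested without opening the black box at all: treat the model’s output as input to an ordinary check against an exclusion list agreed with the business. A hedged sketch (the ticker symbols, sector names and exclusion list are all invented for this example):

```python
# A sketch of one "ethical safeguard" test for the hypothetical
# stock-picking black box described above: whatever the model outputs,
# check its recommendations against an agreed exclusion list.

EXCLUDED_SECTORS = {"arms", "tobacco"}

def check_recommendations(recommendations, sector_lookup):
    """Return any recommended stocks that fall in an excluded sector."""
    return [s for s in recommendations
            if sector_lookup.get(s) in EXCLUDED_SECTORS]

# Hypothetical model output and sector reference data.
model_output = ["ACME", "BIGTOB", "GREENCO"]
sectors = {"ACME": "arms", "BIGTOB": "tobacco", "GREENCO": "renewables"}

violations = check_recommendations(model_output, sectors)
print(violations)  # ['ACME', 'BIGTOB'] - flag these to a human
```

The test doesn’t care whether the recommender uses reinforcement learning or anything else; it just asserts a boundary on behaviour and escalates violations to a human.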


Ethics in ML, along with cultural and racial bias, is a big area of ongoing research. The current problem with AI is that it is a black box: we have little idea how it learns and what relationships it draws from the data it is given. There is a lot of research going on in this area and the progress seems promising. For example, a team of researchers recently taught an AI to justify its reasoning and point to the evidence on which it based its decisions.

Also, the people training the AI model need to ensure the dataset is diverse, as whatever decisions the model makes are based on that data. You may have heard of the Google Photos app classifying African-American people as gorillas; this is what happens when you don’t have diverse training datasets. And of course we have all the big IT companies selling our data as we speak, which everyone knows is unethical, but unfortunately we are not able to do anything about it (or you could try to sell your Amazon Alexa and Google Home like me, to make your conscience feel better :-))

There are groups and organizations that have realized this problem and are trying to make AI safe for people. The Future of Life Institute has been funding various research projects to make AI safer and more inclusive, backed by billionaire inventor Elon Musk, who has donated over 10 million dollars to the group. Similarly, Princeton University launched the Web Transparency and Accountability (WebTAP) project to study the privacy, security and ethical usage of consumer data and to inform the public about companies’ privacy practices.

So, although it is scary how our data is being used, it looks like there are at least big organizations, backed by really smart people, trying to make AI safe for humans. We will know more in the future.