Biases and cultural and social responsibility

Talking about biases: do you feel that AI and machine learning carry the danger of stalling cultural shifts and progress?

For example, say you teach your algorithm to show the actual ratio of women to men in CEO roles. Women will be far less represented, so can that ratio ever easily change?

Should it already know to choose a 50/50 representation even if that is not currently true? Are we in danger of coding our own biases in, and of blocking or otherwise shaping cultural change?
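To make the dilemma concrete, here is a minimal sketch (my own illustration, with made-up numbers) of the two choices: a model that simply reproduces the historical distribution, versus one whose training data is reweighted toward a deliberately chosen 50/50 target. Either way, a human decision about what the "correct" ratio is ends up baked into the system.

```python
# Illustration only: made-up numbers, and a deliberately naive "model" that
# just learns the empirical distribution of its training data.
from collections import Counter

# Hypothetical historical data on who currently holds CEO roles.
historical_ceos = ["man"] * 90 + ["woman"] * 10

counts = Counter(historical_ceos)
total = sum(counts.values())
learned = {group: n / total for group, n in counts.items()}
print("Learned from history:", learned)   # {'man': 0.9, 'woman': 0.1}

# The normative alternative: reweight each example so the effective
# distribution matches a 50/50 target. Which target is "correct" is exactly
# the cultural question; the data alone cannot answer it.
target = {"man": 0.5, "woman": 0.5}
weights = {group: target[group] / learned[group] for group in learned}
reweighted_total = sum(counts[g] * weights[g] for g in counts)
reweighted = {g: counts[g] * weights[g] / reweighted_total for g in counts}
print("After reweighting:", reweighted)   # {'man': 0.5, 'woman': 0.5}
```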


Reviving this post because I just read a blog post from last year that I think is really applicable here.

After reading it a few times, and seeing some pretty shocking stuff coming out about biases in AI, I think that yes, we are at some risk of coded biases affecting cultural change.


Well, one thing that article says to me is that “the price of liberty is eternal vigilance”. We are used to seeing our systems as projects that, no matter what method we use to manage the development and testing phases, have a definable end-point: delivery. Even the classic SDLC model compartmentalises feedback into a set process that happens after release.

If you think about human intelligence, the process of challenge, experiment, learning and growth is continuous and seamless. Even if we have to compartmentalise the formal phases of learning and reinforcing new ideas and concepts, the way we gather new information and influences is genuinely continuous; hopefully, all our waking hours should involve us in an ongoing process of sifting, assessing and re-evaluating. Even our sleeping hours may well have a role in helping us subconsciously order and assess the new information we’ve gathered during the day.

An AI, though, can’t (at our current stage of development) spontaneously take in new experiences and fit them into its existing model of the world; indeed, its model of the world is only ever going to be incomplete, based solely on those elements of the real world that the system’s designers considered necessary. It may be necessary to define a new role: someone who acts as a “mentor” to an AI and whose job is to keep under review both feedback from the system and events in the real world, so that the AI can “experience” the same growth as humans do. The role would be a cross between a Product Owner and a parent; the difference between this role and a PO would be that, whilst a PO’s responsibility is to the users on one hand and the company on the other, an AI mentor would be responsible primarily to the AI itself.

Of course, this runs the risk of that mentor imprinting their own biases, even unconsciously, into the process. The only way I can currently think of to counter that would be to rotate mentors over time, which in turn requires a steady stream of individuals with the special mind-set that being a mentor would require. But that’s possibly another story.
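Purely as a toy illustration of that rotation idea (everything here, the names and the review logic, is invented for the example), it might look something like this:

```python
# Toy sketch of rotating "mentors" who gate what feedback is allowed to
# update the system, so no single reviewer's blind spots dominate over time.
from itertools import cycle

mentors = cycle(["mentor_a", "mentor_b", "mentor_c"])

def review(mentor, feedback_batch):
    # Stand-in for a human judgement call; here everything is approved.
    print(f"{mentor} reviewing {len(feedback_batch)} feedback item(s)")
    return True

def update_model(model_state, feedback_batch):
    # Stand-in for whatever retraining or fine-tuning step is actually used.
    return model_state + feedback_batch

model_state = []
incoming_feedback = [["complaint about ranking"], ["new regulation"], ["user survey"]]

for batch in incoming_feedback:
    mentor = next(mentors)  # a different mentor handles each review
    if review(mentor, batch):
        model_state = update_model(model_state, batch)
```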


A few months ago I wrote a blog article about ethical problems and dangers with AI.