

Ethics is not a new phenomenon for the human race. Philosophers have struggled for centuries to construct a framework in which “good” and “bad” are fixed, well-defined ideas, abstracted from the personal opinions of the reader and the mores of their society. Most ethicists now consider those attempts, at best, misguided.

Many people ask what the difference is between ethics and morality. Some authors treat them as the same, but a better distinction is that morality comes from a person’s internal value judgements, while ethics is something enforced from the top down.

For example: a club or society will have a Code of Ethics defining the very least a member must adhere to in order to remain in good standing. Often members of such a community behave considerably better than the Code of Ethics requires, simply because their personal morality directs them to. Their morality has them behave better than the ethics enforced.

This then, is a good starting point for us to discuss Ethics in Machine Learning and AI.

Currently there are very few clear ethical guidelines in AI/ML development. A great deal of it is either voluntary by the company in question (what I term “corporate morality”), or bubbles up from the personal morality of the developers/managers themselves.

Clearly this is not acceptable for a technology that will change the world in ways comparable to the internal combustion engine. We cannot allow the market to define our industry’s ethics from corporate morality: there are too many conflicts of interest, and too many varying moralities at odds with the “greater good”.

It’s not just about being ‘compliant’.

If, at this stage, you are thinking about GDPR and other safety nets, you are missing the point. You can comply entirely with every element of the GDPR and still create a hugely unethical AI.

An obvious example of this is racism. A number of organisations have faced claims of institutional racism. If we use such an organisation’s data, observing every privacy law to the letter, we will still end up institutionalising systemic racism. It will literally become part of the system in a very real way.

Systems change. Rosa Parks refused to give up her seat on the bus and the world changed. People change for many reasons… because they are struck by empathy, because their social peers pressure them, because of protest. All of this will fall on the deaf ears of an AI unless a form of ethics is enforced on the industry.

Reinforcement learning (the idea of using data from the past to learn to predict the future)… does exactly what it says on the tin. It reinforces the past. Imagine an AI which controls the allocation of police patrols, built on data from a unit which is, unbeknownst to those outside it, racist. It says “here is a list of locations to patrol”. Not surprisingly, those are the areas where crime is detected. The areas patrols were NOT sent to… well, no crime was detected there! A badly written AI will conclude that its racist decision has been proven correct. The potential for a vicious circle here is obvious.
If the areas being patrolled are predominantly populated by one race, the implications are far-reaching…
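The feedback loop above can be sketched in a few lines of Python. This is a toy illustration, not any real policing system: the area names, rates, and allocation rule are all invented for the example. Both areas are given the same true underlying crime rate, yet because patrols are allocated in proportion to past detections, the historically over-patrolled area keeps producing the bulk of detections and the initial bias never washes out:

```python
# Toy sketch of the patrol-allocation feedback loop (all numbers invented).
# Two areas with IDENTICAL true crime rates; the historical data is biased
# because area "A" was over-patrolled in the past.
true_crime_rate = {"A": 0.10, "B": 0.10}   # the real world: equal rates
detected = {"A": 50, "B": 10}              # biased history of detections

total_patrols = 100
for year in range(5):
    total = detected["A"] + detected["B"]
    # "Reinforcement": allocate patrols in proportion to past detections
    patrols = {area: round(total_patrols * detected[area] / total)
               for area in detected}
    # Crime is only *detected* where patrols are actually sent
    for area in detected:
        detected[area] += round(patrols[area] * true_crime_rate[area])

share_A = detected["A"] / (detected["A"] + detected["B"])
print(f"Share of all detections attributed to area A: {share_A:.0%}")
```

Nothing in this loop ever discovers that the true rates are equal: the system only sees crime where it chooses to look, so the biased allocation appears to be confirmed by its own output, year after year.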

Reinforcement is a double-edged blade, and only as good as the history you have. We… may have to question our history.

In the next post I will try to outline some ways of countering bias.