A team of researchers, including scientists from Vrije Universiteit Brussel (VUB), has compiled a step-by-step guide to help organisations prevent discrimination when using algorithms.
“Algorithms are increasingly being used to work in a risk-driven way and to make automated decisions,” a press release from VUB reads.
“The flaws of this system were clearly demonstrated by the recent Dutch child allowance affair, in which minorities were systematically discriminated against by the tax authorities, partly on the basis of algorithms.”
The new guidelines from researchers are intended to provide rules and preconditions for government organisations and companies that want to use algorithms and AI for various functions.
They boil down to six steps to be followed when developing and implementing such algorithms: defining the problem, collecting data, selecting data, establishing a policy for automated decision-making, implementing the policy and then testing the policy.
Each step comes with legal rules that must be taken into account, drawn from sources such as the General Data Protection Regulation, the Equal Treatment Act and the European Convention on Human Rights, as well as real-world best practices and earlier examples from the literature.
“The guideline requires, among other things, that organisations involve stakeholders and relevant groups in society in the development of the algorithm from the beginning and that external, independent experts critically monitor the entire process,” VUB said in its press release.
“It also requires that people affected by automated decisions should be informed and be able to object to them, that the system should be stopped when errors and shortcomings are detected, and that the entire process should be permanently monitored.”
In the wake of the algorithmic discrimination in the Dutch child allowance affair, the new guidelines aim to make human rights the starting point for AI use.
After the debacle in the Netherlands, the Dutch parliament stated that “racism must be ended as soon as possible, not least by stopping the use of discriminatory algorithms.”
The group of researchers also drafted ten lessons for algorithms, which include the following:
Consider the context
Check for bias in the data
Set clear objectives
Involve external experts
Check for indirect discrimination
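Lessons such as “check for bias in the data” and “check for indirect discrimination” can be made concrete with a simple statistical test. The sketch below is an illustrative assumption, not part of the researchers’ guideline: it compares the rate at which an automated system flags people across groups defined by a protected attribute, a common first check for disparate outcomes. All names, data and thresholds are hypothetical.

```python
# Hypothetical sketch: a first-pass bias check on automated decisions,
# in the spirit of the lessons above. Group names, example data and the
# 0.2 threshold are illustrative assumptions, not from the guideline.

def selection_rate(decisions):
    """Fraction of positive decisions (e.g. 1 = flagged for review)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def demographic_parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups.
    A large gap can signal direct or indirect discrimination and
    warrants closer investigation of the data and the model."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Example: decisions split by a protected attribute such as nationality.
decisions = {
    "group_a": [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],  # 10% flagged
    "group_b": [1, 1, 1, 1, 0, 0, 0, 0, 0, 0],  # 40% flagged
}

gap = demographic_parity_gap(decisions)
print(f"selection-rate gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a legal standard
    print("warning: large disparity between groups; investigate for bias")
```

A check like this only surfaces a disparity; it does not by itself establish discrimination, which under the legal sources cited above depends on justification and context.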
The research team included specialists from VUB, Tilburg University, Eindhoven University of Technology and the National Human Rights Institute of the Netherlands.
The team was commissioned by the Dutch Ministry of the Interior to study the technical, legal and organisational conditions that need to be taken into account when organisations use artificial intelligence in their operations.