Why Government must turn AI from a ‘black box’ into a glass house

The executive team behind the ‘Rainbird’ platform share their thoughts on the challenges facing the Government as the use of AI becomes more mainstream

The use of Artificial Intelligence is already widespread across the public sector, including central government departments, and looks set to increase dramatically in the coming years, with forecasts of some £17 billion in efficiency savings. Yet trust in public institutions could be jeopardised if the state adopts forms of AI whose inner workings are not widely or easily understood. Revelations that the Home Office has used an ‘algorithm’ to stream visa applicants, for example, have already raised concerns over accountability and fairness in public institutions.

The key challenge

The issue is that, as a society, we may have inadvertently shunned so-called ‘symbolic’ AI, which mirrors human thought patterns, in favour of complex neural networks that operate on the basis of obscure calculations understood only by data scientists. As AI increasingly assists with tasks across the public sector, machines need to be publicly accountable for their decisions in the same way as other public servants. This means they need to be designed to be modifiable and auditable by the public service professionals they work alongside. AI technologies can only be audited by ordinary human professionals if they emulate the thought processes of the humans who typically take those decisions: if an AI platform replicates the thought process of a Home Office caseworker, then it is easier for the Home Office to audit and justify its decisions.
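To make that concrete, here is a minimal, hypothetical sketch in Python of what such a symbolic approach looks like; the rules, thresholds and field names are invented for illustration and do not reflect any real Home Office process:

```python
# Hypothetical sketch of a 'symbolic' decision process: explicit rules a
# caseworker could read and challenge, rather than opaque learned weights.
# Rules, thresholds and field names are invented for illustration only.

def assess_visa_application(app: dict) -> tuple[str, list[str]]:
    """Return a decision plus the human-readable reasoning behind it."""
    reasons = []
    if not app["passport_valid"]:
        reasons.append("Passport invalid: refer to a human caseworker.")
        return "refer", reasons
    reasons.append("Passport is valid: proceed to financial checks.")
    if app["savings_gbp"] >= 1200:  # invented maintenance threshold
        reasons.append("Savings meet the maintenance threshold: approve.")
        return "approve", reasons
    reasons.append("Savings below threshold: refer rather than auto-refuse.")
    return "refer", reasons

decision, audit_trail = assess_visa_application(
    {"passport_valid": True, "savings_gbp": 950}
)
print(decision)                # refer
print("\n".join(audit_trail)) # every step of the reasoning, in plain English
```

Because every step of the reasoning is expressed in the caseworker’s own terms, the decision can be audited, justified and corrected without recourse to a data scientist.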

AI could produce unequal access to services

Building public services around algorithms understood only by data scientists is like basing trading decisions on complex financial instruments that almost nobody can understand. The 2008 financial crash was partly caused by the fact that very few people understood the financial instruments banks were buying and selling: once trading was based not on differences in attitudes and preferences but on differences in understanding, risk was concentrated not on those most able to afford it but on those least able to understand it. Similarly, if Government algorithms are understood only by a privileged few, then access to vital services will be decided not by differences in need but by differences in public understanding.

Those who understand how these AIs work might be able to ‘game’ the system, writing applications for everything from Government loans to visas that get approved ahead of more worthy recipients. Those least able to understand the workings of AI, including vulnerable groups such as disabled people, may lose out by inadvertently including or omitting details in their applications that the machines identify as ‘red flags’.

Putting the human back into AI

The only solution is to transform the algorithms behind our public services from a black box into a glass house. We need AI technologies to become more human-centric, so that they both make and justify their decisions in human terms. This means not only that they can be audited and improved by ordinary professionals, but also that their decisions can be explained to the general public, restoring trust in public services.

Leading organisations across the private sector, from credit card companies to law firms, are now working with their best experts in key departments, mind-mapping their thought processes so that they can be reproduced by machines. Crucially, these rules-based AI systems produce a ‘human-readable’ audit trail showing how their decision-making criteria are weighted, so any prejudice can be exposed and cleansed from the system.
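As a rough, hypothetical illustration of what such a weighted, ‘human-readable’ audit trail might look like (the criteria and weights below are invented, not drawn from any real organisation’s model):

```python
# Hypothetical weighted-criteria decision with a human-readable audit trail.
# Criteria and weights are illustrative; the point is that every weight is
# visible, so a reviewer can spot and strip out a prejudiced factor.

WEIGHTS = {
    "length_of_employment": 0.5,
    "repayment_history":    0.4,
    "marital_status":       0.1,  # visible here, so it can be challenged
}

def score(applicant: dict) -> float:
    """Print each criterion's contribution, then return the total score."""
    total = 0.0
    for criterion, weight in WEIGHTS.items():
        contribution = weight * applicant[criterion]
        print(f"{criterion}: weight {weight} -> contributes {contribution:.2f}")
        total += contribution
    return total

total = score({"length_of_employment": 0.8,
               "repayment_history": 0.9,
               "marital_status": 1.0})
print(f"total score: {total:.2f}")  # 0.86
```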

This process of mind-mapping human decisions for machines also enables ethical and compliant decision-making processes to be visualised and taught across the public sector. In this way, increasing algorithmic accountability helps ensure fairer and more ethical decisions among all public sector employees. A civil service AI trained with a ‘mind map’ to reproduce typical civil service recruitment processes might expose unconscious biases: for example, an office policy of ‘hot-desking’ that unwittingly discriminates against people with autism who prefer set routines. Since machines inherit their prejudices from humans, surfacing prejudice in machines can also surface prejudice in the human workforce.

Explainable AIs produce audit trails that can be used not only to correct ‘bad bias’ but also to introduce ‘good bias’: an AI government loan-assessment tool could be trained to give less weight to risk factors such as marital status and greater weight to risk factors such as length of employment.
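Continuing the same hypothetical sketch, rebalancing those weights becomes a small, visible and reviewable change rather than an opaque retraining exercise:

```python
# Hypothetical continuation of the sketch above: because the weights are
# explicit, rebalancing them is a small, auditable edit.

WEIGHTS = {
    "length_of_employment": 0.5,
    "repayment_history":    0.4,
    "marital_status":       0.1,
}

WEIGHTS["marital_status"] = 0.0        # strip out the 'bad bias' entirely
WEIGHTS["length_of_employment"] = 0.6  # strengthen the 'good bias'

# Renormalise so the weights still sum to 1, and record the new policy.
total = sum(WEIGHTS.values())
WEIGHTS = {k: round(v / total, 2) for k, v in WEIGHTS.items()}
print(WEIGHTS)  # the rebalanced weights are themselves part of the audit trail
```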

In this way, not only can government institutions ensure that AI makes unbiased decisions but they could also use AI to expose and challenge unconscious biases within the human public sector workforce.
