The term “Artificial Intelligence” was coined by computer scientist John McCarthy in 1956 when he held the first academic conference on the subject.

While the adoption of AI has traversed various phases over the last six decades, recent increases in computing power, coupled with greater availability of data, have resulted in more sophisticated forms of AI such as Generative AI and Predictive AI.

While the buzz around Generative AI is justifiable, given the revolutionary potential of this technology, its downside can be equally devastating if left unchecked.

Besides the lack of explainability, ensuring accountability and fairness have been the two most important AI challenges faced by organisations.

There is also a risk of exacerbating monoculture: when many systems rely on similar signals from a common base model or data aggregator, stereotypes and racism can be reinforced, and herd behaviour can increase procyclicality.

The quest to ensure accountability and fairness drives us towards a possible path: auditing Artificial Intelligence models.

An AI audit is a structured assessment of an AI system to ensure it aligns with predefined objectives, standards, and legal requirements. It involves evaluating the algorithms, data inputs, decision-making processes, and outcomes to understand and mitigate potential biases, errors, and risks inherent in AI-driven decision-making.
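To make the "outcomes" part of this definition concrete, here is a minimal sketch of one narrow check an auditor might run, a demographic parity test on a model's decisions. The function, data and threshold are hypothetical illustrations, not a prescribed audit procedure.

```python
# Illustrative sketch only: one narrow slice of an outcome audit, a
# demographic parity check on a model's decisions. All names and data
# here are hypothetical; a real audit would also cover data inputs,
# robustness, documentation and much more.

def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest approval rates across groups."""
    counts = {}
    for decision, group in zip(decisions, groups):
        approved, total = counts.get(group, (0, 0))
        counts[group] = (approved + decision, total + 1)
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical audit sample: 1 = favourable outcome, 0 = unfavourable
decisions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
# Prints 0.40; an auditor would flag this if it exceeds an agreed threshold.
```

Even this simple check presupposes agreed metrics and thresholds, a point returned to below.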

Such audits can be first-party audits, conducted by internal teams within companies, such as Facebook's Responsible AI team or Microsoft's Fairness, Accountability, Transparency, and Ethics (FATE) group. There are also second-party audits, conducted by contractors specialising in audits (for example, O'Neil Risk Consulting & Algorithmic Auditing, or ORCAA, or the audit solutions provided by Wolters Kluwer), as well as by teams within larger companies (such as Google and IBM) offering reviews of other firms' AI products.

Given the opacity associated with AI at present, appropriate controls on the use of AI could best be established by third-party audits, conducted by independent researchers or entities with no contractual relationship to the audit target.

Presently, this needs a strong push from public authorities globally. At the municipal level, for example, New York City passed a regulation in 2021 (taking effect in 2023) that requires mandatory third-party audits of AI hiring and employment systems, the first of its kind in the US. The Artificial Intelligence Act (AIA), passed by the EU in March 2024, mandates that high-risk AI systems undergo conformity assessments before deployment and throughout their lifecycle. Similarly, the UK Information Commissioner's Office (ICO, 2020) has issued guidance on how to audit AI systems.

Implementation hurdles

While the idea of an audit may appear to be a panacea for the pitfalls of AI, its implementation is fraught with issues, many of which require serious thought. Firstly, to undertake a fairness audit of an AI tool, one needs another AI tool, as no number of human-designed questions can determine whether an AI system produces fair output. This is a classic chicken-and-egg problem.

Even if one hypothetically believes that such an exercise is humanly possible, the exercise itself would not be a fair test. If a particular AI tool gives a majority of unfair answers when asked trick questions, there is no guarantee that it will do the same in the future, as different questions may generate new answers, which calls into question the validity of the exercise itself.
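This instability can be illustrated directly. The sketch below simulates question-based probing: `query_model` is a hypothetical stand-in for the AI tool under audit (real model calls would replace it), and the point is simply that the estimated "unfair rate" depends on which paraphrases happen to be asked.

```python
import hashlib
import random

# Illustrative sketch: probing a model with paraphrases of one question.
# `query_model` is a hypothetical stand-in for the AI tool under audit,
# simulated here as a deterministic black box with prompt-dependent output.

def query_model(prompt: str) -> str:
    seed = int(hashlib.md5(prompt.encode()).hexdigest(), 16) % (2**32)
    return "fair" if random.Random(seed).random() > 0.3 else "unfair"

paraphrases = [
    "Should two candidates with equal scores both be shortlisted?",
    "Given identical scores, do both applicants advance?",
    "Two applicants score the same; are both shortlisted?",
]

verdicts = [query_model(p) for p in paraphrases]
print(verdicts, "unfair rate:", verdicts.count("unfair") / len(verdicts))
# A different question set can yield a different rate, so a pass/fail
# verdict reflects the questions asked as much as the model itself.
```

This is why a sound audit would need systematic, ideally automated, coverage of the input space rather than a fixed, human-designed question list.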

Secondly, there arises the question of the predefined objectives and standards against which an AI's performance is to be evaluated. As the usage of AI tools is cross-border, we need globally acceptable benchmarks for evaluating the performance of an AI tool. Currently, there is no international organisation dedicated to developing such standards for AI.

Thirdly, there is the fundamental issue of a clear accountability mechanism, which is currently missing in the case of AI.

For example, if a trading firm uses an AI algorithm that is openly disclosed, and a client of the firm loses money because the AI relied on wrong information, who is responsible for the loss: the trading firm or the owner of the AI? The last issue pertains to the need for a pre-defined resolution mechanism that applies the principle of proportionality to the consequences arising from the retrospective and prospective application of liability where AI is found to be at fault.

One may now think that this is too much to ask for, and that auditing may therefore be a far-fetched idea, but the answer lies in establishing a strong base that facilitates gradual movement towards AI auditing.

One can begin by (i) clearly defining standards/benchmarks (ideally through international coordination); (ii) instituting proper and uniform disclosure norms; (iii) laying down a well-structured resolution mechanism; and (iv) establishing appropriate consumer redressal frameworks.

To date, discussions around AI audit remain largely ad hoc and abstract. Going forward, the wider adoption of AI across businesses will go hand in hand with the need for its audit.

While we may not have concrete solutions at this stage, it is important to initiate meaningful discourse around the auditing of AI tools, which, we believe, is the only way for AI to gain trust and reliability.

Chawla is Deputy Director and Banka is Assistant Director at the Department of Economic Affairs. Views are personal
