The need for individuals to be able to query or contest decisions made by an algorithm is a central tenet of ensuring that AI is fair and ethical. BSI recently carried out a study into contestability mechanisms and the benefits of a standardized approach. There are certainly many practical challenges to implementing this, but there is no doubt that such a tool built into all AI systems could enhance digital rights and encourage businesses globally to embrace AI in a way that builds a positive future for all. Here we explore some of the challenges involved in developing an AI reporting or contestability tool, based on input from technical specialists, academics and consumer rights experts.
by David Cuckow, Director of Digital, Knowledge Solutions at BSI
AI has the potential to be a dynamic force for good globally, transforming society and providing new ways of delivering healthcare, building homes, producing food and much more. But this must be underpinned by confidence among users that the guardrails are in place for the safe and ethical use of AI – something that 61% of people globally tell us is important to them.
There are numerous examples of where AI hasn’t gone to plan – from Google’s Gemini, which highlighted the need for more diverse and representative AI training data, to Apple’s iOS 18 generating false news headlines. Likewise, the UAE has faced challenges around the use of personal data in the training and operation of AI-enabled systems. Providers are already subject to local data and privacy requirements, yet at present there is no standardized way of flagging when AI is displaying bias or producing problematic outcomes. Given the complex international AI supply chain, a shared responsibility model could empower all parties to address issues effectively and transparently. For providers, the potential benefits of implementing contestability tools include gathering user feedback to improve their products, avoiding backlash or reputational damage when issues are identified, and strengthening user trust.

A contestability tool should be simple, with non-technical language suitable for users with varying levels of digital literacy. Raising public awareness of this will be critical, as will communicating wider legal rights. Similarly, clarity is needed with regard to who receives a contest report, who is liable for harms, and the responsiveness users can reasonably expect. Mechanisms would need to be proportionate to a situation’s severity and impact, and be maintainable as AI models change so that issues are not reproduced. For a tool to be effective, there must be confidence that contests will be kept confidential to avoid reprisals and will be assessed independently, consistently and impartially, with assurance that, where required, reports will lead to tangible technical improvements and the possibility of redress.
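To make these requirements more concrete, the sketch below shows one way a standardized contest report might be structured. It is purely illustrative: the field names, severity categories and example scenario are assumptions made for the purpose of discussion, not part of any BSI specification or existing standard.

```python
# Illustrative only: a hypothetical structure for a standardized AI contest report.
# Field names and severity categories are assumptions, not a BSI standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Severity(Enum):
    LOW = "low"            # e.g. a minor inaccuracy or cosmetic issue
    MODERATE = "moderate"  # e.g. a misleading output with limited impact
    HIGH = "high"          # e.g. a biased or harmful decision affecting the user
    CRITICAL = "critical"  # e.g. safety, legal or systemic harm


@dataclass
class ContestReport:
    """A single user contest, described in plain, non-technical language."""
    system_name: str                 # the AI system or feature being contested
    description: str                 # what happened, in the user's own words
    decision_affected: str           # the output or decision being challenged
    severity: Severity               # drives how quickly a response is expected
    keep_confidential: bool = True   # protects the user from reprisals
    desired_outcome: str = ""        # e.g. correction, explanation, redress
    submitted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Example: a user contests an automated screening recommendation.
report = ContestReport(
    system_name="Example screening assistant",
    description="My application was rejected and the stated reason "
                "did not match the information I provided.",
    decision_affected="Automated pre-screening recommendation",
    severity=Severity.HIGH,
    desired_outcome="Human review of the decision and an explanation",
)
print(report.severity.value, report.submitted_at.isoformat())
```

A shared structure of this kind is what would allow reports to be assessed consistently across providers, rather than each organization inventing its own format.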
Of course, the AI landscape is not one homogeneous entity. Given the international nature of AI development, the interconnectedness of technology systems, the dominance of major players, and the likelihood of multiple tools being used together, accountability and transparency in the AI supply chain are highly complex. There are, and will likely be, many more cases where some players adhere to standards while others do not, and some AI providers may be limited in their access to necessary data because of privacy constraints. In some cases, changing a given AI feature may simply be out of the provider’s hands.
Other barriers to a standardized tool include the reputational implications of embracing transparency and public reporting, the possibility of contestability being exploited to inflict reputational harm, enforcing legal rights across different jurisdictions, resolving liability questions, and keeping pace with AI advancements. To minimize costs, AI automation could be used to triage contests.
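As a loose illustration of what such triage might involve, the sketch below routes contests into review queues according to their severity. The severity labels, thresholds and queue names are assumptions for discussion, not a prescribed process, and in practice the initial classification itself might be performed by an AI model rather than supplied by the user.

```python
# Illustrative sketch of automated triage for incoming contest reports.
# Severity labels, thresholds and queue names are assumptions for discussion.
from collections import defaultdict

# Each incoming contest is assumed to carry a plain-language description
# and a severity label assigned on submission or by an automated classifier.
incoming = [
    {"description": "Headline summary was factually wrong", "severity": "moderate"},
    {"description": "Screening decision appears biased", "severity": "high"},
    {"description": "Minor typo in generated text", "severity": "low"},
]


def triage(reports):
    """Route contests into review queues proportionate to their severity."""
    queues = defaultdict(list)
    for report in reports:
        if report["severity"] in ("high", "critical"):
            queues["human_review"].append(report)    # urgent, individual review
        else:
            queues["batched_review"].append(report)  # periodic batch review
    return queues


for queue, items in triage(incoming).items():
    print(f"{queue}: {len(items)} contest(s)")
```

Keeping humans in the loop for the most serious contests, while batching lower-severity feedback, is one way the cost of operating the mechanism could be kept proportionate.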
We explored desired features for a tool, among them a process that allows for joint or collective claims, which can be more effective for addressing systemic issues. Other features could include "bias bounties", financial incentives for the discovery of unwanted AI system behaviours, or a charter of principles including provisions for AI-driven feedback triage and ethical standards.
The reality is we’re only in the early days of this discussion, but there is appetite. Research reveals that 62% of people globally want a standard way of flagging concerns, issues or inaccuracies with regard to AI tools. A standardized contestability or feedback tool built into all AI systems could enhance digital rights and build trust and confidence in AI as a force for good, provided questions of accessibility, cost, liability, scalability and how to satisfactorily resolve complaints can be addressed.
Currently, companies are deploying AI systems without being able to fully test all use cases. Feedback could help to improve tools and give users agency over digital solutions. Standard contestability, reporting and feedback tools, instead of proprietary mechanisms designed by individual AI providers, could enhance digital rights and reduce the burden of reporting, whilst helping to build trust in AI globally.