Exclusive: Building a Responsible AI Foundation for Tomorrow
Published May 19, 2025

As AI integrates into the daily fabric of businesses, governments, and consumer interactions, a clear framework is essential to guide responsible and effective use. With AI set to add up to US$320 billion to the Middle East's economy by 2030, robust policies must drive innovation while keeping it responsible and thoughtful. Adoption also calls for deliberate choices, favouring solutions that consume less power and are less prone to hallucination.

by Jennifer Belissent, Principal Data Strategist at Snowflake

We’re not talking about banning AI, but about designing policies that mitigate risk and implementing mechanisms for education and enforcement. This is the approach advocates of responsible AI are taking: rather than banning the technology itself, putting guardrails in place to ensure it is used responsibly.

Establishing standards

Despite the hype and the volume of anxiety-inducing news, not all is doom and gloom. AI models have improved processes and productivity across sectors, from breast cancer detection to waste reduction and beyond. To address the more nefarious effects, organisations across the globe are already publishing guidelines and governments are passing legislation, such as the European Union’s AI Act. Technology providers are developing tools to increase AI transparency and explainability. These measures are a first step not only toward identifying and potentially rectifying risks but also toward educating users to be more aware and developers to be more conscious of the potential impact of these new technologies.

Another positive sign is international collaboration. Yes, there are different approaches to AI: tighter control in China and a more self-governed approach in the US, with the EU AI Act’s risk-based guidelines splitting the difference. Beyond these, the Bletchley Declaration, signed in the UK in 2023, illustrates a common recognition of risk and the interest and investment in collaboration to promote further awareness and safety. The UAE government has also established the UAE Council for Artificial Intelligence and Blockchain to guide AI integration across its agencies.

In addition to government and industry regulation, AI and data governance within organisations is critical. To help understand and mitigate AI risks, everyone within the organisation – from the shop floor to the top floor – must be data and AI literate. They must know how data is used, the value it delivers to their organisations, the potential risks to look out for, and what their role is.

On the more technical or practitioner side, organisations need fine-grained access and usage policies to ensure data is well-protected and used appropriately. Everyone in an organisation plays a role in the value chain, whether it’s capturing data accurately, protecting data, building algorithms and applications that analyse the data, or making decisions based on the insights delivered.
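To make "fine-grained access" concrete, the sketch below shows one minimal way to express and enforce role-based column masking in Python. The roles, columns, and masking rule are hypothetical, purely for illustration; real data platforms implement this natively with row- and column-level policies.

```python
# Minimal sketch of role-based column masking (illustrative only).

# Hypothetical policy: which roles may see which columns unmasked.
POLICY = {
    "email":  {"data_steward"},             # only stewards see raw emails
    "salary": {"data_steward", "finance"},  # finance also sees salaries
}

def mask_row(row: dict, role: str) -> dict:
    """Return a copy of the row with columns masked for unauthorised roles."""
    masked = {}
    for column, value in row.items():
        allowed = POLICY.get(column)        # None means the column is unrestricted
        masked[column] = value if allowed is None or role in allowed else "***MASKED***"
    return masked

row = {"name": "A. Analyst", "email": "a@example.com", "salary": 95000}
print(mask_row(row, role="marketing"))
# {'name': 'A. Analyst', 'email': '***MASKED***', 'salary': '***MASKED***'}
```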

 

Foundational data strategy for AI success

As we all know, there is no AI strategy without a data strategy, or, more to the point, without the data itself. More data, and more diverse data, not only fuels AI models; it also mitigates the risks of hallucination, where AI systems deliver inaccurate responses, and of AI bias, where they produce results that aren’t objective or neutral. AI models don’t usually just ‘make up’ answers, but they can pull from unreliable sources, like the story about the AI that recommended adding glue to pizza sauce to keep the cheese from sliding off. Particularly in the high-stakes enterprise world, diverse, relevant and high-quality data is the primary ingredient.
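One common way enterprises act on this is retrieval-grounded generation: constraining the model to answer from vetted, relevant documents rather than from whatever its training data happened to contain. The sketch below illustrates the pattern only; the toy keyword retriever and the `generate_answer` stub are stand-ins for real vector search and a real model call.

```python
# Minimal sketch of retrieval-grounded answering (illustrative; a real
# system would use vector search and an actual LLM call, not these stubs).

VETTED_DOCS = [
    "Q1 sales in the EMEA region grew 12% year over year.",
    "The returns rate for product line X fell to 2.1% in March.",
]

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    """Toy keyword-overlap retriever standing in for vector search."""
    q_words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def generate_answer(question: str, context: list[str]) -> str:
    """Stub for a model call; the point is that answers are grounded in
    retrieved context, and the system refuses when no context is found."""
    if not context:
        return "I don't have enough vetted data to answer that."
    return f"Based on our data: {context[0]}"

question = "How did EMEA sales grow?"
print(generate_answer(question, retrieve(question, VETTED_DOCS)))
```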

In a fortunate twist, AI is now stepping up to address issues of data quality itself. For example, AI automations can detect anomalies, proactively fix data upon ingestion, resolve inconsistencies across entities, and create synthetic data. AI can also help ensure data security by identifying vulnerabilities. That is not to say that data leaders can rest on their laurels: responsible data and AI practices still dictate robust data governance, supported by privacy-preserving technologies.
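As a concrete illustration of the fix-it-at-ingestion idea, the sketch below quarantines out-of-range readings with a robust modified z-score (median and median absolute deviation) before they reach downstream models. The threshold and the quarantine step are assumptions for illustration; production pipelines layer richer statistical and learned checks on top.

```python
# Minimal sketch of anomaly detection at ingestion using a robust
# modified z-score (median/MAD); illustrative only.
import statistics

def flag_anomalies(values: list[float], threshold: float = 3.5) -> list[bool]:
    """Flag values whose modified z-score (based on the median and the
    median absolute deviation) exceeds the threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return [False] * len(values)
    return [abs(0.6745 * (v - med) / mad) > threshold for v in values]

batch = [102.0, 98.5, 101.2, 99.9, 4250.0, 100.4]   # one corrupt reading
flags = flag_anomalies(batch)
clean = [v for v, bad in zip(batch, flags) if not bad]
quarantined = [v for v, bad in zip(batch, flags) if bad]
print(clean, quarantined)   # the 4250.0 reading is held back for review
```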

Finally, the data must be relevant to the specific use case. In that way, enterprise AI differs from general-purpose AI tools. An enterprise AI model is chosen to address a specific challenge: predicting sales, recommending a product or service, or identifying anomalies, whether defects in manufacturing or delays along a supply chain. The choice of AI model, including the decision to build, buy or fine-tune, can itself mitigate the risks of hallucination or bias. Enterprise AI is purpose-built and, as a result, can be more resource-efficient.

 

Striking a balance for sustainable AI

That brings us to another AI elephant in the room: sustainability. AI is expected to have a large impact on climate-related fields, helping to optimise the use of fossil fuels and drive the adoption of other forms of energy. But AI itself is an energy hog. Research estimates suggest that ChatGPT currently uses over half a million kilowatt-hours of electricity per day, roughly the consumption of 17,000 U.S. households. It’s time to apply AI to finding solutions that offset its own energy demands.
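That household figure is easy to sanity-check. A minimal back-of-the-envelope calculation, assuming an average U.S. household uses roughly 10,600 kWh of electricity per year (an EIA-style average; the exact figure varies by year):

```python
# Back-of-the-envelope check of the household comparison (assumed averages).
daily_chatgpt_kwh = 500_000            # reported estimate: >0.5 GWh per day
household_kwh_per_day = 10_600 / 365   # ~10,600 kWh/year ≈ 29 kWh/day

households = daily_chatgpt_kwh / household_kwh_per_day
print(f"≈ {households:,.0f} households")   # ≈ 17,217 households
```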

From a best-practices perspective, companies must strike a balance between experimenting with different AI use cases and ensuring proper use: use with a genuine purpose and, ultimately, a return on investment. Adopting enterprise AI with purpose-built, efficiently trained agents is a first step. Transparency across the value chain, from inputs to outputs and outcomes, enables a greater understanding of environmental impact and of the trade-offs made for business value.

 

Laying the groundwork for safe AI

Encouraging open dialogue and making progress toward AI transparency, and hopefully explainability, are critical first steps to mitigating the risks of AI. Global collaboration on these topics is already under way at events such as the AI Safety Summit. In the same vein, building awareness within the enterprise – at all levels – and among consumers increases the pool of potential watchdogs and arms them with the signs to look for and the questions to ask. As they say, experience is the best teacher.

These insights pave the way for improving understanding and defining the requirements of the next generation of data and AI platforms. The future will build on today’s focus on data diversity, security, governance, and sustainability. But the real foundation for safer AI will be a nuanced understanding of its potential for both good and harm – fostered by broader societal data and AI literacy.
