Governance must be global: learnings from the AI Standards Hub in London

By Silvio Dulinsky
Deputy Secretary General at ISO

AI is transforming industries and societies at an unprecedented pace, reshaping everything from healthcare and finance to manufacturing and supply chains.

However, this rapid evolution has raised urgent concerns regarding transparency, accountability, and ethical deployment. As AI becomes more integrated into daily life, effective governance becomes increasingly essential. 

At the AI Standards Hub in London this week, I joined leading experts to discuss these challenges. During the panel on International AI Standardization, we explored the evolving AI landscape and its far-reaching implications. Three key themes emerged during our discussion, each with the potential to shape the future of AI governance:

AI governance is global by necessity

AI does not respect borders. Its development, deployment, and impact are inherently global. A fragmented, country-specific approach to governance is therefore impractical. National AI policies and regulatory frameworks are multiplying, yet each addresses critical risks differently. This patchwork creates significant challenges for businesses, innovators, and policymakers alike.

Without clear international guidelines, organizations face compliance uncertainty, operational inefficiencies, and barriers to global market access. Governments also risk creating regulatory misalignment that could slow AI adoption, stifle innovation, or create competitive imbalances between economies.

This is where International Standards play a pivotal role. They offer a harmonized baseline, ensuring that AI governance is consistent across jurisdictions, while still allowing flexibility for local adaptation. Standards provide businesses with best practices, risk management frameworks, and interoperability requirements that transcend borders.

Global cooperation is not a theoretical discussion; it is already happening. Initiatives like the UN Global Digital Compact emphasize international collaboration on digital governance, with AI governance as a key focus. The OECD AI Principles have also been broadly adopted, offering a framework for responsible AI development. International standardization bodies like ISO, IEC, and ITU, along with national organizations such as the British Standards Institution (BSI) and the Standards Council of Canada (SCC), are working to establish frameworks that align AI governance on a global scale.

AI governance calls for a socio-technical approach

AI governance is not only a technical challenge but also a societal one. Often, discussions about AI focus on algorithms, data infrastructure, and cybersecurity while overlooking the broader ethical, social, and economic implications. If governance frameworks ignore human values, fairness, and inclusivity, they risk exacerbating inequalities and eroding public trust.

One key concern is human oversight in AI decision-making. While AI can enhance efficiency, it should augment rather than replace human judgment, particularly in high-stakes fields like healthcare, finance, and criminal justice. 

Bias in AI is another critical issue. If AI systems are trained on biased data or designed without diverse perspectives, they may perpetuate societal inequalities. Developers must ensure transparency in data collection, model training, and decision-making processes. By embedding fairness-focused principles into AI standards, organizations can reduce bias and ensure AI serves all communities equitably.

These socio-technical issues will be central at key upcoming events, including Standards Day at AI for Good in Geneva (July 2025) and the International AI Standards Summit in Seoul (December 2025). These forums, supported by ISO, IEC, and ITU, provide a global platform for aligning AI governance with broader societal needs, ensuring that AI serves humanity responsibly and ethically.

Trust is the cornerstone of AI adoption

A critical challenge in AI governance is building trust – trust in AI systems, in the organizations developing them, and in the policies governing their deployment. Concerns around misinformation, cybersecurity threats, and online harm are paramount, as outlined in the 2025 WEF Global Risks Report.

International Standards play a vital role in fostering trust in AI by providing clear frameworks for governance, risk management, and ethical deployment. They help organizations implement responsible AI practices, systematically address potential risks, and enhance transparency and security. As AI continues to advance, ongoing standardization efforts ensure that its development aligns with societal values, driving innovation that is both inclusive and trustworthy.

Together, these three themes – global harmonization, a socio-technical approach, and trust as a foundation – provide the structure AI needs to evolve responsibly and effectively. But for AI to truly take off, it needs more than just guiding principles – it needs a framework that provides stability and direction.

The AI Standards Hub in London was a powerful reminder that AI governance is about shaping the future we want, not just managing risk. Standards are not bureaucratic obstacles; they are catalysts for innovation, trust, and global cooperation.

The time to act is now. AI governance is not a future challenge – it is a necessity today. Let’s work together to build a world where AI serves humanity in a safe, transparent, and inclusive manner.