In a significant move, technology giants have called for the creation of a worldwide framework to guide the ethical development of artificial intelligence (AI). The call comes on the heels of the government’s proposal for a regulatory structure addressing “responsible AI.” As the discourse on global AI regulation intensifies, it becomes increasingly important to evaluate the practicality of establishing such a comprehensive international model.
The Call for AI Regulation
The conversation surrounding the regulation of artificial intelligence (AI) gained prominence following the widespread adoption of OpenAI’s ChatGPT and similar advanced AI tools. These technologies have the capability to replicate human cognitive functions and create convincing deepfakes that blur the distinction between real and fabricated audio and video content. Consequently, global technology leaders argue that establishing a framework for AI regulation is imperative.
The B20 Summit emphasised that regulating AI is vital for enabling real-time global payments, maintaining trust in workplaces, preventing cyberattacks, and ensuring that the internet remains unified rather than fragmented by divergent national regulations.
Voices from Business Leaders
During the G20 and B20 Summit in Delhi, Prime Minister Modi called for a global framework to guide ethical AI development. A panel comprising industry titans, including Microsoft’s Brad Smith, Adobe’s Shantanu Narayen, IBM’s Arvind Krishna, and others, underscored the pivotal role of AI across various sectors such as finance, cloud services, healthcare, and infrastructure. OpenAI’s CEO, Sam Altman, stressed the importance of AI regulation in mitigating its potential involvement in fraudulent activities and warfare. However, he emphasised that regulations should not stifle innovation.
The B20 task force recommended that India establish regulations to oversee both domestic and global companies while fostering an environment that encourages innovation.
Global Regulatory Precedents
A parallel can be drawn to civil aviation: each nation has its own commercial aviation regulator, yet a common framework agreed upon by all nations, the International Civil Aviation Organization, facilitates international flights. Microsoft’s Brad Smith suggests that this United Nations body serves as a precedent for how global AI regulation could be structured and implemented.
Potential for Collaboration
Despite geopolitical tensions and corporate rivalries, the possibility of creating a global AI framework remains feasible. In 2020, India played a pivotal role in establishing the Global Partnership on AI, a pool of 29 nations aiming to collaborate on establishing a common global framework for responsible and ethical AI. Further, industry giants such as Microsoft, Google, and OpenAI, along with others, have formed the Frontier Model Forum, an industry body committed to adhering to common oversight and governance rules. Recently, these firms, along with Meta and Amazon, also agreed to the US government’s AI safety assurances.
AI Regulation in India
In terms of AI regulation within India, the Minister of State for Information Technology, Rajeev Chandrasekhar, indicated that measures to address AI-related risks are under consideration for the upcoming Digital India Act. Additionally, in July, the Telecom Regulatory Authority of India (TRAI) floated a consultation paper proposing the establishment of a body to regulate AI through a “risk-based framework.” The newly introduced Digital Personal Data Protection Act, 2023, includes provisions aimed at regulating data scraping for AI. However, India currently lacks direct AI-specific regulations.
The pursuit of a global standard for AI regulation is a multifaceted challenge that involves navigating geopolitical complexities, fostering industry collaboration, and balancing innovation with ethical considerations. The outcomes of these endeavours will significantly influence the future development and use of AI technologies on a global scale.