The rapid rise of new generative AI tools, such as OpenAI's ChatGPT, has captured the world's imagination. Their applications span diverse fields, from education to medicine, offering an exciting playground of possibilities. However, alongside their remarkable potential, these AI systems carry significant risks. Europol has sounded the alarm on the potential for increased cybercrime, and many AI experts express deep concern about their role in spreading misinformation and endangering democratic processes such as the 2024 American presidential election. Some even worry about long-term existential risks to humanity.
One of the primary challenges posed by current AI systems is their opacity. Often operating as black boxes, they can be unreliable and difficult to interpret. This opacity can lead to unintended consequences, such as the generation of false statements: ChatGPT, for instance, has mistakenly accused individuals of wrongdoing because it conflates unrelated pieces of text. These systems can also be exploited for malicious purposes, from influencing elections to disseminating medical misinformation.
In the past year, countries worldwide have introduced 37 regulations mentioning AI. This approach has produced a fragmented and uncoordinated global landscape, with regulations varying across countries and regions. Such a patchwork poses risks and challenges for all stakeholders and does not ensure safety. What's more, it forces companies to develop different AI models for each jurisdiction, compounding the legal, cultural, and social complexities they face.
Despite these challenges, there is widespread consensus on fundamental principles for responsible AI, encompassing safety, transparency, explainability, interpretability, privacy, accountability, and fairness. A recent poll by the Centre for the Governance of AI reveals that 91% of people across 11 countries agree that AI needs careful management.
In response to this growing need, we propose the immediate establishment of a global, neutral, non-profit International Agency for AI (IAAI). The agency should draw guidance and participation from governments, major technology companies, non-profits, academia, and society at large. Its primary mission would be to collaboratively develop governance and technical solutions that ensure the safe, secure, and peaceful use of AI technologies.
The need for such an agency has garnered support, with even Google CEO Sundar Pichai acknowledging its significance. The agency would play a critical role across domains and industries, each of which would require its own set of guidelines. It would focus on global governance and technological innovation to address critical questions, including mitigating bias and handling off-label uses of AI.
Regarding the spread of misinformation, the IAAI could convene experts and develop tools to combat this challenge from both policy and technical perspectives. It would need effective measures for assessing the scope and growth of misinformation and AI's role in it, with an emphasis on detecting and mitigating it. Technical innovation will be key here, since this is a societal problem that offers little commercial incentive to solve.
The IAAI's responsibilities would extend to areas like AI security, fostering innovation, and ensuring the long-term safety of AI agents in a rapidly evolving landscape. Collaboration among governments, regulators, industry, and academia would be critical to shaping the agency's policies.
Creating a global framework for AI cooperation is a monumental task, requiring the involvement of multiple stakeholders. The complexity lies in addressing both short-term and long-term risks while ensuring the participation of governments, corporations, and the public. History provides examples of successful global collaboration on emerging technologies, like the International Atomic Energy Agency's work on nuclear technology and the International Civil Aviation Organization's governance of the aviation industry.
AI's challenges and risks differ greatly from those of earlier technologies, and many remain unknown. However, the decisions made now will shape the future. Given the fast pace of AI development, there is little time to spare. Establishing a global, neutral, non-profit agency with broad support is an essential first step toward navigating the evolving landscape of AI responsibly.