In November 2023, something significant happened: the world grappled with the lightning-fast evolution of artificial intelligence. Tech giants, global leaders, and brilliant minds gathered for the inaugural AI Safety Summit at the historic Bletchley Park, convened by the UK. The conference reflected rising concern about the serious challenges posed by artificial intelligence: its inspiring capabilities and, more urgently, its deeply unsettling potential for catastrophic misuse. The walls of Bletchley Park seemed to whisper that it was time to crack another world-challenging code, and this time, it was AI.
At its heart, the summit produced the “Bletchley Declaration,” a key document signed by 28 countries, remarkably including geopolitical competitors like the U.S. and China. That was a huge symbolic acknowledgement of the need to put our heads together for collaborative action on “frontier AI” safety. It encapsulated the full range of concerns: potential cybersecurity vulnerabilities, sneaky manipulation of public opinion, and even the loss of control over advanced AI systems. The establishment of the AI Safety Institute (now called the AI Security Institute) reflects Britain’s proactive follow-up, cementing its ambition to lead research and testing on these cutting-edge AI models.
Yet beneath the surface of declared consensus hid deeper cracks. The summit brought into sharp focus profound differences of opinion over what AI governance should truly encompass. While AI experts in London and Washington raised concerns about the immediate dangers posed by “frontier AI” and its highly advanced technology that could possibly go wrong, many countries, especially developing states and some EU members, insisted on addressing present-day harms such as algorithmic bias (where AI reproduces discrimination), data privacy breaches (where personal information is not safe), and AI’s unsettling impact on human rights and democracy.
This focus aligns with key international standards, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, which prioritizes principles of data protection. This divergence wasn’t just academic; it was a reminder that genuine AI governance demands acknowledgment of varied lived experiences and concerns, not just existential fears.
After this significant declaration, the European Artificial Intelligence Act (EU AI Act) entered into force in August 2024. Proposed by the European Commission in April 2021 and agreed by the European Parliament and the Council in 2023, it addresses potential risks to citizens’ fundamental rights, health, and safety by setting clear requirements and obligations for developers and deployers regarding specific uses of AI.
The act is a pivotal development, as it is being actively implemented in 2025. Another international development took place on May 21-22, 2024, when the UK and the Republic of Korea co-hosted the “AI Seoul Summit,” held both virtually and in Seoul. The summit reinforced the international commitment to the safe development of AI while adding “innovation” and “inclusivity” to the agenda of the AI summit series.
The Paris AI Action Summit, held in February 2025, concluded with four significant announcements to advance responsible AI development: the publication of the AI Safety Report; the launch of Current AI, backed by a $400 million investment for the public interest; the formation of a new environmental coalition of nearly 91 partners to tackle the ecological footprint of AI, a very significant step; and the summit’s declaration on inclusive and sustainable AI. Yet despite these collaborative efforts, the summit exposed a profound international division.
Concerns were raised over the appropriate scope of AI regulation: prioritizing unfettered innovation versus advocating broader safeguards. U.S. Vice President J.D. Vance, in a pointed address, warned against regulatory regimes that could strangle AI development and cautioned against cooperation with “authoritarian” regimes like China. Such remarks underscored the U.S. determination to maintain its leadership in AI, particularly following the advances of Chinese AI models like DeepSeek.
So the million-dollar question is: how can these declarations be translated into concrete, enforceable regulatory frameworks that remain central to global AI governance?
There is a dire need to redefine the concept of “AI safety” to avoid global divisions over AI regulation. AI safety is much more than doomsday scenarios; while our minds often jump to those frightening possibilities, the immediate real-world harms of AI are already here and already matter enormously to a great many people. Future conversations on AI governance therefore have to focus equally on them.
Consider four examples. On AI’s impact on jobs, we need to start making real plans for retraining people and figure out where there is a need to “learn” and “unlearn.” On biased AI systems, we need crystal-clear rules ensuring that AI is fair when used for healthcare, loan approvals, or hiring. On fighting fake news and manipulation, we should collaborate with tech companies to build tools that spot the difference between what is real and what is not. And on privacy, we need stronger international agreements on how companies share personal information to train AI models.
For any AI rules to truly work, everyone needs a seat at the table. Future summits must provide more time and space for developing countries to voice their concerns regarding the use of artificial intelligence. This will not only bridge the gap but also help improve AI’s impact on societies. Secondly, governments cannot regulate alone.
Hence, academics, ethics experts, tech companies, and even non-profit organizations need to work together; this way, a much fuller picture of the opportunities and risks emerges. Lastly, it is crucial for every state to set up its own regulatory bodies, improve its digital infrastructure, and cultivate local expertise. This ensures that AI governance isn’t just a theoretical concept but a practical reality on the ground. The Bletchley Declaration was a great start, but the real challenge now is execution.
Disclaimer: The opinions expressed in this article are solely those of the author. They do not represent the views, beliefs, or policies of Stratheia.