The convening of the United Nations Security Council (UNSC) on 18 July 2023 to address the potential threats arising from artificial intelligence (AI) marks a significant response to escalating concerns over the rapid advancement of generative AI. It is noteworthy that, although cyber security has been debated since the 1970s, the UNSC held its first exclusive meeting on AI only in 2021, spearheaded by Estonia. By contrast, within a remarkably brief span of six months, AI rose to prominence in UNSC deliberations, thanks to Britain's active engagement. The dissemination of a deepfake video of Ukrainian President Zelensky, in which he appeared to instruct soldiers to surrender to Russia, underscored the urgency of addressing the security challenges posed by AI. Moreover, the deployment of autonomous weapons in military contexts, exemplified by Israel's reported use of a satellite-controlled, AI-assisted robotic weapon to assassinate a top nuclear scientist in Iran, has intensified the need for comprehensive discussion of AI regulation and governance, prompting forums such as the UNSC to act promptly.
The UNSC meeting comes at a time when the opportunities and risks associated with AI have become increasingly evident, prompting nations to grapple with the implications of this transformative technology for their respective national interests. Under Britain's leadership, the UNSC advocated governing AI according to four fundamental principles: openness, responsibility, security, and resilience. This approach aims to ensure that AI upholds freedom, respects human rights, guarantees safety, and builds public trust, thereby facilitating responsible and beneficial deployment of AI technologies. In his address to the council, UN Secretary-General Antonio Guterres supported the UK's vision and called for the establishment of a new international body to govern AI, underscoring the urgency of addressing the potential risks associated with its deployment. He further emphasized the necessity of collaboration to secure humanity against potential AI threats and proposed a deadline of 2026 for the formulation of compulsory AI governance rules.
While AI possesses the potential to revolutionize various aspects of human life, it also raises significant ethical, security, and human rights concerns. The development and utilization of generative AI technology have far-reaching implications for national security, economic prosperity, and societal well-being, prompting states to carefully consider their interests in the context of AI governance. Notably, the United States, a key player in the global arena, emphasized the importance of international cooperation to address human rights risks arising from AI. Deputy U.S. Ambassador to the UN, Jeffrey DeLaurentis, called for preventing AI from being used as a tool for censorship and repression, reaffirming America’s commitment to safeguarding individual liberties and human dignity.
Conversely, Russia’s scepticism about discussing AI at the UNSC underscores the complexity of the issue and its potential impact on global stability. Russia’s call for a thorough, scientific examination of AI risks in specialized platforms indicates its caution about endorsing specific global laws or regulations. Similarly, China’s portrayal of AI as a “double-edged sword” highlights the need for careful regulation to strike a balance between its benefits and risks. China’s support for a central coordinating role for the UN in establishing guiding principles reflects its own measured approach towards AI deployment.
Major powers such as the US and its allies leveraged the platform to assert their agenda of promoting liberalism and human rights. They used the opportunity to criticize China indirectly for alleged human rights violations in Xinjiang and to blame Russia for employing AI in the ongoing conflict in Ukraine. Conversely, both Beijing and Moscow voiced unease over the US’s dominance in high-end semiconductor chip production, a crucial building block of AI, raising concerns about technological dependence and the security risks stemming from US control over critical AI components. Russia, in particular, questioned the legitimacy of the meeting, given that specialized bodies in the digital technology field have already addressed AI-related topics. This highlights the underlying geopolitical tensions and suspicions surrounding AI governance.
Additionally, Russia’s apprehension about US efforts to shape global rules according to Western interests brings to light the challenge of achieving collective, binding rules on AI governance. The clout of Western powers over international bodies such as the UN raises concerns about inclusivity and the representation of diverse perspectives in AI regulation discussions. Notably, the meeting failed to adequately reflect the perspectives of developing nations, whose priorities centre on development and welfare. Although some delegations, such as Ghana’s, advocated positive uses of AI, their voices were relatively marginalized amid the dominant narratives of the major powers.
The current state of AI governance discussions underscores the need for seriousness and inclusivity at the negotiating table. Collective agreement on binding rules for AI is likely to remain elusive unless nations set aside parochial interests and work towards a more comprehensive and collaborative approach. As AI continues to evolve and shape global dynamics, finding consensus on rules that balance national interests, innovation, security, and human rights is of paramount importance. The UNSC and other international bodies must prioritize the collective welfare and common goals of humanity over individual agendas. Only through genuine cooperation and a shared commitment to address these challenges can we build a robust and sustainable framework for AI governance.
Implications for India
India possesses a significant opportunity to emerge as a prominent voice for the developing world in the realm of Generative AI, leveraging its robust software industry and abundant skilled workforce. The country’s proactive measures to address Generative AI challenges exemplify its commitment to responsible AI usage. India’s engagement in the Generative AI domain is evident through the launch of the Generative AI Report by INDIAai and active participation in the Global Partnership on Artificial Intelligence (GPAI), reflecting a concerted effort to comprehend the technology’s impact and foster responsible practices. The government’s focus on nurturing an AI ecosystem through strategic research investments, support for start-ups, policy development, and AI education underscores India’s dedication to harnessing AI’s transformative potential. The National Strategy for Artificial Intelligence and the National Mission on Interdisciplinary Cyber-Physical Systems further emphasize India’s commitment to establishing AI ecosystems that cater to diverse sectors.
Having previously challenged global nuclear norms, India has since earned acknowledgment as a responsible nuclear power. This recognition of legitimacy and responsibility on sensitive issues positions India to contribute meaningfully to inclusive governance for the global good. Given its readiness and preparation, India stands at a critical juncture to elevate its role on the international stage concerning Generative AI, and it must contribute to shaping the global AI landscape in a manner that aligns with the principles of fairness, transparency, and ethical AI deployment.